
logdna / logdna-agent


LogDNA Agent streams from log files to your LogDNA account. Works with Linux, Windows, and macOS servers.

Home Page: https://logdna.com

License: MIT License

JavaScript 77.76% Ruby 1.95% Shell 17.77% PowerShell 2.52%
agent glob-patterns linux log-management logdna logdna-agent logdna-ingestion-key logger logging mac monitor nodejs windows

logdna-agent's People

Contributors

ac427, beefcheeks, caarlos0, cjp10, darinspivey, dchai76, emily2005, esatterwhite, gregswift, jakedipity, jalenplayvs, krazik, leeliu, lyhuang, mikeadamz, peterbytes, respectus, ritazh, ryadav95, smusali, summatix, thulogdna, viktorbenei, vilyapilya


logdna-agent's Issues

How do you e.g. exclude `kube-system` logs from ingestion?

Apologies if this is more of a support question than a GitHub issue, but I haven't been able to figure out how to do something like the following[1]:

  • Exclude logs if they're coming from a kube-system namespaced container, unless those logs are ERROR level.

Figuring out how to search / filter based on these criteria is easy, but I'm wondering more about ingestion. Our very small K8s cluster is already very noisy :)

Taking a quick look at the agent code, it might be the case that all of the log metadata is extracted by you folks once it hits your servers and not inside the agent, so I'm not sure this kind of filtering is possible?

My temporary workaround was to just exclude all of kube-system and all non-container logs by setting the LOGDNA_EXCLUDE environment variable to /var/log/!(containers)*,/var/log/containers/*_kube-system_*. But ideally we would be able to log kube-system logs if they had e.g. "stream": "stderr".

  1. Note that we're running the logdna-agent on Kubernetes using the default setup as described in the documentation. The only change we've made is to also run the agent on the master nodes.

feature add include/exclude file glob options

Instead of "logdir", I would suggest having multiline include and exclude options with glob support

example:

include = /var/log/*.log
include = /var/lib/docker/*.log
exclude = /some/file/name
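To illustrate the idea, here is a minimal sketch of multiline include/exclude matching (the function names are mine, and only the * wildcard is handled; a real implementation would use a full glob library):

```javascript
// Sketch: decide whether a file should be collected, given include/exclude
// glob lists. Only '*' is translated; extglob patterns like '!()' are not.
function globToRegExp(glob) {
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '[^/]*') + '$');
}

function shouldCollect(file, includes, excludes) {
  const matches = (globs) => globs.some((g) => globToRegExp(g).test(file));
  return matches(includes) && !matches(excludes);
}

const includes = ['/var/log/*.log', '/var/lib/docker/*.log'];
const excludes = ['/some/file/name'];
console.log(shouldCollect('/var/log/syslog.log', includes, excludes)); // true
console.log(shouldCollect('/some/file/name', includes, excludes));     // false
```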

Alpine based Docker image

Hi,

The current Docker image of the agent uses Ubuntu as its base image. Even the latest Ubuntu image has known vulnerabilities, including high-severity ones in packages such as glibc.

It would be good to have an image of the logdna agent that's based on Alpine, which is much more lightweight and hardened.

Thanks

No SECURE way to install LogDNA

echo "deb http://repo.logdna.com stable main" | sudo tee /etc/apt/sources.list.d/logdna.list
wget -O- http://repo.logdna.com/logdna.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install logdna-agent < "/dev/null" # dev/null required for scripting

You suggest adding keys downloaded over plain HTTP to a production server. This is ridiculous.

Support Multiple Windows Eventlog Providers

Multiple Windows event log providers should be supported. Currently you can only enter one in the windowseventlogprovider property of the configuration.

It would also be helpful if the logdna-agent could write this setting to the config file from the command line.

Log filtering does not work as expected at logDNA agent level in Kubernetes environment

I used the following to exclude certain logs from being collected by the logdna-agent in k8s.

- name: LOGDNA_EXCLUDE
  value: /var/log/!(containers)*, /var/log/containers/*_kube-system_*, /var/log/containers/*_kube-public_*, /var/log/containers/*_default_*

With the above env setting, I see that the logs pertaining to the kube-system, kube-public and default namespaces are not getting ingested, as expected. However, the /var/log/!(containers)* pattern doesn't seem to work well, i.e. I see multiple non-container logs getting pushed to the logDNA server. I tried both logdna/logdna-agent-v2:stable and logdna/logdna-agent:latest and used the .yaml files as they are (except for adding the LOGDNA_EXCLUDE env variable to the pod template).

Could anyone please take a look and help me understand what could be wrong here? Apologies if this is more of a question than an issue as such.

Frequent Server Disconnection

Mar 16 10:07:40 logdna-agent-zjqgb logdna-agent Disconnected from server: 1006:
Mar 16 10:07:41 logdna-agent-zjqgb logdna-agent Attempting to connect #1 to logs.logdna.com:443 (SSL) using rkvZlflYOYG...
Mar 16 10:07:42 logdna-agent-zjqgb logdna-agent error Server error: Error: unexpected server response (401)
Mar 16 10:07:42 logdna-agent-zjqgb logdna-agent Got 401 response, reauthenticating...
Mar 16 10:07:42 logdna-agent-zjqgb logdna-agent Authenticating API Key with api.logdna.com (SSL)...
Mar 16 10:07:42 logdna-agent-zjqgb logdna-agent Auth success, got token: r1Ty75pdYG
Mar 16 10:07:42 logdna-agent-zjqgb logdna-agent Using modes: compressed, tailmode: trs, transport: http
Mar 16 10:07:43 logdna-agent-zjqgb logdna-agent Attempting to connect #2 to logs.logdna.com:443 (SSL) using r1Ty75pdYG...
Mar 16 10:07:44 logdna-agent-zjqgb logdna-agent Connected to logs.logdna.com:443 (52.226.69.108) (SSL)
Mar 16 10:07:44 logdna-agent-zjqgb logdna-agent Streaming resumed: 46 file(s)
Mar 16 10:12:47 logdna-agent-zjqgb logdna-agent Disconnected from server: 1006:
Mar 16 10:12:48 logdna-agent-zjqgb logdna-agent Attempting to connect #1 to logs.logdna.com:443 (SSL) using r1Ty75pdYG...
Mar 16 10:12:48 logdna-agent-zjqgb logdna-agent Connected to logs.logdna.com:443 (52.170.85.196) (SSL)
Mar 16 10:12:48 logdna-agent-zjqgb logdna-agent Streaming resumed: 46 file(s)
Mar 16 10:16:11 logdna-agent-zjqgb logdna-agent Sent 2 lines queued from earlier disconnection
Mar 16 10:35:10 logdna-agent-zjqgb logdna-agent Disconnected from server: 1006:
Mar 16 10:35:11 logdna-agent-zjqgb logdna-agent Attempting to connect #1 to logs.logdna.com:443 (SSL) using r1Ty75pdYG...
Mar 16 10:35:11 logdna-agent-zjqgb logdna-agent Connected to logs.logdna.com:443 (13.82.42.30) (SSL)
Mar 16 10:35:11 logdna-agent-zjqgb logdna-agent Streaming resumed: 46 file(s)
Mar 16 14:00:12 logdna-agent-zjqgb logdna-agent error Server error: Error: unexpected server response (401)
Mar 16 14:00:12 logdna-agent-zjqgb logdna-agent Got 401 response, reauthenticating...
Mar 16 14:00:12 logdna-agent-zjqgb logdna-agent Authenticating API Key with api.logdna.com (SSL)...
Mar 16 14:00:10 logdna-agent-zjqgb logdna-agent Disconnected from server: 1006:
Mar 16 14:00:10 logdna-agent-zjqgb logdna-agent Sent 6 lines queued from earlier disconnection
Mar 16 14:00:13 logdna-agent-zjqgb logdna-agent Auth success, got token: SyUgTqg-YYz
Mar 16 14:00:13 logdna-agent-zjqgb logdna-agent Using modes: compressed, tailmode: trs, transport: http
Mar 16 14:00:14 logdna-agent-zjqgb logdna-agent Attempting to connect #2 to logs.logdna.com:443 (SSL) using SyUgTqg-YYz...
Mar 16 14:00:14 logdna-agent-zjqgb logdna-agent Connected to logs.logdna.com:443 (13.90.228.84) (SSL)
Mar 16 14:00:14 logdna-agent-zjqgb logdna-agent Streaming resumed: 46 file(s)
Mar 16 14:00:11 logdna-agent-zjqgb logdna-agent Attempting to connect #1 to logs.logdna.com:443 (SSL) using r1Ty75pdYG...
Mar 16 14:09:48 logdna-agent-zjqgb logdna-agent Log rotated: /var/log/pods/1e7067c5-083f-11e8-8b81-0638e0799fc0/prometheus-operator_0.log by truncation
Mar 16 14:09:48 logdna-agent-zjqgb logdna-agent Log rotated: /var/log/containers/prometheus-operator-77895958c-lcvjf_monitoring_prometheus-operator-b259a8ebe2da3e5b9781c90d1b29371431a92eea5c97650d84545938548bbd6f.log by truncation
Mar 16 15:32:54 logdna-agent-zjqgb logdna-agent Sent 174 lines queued from earlier disconnection
Mar 16 20:44:57 logdna-agent-zjqgb logdna-agent Sent 168 lines queued from earlier disconnection
Mar 16 23:25:40 logdna-agent-zjqgb logdna-agent Disconnected from server: 1006:
Mar 16 23:25:41 logdna-agent-zjqgb logdna-agent Attempting to connect #1 to logs.logdna.com:443 (SSL) using SyUgTqg-YYz...
Mar 16 23:25:41 logdna-agent-zjqgb logdna-agent error Server error: Error: unexpected server response (401)
Mar 16 23:25:41 logdna-agent-zjqgb logdna-agent Got 401 response, reauthenticating...

example docs are wrong

note that this is wrong

sudo node index.js -d /path/to/log/folders -d /path/to/2nd/folder
sudo node index.js -d /var/log/*.txt                    # supports glob patterns
sudo node index.js -d /var/log/**/*.txt                 # *.txt in any subfolder
sudo node index.js -d /var/log/**/myapp.log             # myapp.log in any subfolder
sudo node index.js -d /var/log                          # folder only assumes *.log + extensionless files

Bash will expand these globs before the agent ever runs, so logdna-agent only receives the first matching file as its parameter; the patterns must be quoted.

HTTP_PROXY

Is there a way to specify the proxy? We're behind a corporate web proxy, and the agent doesn't seem to use the HTTP_PROXY or HTTPS_PROXY environment variables.
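For reference, a minimal sketch of how an agent could resolve these conventional variables (resolveProxy is a hypothetical helper; the agent does not currently do this):

```javascript
// Sketch: pick a proxy URL out of the conventional environment variables,
// in the usual order of precedence for HTTPS traffic.
function resolveProxy(env = process.env) {
  const raw = env.HTTPS_PROXY || env.https_proxy || env.HTTP_PROXY || env.http_proxy;
  if (!raw) return null;
  const u = new URL(raw);
  // Defaulting to 443 is a simplification; a real client would default by scheme.
  return { host: u.hostname, port: Number(u.port) || 443 };
}

console.log(resolveProxy({ HTTPS_PROXY: 'http://proxy.corp.example:3128' }));
// { host: 'proxy.corp.example', port: 3128 }
```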

Need deb file with below changes

Hi ,

The LogDNA agent stopped because of the error below:
[180227 19:16:33] Streaming /var/log: 2 new file(s), 104 total file(s)
[180227 19:18:52] Sent 10001 lines queued from earlier disconnection
[180227 19:18:54] Sent 10001 lines queued from earlier disconnection
[180227 19:18:55] Sent 7146 lines queued from earlier disconnection
[180227 19:18:58] Sent 5000 lines queued from earlier disconnection
[180227 19:18:59] Sent 10001 lines queued from earlier disconnection
[180227 19:19:19] Sent 10001 lines queued from earlier disconnection
[180227 19:19:30] Sent 5012 lines queued from earlier disconnection
[180227 19:19:33] Sent 11922 lines queued from earlier disconnection
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
<--- Last few GCs --->
1208485 ms: Scavenge 1398.9 (1446.7) -> 1398.9 (1446.7) MB, 10.8 / 0 ms (+ 0.8 ms in 1 steps since last GC) [allocation failure] [incremental marking delaying mark-sweep].
1209002 ms: Mark-sweep 1398.9 (1446.7) -> 1393.5 (1441.6) MB, 517.0 / 0 ms (+ 72.6 ms in 1762 steps since start of marking, biggest step 2.0 ms) [last resort gc].
1209510 ms: Mark-sweep 1393.5 (1441.6) -> 1392.9 (1447.6) MB, 507.6 / 0 ms [last resort gc].

We speculated that passing the --max-old-space-size=1024 parameter and running the LogDNA agent via the JS script (as in https://github.com/logdna/logdna-agent/blob/master/logdna-agent) would fix this.

Can you please provide a deb file with the following change in the logdna-agent file?
#!/bin/sh

node --max-old-space-size=1024 index.js "$@"

Thanks, Amrut

/etc/os-release is not present on all systems

The /etc/os-release file is not present on all systems.

Notably the rancher/k3s image, which is gaining popularity, does not have this file, and deployment fails.

I am using rancher/k3s for testing deployments, but in production this file exists.

So I am not sure how to handle this gracefully.

Mainly opened this up for discussions. I don't think this is a bug in logdna-agent. But if there is a way to adjust logdna-agent, then that would be a welcome change too.

Keep older versions in apt-get repository

Hey,

Whenever you update the package, you also delete old versions from package managers such as apt-get, which means that anyone who relies on a specific version is forced to update. That is a bad user experience.

I'd prefer to update when I want, not when you want :)

"File does not exist, stopped tailing" for short lived k8s containers

We've got a k8s cronjob that runs for about 2-3 seconds on occasion, and the containers hang around for around 15 min after the job is completed.

We have a few other cronjobs that take a bit longer to run (hours), which seem to make it to logdna without any problems. So we're thinking that the short-lived nature of the new cronjob is the culprit.

For some reason these logs aren't making it to logdna. Looking at the logs for logdna-agent we see:

[190418 17:24:10] File does not exist, stopped tailing: /[redacted]/8c88fdc57700bfe9e3e1523d0e49067dcbe291ac2ec77b2fab827905f10b82fd.log after 300000ms

Is there any way to work around this?

K8s (AKS): 1.12.6
logdna-agent 1.5.6

When agent reconnects, it doesn't seem to pick logs again

Working through the trial, we just noticed that the agent does not pick up new lines added to log files after it reconnects:

Logs from the agent:

[190612 01:15:22] Connected to logs.logdna.com:443 (13.82.132.160) (SSL)
[190612 01:15:22] Streaming /var/log/apt/history.log: 1 file(s)
[190612 01:15:22] Streaming /var/log/logdna-agent.log: 1 file(s)
[190612 02:01:16] Disconnected from server: 1006:
[190612 02:01:17] Attempting to connect #1 to logs.logdna.com:443 (SSL) using xxxx:xxxx...
[190612 02:01:18] Connected to logs.logdna.com:443 (52.170.85.196) (SSL)
[190612 02:01:18] Streaming resumed: 2 file(s)

Logs from /var/log/apt/history.log:

root@default-ubuntu-1804:/var/log# tail -f /var/log/apt/history.log
Commandline: apt install ncdu
Requested-By: vagrant (1000)
Install: ncdu:amd64 (1.12-1)
End-Date: 2019-06-12  01:16:59

Start-Date: 2019-06-12  02:03:30
Commandline: apt install --yes iotop
Requested-By: vagrant (1000)
Install: iotop:amd64 (0.6-2)
End-Date: 2019-06-12  02:03:31

You can see a new package was installed at 02:03:31, yet nothing is happening even though the agent reconnected at 02:01:18.

Confirmed through the UI as well (screenshot omitted).

I can connect fine to the endpoint from the box:

root@default-ubuntu-1804:/var/log# date; nc -w 1 -v -z logs.logdna.com 443
Wed Jun 12 02:11:46 UTC 2019
Connection to logs.logdna.com 443 port [tcp/https] succeeded!

Any ideas?

CI

Travis CI & AppVeyor

Illegal operation on a directory, read

Starting a new pod results in:

2017-04-26T13:51:07.122756255Z [170426 13:51:07] ------------------------------------------------------------------
2017-04-26T13:51:07.125148525Z [170426 13:51:07] Uncaught Error: Error: EISDIR: illegal operation on a directory, read
2017-04-26T13:51:07.125162186Z at Error (native)
2017-04-26T13:51:07.125165815Z at Object.fs.readSync (fs.js:651:19)
2017-04-26T13:51:07.125168470Z at Object.fs.readFileSync (fs.js:472:24)
2017-04-26T13:51:07.125171177Z at nexe.js:164:98
2017-04-26T13:51:07.125178046Z at tryCatcher (nexe.js:12813:23)
2017-04-26T13:51:07.125181216Z at Promise._settlePromiseFromHandler (nexe.js:10850:31)
2017-04-26T13:51:07.125183787Z at Promise._settlePromise (nexe.js:10907:18)
2017-04-26T13:51:07.125186292Z at Promise._settlePromise0 (nexe.js:10952:10)
2017-04-26T13:51:07.125188847Z at Promise._settlePromises (nexe.js:11031:18)
2017-04-26T13:51:07.125191382Z at Promise._fulfill (nexe.js:10976:18)
2017-04-26T13:51:07.125194199Z at Promise._resolveCallback (nexe.js:10771:57)
2017-04-26T13:51:07.125196679Z at Promise._settlePromiseFromHandler (nexe.js:10862:17)
2017-04-26T13:51:07.125199187Z at Promise._settlePromise (nexe.js:10907:18)
2017-04-26T13:51:07.125210652Z at Promise._settlePromise0 (nexe.js:10952:10)
2017-04-26T13:51:07.125213520Z at Promise._settlePromises (nexe.js:11027:18)
2017-04-26T13:51:07.125215983Z at Async._drainQueue (nexe.js:7923:16)
2017-04-26T13:51:07.126816694Z [170426 13:51:07] ------------------------------------------------------------------

Provider: GCP via Container Engine.

Thank you!

Kubernetes logdna-agent-v2:stable does not automatically add k8s pod labels

Unfortunately, with the new image logdna/logdna-agent-v2:stable, our ingested logs do not seem to be automatically labeled with k8s pod labels. With the earlier image, logdna/logdna-agent:latest, they are.

Reproduce:
Configure your k8s cluster with the logdna-agent-v2 DaemonSet. Start a pod with kubectl run with the parameter -l datasetId=nnnnn. Result: logs from the pod are added to LogDNA, but without the pod label (datasetId=nnnnn).

Build creation in OSX failed

Hi, I am trying to create a deb file on an OS X system using the logdna source code, and I am following the doc below:
https://github.com/logdna/logdna-agent/blob/master/CONTRIBUTING.md

mac

grunt test

8 passing (4s)
1 failing

  1. lib:file-utilities #streamAllLogs() streams file changes to a socket using TAIL_MODE trs:
    Error: Cannot find module 'mock-socket'
    at require (internal/module.js:11:18)
    at Context. (test/lib/file-utilities.js:118:35)

clang: warning: libstdc++ is deprecated; move to libc++ with a minimum deployment target of OS X 10.9
ln -fs out/Release/node node
----> cp /Users/apurandare/tmp/nodejs/5.9.0/node-v5.9.0/out/Release/node /Users/apurandare/GIT/logdna-agent/logdna-agent
----> copied

Running "exec:fpm_rpm" (exec) task
{:timestamp=>"2018-03-21T18:26:23.560684+0530", :message=>"Need executable 'rpmbuild' to convert dir to rpm", :level=>:error}

Exited with code: 1.
Warning: Task "exec:fpm_rpm" failed. Use --force to continue.

Aborted due to warnings.

Build creation fails in linux

Hi, I am trying to create a deb file on a Linux system using the logdna source code, and I am following the doc below:
https://github.com/logdna/logdna-agent/blob/master/CONTRIBUTING.md

I observed the error below. Is it a bug in the source code itself?
linux (build creation)

amrut@amrut-virtual-machine:~/LogDNA/logdna-agent$ grunt test

/home/amrut/LogDNA/logdna-agent/node_modules/boom/lib/index.js:5
const Hoek = require('hoek');
^^^^^
Loading "curl.js" tasks...ERROR

SyntaxError: Use of const in strict mode.
Loading "eslint.js" tasks...ERROR
SyntaxError: Use of const in strict mode.
Warning: Task "eslint" not found. Use --force to continue.

Aborted due to warnings.

Update changelog

Hey,

The "releases" changelog is missing for the last 2 releases.

Get startup script to read file for additional parameters

I needed to add proxy configuration for our logdna-agent install, but there wasn't an obvious place for it. I have modified the startup script to look for the file /etc/sysconfig/logdna-agent and, if it's there, source its contents. Please consider adding this functionality to the packaged startup script.

$ diff -u logdna-agent.orig logdna-agent
--- logdna-agent.orig	2018-01-30 04:04:22.843580926 +0000
+++ logdna-agent	2018-01-30 05:12:00.456739197 +0000
@@ -20,6 +20,10 @@
 stdout_log="/var/log/$name.log"
 conf_file="/etc/logdna.conf"
 
+if [ -f /etc/sysconfig/$name ]; then
+    . /etc/sysconfig/$name
+fi
+
 get_pid() {
     cat "$pid_file"
 }
@@ -41,6 +45,7 @@
         chmod 0644 /etc/logrotate.d/logdna-agent > /dev/null 2>&1
 
         echo "Starting $name..."
+        export $PROXY
         $cmd >> "$stdout_log" 2>&1 &
         echo $! > "$pid_file"
         if ! is_running; then

Then I have the following content in /etc/sysconfig/logdna-agent:
PROXY="HTTPS_PROXY=http://<address>:<port>"

---

Seems like you can't delete accidentally created issues... boo

ARM release

I've been attempting to install the agent on a Raspberry Pi, following the instructions for Debian on the LogDNA website:

echo "deb http://repo.logdna.com stable main" | sudo tee /etc/apt/sources.list.d/logdna.list
wget -O- https://s3.amazonaws.com/repo.logdna.com/logdna.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install logdna-agent < "/dev/null" # this line needed for copy/paste

However, when it gets to the sudo apt-get update step, the command fails with this message:

W: Failed to fetch http://repo.logdna.com/dists/stable/Release  Unable to find expected entry 'main/binary-armhf/Packages' in Release file (Wrong sources.list entry or malformed file)

Would it be possible to get an ARM release to avoid the overhead of downloading nodejs and installing from source?

Auth error: Error: connect ECONNREFUSED

I was trying to use LogDNA via the IBM Cloud LogDNA service.

I was provided two commands to run in my cluster

kubectl create secret generic logdna-agent-key --from-literal=logdna-agent-key=***********************

and

kubectl create -f https://repo.logdna.com/ibm/prod/logdna-agent-ds-us-south.yaml

When I opened my logdna dashboard I saw nothing.

I looked at the logs of the logdna-agent pod:

[190220 03:09:40] logdna-agent 1.5.6 started on kube-%#%#%#%#%#%#%-w1.cloud.ibm (172.30.218.28)
[190220 03:09:40] Authenticating API Key with api.us-south.logging.cloud.ibm.com (SSL)...
[190220 03:09:40] Error connecting to /var/run/docker.sock: Error: connect ECONNREFUSED /var/run/docker.sock
[190220 03:09:40] Auth error: Error: connect ECONNREFUSED 160.**.**.**:443

I am not sure if this is a logdna-specific problem or a problem with the service in IBM Cloud.

Use a data serialization language for the config file or support double quotes

Hi folks,

We just tried to test-drive LogDNA, and the first issue we ran into is that the config file is some kind of INI-like format (bad idea).

The second issue is that if you have globs in your logdir, you need to use double quotes, which is fine, but you can't use double quotes everywhere; for example the key field, which just bit us:

root@default-ubuntu-1804:/var/log# cat /etc/logdna.conf
# This file is handled by Chef in wo-logdna::config

logdir = "/var/log/apt/history.log,/var/log/logdna-agent.log"
key = "mykey"
tags = "kitchen"
autoupdate = 0
hostname = "default-ubuntu-1804"

Which leads to the agent taking the double quotes literally:

[190612 00:50:16] logdna-agent 1.5.6 started on "default-ubuntu-1804" (10.0.2.15)
[190612 00:50:16] Authenticating API Key with api.logdna.com (SSL)...
[190612 00:50:16] Auth error: 401: {"error":"Unable to authenticate key: \"a8316685af8efac342a846494aab5dcf\"","code":"Rejected","status":"error"}

Any plans to move to, say, TOML, JSON or anything like that? It would make it much easier to dump the config through configuration management as well, so that instead of something like this:

# This file is handled by Chef in <%= @cookbook %>::<%= @recipe %>

<% @config.each do |k, v| %>
<% if v.is_a?(Integer) %>
<%= k %> = <%= v %>
<% else %>
<%= k %> = "<%= v %>"
<% end %>
<% end %>

We can simply JSON.dump(fancy_logdna_config) for the config template.

Thanks!

How can I set "app name"?

Hello,

LogDNA lets me filter by "app", but I don't see any way to set that value for this agent. Is there a way, that I'm not seeing, to do this? I can set tags and hostname, but I believe those are different.

Kubernetes cluster with logdna - ingress controller log?

Hi there,

We have a 6-node Kubernetes cluster deployed to Azure via AKS. We have deployed the standard logdna-agent with some customised tagging.

It works great, except we don't see any nginx ingress logs coming through. Is there a way to include ingress controller logs, please?

Kubernetes integration: Duplicate logs?

I am trying out logdna and the kubernetes integration on a 5-node coreos cluster (2 masters, 3 workers).

The cluster is set up mostly as per the instructions from coreos; the logdna DS is set up as per the instructions in the readme.

I see that for each log line emitted by a pod, logdna shows me 2 entries: one with hostname=podname and another with hostname= . This seems to happen for each entry from each pod.

Screenshot: https://db.sajalkayan.com/logdna.png

Does this mean I am consuming 2x the storage I ought to? Is there something I can tune to not make it happen? Or is this just a UI thing, and it's not actually consuming 2x storage?

Ideally the agent should emit a single entry, and add fields for node hostname, podname and other kubernetes fields if possible.

Add tags to config?

I would like to run LogDNA agent on multiple Kubernetes clusters. Is there a way to add custom tags where I can specify something like env: staging for my staging Kubernetes cluster?

Thanks!

How to determine log ingestion completion for a file?

Is there any way to see the number of bytes the logdna agent has ingested from a log file? I need to be able to tell whether the agent has finished reading all the logs in a file, so that I can rotate log files that have been fully processed.

runtime memory usage

What is the runtime memory usage of this daemon if it's left running for dozens of days?
Could there be a mechanism to run the log collection in a worker process and periodically respawn that process to make sure there's no long-term memory leak?
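A sketch of the respawn idea: a supervisor (e.g. built on child_process.fork) could restart the collector worker when either an uptime or an RSS threshold trips; the function name and thresholds here are illustrative:

```javascript
// Sketch: respawn policy for a supervised collector process. The supervisor
// would poll this periodically and fork a fresh worker when it returns true,
// reclaiming any slowly leaked memory.
function shouldRespawn(startedAtMs, nowMs, rssBytes, limits) {
  return (nowMs - startedAtMs >= limits.maxUptimeMs) ||
         (rssBytes >= limits.maxRssBytes);
}

// e.g. respawn after 24h of uptime or 300 MB RSS, whichever comes first
const limits = { maxUptimeMs: 24 * 60 * 60 * 1000, maxRssBytes: 300 * 1024 * 1024 };
console.log(shouldRespawn(0, 25 * 60 * 60 * 1000, 50e6, limits)); // true (uptime)
console.log(shouldRespawn(0, 60 * 1000, 50e6, limits));           // false
```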

logdna-agent crashing because of restart request

With the recent logdna-agent version on Ubuntu 16.04, I noticed it sometimes suddenly just dies. The logs claim an agent restart was requested, yet this was not the case (or is something in the logdna-agent triggering this?).

[180119 10:06:00] Connected to logs.logdna.com:443 (52.168.140.197) (SSL)
[180119 10:06:00] Streaming /var/log/php7.0-fpm.log: 1 file(s)
[180119 10:07:09] Streaming /var/log/php7.0-fpm.log: 1 file(s)
[180119 10:07:28] Streaming /srv/http/**/logs/*_error.log: 16 file(s)
[180119 10:12:11] Agent restart request received, restarting...
Not stopped; may still be shutting down or shutdown may have failed

A simple sudo service logdna-agent restart doesn't bring it back up. This requires a sudo service logdna-agent stop; sudo service logdna-agent start to get it back online.

API-Client interface about get_token and websockets

Some question about followed code:

var postdata = {
        hostname: config.hostname
        , mac: config.mac
        , ip: config.ip
        , tags: config.tags
        , platform: config.platform
        , agentname: agentName
        , agentversion: pkg.version
        , osdist: config.osdist
        , awsid: config.awsid
        , awsregion: config.awsregion
        , awsaz: config.awsaz
        , awsami: config.awsami
        , awstype: config.awstype
    };

In some cases the logdna-agent may not satisfy our needs, and writing another agent could solve that; but when I tried to write my own agent, questions came up.

There is no clear description of the API used to get a token from the server: are all of the fields above required in the post data, or are some optional? Moreover, publishing only the agent code without documenting the server's API interface really confuses me.

In addition, there is no description of the API the agent uses to send and receive data from the server.

I remember your doc says we could even write our own agent, but the reality is disappointing.

Agent doesn't work if hostname is number

logdna.conf

hostname=123456789

error:

[180215 15:37:50] Auth error: 500: {"error":"TypeError: req.params.hostname.split is not a function","code":"ServerError","status":"error"}

Adding an alpha letter to the hostname solves the problem, but you should probably consider casting the value to a string if you're going to split on it.

👍
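A small sketch of the failure mode and the suggested fix (the parsing shown is illustrative, not the agent's or server's actual code):

```javascript
// An INI-style parser may hand back `hostname=123456789` as a Number;
// numbers have no .split, hence the server's TypeError.
const hostname = 123456789;

let failed = false;
try {
  hostname.split('.');
} catch (e) {
  failed = e instanceof TypeError; // "hostname.split is not a function"
}

// Casting first is safe regardless of the parsed type:
const parts = String(hostname).split('.');
console.log(failed, parts); // true [ '123456789' ]
```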

Ability to exclude entire directory

I tried passing the directory to the --exclude parameter, but it's not really working. The documentation only says it's used to exclude a file. I want to exclude an entire directory in my /var/log.

VS2017 Support

At the moment it is not possible to build the agent on windows with Visual Studio 2017.

Therefore the text in CONTRIBUTING.md seems to be wrong: "Ensure you have Visual Studio 2015 or newer installed." should be "Ensure you have Visual Studio 2015 or older installed."


Support for nonexistent files

If we specify a file that does not exist, the logdna-agent will not keep running and wait for the file to be created. That situation is really common when working with deploys.

We have an environment using a deploy tool. Every time a new deploy is made, a new log file is created inside the project, and the old one is moved. What happens is that if we make a deploy, the logs will not be sent anymore, because the file was moved and the logdna-agent will not find it.

So we need to create an empty log file by default in the VCS repository, which is not a recommended practice.

Example:
Default environment:

The logs are in /deploys/current/var/logfile.log.

A new deploy is made:
The file /deploys/current/var/logfile.log is moved to another folder.
And the logdna-agent stops working.

And the new logfile.log is created only when any information is logged.

So it would be great if the agent kept running even if the file does not exist. This kind of situation is really common.

Logdna-agent suddenly switches to 127.0.0.1?

A few days ago we noticed the logs stopped coming in. The logdna-agent was still running, but its logs showed:

[180115 21:30:19] Agent restart request received, restarting...
[180115 21:30:22] logdna-agent 1.4.7 started on web01.example.com (10.10.10.10)
[180115 21:30:22] Authenticating API Key with api.logdna.com (SSL)...
[180115 21:30:22] Auth success, got token: XXXXXXXX
[180115 21:30:22] Using modes: compressed, tailmode: trs, transport: http
[180115 21:30:22] Server error: Error: connect ECONNREFUSED 127.0.0.1:443
[180115 21:30:23] Attempting to connect #1 to :443 (SSL) using XXXXXXXX...
[180115 21:30:23] Server error: Error: connect ECONNREFUSED 127.0.0.1:443

This went on for some time until we noticed in the interface that the logs stopped coming in. A manual restart of the agent resolved the issue and logs are coming in again.

It's unclear what requested a restart. It wasn't a manual action or provisioning, since this was outside our normal office hours, so something server-side must have triggered it (and logdna temporarily pointed logs.logdna.com to 127.0.0.1?).

How to exclude some log files?

Is it possible to exclude some services from having their logs collected? We are using k8s with the daemonset like the tutorial suggested, but one of our services is producing a lot of logs that we don't actually need. How do we apply the exclude to certain containers?

hostname variable

If I enter hostname = www.example.com, it just displays www as the hostname.
Almost the same happens if I enter hostname = "www.example.com", though it shows as "www instead.

How should I use the hostname variable correctly in /etc/logdna.conf?

Filter by namespace using Kubernetes

The readme suggests that the namespace is picked up from the kubernetes environment; however, I can't seem to filter on it. Is this available as a filter?

I just deleted and applied the current logdna-agent-ds.yaml, and the only tag I seem to have available is k8s. I noticed the LOGDNA_TAGS env variable, but I'm assuming that's a cluster-wide setting, am I right?

Support for milliseconds format

It seems the agent doesn't support the ingestion of timestamps with resolution finer than one second.

We have a service running on k8s that logs at the microsecond level. What ends up in the logdna UI is a version truncated down to the second, and the agent apparently doesn't handle anything beyond that level very well: we're seeing truncated versions of our timestamp as log line data for a given process.

For example, below are the contents of the file prior to logdna-agent ingest on the k8s worker node:

{
  "log": "2019-02-20 00:31:40.643278,               '', (LWP  324), webrtc_cb      , observers.h:190:OnRenegotiationNeeded   , Info: \"Renegotiation needed.\"\n",
  "stream": "stdout",
  "time": "2019-02-20T06:31:40.64450639Z"
}
{
  "log": "2019-02-20 00:31:40.643796,            18-84, (LWP   30), rmq_jsonproc1  , webrtc_bridge.cpp:631:ensure_task_done_in_webrtc_signal_thread, \"Signaled by webrtc thread that task done\"\n",
  "stream": "stdout",
  "time": "2019-02-20T06:31:40.644508798Z"
}
{
  "log": "2019-02-20 00:31:40.643817,            18-84, (LWP   30), rmq_jsonproc1  , webrtc_bridge.cpp:649:webrtc_bridge     , \"Closing the peer connection\"\n",
  "stream": "stdout",
  "time": "2019-02-20T06:31:40.644511864Z"
}

And the corresponding line in the logdna UI shows something slightly different (screenshot omitted).

Does anyone have a good solution to this?
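For what it's worth, JavaScript's Date type itself only keeps millisecond precision, so any sub-millisecond fraction has to be carried separately; a sketch using the timestamp from the log above:

```javascript
// Date truncates the fraction to millisecond precision on parse.
const iso = '2019-02-20T06:31:40.644508798Z'; // from the container log above
const d = new Date(iso);
console.log(d.toISOString()); // millisecond precision only

// To preserve the full resolution, keep the fractional part as a string:
const [, fraction] = iso.match(/\.(\d+)Z$/);
console.log(fraction); // '644508798'
```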

Tag & hostname not working for AWS Elastic Beanstalk

Hi,

I tried to add tags and change the hostname on AWS Elastic Beanstalk.
In the logs I found Hostname: FSDEV saved to config. Tags: FSDEV has been saved to config.

It looks like it worked, but I don't see anything on the dashboard. Does it take a long time to appear, maybe?

Here is my config file:

files:
  "/home/ec2-user/logdna.sh" :
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/bin/sh
      echo "[logdna]
      name=LogDNA packages
      baseurl=http://repo.logdna.com/el6/
      enabled=1
      gpgcheck=0" | tee /etc/yum.repos.d/logdna.repo
      yum -y install logdna-agent
      logdna-agent -k MYKEY # this is your unique Ingestion Key
      logdna-agent --hostname FSDEV
      logdna-agent -t FSDEV
      # /var/log is monitored/added by default (recursively), optionally add more dirs here
      logdna-agent -d /var/app/current/logs
      chkconfig logdna-agent on
      service logdna-agent start
commands:
  logdna:
    command: "/home/ec2-user/logdna.sh"

Agent keeps too many files open while reading

Tail error: Y:/CIS/Interfaces/log/cisp/201801/#######.log: Error: EMFILE: too many open files, open 'Y:\CIS\Interfaces\log\cisp\201801#######.log'

Discussed here:
https://stackoverflow.com/questions/8965606/node-and-error-emfile-too-many-open-files

I am trying to monitor many files and am probably reaching the operating system's upper limit on open handles. In my application, file handles should not be left open. Some delay on a live tail is acceptable; perhaps this could be an option in the agent? Regardless of the approach, I think that read operations above a certain limit should queue instead of endlessly opening file handles.
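The queuing idea could be sketched as a small concurrency limiter wrapped around each read (the class name and limit are illustrative):

```javascript
// Sketch: cap concurrent file reads with a tiny semaphore so reads queue
// instead of exhausting OS file-handle limits (EMFILE).
class ReadLimiter {
  constructor(maxConcurrent) {
    this.max = maxConcurrent;
    this.active = 0;
    this.queue = [];
  }
  run(task) { // task: () => Promise
    return new Promise((resolve, reject) => {
      const attempt = () => {
        this.active++;
        task().then(resolve, reject).finally(() => {
          this.active--;
          const next = this.queue.shift();
          if (next) next(); // hand the freed slot to the next queued read
        });
      };
      this.active < this.max ? attempt() : this.queue.push(attempt);
    });
  }
}
```

Each tail read would then go through something like limiter.run(() => fs.promises.readFile(p)), so at most maxConcurrent handles are ever open at once.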
