
blackesk's People

Contributors: amitn322

blackesk's Issues

geopoint mapping

I tried to use the mapping below so that the geopoint could be shown on a map, but it did not work. As it was my first time using Elastic 7.8 and also my first time using access control, I'm not sure what the problem is. It would be nice to have a mapping that works out of the box for geopoint.

PUT syslog-ng
{
   "mappings" : {
         "properties" : {
            "geopoint" : {
                "type" : "geo_point"
            }
         }
   }
}
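
A mapping like this has to exist before the first document is indexed; otherwise Elasticsearch dynamic-maps `geopoint` as something else, and the type of an existing field cannot be changed in place. A hedged sketch of applying it with curl (the index name `syslog-ng` comes from the snippet above; an unsecured `http://localhost:9200` endpoint is an assumption — with access control enabled you would also need `-u elastic:<password>` and likely `https`):

```shell
# create the index with an explicit geo_point mapping before any data arrives;
# the endpoint and index name are assumptions about this particular setup
body='{"mappings":{"properties":{"geopoint":{"type":"geo_point"}}}}'
curl -s -X PUT "http://localhost:9200/syslog-ng" \
     -H 'Content-Type: application/json' -d "$body" \
  || echo "Elasticsearch not reachable"
```

If the index already exists, it must be deleted (or the data reindexed into a new index) for the mapping to take effect.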

Timezone hardcoded in syslog-ng

In /syslog-ng/conf/syslog-ng.conf you have the following defined twice:
time-zone("MST")

In my case, I ran the uninstall script, updated these entries to PST and then re-ran the install script.
I found out that PST was not included in /usr/share/zoneinfo/ and had to update once again to PST8PDT.
Also, as a newer Docker user, I learned that I had to trash the image and rebuild after making these changes for them to take effect.

Would it be possible to switch to GMT as the default, both in the environment variables file and in the syslog-ng config?

Making the two time-zone statements within /syslog-ng/conf/syslog-ng.conf use an environment variable would certainly help, but even a note in the Readme would suffice.
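
Since the accepted names vary by system (as the PST vs. PST8PDT detour above shows), a quick sanity check before editing the config can save a container rebuild — a small sketch, assuming the standard /usr/share/zoneinfo layout:

```shell
# verify a timezone name is known to the system before putting it into
# syslog-ng.conf; "GMT" is just the proposed default from above
tz="GMT"
if [ -e "/usr/share/zoneinfo/$tz" ]; then
  echo "ok: $tz exists on this system"
else
  echo "missing: $tz is not in /usr/share/zoneinfo"
fi
```

Running the same check inside the container (not just on the host) matters, since the image ships its own zoneinfo.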

java, etc.

First of all: thanks. I tested blackesk, and it works great.

Checking the scripts, I see that syslog-ng/build.sh also adds Java support. It increases both the build time and the size of the container considerably, and as syslog-ng.conf shows, it is not used at all. It should be removed.

The syslog-ng.conf has many commented-out parts, which makes it a bit hard to read.

No logs except internal syslog-ng logs are coming through Elasticsearch

What I did:

I ran loggen.sh to generate 100 random log entries
(screenshot of the loggen.sh run)

I've installed syslog-ng as a client on Ubuntu 20.04 and configured it to connect to the remote syslog-ng server container. This is part of my syslog-ng configuration:

(screenshot of the configuration excerpt)

Full configuration:

@version: 3.25
@include "scl.conf"

# Syslog-ng configuration file, compatible with default Debian syslogd
# installation.

# First, set some global options.
options { chain_hostnames(off); flush_lines(0); use_dns(no); use_fqdn(no);
          dns_cache(no); owner("root"); group("adm"); perm(0640);
          stats_freq(0); bad_hostname("^gconfd$");
};

########################
# Sources
########################
# This is the default behavior of sysklogd package
# Logs may come from unix stream, but not from another machine.
#
source s_src {
       system();
       internal();
};

destination d_blackesk {
    tcp("10.8.10.29" port("514"));
};

destination d_local {
    file("/var/log/messages" persist-name("messages"));
};

#log {source(s_src); destination(d_local);};
log {source(s_src); destination(d_local); destination(d_blackesk);};

# If you wish to get logs from remote machine you should uncomment
# this and comment the above source line.
#
#source s_net { tcp(ip(127.0.0.1) port(1000)); };

########################
# Destinations
########################
# First some standard logfile
#
destination d_auth { file("/var/log/auth.log"); };
destination d_cron { file("/var/log/cron.log"); };
destination d_daemon { file("/var/log/daemon.log"); };
destination d_kern { file("/var/log/kern.log"); };
destination d_lpr { file("/var/log/lpr.log"); };
destination d_mail { file("/var/log/mail.log"); };
destination d_syslog { file("/var/log/syslog"); };
destination d_user { file("/var/log/user.log"); };
destination d_uucp { file("/var/log/uucp.log"); };

# These files are the logs that come from the mail subsystem.
#
destination d_mailinfo { file("/var/log/mail.info"); };
destination d_mailwarn { file("/var/log/mail.warn"); };
destination d_mailerr { file("/var/log/mail.err"); };

# Logging for INN news system
#
destination d_newscrit { file("/var/log/news/news.crit"); };
destination d_newserr { file("/var/log/news/news.err"); };
destination d_newsnotice { file("/var/log/news/news.notice"); };

# Some 'catch-all' logfiles.
#
destination d_debug { file("/var/log/debug"); };
destination d_error { file("/var/log/error"); };
destination d_messages { file("/var/log/messages"); };

# The root's console.
#
destination d_console { usertty("root"); };

# Virtual console.
#
destination d_console_all { file(`tty10`); };

# The named pipe /dev/xconsole is for the `xconsole' utility.  To use it,
# you must invoke `xconsole' with the `-file' option:
#
#    $ xconsole -file /dev/xconsole [...]
#
destination d_xconsole { pipe("/dev/xconsole"); };

# Send the messages to another host
#
#destination d_net { tcp("127.0.0.1" port(1000) log_fifo_size(1000)); };

# Debian only
destination d_ppp { file("/var/log/ppp.log"); };

########################
# Filters
########################
# Here come the filter options. With these rules, we can set which
# messages go where.

filter f_dbg { level(debug); };
filter f_info { level(info); };
filter f_notice { level(notice); };
filter f_warn { level(warn); };
filter f_err { level(err); };
filter f_crit { level(crit .. emerg); };

filter f_debug { level(debug) and not facility(auth, authpriv, news, mail); };
filter f_error { level(err .. emerg) ; };
filter f_messages { level(info,notice,warn) and
                    not facility(auth,authpriv,cron,daemon,mail,news); };

filter f_auth { facility(auth, authpriv) and not filter(f_debug); };
filter f_cron { facility(cron) and not filter(f_debug); };
filter f_daemon { facility(daemon) and not filter(f_debug); };
filter f_kern { facility(kern) and not filter(f_debug); };
filter f_lpr { facility(lpr) and not filter(f_debug); };
filter f_local { facility(local0, local1, local3, local4, local5,
                        local6, local7) and not filter(f_debug); };
filter f_mail { facility(mail) and not filter(f_debug); };
filter f_news { facility(news) and not filter(f_debug); };
filter f_syslog3 { not facility(auth, authpriv, mail) and not filter(f_debug); };
filter f_user { facility(user) and not filter(f_debug); };
filter f_uucp { facility(uucp) and not filter(f_debug); };

filter f_cnews { level(notice, err, crit) and facility(news); };
filter f_cother { level(debug, info, notice, warn) or facility(daemon, mail); };

filter f_ppp { facility(local2) and not filter(f_debug); };
filter f_console { level(warn .. emerg); };

########################
# Log paths
########################
log { source(s_src); filter(f_auth); destination(d_auth); };
log { source(s_src); filter(f_cron); destination(d_cron); };
log { source(s_src); filter(f_daemon); destination(d_daemon); };
log { source(s_src); filter(f_kern); destination(d_kern); };
log { source(s_src); filter(f_lpr); destination(d_lpr); };
log { source(s_src); filter(f_syslog3); destination(d_syslog); };
log { source(s_src); filter(f_user); destination(d_user); };
log { source(s_src); filter(f_uucp); destination(d_uucp); };

log { source(s_src); filter(f_mail); destination(d_mail); };
#log { source(s_src); filter(f_mail); filter(f_info); destination(d_mailinfo); };
#log { source(s_src); filter(f_mail); filter(f_warn); destination(d_mailwarn); };
#log { source(s_src); filter(f_mail); filter(f_err); destination(d_mailerr); };

log { source(s_src); filter(f_news); filter(f_crit); destination(d_newscrit); };
log { source(s_src); filter(f_news); filter(f_err); destination(d_newserr); };
log { source(s_src); filter(f_news); filter(f_notice); destination(d_newsnotice); };
#log { source(s_src); filter(f_cnews); destination(d_console_all); };
#log { source(s_src); filter(f_cother); destination(d_console_all); };

#log { source(s_src); filter(f_ppp); destination(d_ppp); };

log { source(s_src); filter(f_debug); destination(d_debug); };
log { source(s_src); filter(f_error); destination(d_error); };
log { source(s_src); filter(f_messages); destination(d_messages); };

log { source(s_src); filter(f_console); destination(d_console_all);
                                    destination(d_xconsole); };
log { source(s_src); filter(f_crit); destination(d_console); };

# All messages sent to a remote site
#
#log { source(s_src); destination(d_net); };

###
# Include all config files in /etc/syslog-ng/conf.d/
###
@include "/etc/syslog-ng/conf.d/*.conf"

Expected result:

The logs should be sent to the syslog-ng server inside the Docker container. I've also written the results to /var/log/messages.
Here is a part of that file:
(screenshot of /var/log/messages)
As you can see, there are some entries for auth.log and some other things.

Actual results:

None of the generated random logs or the auth logs from my client are getting through to Elasticsearch.
My Elasticsearch looks like this:
(screenshot of Elasticsearch)
That's the case both when I use extras/loggen.sh from the host the dockerfiles are installed on and for the log files sent from the Ubuntu client.
As you can see, the Ubuntu client still connects to the syslog-ng server.

Am I doing something wrong?
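
One thing worth ruling out before digging into the server-side config is whether TCP traffic actually reaches the container. A hedged sketch from the client side (10.8.10.29 and port 514 come from the d_blackesk destination above; bash's /dev/tcp is used so no extra tools are needed):

```shell
# probe the syslog-ng server port from the client machine; the address and
# port are taken from the d_blackesk destination in the config above
host=10.8.10.29; port=514
if timeout 3 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
  echo "port $port on $host is reachable"
else
  echo "port $port on $host is NOT reachable"
fi
```

If the port is reachable but nothing shows up in Elasticsearch, the next place to look is the server side, e.g. `docker logs` on the syslog-ng container.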

Error related to syslog-ng during install - CentOS 8

Getting the following error while installing on CentOS 8. Should I try a different version? I am attaching a screenshot too.

mkdir -p modules/java/syslog-ng-core/classes
GEN modules/java/syslog-ng-core/classes/org/syslog_ng/LogMessage.class
/bin/sh: /tmp/syslog-ng-3.34.1/: Permission denied
GEN modules/java/org_syslog_ng_LogMessage.h
make: *** [Makefile:28678: modules/java/org_syslog_ng_LogMessage.h] Error 126
ERROR: Service 'syslog-ng' failed to build: The command '/bin/sh -c /build.sh' returned a non-zero code: 2
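
The "Permission denied" on a path under /tmp is worth a closer look. One possible cause (an assumption here, since the screenshot is not preserved) is /tmp being mounted with noexec in the build environment, which makes executing anything unpacked there fail with exactly this error:

```shell
# check whether /tmp is a separate mount and whether it carries noexec;
# scripts run from a noexec filesystem fail with "Permission denied"
findmnt -no OPTIONS /tmp 2>/dev/null || echo "/tmp is not a separate mount"
```

If noexec shows up in the output, running the same check inside the build container would confirm whether the syslog-ng build is affected.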

Error with openssl-dev and libressl-dev during install

First off, thanks for this great repository. It's really neat!

Docker Version: 20.10.21
Docker-compose version 1.29.2
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"

I am, however, running into some issues when attempting to use install.sh. The issue occurs for both the single-node and multi-node setups.

(screenshot of the install.sh error)

This error leads to syslog-ng failing to build.

(screenshot of the failed build)

Any help would be greatly appreciated!

calling loggen.sh from install.sh fails on Debian

When running install.sh on Debian, calling loggen.sh fails:

Generating Some Fake Logs, you can delete the index and start over..
./extras/loggen.sh: 13: ./extras/loggen.sh: arithmetic expression: expecting primary: "  % 65535 + 2000 "

The problem seems to be that loggen.sh needs bash, while sh on Debian is dash. Replacing:

sh ./extras/loggen.sh 10

with

bash ./extras/loggen.sh 10

seems to solve the problem.
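
The error message itself hints at the root cause: the left operand of `%` expanded to nothing, which is what happens when a script relies on bash's `$RANDOM` (an assumption about loggen.sh's contents) under a shell that does not define it:

```shell
# bash defines $RANDOM, so the expression evaluates to a number
bash -c 'echo $(( RANDOM % 65535 + 2000 ))'

# dash leaves RANDOM unset, so the expression collapses to " % 65535 + 2000 "
# and fails with the "expecting primary" parse error quoted above
if command -v dash >/dev/null; then
  dash -c 'echo $(( RANDOM % 65535 + 2000 ))' || true
fi
```

Either calling the script with bash explicitly, as above, or giving loggen.sh a `#!/bin/bash` shebang and executing it directly would avoid the mismatch.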

=> ERROR [internal] load metadata for docker.io/library/alpine:latest

Hi.

This looks glorious. However:

failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to authorize: rpc error: code = Unknown desc = failed to fetch oauth token: Post "https://auth.docker.io/token": dial tcp: lookup auth.docker.io on [::1]:53: read udp [::1]:39458->[::1]:53: read: connection refused

This is on Manjaro. I do have a firewall configured using ufw, but the issue is the same if I disable it. I have a custom DNS server (a Raspberry Pi with Pi-hole), which acts as a thin cache in front of Google DNS (and the issue is the same if I disable it).

Docker version 20.10.18, build b40c2f6b5d
Manjaro 22.0.0
Kernel 5.15.74-3-MANJARO

I am logged in to docker.io, my docker daemon requires sudo. Hence, I run the install using sudo.

My timezone is correctly set in the .env.
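
The error shows the daemon's DNS lookup going to [::1]:53, i.e. a resolver on localhost that refuses connections. A quick check of what resolver the host actually advertises (a diagnostic sketch, not a fix):

```shell
# list the nameservers the host is configured with; if the only entry is
# ::1 or 127.0.0.1 and nothing listens there, Docker's lookup of
# auth.docker.io will fail exactly as in the error above
grep '^nameserver' /etc/resolv.conf || echo "no nameserver configured"
```

If localhost is the only resolver, one possible workaround is pointing the daemon at the Pi-hole's LAN address via the `dns` key in /etc/docker/daemon.json and restarting Docker.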

extras/loggen.sh logs to the host

I tried to find the 10 log messages generated during installation, but found only syslog-ng internal() messages. Taking a closer look at extras/loggen.sh, I found that it runs:

logger -t sshd -f /tmp/ssh_logs.txt 

This means that logs are sent to the local syslog instead of the one just installed in Docker. The above command only works if logs from localhost were already being forwarded to port 514 before installation.

logger -T --rfc3164 -n 127.0.0.1 -P 514 -t sshd -f /tmp/ssh_logs.txt

The above variant should resolve this problem.

Still waiting for Elasticsearch to be ready, 1018 seconds elapsed

For proper issue reporting, where can I check where Elasticsearch is stuck?

(docker compose also printed a warning here, partly overwritten by the progress bar: "…of type volume should not define bind option")
[+] Running 5/5
 ✔ Network es_syslog    Created  0.2s
 ✔ Volume "es_data01"   Created  0.0s
 ✔ Container syslog01   Started  0.1s
 ✔ Container es01       Started  0.2s
 ✔ Container kib01      Started  0.1s
Waiting for Elasticsearch to be ready...
Still waiting for Elasticsearch to be ready, 1018 seconds elapsed
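
To answer the question above, under the assumption that the container names from the compose output apply: the Elasticsearch container's own log is usually the first place to look — a hedged sketch:

```shell
# inspect the es01 container's log for startup errors (name taken from the
# compose output above); falls back gracefully if docker is unavailable here
if command -v docker >/dev/null 2>&1; then
  docker logs --tail 50 es01 2>&1 || echo "could not read es01 logs"
else
  echo "docker not available on this machine"
fi

# the install script's wait loop presumably polls an HTTP endpoint; probing
# it directly shows whether Elasticsearch answers at all (port 9200 assumed)
curl -s http://localhost:9200/_cluster/health || echo "no response on 9200"
```

Common culprits visible in those logs include `vm.max_map_count` being set too low on the host and the container running out of memory.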
