
Elasticsearch, Logstash, Kibana (ELK) Docker image

This Docker image provides a convenient centralised log server and log management web interface, by packaging Elasticsearch (version 1.6.0), Logstash (version 1.5.2), and Kibana (version 4.1.1), collectively known as ELK.

Contents

Installation

Install Docker, either using a native package (Linux) or wrapped in a virtual machine (Windows, OS X – e.g. using Boot2Docker or Vagrant).

To pull this image from the Docker registry, open a shell prompt and enter:

$ sudo docker pull sebp/elk

Note – This image has been built automatically from the source files in the source Git repository. If you want to build the image yourself, see the Building the image section below.

Note – The size of the virtual image (as reported by docker images) is 1,076 MB.

Usage

Run the container from the image with the following command:

$ sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -it --name elk sebp/elk

This command publishes the following ports, which are needed for proper operation of the ELK stack:

  • 5601 (Kibana web interface).
  • 9200 (Elasticsearch JSON interface).
  • 5000 (Logstash server, receives logs from logstash forwarders – see the Forwarding logs section below).

Note – The image also exposes Elasticsearch's transport interface on port 9300. Use the -p 9300:9300 option with the docker command above to publish it.
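Once the container is running, you can check that the published services are answering by querying Elasticsearch's JSON interface, e.g. with curl (substitute your Docker host's address for localhost if needed):

$ curl http://localhost:9200/

A running instance replies with a small JSON document that includes the Elasticsearch version number ("1.6.0" for this image).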

Note – Logstash includes a web interface, but it is not started in this Docker image.

The figure below shows how the pieces fit together.

                                 +------------------------------------------------+
                                 |                      ELK server (Docker image) |
+----------------------+         |                                                |
|                      |    +-----> port 5601 - Kibana web interface              |
|  Admin workstation   +----+    |                                                |
|                      |    +-----> port 9200 - Elasticsearch JSON interface      |
+----------------------+         |                                                |
                                 |  port 9292 - Logstash web interface (unused)   |
+----------------------+         |                                                |
| Server               |         |  port 9300 - Elasticsearch transport interface |
| +------------------+ |         |                                                |
| |logstash forwarder+------------> port 5000 - Logstash server                   |
| +------------------+ |         |                                                |
+----------------------+         +------------------------------------------------+

Access Kibana's web interface by browsing to http://<your-host>:5601, where <your-host> is the hostname or IP address of the host that Docker is running on, e.g. localhost if running a local native version of Docker, or the IP address of the virtual machine if running a VM-hosted version of Docker (see note below).

Note – To configure and/or find out the IP address of a VM-hosted Docker installation, see https://docs.docker.com/installation/windows/ (Windows) and https://docs.docker.com/installation/mac/ (OS X) for guidance if using Boot2Docker. If you're using Vagrant, you'll need to set up port forwarding (see https://docs.vagrantup.com/v2/networking/forwarded_ports.html).

You can stop the container with ^C, and start it again with sudo docker start elk.

As of Kibana version 4.0.0, you won't be able to see anything (not even an empty dashboard) until something has been logged (see the Creating a dummy log entry sub-section below on how to test your set-up, and the Forwarding logs section on how to forward logs from regular applications).

Running the image using Docker Compose

If you're using Docker Compose (formerly known as Fig) to manage your Docker services (and if not you really should as it will make your life much easier!), then you can create an entry for the ELK Docker image by adding the following lines to your docker-compose.yml file:

elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5000:5000"

You can then start the ELK container like this:

$ sudo docker-compose up elk 
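If you'd rather not keep a terminal attached to the container, you can start it in the background instead with Compose's -d option, and inspect its output with docker-compose logs:

$ sudo docker-compose up -d elk
$ sudo docker-compose logs elk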

Creating a dummy log entry

If you haven't got any logs yet and want to manually create a dummy log entry for test purposes (for instance to see the dashboard), first start the container as usual (sudo docker run ... or docker-compose up ...).

In another terminal window, find out the name of the container running ELK, which is displayed in the last column of the output of the sudo docker ps command.

$ sudo docker ps
CONTAINER ID        IMAGE                  ...   NAMES
86aea21cab85        elkdocker_elk:latest   ...   elkdocker_elk_1

Open a shell prompt in the container and type (replacing <container-name> with the name of the container, e.g. elkdocker_elk_1 in the example above):

$ sudo docker exec -it <container-name> /bin/bash 

Note – If you're running a pre-1.3 version of Docker (before the exec command was introduced) then:

  • Run the container interactively:

    • With the regular docker command use sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -it --name elk sebp/elk /bin/bash – note the extra /bin/bash at the end compared to the usual command line
    • With Compose use sudo docker-compose run --service-ports elk /bin/bash.
  • At the container's shell prompt, type start.sh& to start Elasticsearch, Logstash and Kibana in the background, and wait for everything to be up and running (wait for {"@timestamp":... ,"message":"Listening on 0.0.0.0:5601",...})

Now enter:

# /opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'

And then type some dummy text followed by Enter to create a log entry:

this is a dummy entry

Note – You can create as many entries as you want. Use ^C to go back to the bash prompt.

After a few seconds, if you browse to http://<your-host>:9200/_search?pretty (e.g. http://localhost:9200/_search?pretty for a local native instance of Docker), you'll see that Elasticsearch has indexed the entry:

{
  ...
  "hits" : {
    ...
    "hits" : [ {
      "_index" : "logstash-...",
      "_type" : "logs",
	  ...
      "_source":{"message":"this is a dummy entry","@version":"1","@timestamp":...}
    } ]
  }
}
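The same query can also be run from a shell prompt with curl rather than a browser, e.g. for a local native instance of Docker:

$ curl http://localhost:9200/_search?pretty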

You can now browse to Kibana's web interface at http://<your-host>:5601 (e.g. http://localhost:5601 for a local native instance of Docker).

From the drop-down "Time-field name" field, select @timestamp, then click on "Create", and you're good to go.

Forwarding logs

Forwarding logs from a host relies on a Logstash forwarder agent that collects logs (e.g. from log files, from the syslog daemon) and sends them to our instance of Logstash.

Install Logstash forwarder on the host you want to collect and forward logs from (see the References section below for links to detailed instructions).

Here is a sample configuration file for Logstash forwarder that forwards syslog and authentication logs, as well as nginx logs.

{
  "network": {
    "servers": [ "elk:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/syslog",
        "/var/log/auth.log"
       ],
      "fields": { "type": "syslog" }
    },
    {
      "paths": [
        "/var/log/nginx/access.log"
       ],
      "fields": { "type": "nginx-access" }
    }
   ]
}

By default (see /etc/init.d/logstash-forwarder if you need to tweak anything):

  • The Logstash forwarder configuration file must be located in /etc/logstash-forwarder.
  • The Logstash forwarder needs a syslog daemon (e.g. rsyslogd, syslog-ng) to be running.

In the sample configuration file, make sure that you:

  • Replace elk in elk:5000 with the hostname or IP address of the ELK-serving host.
  • Copy the logstash-forwarder.crt file (which contains the Logstash server's certificate) from the ELK image to /etc/pki/tls/certs/logstash-forwarder.crt.
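A malformed configuration file is a common cause of the forwarder silently failing to start, so it can be worth checking the file's JSON syntax before restarting the agent. As a sketch, using Python's standard json.tool module (any JSON validator will do; in production you would point it at /etc/logstash-forwarder itself):

```shell
# Write a minimal forwarder configuration to a local file for illustration
# (in production, validate /etc/logstash-forwarder itself).
cat > logstash-forwarder.conf <<'EOF'
{
  "network": {
    "servers": [ "elk:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    { "paths": [ "/var/log/syslog" ], "fields": { "type": "syslog" } }
  ]
}
EOF
# json.tool exits non-zero and reports the position of any syntax error
python3 -m json.tool logstash-forwarder.conf > /dev/null && echo "config OK"
```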

Note – The ELK image includes configuration items (/etc/logstash/conf.d/11-nginx.conf and /opt/logstash/patterns/nginx) to parse nginx access logs, as forwarded by the Logstash forwarder instance above.

Linking a Docker container to the ELK container

If you want to forward logs from a Docker container to the ELK container, then you need to link the two containers.

Note – The log-emitting Docker container must have a Logstash forwarder agent running in it for this to work.

First of all, give the ELK container a name (e.g. elk) using the --name option:

$ sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -it --name elk sebp/elk

Then start the log-emitting container with the --link option (replacing your/image with the name of the Logstash-forwarder-enabled image you're forwarding logs from):

$ sudo docker run -p 80:80 -it --link elk:elk your/image

From the perspective of the log-emitting container, the ELK container is now known as elk, which is the hostname to be used in the logstash-forwarder configuration file.

With Compose here's what example entries for a (locally built log-generating) container and an ELK container might look like in the docker-compose.yml file.

yourapp:
  image: your/image
  ports:
    - "80:80"
  links:
    - elk

elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5000:5000"

Building the image

To build the Docker image from the source files, first clone the Git repository, go to the root of the cloned directory (i.e. the directory that contains Dockerfile), and:

  • If you're using the vanilla docker command then run sudo docker build -t <repository-name> ., where <repository-name> is the repository name to be applied to the image, which you can then use to run the image with the docker run command.

  • If you're using Compose then run sudo docker-compose build elk, which uses the docker-compose.yml file from the source repository to build the image. You can then run the built image with sudo docker-compose up.

Extending the image

To extend the image, you can either fork the source Git repository and hack away, or – more in the spirit of the Docker philosophy – use the image as a base image and build on it, adding files (e.g. configuration files to process logs sent by log-producing applications, plugins for Elasticsearch) and overwriting files (e.g. configuration files, certificate and private key files) as required.

To create a new image based on this base image, you want your Dockerfile to include:

FROM sebp/elk

followed by instructions to extend the image (see Docker's Dockerfile Reference page for more information).
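For example, a derived image that adds a custom Logstash configuration file alongside the bundled ones could use a Dockerfile like the following (the file name 10-custom.conf is purely illustrative – Logstash picks up all files in /etc/logstash/conf.d, like the 11-nginx.conf item mentioned in the Forwarding logs section):

FROM sebp/elk
ADD ./10-custom.conf /etc/logstash/conf.d/10-custom.conf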

Making log data persistent

If you want your ELK stack to keep your log data across container restarts, you need to create a Docker data volume inside the ELK container at /var/lib/elasticsearch, which is the directory that Elasticsearch stores its data in.

One way to do this with the docker command-line tool is to first create a named container called elk_data with a bound Docker volume by using the -v option:

$ sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -v /var/lib/elasticsearch -it --name elk_data sebp/elk

You can now reuse the persistent volume from that container using the --volumes-from option:

$ sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 --volumes-from elk_data -it --name elk sebp/elk

Alternatively, if you're using Compose, then simply add the two following lines to your docker-compose.yml file, under the elk: entry:

  volumes:
    - /var/lib/elasticsearch

Then start the container with sudo docker-compose up as usual.

Note – By design, Docker never deletes a volume automatically (e.g. when no longer used by any container). Whilst this avoids accidental data loss, it also means that things can become messy if you're not managing your volumes properly (i.e. using the -v option when removing containers with docker rm to also delete the volumes... bearing in mind that the actual volume won't be deleted as long as at least one container is still referencing it, even if it's not running). As of this writing, managing Docker volumes can be a bit of a headache, so you might want to have a look at docker-cleanup-volumes, a shell script that deletes unused Docker volumes.

See Docker's page on Managing Data in Containers and Container42's Docker In-depth: Volumes page for more information on managing data volumes.

Security considerations

As it stands this image is meant for local test use, and as such hasn't been secured: access to the ELK services is not restricted, and a default authentication server certificate (logstash-forwarder.crt) and private key (logstash-forwarder.key) are bundled with the image.

To harden this image, at the very least you would want to:

  • Restrict access to the ELK services (ports 5601, 9200 and 5000) to trusted hosts, e.g. with a firewall or a reverse proxy.
  • Replace the default logstash-forwarder.crt certificate and logstash-forwarder.key private key files with your own.
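As a sketch of the certificate replacement step, a new self-signed certificate and private key pair (with the same file names as the bundled ones) can be generated with openssl – the common name and validity period below are placeholders, and the common name should match the hostname that forwarders will use to reach the Logstash server:

```shell
# Generate a new RSA private key and self-signed certificate to replace
# the bundled logstash-forwarder.key and logstash-forwarder.crt.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout logstash-forwarder.key \
  -out logstash-forwarder.crt \
  -days 365 \
  -subj "/CN=elk"
```

The generated files can then be overwritten in a derived image (see the Extending the image section above) or bind-mounted into the container.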

References

About

Written by Sébastien Pujadas, released under the Apache 2 license.

Contributors

dborzov, eamonnfaherty, spujadas

