Using Docker, set up three instances that communicate with each other:
- a docker instance running rsyslog (centralized logging server)
- a docker instance running rsyslog which forwards to the centralized logging server (logging agent)
- a docker instance running nginx serving static content (app server)
Ensure that the app server's access logs are forwarded to the centralized logging server via the logging agent.
The ideal solution should be reusable: in production we would want to run one logging agent and many app-server instances together on any host. This gives us an easy way to ship logs from each container on each Docker host to the centralized logging service.
- Include the ability to easily configure the address of the centralized logging server (it could change, so reconfiguring just the logging agent should be easy).
- Be able to easily choose what files we want to ship to the logging service.
Currently the 3 services are merged into one YAML file for convenience. Comments in the file describe how it would be split in a real-world scenario.
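A minimal sketch of how such a compose file might look. The image names, port mappings, and the `LOG_SERVER_ADDR` variable are assumptions for illustration, not the actual contents of rsyslog.yml:

```yaml
version: "2"
services:
  # Centralized logging server (in production: its own compose file / host)
  log-server:
    image: rsyslog/syslog_appliance_alpine   # assumed rsyslog image
    volumes:
      - ./logs:/logs                         # locally saved logs

  # Logging agent (in production: one per Docker host)
  log-agent:
    image: gliderlabs/logspout
    # The env-var substitution keeps the central server address
    # easy to reconfigure without editing the file
    command: syslog+tcp://${LOG_SERVER_ADDR:-log-server:514}
    environment:
      - FILTER_NAME=*web*                    # ship only the web container's logs
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - log-server

  # App server
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
```

With the `${LOG_SERVER_ADDR:-log-server:514}` substitution, pointing the agent at a different server is a one-liner, e.g. `LOG_SERVER_ADDR=10.0.0.5:514 docker-compose -f rsyslog.yml up -d` (the address here is an example).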
- Start:
docker-compose -f rsyslog.yml up -d
- Generate web logs:
curl localhost:8080
- Observe log server logs:
docker logs centralizedcontainerlogging_log-server_1
- Observe locally saved logs:
cat ./logs/*
- Stop
docker-compose -f rsyslog.yml down
- The log server is reached via Docker's built-in service discovery, i.e. it is referred to by the service name "log-server".
- Logspout, used as the log agent, captures only each container's stdout/stderr, which I believe is correct because a container should run no more than one process. Unlike a "sidecar" approach, you cannot list specific files here. However, multiple message-level filters are available:
- Logspout filters (which I used to select _web_)
- Logspout ignore
- Rsyslog selectors
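On the Logspout side, filtering is configured via environment variables such as `FILTER_NAME` on the agent container. On the rsyslog side, a property-based selector on the central server can route messages by program name; a sketch, where the match string and output path are assumptions:

```
# rsyslog.conf fragment (hypothetical): write messages whose
# programname contains "web" to one file, then discard them
# so they don't also land in the default log
:programname, contains, "web"  /logs/web.log
& stop
```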
Centralized logging is available out of the box in Kubernetes: fluentd + Elasticsearch + Grafana/Kibana.