graylog2 / graylog-ansible-role

Ansible role which installs and configures Graylog

License: Apache License 2.0

graylog playbook ansible ansible-role ansible-playbook ansible-galaxy log-analysis log-management logging



Graylog Ansible Role

Requirements

  • Ansible (> 2.5.0)
  • At least 4 GB of memory on the target instance.
    • Linux
      • Currently tested against:
        • Ubuntu 18.04
        • Ubuntu 20.04
        • CentOS 7
        • CentOS 8

To install the role, run:

ansible-galaxy install graylog2.graylog

Dependencies

Graylog depends on MongoDB and Elasticsearch. See the official Graylog documentation for more details on these requirements.

Be certain you are running a supported version of Elasticsearch. You can configure what version of Elasticsearch Ansible will install with the es_version variable. Running Graylog against an unsupported version of Elasticsearch can break your instance!

Compatibility Matrix

  Graylog version    Elasticsearch version
  3.x                5 - 6
  4.x                6.8 - 7.10
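
To catch a mismatch early, the pin can be validated before the role runs. Here is a minimal pre-flight sketch (not part of the role), using Ansible's version test and assuming the Graylog 4.x row above:

```yaml
# Hypothetical pre-flight task: abort if es_version falls outside the
# Elasticsearch range supported by Graylog 4.x (6.8 - 7.10).
- name: "Assert es_version is supported by Graylog 4.x"
  ansible.builtin.assert:
    that:
      - es_version is version('6.8', '>=')
      - es_version is version('7.11', '<')
    fail_msg: "Graylog 4.x supports Elasticsearch 6.8 - 7.10, got {{ es_version }}"
```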

You will need these Ansible role dependencies:

To install them, run:

ansible-galaxy install -r <GRAYLOG ROLE_DIRECTORY>/requirements.yml
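
The authoritative list ships in the role's requirements.yml. Purely as an illustration (the Elasticsearch role name is taken from the cluster example below; the version pin is hypothetical), such a file looks like:

```yaml
# Illustrative requirements.yml sketch only - install the real file shipped
# with the role rather than copying this.
- src: "elastic.elasticsearch"
  version: "7.10.2"   # hypothetical pin; match your es_version
```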

Example Playbook

Here is an example playbook that uses this role. This is a single-instance configuration. It installs Java, MongoDB, Elasticsearch, and Graylog onto the same server.

- hosts: "all"
  remote_user: "ubuntu"
  become: True
  vars:
    #Elasticsearch vars
    es_major_version: "7.x"
    es_version: "7.10.2"
    es_enable_xpack: False
    es_instance_name: "graylog"
    es_heap_size: "1g"
    es_config:
      node.name: "graylog"
      cluster.name: "graylog"
      http.port: 9200
      transport.tcp.port: 9300
      network.host: "127.0.0.1"
      discovery.seed_hosts: "localhost:9300"
      cluster.initial_master_nodes: "graylog"
    oss_version: True
    es_action_auto_create_index: False

    #Graylog vars
    graylog_version: 4.2
    graylog_install_java: True
    graylog_password_secret: "" # Insert your own here. Generate with: pwgen -s 96 1
    graylog_root_password_sha2: "" # Insert your own root_password_sha2 here.
    graylog_http_bind_address: "{{ ansible_default_ipv4.address }}:9000"
    graylog_http_publish_uri: "http://{{ ansible_default_ipv4.address }}:9000/"
    graylog_http_external_uri: "http://{{ ansible_default_ipv4.address }}:9000/"

  roles:
    - role: "graylog2.graylog"
      tags:
        - "graylog"

Remember to generate a unique password_secret and root_password_sha2 for your instance.

To generate password_secret:

pwgen -s 96 1

To generate root_password_sha2:

  echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | sha256sum | cut -d" " -f1
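
The interactive one-liner above can be awkward in automation. A non-interactive sketch is shown below with the throwaway password "admin"; substitute your real password and keep it out of shell history:

```shell
# Compute root_password_sha2 non-interactively.
# printf avoids the trailing newline that echo would append to the password.
printf '%s' 'admin' | sha256sum | cut -d' ' -f1
# -> 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
```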

Example Playbook - Cluster

Here is an example that deploys a Graylog cluster, like the one mentioned on the architecture page of our documentation.

In our Ansible hosts file, we have 3 instances for the Elasticsearch cluster and 3 instances for the Graylog cluster:

[elasticsearch]
elasticsearch01
elasticsearch02
elasticsearch03

[graylog]
graylog01
graylog02
graylog03

First, we deploy the Elasticsearch cluster. Note that this doesn't configure authentication or HTTPS. For a production instance, you would likely want that.

- hosts: "elasticsearch"
  vars:
    es_major_version: "7.x"
    es_version: "7.10.2"
    es_enable_xpack: False
    es_instance_name: "graylog"
    es_heap_size: "1g"
    es_config:
      node.name: "{{ ansible_hostname }}"
      cluster.name: "graylog"
      http.port: 9200
      transport.port: 9300
      network.host: "0.0.0.0"
      discovery.seed_hosts: "elasticsearch01:9300, elasticsearch02:9300, elasticsearch03:9300"
      cluster.initial_master_nodes: "elasticsearch01, elasticsearch02, elasticsearch03"
    oss_version: True
    es_action_auto_create_index: False

  roles:
    - role: "elastic.elasticsearch"

Next, we'll deploy three MongoDB instances and configure them as a Replica Set. This is done with the MongoDB community collection.

These MongoDB instances will live on the Graylog servers, as they are not expected to consume many resources.

Again, this doesn't configure authentication in MongoDB. You may want that for a production cluster.

- hosts: "graylog"
  vars:
    mongodb_version: "4.4"
    bind_ip: "0.0.0.0"
    repl_set_name: "rs0"
    authorization: "disabled"
  roles:
    - community.mongodb.mongodb_repository
    - community.mongodb.mongodb_mongod
  tasks:
    - name: "Start MongoDB"
      service:
        name: "mongod"
        state: "started"
        enabled: "yes"

- hosts: "graylog01"
  tasks:
    - name: "Install PyMongo"
      apt:
        update_cache: yes
        name: "python3-pymongo"
        state: "latest"
    - name: Configure replicaset
      community.mongodb.mongodb_replicaset:
        login_host: "localhost"
        replica_set: "rs0"
        members:
        - graylog01
        - graylog02
        - graylog03
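
On the earlier note about authentication: if you enable it for a production cluster, the community.mongodb collection also provides a user module. The following is a hedged sketch only (user name and password are placeholders; adapt to your secrets management):

```yaml
# Sketch: flip authorization on and create an admin user.
# Illustrative only, not part of the role's documented workflow.
- hosts: "graylog01"
  vars:
    authorization: "enabled"
  tasks:
    - name: "Create MongoDB admin user"
      community.mongodb.mongodb_user:
        login_host: "localhost"
        database: "admin"
        name: "admin"
        password: "changeme"   # placeholder - use a vaulted secret
        roles: "root"
```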

Finally, we install Graylog.

- hosts: "graylog"
  vars:
    graylog_is_master: "{{ ansible_hostname == 'graylog01' }}"
    graylog_version: 4.2
    graylog_install_java: False
    graylog_install_elasticsearch: False
    graylog_install_mongodb: False
    graylog_password_secret: "" # Insert your own here. Generate with: pwgen -s 96 1
    graylog_root_password_sha2: "" # Insert your own root_password_sha2 here.
    graylog_http_bind_address: "{{ ansible_default_ipv4.address }}:9000"
    graylog_http_publish_uri: "http://{{ ansible_default_ipv4.address }}:9000/"
    graylog_http_external_uri: "http://{{ ansible_default_ipv4.address }}:9000/"
    graylog_elasticsearch_hosts: "http://elasticsearch01:9200,http://elasticsearch02:9200,http://elasticsearch03:9200"
    graylog_mongodb_uri: "mongodb://graylog01:27017,graylog02:27017,graylog03:27017/graylog"

  roles:
    - role: "graylog2.graylog"

We set graylog_install_elasticsearch: False and graylog_install_mongodb: False so the Graylog role doesn't try to install Elasticsearch and MongoDB. Those flags are intended for single-instance installs.

The full example can be seen here. Our documentation has more in-depth advice on configuring a multi-node Graylog setup.

Role Variables

A list of all available role variables is documented here.

Testing

We run smoke tests for Graylog using this role. Documentation on that can be found here.

Author Information

Author: Marius Sturm ([email protected]) and contributors

License

Apache 2.0


graylog-ansible-role's Issues

'dict object' has no attribute 'ipv4'

Hi,

It looks like one variable is missing:

TASK: [f500.elasticsearch | write elasticsearch.yml] **************************
fatal: [logs-lab-01-ber] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'dict object' has no attribute 'ipv4'", 'failed': True}
fatal: [logs-lab-01-ber] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'dict object' has no attribute 'ipv4'", 'failed': True}

I followed the usage example given in the README file. Am I missing something?

OS: Debian Jessie 8.2

Thank you

interactive auth needed for ES reload handler

Running the long example gives me the output below against an Ubuntu 16.04 server. Is there anything I did wrong?

RUNNING HANDLER [elastic.elasticsearch : reload systemd configuration] *********12:49:58
541
fatal: [node01]: FAILED! => {"changed": true, "cmd": ["systemctl", "daemon-reload"], "delta": "0:00:00.007948", "end": "2018-01-31 11:49:59.168730", "msg": "non-zero return code", "rc": 1, "start": "2018-01-31 11:49:59.160782", "stderr": "Failed to execute operation: Interactive authentication required.", "stderr_lines": ["Failed to execute operation: Interactive authentication required."], "stdout": "", "stdout_lines": []}
542

Bug: Elasticsearch config in README is invalid

The variables provided in the README for the ES role/playbook are invalid.

These variables are used by the Elasticsearch role to build the elasticsearch.yml file; however, the Graylog playbook creates its own file in its own directory. Why is that?

As a result I cannot change es_heap_size (or anything else) the way the README describes, as it will not end up in the main config created by the Graylog playbook.
There are also now two elasticsearch.yml configs: one created by Graylog in /etc/graylog/server/elasticsearch.yml and one in /etc/elasticsearch/<NODENAME>/elasticsearch.yml.

So basically this task is quite useless and confusing.

In order to make it work properly I had to:

  1. Change graylog_elasticsearch_config_file to the path used by the Elasticsearch playbook.
  2. That alone was not enough due to permissions, so I also had to set es_user and es_group to graylog so that Graylog can access the Elasticsearch config.

I propose adding graylog_elasticsearch_config_file, es_group and es_user to the README with an explanation of how to use them, as well as removing or changing that task.
Otherwise it's impossible to understand or configure properly, because the README examples contain ES config options that will not be used in the end result.

Keep getting failed to send ping

I followed the instructions for the full playbook.
I can log in to the application; however, I keep getting the following error:

[graylog-server] failed to send ping to [[#zen_unicast_1#][localhost][inet[/172.20.35.169:9300]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
        at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:178)
        at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:130)
        at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
        at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
        at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)

I also tried on 127.0.0.1 with the same error

and I do have the latest Elasticsearch:

root@ip-172-20-35-169:/home/ubuntu# curl -XGET 'localhost:9200'
{
  "name" : "Sharon Friedlander",
  "cluster_name" : "graylog2",
  "version" : {
    "number" : "2.3.3",
    "build_hash" : "218bdf10790eef486ff2c41a3df5cfa32dadcfde",
    "build_timestamp" : "2016-05-17T15:40:04Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

I am at a loss; please help.

2.0.0 Incompatibilities

Unfortunately this breaks in 2.0.0 now due to a mis-formatted meta manifest in the mongodb dependency.

Pipeline rules to_map() function still alive?

Hi,
I just wanted to handle some fields containing JSON-formatted values as described in the following test:
https://github.com/Graylog2/graylog2-server/blob/master/graylog2-server/src/test/resources/org/graylog/plugins/pipelineprocessor/functions/json.txt

But the current stable version of Graylog, v2.5, prints an error in the rule editor saying that the function to_map() does not exist.
If that's the case, wouldn't it be better to delete those tests?

Understanding comment "use 2.x" graylog3 will be compatible and es_major_version 5.x

The README.md comment and ES version variables in

    # Graylog2 is not compatible with elasticsearch 5.x, so ensure to use 2.x (graylog3 will be compatible)
    # Also use version 0.2 of elastic.elasticsearch (ansible role), because vars are different
    es_major_version: "5.x"

from https://github.com/Graylog2/graylog-ansible-role/blob/2.3.0/README.md confuses me but makes sense in https://github.com/Graylog2/graylog-ansible-role/blob/2.2.3/README.md where it says:

    # Graylog2 is not compatible with elasticsearch 5.x, so ensure to use 2.x (graylog3 will be compatible)
    # Also use version 0.2 of elastic.elasticsearch (ansible role), because vars are different
    es_major_version: "2.x"
    es_version: "2.4.3"

Is Graylog3 actually Graylog2.3? I think even Graylog2.3 is still referred to commonly as Graylog2, and no mention of Graylog3, proper, is out.

Support for OpenJDK 8

I am willing to add support for OpenJDK 8 (which comes with Ubuntu 16.04), next to the Oracle Java 8 support.

I'd like to know, as I don't want to unnecessarily waste my time... ;)

Error graylog2 with elasticsearch

Hi,

I installed graylog2 from the playbook example, but it fails with an error:

TASK [elastic.elasticsearch : Install templates with auth] ***********************************************************************************************************************
skipping: [localhost] => (item=/etc/ansible/roles/elastic.elasticsearch/files/templates/basic.json)
ERROR! No file was found when using with_first_found. Use the 'skip: true' option to allow this task to be skipped if no files are found

My playbook:
- hosts: server
  become: True
  vars:
    es_major_version: "5.x"
    es_instance_name: 'graylog'
    es_scripts: False
    es_templates: False
    es_version_lock: False
    es_heap_size: 1g
    es_config: {
      node.name: "graylog",
      cluster.name: "graylog",
      http.port: 9200,
      transport.tcp.port: 9300,
      network.host: 0.0.0.0,
      node.data: true,
      node.master: true,
    }

    graylog_java_install: False

    graylog_install_mongodb: True

    graylog_web_endpoint_uri: 'http://localhost:9000/api/'

    nginx_sites:
      graylog:
        - listen 80
        - server_name "{{ansible_host}}"
        - location / {
          proxy_pass http://localhost:9000/;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_pass_request_headers on;
          proxy_connect_timeout 150;
          proxy_send_timeout 100;
          proxy_read_timeout 100;
          proxy_buffers 4 32k;
          client_max_body_size 8m;
          client_body_buffer_size 128k; }

Error when running the ansible Galaxy role

When installing with ansible-galaxy install Graylog2.graylog-ansible-role, I'm getting the following error when attempting to use the role:

The error appears to have been in '/private/etc/ansible/roles/Graylog2.graylog-ansible-role/meta/main.yml': line 4, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  - { role: lesmyrmidons.mongodb, when: graylog_install_mongodb }
  - { role: ansible-elasticsearch, when: graylog_install_elasticsearch }
    ^ here

It seems that the version published on Ansible Galaxy is missing the single quotes that would fix this bug, which are already present here.

Could you update the published version? :)

Old value batch size

The batch size in the defaults of the role is an old value:

graylog_elasticsearch_output_batch_size: 25

In the default configuration is it:

output_batch_size = 500

No package matching 'mongodb_packages' is available

Hello.

Could you please help me. How can I fix it?

TASK [lesmyrmidons.mongodb : Debian | Install Packages] ************************
failed: [server] (item=[u'mongodb_packages']) => {"failed": true, "item": ["mongodb_packages"], "msg": "No package matching 'mongodb_packages' is available"}

Make web_endpoint_uri configurable

See the documentation here.

This option is needed when using a reverse proxy and mapping the backend to a subdomain, e.g. using example.com/api instead of example.com:12900.

The graylog_rest_transport_uri variable is not sufficient to accomplish this.

graylog-server.service not enabled at boot

Can this feature be added? Or is there a reason why it is not in this role?

The only code that refers to 'boot' in this role is at tasks/server.yml

- name: Graylog server should start after reboot
  file:
    path: '/etc/init/graylog-server.override'
    state: absent

It seems outdated; on a systemd-based OS, the correct block should be the following:

- name: enabling graylog-server at boot
  service:
    name: graylog-server.service
    enabled: yes

Thanks.

Nginx version in requirements.yml

The Nginx version in requirements.yml is too old for newer Ansible versions and needs to be updated.

ERROR! 'always_run' is not a valid attribute for a Handler

The error appears to have been in '/home/username/ansible/roles/jdauphant.nginx/handlers/main.yml': line 17, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  • name: check nginx configuration
    ^ here

This error can be suppressed as a warning using the "invalid_task_attribute_failed" configuration

This role doesn't work for production deploys

I've noticed this isn't a production-ready deployment of Graylog:

  • There are assumptions in the role that software runs on localhost (I have a PR to fix that for one part, but others like mongodb still assume localhost).
  • Even ignoring the localhost issues, the deploy doesn't work multi-node.
    I've followed this: http://docs.graylog.org/en/2.4/pages/configuration/web_interface.html#using-a-layer-3-load-balancer-forwarding-tcp-ports. But the only way I can reach a page after authenticating on the Graylog web interface is to put 2 of my 3 servers into maintenance mode in my load balancer. I suspect the authentication information is not shared between backends.

node-id

Hello.

After installation using the Ansible role, the file /etc/graylog/server/node-id is missing and Graylog can't start up.

Ubuntu Xenial: ElasticSearch install fails when es_version isn't defined

Hi,
I'm using the quickstart example playbook, and the Elasticsearch install fails when es_version isn't defined.

TASK [elastic.elasticsearch : Debian - Ensure elasticsearch is installed] *********************************************************************************************************
task path: /xxx/xxx/ansible/playbooks/roles/elastic.elasticsearch/tasks/elasticsearch-Debian.yml:30
fatal: [xxx.xxxx.com]: FAILED! => {"cache_update_time": 1518618255, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\"     install 'elasticsearch=6.1.3'' failed: E: Version '6.1.3' for 'elasticsearch' was not found\n", "rc": 100, "stderr": "E: Version '6.1.3' for 'elasticsearch' was not found\n", "stderr_lines": ["E: Version '6.1.3' for 'elasticsearch' was not found"], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information..."]}

Adding the es_version variable fixes the issue.

Ubuntu - No Graylog servers available. Cannot log in.

Hi, I can't get this role to work on an Ubuntu machine. I use the Quickstart and the More detailed example configuration from the README.

At first, I was getting an error in MongoDB installation, but I fixed it (see my pull request here: lesmyrmidons/ansible-role-mongodb#2).

Now, I have no error when running my playbook, but when I want to log in to Graylog web interface, I have this following error:
No Graylog servers available. Cannot log in.

When I click Check server connections:
The web interface was unable to connect to any Graylog node in the cluster so far.
Please check that the configured nodes shown on the left hand side are correct and that the servers are reachable.

In Configured nodes:
http://127.0.0.1:12900
Never connected!

Thanks for your help.

Issues upgrading from 2.0.3 to version 2.1.0

I'm trying to upgrade from version 2.0.3 to version 2.1.0:

My 2.0.3 install was done with the Graylog2.graylog-ansible-role and the following config:

- name: set up graylog server
  hosts: graylog
  become: true

  vars:
    elasticsearch_timezone: 'America/New_York'
    elasticsearch_version: '2.x'
    elasticsearch_cluster_name: 'graylog'
    elasticsearch_network_host: '0.0.0.0'
    elasticsearch_gateway_type: ''
    elasticsearch_gateway_expected_nodes: 1
    graylog_install_elastocsearch: yes
    graylog_rest_transport_uri: http://0.0.0.0:12900/
    graylog_password_secret: <removed>
    graylog_root_password_sha2: <removed>
    nginx_sites:
      default:
        - listen 80
        - server_name logs
        - location / {
          proxy_pass http://localhost:9000/;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_pass_request_headers on;
          proxy_connect_timeout 150;
          proxy_send_timeout 100;
          proxy_read_timeout 100;
          proxy_buffers 4 32k;
          client_max_body_size 8m;
          client_body_buffer_size 128k; }
  roles:
    - Graylog2.graylog-ansible-role

I am then upgrading Graylog2.graylog-ansible-role to version 2.1.0 and removing graylog_rest_transport_uri and adding graylog_web_endpoint_uri like so:

- name: set up graylog server
  hosts: graylog
  become: true

  vars:
    elasticsearch_timezone: 'America/New_York'
    elasticsearch_version: '2.x'
    elasticsearch_cluster_name: 'graylog'
    elasticsearch_network_host: '0.0.0.0'
    elasticsearch_gateway_type: ''
    elasticsearch_gateway_expected_nodes: 1
    graylog_install_elastocsearch: yes
    graylog_web_endpoint_uri: http://127.0.0.1:9000/api/
    graylog_password_secret: <removed>
    graylog_root_password_sha2: <removed>
    nginx_sites:
      default:
        - listen 80
        - server_name logs
        - location / {
          proxy_pass http://localhost:9000/;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_pass_request_headers on;
          proxy_connect_timeout 150;
          proxy_send_timeout 100;
          proxy_read_timeout 100;
          proxy_buffers 4 32k;
          client_max_body_size 8m;
          client_body_buffer_size 128k; }
  roles:
    - Graylog2.graylog-ansible-role

I get the following errors after running the playbook, which indicate an issue with starting Grizzly and the port already being in use:

2016-09-15T15:48:07.215Z INFO  [node] [graylog-server] starting ...
2016-09-15T15:48:07.223Z INFO  [Periodicals] Starting [org.graylog2.periodical.AlertScannerThread] periodical in [10s], polling every [60s].
2016-09-15T15:48:07.224Z INFO  [Periodicals] Starting [org.graylog2.periodical.BatchedElasticSearchOutputFlushThread] periodical in [0s], polling every [1s].
2016-09-15T15:48:07.224Z INFO  [Periodicals] Starting [org.graylog2.periodical.ClusterHealthCheckThread] periodical in [0s], polling every [20s].
2016-09-15T15:48:07.225Z INFO  [Periodicals] Starting [org.graylog2.periodical.ContentPackLoaderPeriodical] periodical, running forever.
2016-09-15T15:48:07.226Z INFO  [Periodicals] Starting [org.graylog2.periodical.GarbageCollectionWarningThread] periodical, running forever.
2016-09-15T15:48:07.227Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexerClusterCheckerThread] periodical in [0s], polling every [30s].
2016-09-15T15:48:07.228Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRetentionThread] periodical in [0s], polling every [300s].
2016-09-15T15:48:07.229Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRotationThread] periodical in [0s], polling every [10s].
2016-09-15T15:48:07.229Z INFO  [Periodicals] Starting [org.graylog2.periodical.NodePingThread] periodical in [0s], polling every [1s].
2016-09-15T15:48:07.229Z INFO  [Periodicals] Starting [org.graylog2.periodical.VersionCheckThread] periodical in [300s], polling every [1800s].
2016-09-15T15:48:07.230Z INFO  [Periodicals] Starting [org.graylog2.periodical.ThrottleStateUpdaterThread] periodical in [1s], polling every [1s].
2016-09-15T15:48:07.230Z INFO  [Periodicals] Starting [org.graylog2.events.ClusterEventPeriodical] periodical in [0s], polling every [1s].
2016-09-15T15:48:07.233Z INFO  [Periodicals] Starting [org.graylog2.events.ClusterEventCleanupPeriodical] periodical in [0s], polling every [300s].
2016-09-15T15:48:07.233Z INFO  [Periodicals] Starting [org.graylog2.periodical.ClusterIdGeneratorPeriodical] periodical, running forever.
2016-09-15T15:48:07.234Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRangesMigrationPeriodical] periodical, running forever.
2016-09-15T15:48:07.235Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRangesCleanupPeriodical] periodical in [15s], polling every [3600s].
2016-09-15T15:48:07.245Z INFO  [IndexerClusterCheckerThread] Indexer not fully initialized yet. Skipping periodic cluster check.
2016-09-15T15:48:07.258Z INFO  [connection] Opened connection [connectionId{localValue:7, serverValue:66}] to 127.0.0.1:27017
2016-09-15T15:48:07.261Z INFO  [connection] Opened connection [connectionId{localValue:8, serverValue:67}] to 127.0.0.1:27017
2016-09-15T15:48:07.264Z INFO  [connection] Opened connection [connectionId{localValue:4, serverValue:64}] to 127.0.0.1:27017
2016-09-15T15:48:07.265Z ERROR [ContentPackLoaderPeriodical] Couldn't list content packs
java.nio.file.NoSuchFileException: data/contentpacks
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) ~[?:1.8.0_101]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_101]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:1.8.0_101]
    at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427) ~[?:1.8.0_101]
    at java.nio.file.Files.newDirectoryStream(Files.java:525) ~[?:1.8.0_101]
    at org.graylog2.periodical.ContentPackLoaderPeriodical.getFiles(ContentPackLoaderPeriodical.java:214) [graylog.jar:?]
    at org.graylog2.periodical.ContentPackLoaderPeriodical.doRun(ContentPackLoaderPeriodical.java:121) [graylog.jar:?]
    at org.graylog2.plugin.periodical.Periodical.run(Periodical.java:77) [graylog.jar:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
2016-09-15T15:48:07.278Z INFO  [IndexRetentionThread] Elasticsearch cluster not available, skipping index retention checks.
2016-09-15T15:48:07.285Z INFO  [connection] Opened connection [connectionId{localValue:6, serverValue:65}] to 127.0.0.1:27017
2016-09-15T15:48:07.286Z INFO  [connection] Opened connection [connectionId{localValue:5, serverValue:68}] to 127.0.0.1:27017
2016-09-15T15:48:07.351Z INFO  [PeriodicalsService] Not starting [org.graylog2.periodical.UserPermissionMigrationPeriodical] periodical. Not configured to run on this node.
2016-09-15T15:48:07.351Z INFO  [Periodicals] Starting [org.graylog2.periodical.AlarmCallbacksMigrationPeriodical] periodical, running forever.
2016-09-15T15:48:07.356Z INFO  [Periodicals] Starting [org.graylog2.periodical.ConfigurationManagementPeriodical] periodical, running forever.
2016-09-15T15:48:07.365Z INFO  [Periodicals] Starting [org.graylog2.periodical.LdapGroupMappingMigration] periodical, running forever.
2016-09-15T15:48:07.371Z INFO  [Periodicals] Starting [org.graylog.plugins.usagestatistics.UsageStatsNodePeriodical] periodical in [300s], polling every [21600s].
2016-09-15T15:48:07.371Z INFO  [Periodicals] Starting [org.graylog.plugins.usagestatistics.UsageStatsClusterPeriodical] periodical in [300s], polling every [21600s].
2016-09-15T15:48:07.371Z INFO  [Periodicals] Starting [org.graylog.plugins.collector.periodical.PurgeExpiredCollectorsThread] periodical in [0s], polling every [3600s].
2016-09-15T15:48:07.555Z INFO  [transport] [graylog-server] publish_address {127.0.0.1:9350}, bound_addresses {127.0.0.1:9350}
2016-09-15T15:48:07.563Z INFO  [discovery] [graylog-server] graylog/bG2gqKimQu6vQ9eEPa-DKw
2016-09-15T15:48:07.934Z INFO  [AbstractJerseyService] Enabling CORS for HTTP endpoint
2016-09-15T15:48:10.570Z WARN  [discovery] [graylog-server] waited for 3s and no initial state was set by the discovery
2016-09-15T15:48:10.570Z INFO  [node] [graylog-server] started
2016-09-15T15:48:10.696Z INFO  [service] [graylog-server] detected_master {Grenade}{u81OqBeoQJeAHgX77XJl_Q}{10.0.2.15}{10.0.2.15:9300}{master=true}, added {{Grenade}{u81OqBeoQJeAHgX77XJl_Q}{10.0.2.15}{10.0.2.15:9300}{master=true},}, reason: zen-disco-receive(from master [{Grenade}{u81OqBeoQJeAHgX77XJl_Q}{10.0.2.15}{10.0.2.15:9300}{master=true}])
2016-09-15T15:48:12.548Z INFO  [NetworkListener] Started listener bound to [0.0.0.0:9000]
2016-09-15T15:48:12.549Z INFO  [HttpServer] [HttpServer] Started.
2016-09-15T15:48:12.550Z INFO  [WebInterfaceService] Started Web Interface at <http://0.0.0.0:9000/>
2016-09-15T15:48:17.101Z ERROR [ServiceManager] Service RestApiService [FAILED] has failed in the STARTING state.
javax.ws.rs.ProcessingException: Failed to start Grizzly HTTP server: Address already in use
    at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory.createHttpServer(GrizzlyHttpServerFactory.java:299) ~[graylog.jar:?]
    at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory.createHttpServer(GrizzlyHttpServerFactory.java:163) ~[graylog.jar:?]
    at org.graylog2.shared.initializers.AbstractJerseyService.setUp(AbstractJerseyService.java:160) ~[graylog.jar:?]
    at org.graylog2.shared.initializers.RestApiService.startUp(RestApiService.java:65) ~[graylog.jar:?]
    at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:60) [graylog.jar:?]
    at com.google.common.util.concurrent.Callables$3.run(Callables.java:100) [graylog.jar:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method) ~[?:1.8.0_101]
    at sun.nio.ch.Net.bind(Net.java:433) ~[?:1.8.0_101]
    at sun.nio.ch.Net.bind(Net.java:425) ~[?:1.8.0_101]
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[?:1.8.0_101]
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) ~[?:1.8.0_101]
    at org.glassfish.grizzly.nio.transport.TCPNIOBindingHandler.bindToChannelAndAddress(TCPNIOBindingHandler.java:131) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.transport.TCPNIOBindingHandler.bind(TCPNIOBindingHandler.java:88) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.transport.TCPNIOTransport.bind(TCPNIOTransport.java:248) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.transport.TCPNIOTransport.bind(TCPNIOTransport.java:228) ~[graylog.jar:?]
    at org.glassfish.grizzly.nio.transport.TCPNIOTransport.bind(TCPNIOTransport.java:219) ~[graylog.jar:?]
    at org.glassfish.grizzly.http.server.NetworkListener.start(NetworkListener.java:714) ~[graylog.jar:?]
    at org.glassfish.grizzly.http.server.HttpServer.start(HttpServer.java:278) ~[graylog.jar:?]
    at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory.createHttpServer(GrizzlyHttpServerFactory.java:296) ~[graylog.jar:?]
    ... 6 more
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog2.periodical.AlertScannerThread].
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog2.periodical.AlertScannerThread] complete, took <0ms>.
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog2.periodical.BatchedElasticSearchOutputFlushThread].
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog2.periodical.BatchedElasticSearchOutputFlushThread] complete, took <0ms>.
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog2.periodical.ClusterHealthCheckThread].
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog2.periodical.ClusterHealthCheckThread] complete, took <0ms>.
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog2.periodical.IndexerClusterCheckerThread].
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog2.periodical.IndexerClusterCheckerThread] complete, took <0ms>.
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog2.periodical.IndexRetentionThread].
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog2.periodical.IndexRetentionThread] complete, took <0ms>.
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog2.periodical.IndexRotationThread].
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog2.periodical.IndexRotationThread] complete, took <0ms>.
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog2.periodical.VersionCheckThread].
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog2.periodical.VersionCheckThread] complete, took <0ms>.
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog2.periodical.ThrottleStateUpdaterThread].
2016-09-15T15:48:17.121Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog2.periodical.ThrottleStateUpdaterThread] complete, took <0ms>.
2016-09-15T15:48:17.122Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog2.events.ClusterEventPeriodical].
2016-09-15T15:48:17.123Z INFO  [LogManager] Shutting down.
2016-09-15T15:48:17.126Z INFO  [JournalReader] Stopping.
2016-09-15T15:48:17.127Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog2.events.ClusterEventPeriodical] complete, took <0ms>.
2016-09-15T15:48:17.127Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog2.events.ClusterEventCleanupPeriodical].
2016-09-15T15:48:17.127Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog2.events.ClusterEventCleanupPeriodical] complete, took <0ms>.
2016-09-15T15:48:17.127Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog2.periodical.IndexRangesCleanupPeriodical].
2016-09-15T15:48:17.127Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog2.periodical.IndexRangesCleanupPeriodical] complete, took <0ms>.
2016-09-15T15:48:17.127Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog.plugins.usagestatistics.UsageStatsNodePeriodical].
2016-09-15T15:48:17.127Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog.plugins.usagestatistics.UsageStatsNodePeriodical] complete, took <0ms>.
2016-09-15T15:48:17.127Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog.plugins.usagestatistics.UsageStatsClusterPeriodical].
2016-09-15T15:48:17.127Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog.plugins.usagestatistics.UsageStatsClusterPeriodical] complete, took <0ms>.
2016-09-15T15:48:17.127Z INFO  [PeriodicalsService] Shutting down periodical [org.graylog.plugins.collector.periodical.PurgeExpiredCollectorsThread].
2016-09-15T15:48:17.127Z INFO  [PeriodicalsService] Shutdown of periodical [org.graylog.plugins.collector.periodical.PurgeExpiredCollectorsThread] complete, took <0ms>.
2016-09-15T15:48:17.132Z ERROR [InputSetupService] Not starting any inputs because lifecycle is: Uninitialized?[LB:DEAD]
2016-09-15T15:48:17.133Z INFO  [WebInterfaceService] Shutting down Web Interface at <http://0.0.0.0:9000/>
2016-09-15T15:48:17.137Z INFO  [Buffers] Waiting until all buffers are empty.
2016-09-15T15:48:17.138Z INFO  [Buffers] All buffers are empty. Continuing.
2016-09-15T15:48:17.142Z INFO  [node] [graylog-server] stopping ...
2016-09-15T15:48:17.142Z INFO  [OutputSetupService] Stopping output org.graylog2.outputs.BlockingBatchedESOutput
2016-09-15T15:48:17.176Z INFO  [node] [graylog-server] stopped
2016-09-15T15:48:17.176Z INFO  [node] [graylog-server] closing ...
2016-09-15T15:48:17.179Z INFO  [node] [graylog-server] closed
2016-09-15T15:48:17.203Z INFO  [LogManager] Shutdown complete.
2016-09-15T15:48:17.223Z INFO  [NetworkListener] Stopped listener bound to [0.0.0.0:9000]
2016-09-15T15:48:17.224Z ERROR [ServerBootstrap] Graylog startup failed. Exiting. Exception was:
java.lang.IllegalStateException: Expected to be healthy after starting. The following services are not running: {FAILED=[RestApiService [FAILED]]}
    at com.google.common.util.concurrent.ServiceManager$ServiceManagerState.checkHealthy(ServiceManager.java:713) ~[graylog.jar:?]
    at com.google.common.util.concurrent.ServiceManager$ServiceManagerState.awaitHealthy(ServiceManager.java:542) ~[graylog.jar:?]
    at com.google.common.util.concurrent.ServiceManager.awaitHealthy(ServiceManager.java:299) ~[graylog.jar:?]
    at org.graylog2.bootstrap.ServerBootstrap.startCommand(ServerBootstrap.java:129) [graylog.jar:?]
    at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:209) [graylog.jar:?]
    at org.graylog2.bootstrap.Main.main(Main.java:44) [graylog.jar:?]
2016-09-15T15:48:17.224Z INFO  [ServiceManagerListener] Services are now stopped.
2016-09-15T15:48:17.242Z INFO  [Server] SIGNAL received. Shutting down.
2016-09-15T15:48:17.250Z INFO  [GracefulShutdown] Graceful shutdown initiated.
2016-09-15T15:48:17.251Z WARN  [DeadEventLoggingListener] Received unhandled event of type <org.graylog2.plugin.lifecycles.Lifecycle> from event bus <AsyncEventBus{graylog-eventbus}>
2016-09-15T15:48:17.251Z INFO  [GracefulShutdown] Node status: [Halting?[LB:DEAD]]. Waiting <3sec> for possible load balancers to recognize state change.
2016-09-15T15:48:23.829Z INFO  [CmdLineTool] Loaded plugin: Collector 1.0.3 [org.graylog.plugins.collector.CollectorPlugin]
2016-09-15T15:48:23.829Z INFO  [CmdLineTool] Loaded plugin: Enterprise Integration Plugin 1.0.3 [org.graylog.plugins.enterprise_integration.EnterpriseIntegrationPlugin]
2016-09-15T15:48:23.830Z INFO  [CmdLineTool] Loaded plugin: MapWidgetPlugin 1.0.3 [org.graylog.plugins.map.MapWidgetPlugin]
2016-09-15T15:48:23.830Z INFO  [CmdLineTool] Loaded plugin: Pipeline Processor Plugin 1.0.0-beta.5 [org.graylog.plugins.pipelineprocessor.ProcessorPlugin]
2016-09-15T15:48:23.830Z INFO  [CmdLineTool] Loaded plugin: Anonymous Usage Statistics 2.0.3 [org.graylog.plugins.usagestatistics.UsageStatsPlugin]
2016-09-15T15:48:23.975Z INFO  [CmdLineTool] Running with JVM arguments: -Djava.net.preferIPv4Stack=true -Xms1500m -Xmx1500m -XX:NewRatio=1 -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dgraylog2.installation_source=deb
2016-09-15T15:48:26.920Z INFO  [InputBufferImpl] Message journal is enabled.
2016-09-15T15:48:27.184Z INFO  [LogManager] Loading logs.
2016-09-15T15:48:27.315Z INFO  [LogManager] Logs loading complete.
2016-09-15T15:48:27.316Z INFO  [KafkaJournal] Initialized Kafka based journal at /var/lib/graylog-server/journal

Upgrade to 2.x

Hello,
Is it safe to upgrade to 2.x branch with this ansible role?
Is any user intervention needed when upgrading from my current 1.3.4 installation?

CentOS 7: Issue with MongoDB timing out SystemD

The systemd service file for RHEL contains this line:

"PIDFile=/var/run/mongodb/mongod.pid"

This means mongod.conf must define that PID file path; otherwise systemd times out and fails to start the service.

pidFilePath: "/var/run/mongodb/mongod.pid"
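In the YAML-style mongod.conf, that setting lives under the processManagement section; a minimal fragment (paths may vary by distribution):

```yaml
# /etc/mongod.conf (fragment)
processManagement:
  # must match the PIDFile= path in the systemd unit
  pidFilePath: "/var/run/mongodb/mongod.pid"
```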

Error when (disabled) dependencies are not installed

I get an error that the dependent roles (Elasticsearch, MongoDB, nginx) are not installed, even when I disable those features using:

  roles:
  - role: Graylog2.graylog-ansible-role
    graylog_install_mongodb: false
    graylog_install_elasticsearch: false
    graylog_install_nginx: false

I would expect that disabling them would mean the dependent roles do not have to be installed. Is that not the case?

Gzip defaults are wrong / mixed up

Here, in the server configuration template:

# Enable/disable GZIP support for the web interface. This compresses HTTP responses and therefore helps to reduce
# overall round trip times. This is enabled by default. Uncomment the next line to disable it.
web_enable_gzip = {{ graylog_enable_gzip }}

The comment says that web_enable_gzip is enabled by default.

And here:

## Enable GZIP support for REST api. This compresses API responses and therefore helps to reduce
## overall round trip times. This is disabled by default. Uncomment the next line to enable it.
rest_enable_gzip = {{ graylog_rest_enable_gzip }}

The comment says that rest_enable_gzip is disabled by default.

But looking in https://github.com/Graylog2/graylog-ansible-role/blob/f1fc74b7a826d8c74efe669ff0d28e301d656e26/defaults/main.yml
we can clearly see that it's actually the other way around: graylog_rest_enable_gzip defaults to true, and graylog_enable_gzip defaults to false.

I don't know whether the comments or the code reflect the intended behavior, so I can't really attach a PR.

Embedded or external ElasticSearch? Embedded or external webserver?

Hello! It looks like this playbook configures embedded Elasticsearch, but at the same time installs external Elasticsearch as a dependency. Is that correct?

Also, I've been using Graylog without any web server (it has an embedded one, I believe). Are there any benefits to using nginx instead?

templates/graylog.web.default.j2 variable names

Hi, and first of all thanks for this awesome role. It has helped me a lot.
I just ran into an issue while trying to configure HTTPS for the web interface: changes to the config had no effect. It turns out that all variable names in templates/graylog.web.default.j2 are prefixed with GRAYLOG2. The init script /etc/init/graylog-web.conf of the web interface looks like this:

description "Graylog web"
author "TORCH GmbH <[email protected]>"

start on runlevel [2345]
stop on runlevel [!2345]

respawn
respawn limit 10 5

setuid graylog-web
setgid graylog-web
console log

script
  GRAYLOG_WEB_HTTP_ADDRESS="0.0.0.0"
  GRAYLOG_WEB_HTTP_PORT="9000"

  test -f /etc/default/graylog-web && . /etc/default/graylog-web

  export JAVA_OPTS="$GRAYLOG_WEB_JAVA_OPTS"

  exec $GRAYLOG_COMMAND_WRAPPER /usr/share/graylog-web/bin/graylog-web-interface \
    -Dlogger.file=/etc/graylog/web/logback.xml \
    -Dconfig.file=/etc/graylog/web/web.conf \
    -Dpidfile.path=/var/lib/graylog-web/application.pid \
    -Dhttp.address=$GRAYLOG_WEB_HTTP_ADDRESS \
    -Dhttp.port=$GRAYLOG_WEB_HTTP_PORT \
    $GRAYLOG_WEB_ARGS
end script

As you can see, all variables in the init script are prefixed GRAYLOG, without the 2, so the defaults file has no effect.
I'm using graylog-web version 1.0.2-1.
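Until the template is fixed, a workaround is to make sure the names in /etc/default/graylog-web use the GRAYLOG_ prefix the init script actually reads, for example:

```sh
# /etc/default/graylog-web -- the init script sources this file, so the
# variable names must use the GRAYLOG_ prefix, not GRAYLOG2_
GRAYLOG_WEB_HTTP_ADDRESS="0.0.0.0"
GRAYLOG_WEB_HTTP_PORT="9000"
```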

Version 2.3.0: Breaking Change Somewhere

@mariussturm With the latest 2.3.0 release, the update appears to have broken our production system. I'm still debugging exactly what happened, but I wanted to let you know about what appears to be a breaking change in this release.

Updating to new graylog version fails because of stale graylog_repository.deb file

Hello everyone,

we just ran into an edge case that occurs when updating Graylog from the 2.1 repository to the 2.2 repository.

We have been running a test instance of Graylog 2.1.2 for some time, and I wanted to rerun the Ansible role to update the machine.

Inside the setup-Debian.yml file, a task named "Graylog repository package should be downloaded" is called:

- name: Graylog repository package should be downloaded
  get_url:
    url: "{{ graylog_apt_deb_url }}"
    dest: '/tmp/graylog_repository.deb'

The problem we ran into: because the machine had never been rebooted, the destination file still existed, but it was the .deb for the 2.1 repository. get_url therefore did not overwrite it, so the job installed the 2.1 repository again and then tried to install graylog-server=2.2.3-1, which of course fails.

I will shortly send a merge request that fixes this by adding the repository's base version to the downloaded file name.
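A minimal sketch of that fix (the graylog_major_version variable is an assumption; any value that identifies the repository release works):

```yaml
- name: Graylog repository package should be downloaded
  get_url:
    url: "{{ graylog_apt_deb_url }}"
    # include the repository release in the file name so a stale
    # /tmp/graylog_repository.deb from an older version is never reused
    dest: "/tmp/graylog_repository_{{ graylog_major_version }}.deb"
```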

I cannot test whether a similar issue exists on RPM distributions.

Example Playbooks do not include es_api_basic_auth_username and es_api_basic_auth_password

The Elasticsearch role from Ansible Galaxy now requires es_api_basic_auth_username and es_api_basic_auth_password in order to configure Elasticsearch.

The playbook.yml example needs to be updated to include these vars.
Example:

- hosts: "server"
  become: True
  vars:
    #Graylog is compatible with elasticsearch 5.x since version 2.3.0, so ensure to use the right combination for your installation
    #Also use the right branch of the Elasticsearch Ansible role, master supports 5.x.
    es_major_version: "5.x"
    es_version: "5.6.7"
    # Install Elasticsearch via repository or direct package download
    #es_use_repository: False
    #es_custom_package_url: "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.7.rpm"
    es_instance_name: "graylog"
    es_scripts: False
    es_templates: False
    es_version_lock: False
    es_heap_size: "1g"
    es_api_basic_auth_username: graylog
    es_api_basic_auth_password: somepassword
    es_config:
      node.name: "graylog"
      cluster.name: "graylog"
      http.port: 9200
      transport.tcp.port: 9300
      network.host: "0.0.0.0"
      node.data: True
      node.master: True

    # Elasticsearch role already installed Java
    graylog_java_install: False

    graylog_install_mongodb: True

    # For Vagrant installations make sure port 9000 is forwarded
    graylog_web_endpoint_uri: "http://localhost:9000/api/"
    # For other setups, use the external IP of the Graylog server
    # graylog_web_endpoint_uri: "http://{{ ansible_host }}:9000/api/"

    nginx_sites:
      graylog:
        - "listen 80"
        - "server_name graylog"
        - |
          location / {
            proxy_pass http://localhost:9000/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass_request_headers on;
            proxy_connect_timeout 150;
            proxy_send_timeout 100;
            proxy_read_timeout 100;
            proxy_buffers 4 32k;
            client_max_body_size 8m;
            client_body_buffer_size 128k;
          }

  roles:
    - role: "Graylog2.graylog-ansible-role"
      tags:
        - "graylog"

Error:

TASK [elastic.elasticsearch : fail when api credentials are not declared when using security] ***
fatal: [default]: FAILED! => {"changed": false, "msg": "Enabling security requires an es_api_basic_auth_username and es_api_basic_auth_password to be provided to allow cluster operations"}

Feature Request: offer choice to install nginx from source

The idea is to compile nginx with ngx_http_proxy_module so that proxy_http_version 1.1 can be used, avoiding this error:

The response to this request is chunked and hence requires HTTP 1.1 to be sent, but this is a HTTP 1.0 request.

This happens when you try to export to CSV from Graylog's web interface.

The choice of modules to compile should be given to the user in the form of a list variable. I'll try to submit a PR for this soon.
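As a possible alternative: stock nginx builds normally already include ngx_http_proxy_module, and the proxy_http_version directive has been available since nginx 1.1.4, so it may be enough to add the directive to the nginx_sites definition this role already uses (a sketch based on the README example):

```yaml
nginx_sites:
  graylog:
    - "listen 80"
    - "server_name graylog"
    - |
      location / {
        # use HTTP/1.1 upstream so chunked responses (e.g. CSV export) work
        proxy_http_version 1.1;
        proxy_pass http://localhost:9000/;
        proxy_set_header Host $host;
      }
```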

Cannot use with Ansible 1.9.4

ERROR: package is not a legal parameter in an Ansible task or handler

vagrant@hostname:/var/www/vhosts/mothership$ ansible --version
ansible 1.9.4
  configured module search path = None

Ansible file:


---
- hosts: all
  roles:
    - { role: graylog2.graylog }

Error: No package matching 'mongodb_packages_dependencies

Hi, I have followed the steps and everything is downloaded into roles, but when running on Vagrant there is a dependency-related issue. Do I need to manually install the dependencies first and then run the playbook?
Please see the error output below:

[vagrant@localhost testGraylog_role]$ ansible-playbook your_playbook.yml -i "127.0.0.1:22,"

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [127.0.0.1]

TASK [lesmyrmidons.mongodb : Include OS-specific variables.] *******************
ok: [127.0.0.1]

TASK [lesmyrmidons.mongodb : RedHat Install dependencies packages] *************
failed: [127.0.0.1] (item=[u'mongodb_packages_dependencies']) => {"changed": false, "failed": true, "item": ["mongodb_packages_dependencies"], "msg": "No package matching 'mongodb_packages_dependencies' found available, installed or updated", "rc": 126, "results": ["No package matching 'mongodb_packages_dependencies' found available, installed or updated"]}
        to retry, use: --limit @/home/vagrant/testGraylog_role/your_playbook.retry

PLAY RECAP *********************************************************************
127.0.0.1                  : ok=2    changed=0    unreachable=0    failed=1

===============================================================
Detailed issue summary:
TASK [lesmyrmidons.mongodb : RedHat | Install dependencies packages] ***********
task path: /home/vagrant/testGraylog_role/roles/lesmyrmidons.mongodb/tasks/setup-RedHat.yml:3
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/packaging/os/yum.py
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1488959605.12-14485318799892 `" && echo ansible-tmp-1488959605.12-14485318799892="` echo ~/.ansible/tmp/ansible-tmp-1488959605.12-14485318799892 `" ) && sleep 0'"'"''
<127.0.0.1> PUT /tmp/tmpUlJk_e TO /home/vagrant/.ansible/tmp/ansible-tmp-1488959605.12-14485318799892/yum.py
<127.0.0.1> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]'
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1488959605.12-14485318799892/ /home/vagrant/.ansible/tmp/ansible-tmp-1488959605.12-14485318799892/yum.py && sleep 0'"'"''
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-pzcgadtwszahupzxbcakaoixfvitdbai; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1488959605.12-14485318799892/yum.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1488959605.12-14485318799892/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
failed: [127.0.0.1] (item=[u'mongodb_packages_dependencies']) => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "conf_file": null,
            "disable_gpg_check": false,
            "disablerepo": null,
            "enablerepo": null,
            "exclude": null,
            "install_repoquery": true,
            "list": null,
            "name": [
                "mongodb_packages_dependencies"
            ],
            "state": "installed",
            "update_cache": false,
            "validate_certs": true
        },
        "module_name": "yum"
    },
    "item": [
        "mongodb_packages_dependencies"
    ],
    "msg": "No package matching 'mongodb_packages_dependencies' found available, installed or updated",
    "rc": 126,
    "results": [
        "No package matching 'mongodb_packages_dependencies' found available, installed or updated"
    ]
}

Java 8 is installed twice

One of the dependencies, F500.elasticsearch, itself depends on geerlingguy.java, so installing Java 8 in tasks/prepare.yml can be avoided (it could be considered a side effect).

Not running after install.

I've run the Ansible playbook and everything completed successfully.

When I go to log into the Graylog server, I get the following:

Error message
Bad request
Original Request
GET http://127.0.0.1:12900/system/sessions
Status code
undefined
Full error message
Error: Request has been terminated Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.

Any ideas?

Ubuntu 14.04.2
UFW is off.

Running on Hyper-v.

Defaults can't be overwritten by vars

Hello,
In tasks/prepare.yml I found

- name: Include distro-based vars
  include_vars: ../defaults/{{ ansible_distribution }}.yml

which overrides role and playbook vars. Defaults have the lowest priority and must not be loaded via include_vars, which has one of the highest. Please fix this.
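One conventional fix (a sketch, not the role's current layout) is to keep only non-overridable OS constants, such as package names, in vars/ files and load those with include_vars, while every user-tunable value stays in defaults/main.yml:

```yaml
# tasks/prepare.yml
# vars/ files hold only OS constants; user-tunable values remain in
# defaults/main.yml, which include_vars would otherwise silently override
- name: Include distro-based vars
  include_vars: "../vars/{{ ansible_distribution }}.yml"
```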

Replace MongoDB dependency

Currently the tests are failing with:

TASK [elastic.elasticsearch : set_fact] ****************************************

fatal: [localhost]: FAILED! => {"failed": true, "msg": "'redhat_elasticsearch_install_from_repo' is undefined"}   

I think this is because of the Ansible version we use for running the tests: the Elasticsearch role needs Ansible >= 2.2.0, while the test suite uses 2.1.

To update the test suite to 2.2, we need a MongoDB role that is compatible with it.

https://github.com/lesmyrmidons/ansible-role-mongodb unfortunately doesn't work. Since a fix is on master but was never pushed to Galaxy, I consider that role unmaintained.

So to get out of this, we need a maintained MongoDB role that supports Ansible 2.2 as well as the Debian/Ubuntu and RedHat/CentOS platforms.

Make installation of Java optional

It'd be nice if the installation of Java could be disabled (as with graylog_install_nginx: false).

In my particular case I'm deploying both Graylog and Icinga2 with Ansible. Graylog installs Oracle Java 8 and Icinga installs OpenJDK 7 (via its dependency on geerlingguy.java), and neither can be disabled.
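For reference, the README's example playbook now shows a graylog_java_install toggle; disabling the role's Java installation would then look like:

```yaml
- hosts: "all"
  become: True
  roles:
    - role: "Graylog2.graylog-ansible-role"
      # Java is already provided elsewhere (e.g. by the Elasticsearch role)
      graylog_java_install: False
```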
