ansibleshipyard / ansible-zookeeper
This project is forked from kpx-dev/ansible-zookeeper
Ansible playbook for ZooKeeper
License: MIT License
Can I push code?
defaults/main.yml
zookeeper_base_url: http://www.us.apache.org/dist/zookeeper/zookeeper
zookeeper_url: "{{ zookeeper_base_url }}-{{zookeeper_version}}/zookeeper-{{zookeeper_version}}.tar.gz"
I was looking into this, and the upstart script was the cause: using su -c means that when the daemon is killed or restarted, only the su process gets killed, but the daemon itself does not. It keeps hanging on port 2181, so the new daemon instance can't run.
The fix I used, which requires installing the daemonize package, was this change to the end of zookeeper.conf.j2:
...
expect daemon
script
exec daemonize -u zookeeper -p /var/run/zookeeper.pid -o /var/log/zookeeper/init-zookeeper.log {{zookeeper_dir}}/bin/zkServer.sh start-foreground
end script
This would also require changing tasks/RedHat.yml to add 'daemonize' to the 'Install OS Packages' list, along the lines of the sketch below.
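A minimal sketch of that change, assuming the task uses the yum module with a with_items list (as the task output quoted elsewhere in these issues suggests):

- name: Install OS Packages
  yum: name={{ item }} state=present
  with_items:
    - libselinux-python
    - daemonize  # required by the daemonize-based upstart script above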
I actually fixed it with my own tasks overriding the defaults.
Main motivation: reduce testing boilerplate and hacks.
Here's my initial work (Travis is not yet updated to use molecule):
https://github.com/AnsibleShipyard/ansible-zookeeper/compare/master...teralytics:molecule_docker_tests?expand=1
PR: planned soon, after resolving the remaining tasks. Running
molecule test
creates the 3 containers, deploys Java and ZooKeeper, and then stops at the ansible-lint warnings mentioned above.
By default ZooKeeper writes its transaction logs (WALs) to dataDir, but this role puts them in /var/log. To reduce confusion, I propose splitting log_dir into log_dir and data_log_dir, and defaulting data_log_dir to /var/lib/zookeeper, roughly as sketched below.
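A sketch of the proposed split; the variable names come from the proposal, dataLogDir is ZooKeeper's config key for transaction logs, and the exact wiring into defaults/main.yml and zoo.cfg.j2 is my assumption:

# defaults/main.yml
log_dir: /var/log/zookeeper
data_log_dir: /var/lib/zookeeper

# zoo.cfg.j2
dataDir={{ data_dir }}
dataLogDir={{ data_log_dir }}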
We need to set up at least v3.4.6, ideally the latest 3.4.8, since on Ubuntu 14.04 there seems to be little motivation to provide a recent apt package; see: https://answers.launchpad.net/ubuntu/+source/zookeeper/+question/289577
Interestingly, an older version of this role supported later ZooKeeper versions by installing from the official binaries (as is still done for the RedHat tasks).
Commit that changed this: a6160aa
Is there a chance to go back to using the official archive? (Or would you accept such a PR?)
Hello!
ansible-galaxy install AnsibleShipyard.ansible-zookeeper
Run the playbook:
- hosts: zookeeper
become: yes
roles:
- role: AnsibleShipyard.ansible-zookeeper
But dependencies are not installed automatically.
Please add automatic installation of dependencies.
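For context, Galaxy roles declare their dependencies in meta/main.yml, which ansible-galaxy then installs automatically; a sketch (the Java role name here is hypothetical, substitute whatever role the README actually lists):

# meta/main.yml
dependencies:
  - role: AnsibleShipyard.ansible-java  # hypothetical name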
Nov 20 10:16:34 localhost zkServer.sh: Using config: /opt/zookeeper-3.4.12/bin/../conf/zoo.cfg
Nov 20 10:16:34 localhost zkServer.sh: /opt/zookeeper-3.4.12/bin/zkServer.sh: line 170: exec: java: not found
Nov 20 10:16:34 localhost systemd: zookeeper.service: main process exited, code=exited, status=127/n/a
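Exit status 127 means the java binary is not on the PATH, so a JDK/JRE has to be installed before ZooKeeper can start; a minimal sketch (the package name is an assumption and varies by distro and version):

- name: Install Java
  yum: name=java-1.8.0-openjdk state=present  # assumed package name; varies by distro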
Hi, I have a problem with the ansible-zookeeper installation. The logs are below:
fatal: [humio1]: FAILED! => {"msg": "The conditional check 'not zookeeper_debian_systemd_enabled' failed. The error was: An unhandled exception occurred while templating '{{ _ubuntu_1504 or _debian_8 }}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ ansible_distribution == 'Debian' and ansible_distribution_version|version_compare(8.0, '>=') }}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: template error while templating string: no filter named 'version_compare'. String: {{ ansible_distribution == 'Debian' and ansible_distribution_version|version_compare(8.0, '>=') }}\n\nThe error appears to be in '/root/.ansible/roles/AnsibleShipyard.ansible-zookeeper/tasks/upstart.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Check if /etc/init exists\n ^ here\n"}
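The root cause is that recent Ansible releases removed the version_compare filter (it was renamed to version in Ansible 2.5), so the role's _debian_8 expression needs updating; a sketch of the fixed expression:

_debian_8: "{{ ansible_distribution == 'Debian' and ansible_distribution_version is version('8.0', '>=') }}"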
My upstart.yaml:
- name: Check if /etc/init exists
  stat: path=/etc/init/
  register: etc_init

- name: Upstart script.
  template: src=zookeeper.conf.j2 dest=/etc/init/zookeeper.conf
  when:
lsb_release -a results:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.6 LTS
Release: 16.04
Codename: xenial
I am waiting for your help. Thanks in advance.
Hi,
how can I set up a cluster with e.g. 5 ZooKeeper instances (each having a different id)?
regards
guenther
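One common approach, matching the inventory pattern used elsewhere in these issues, is to give each host its own zoo_id host variable (hostnames here are placeholders):

[zookeeper]
zk1.example.com zoo_id=1
zk2.example.com zoo_id=2
zk3.example.com zoo_id=3
zk4.example.com zoo_id=4
zk5.example.com zoo_id=5

The role's myid.j2 template then renders {{ zoo_id }} into each server's myid file.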
I'm getting this error:
fatal: [ec2-54-204-214-172.compute-1.amazonaws.com]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: ERROR! 'unicode object' has no attribute 'host'"}
...
My hosts file is:
[mesos_primaries]
ec2-54-204-214-172.compute-1.amazonaws.com zoo_id=1 consul_bootstrap=true
ec2-54-235-59-210.compute-1.amazonaws.com zoo_id=2
ec2-54-83-161-83.compute-1.amazonaws.com zoo_id=3
How do I write my hosts file so it acknowledges the host name?
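Judging from the error, each entry in zookeeper_hosts appears to need to be a mapping with a host attribute rather than a bare hostname; a sketch (the key names host and id are assumptions inferred from the error and this issue thread):

- hosts: mesos_primaries
  roles:
    - role: AnsibleShipyard.ansible-zookeeper
      zookeeper_hosts:
        - { host: "ec2-54-204-214-172.compute-1.amazonaws.com", id: 1 }
        - { host: "ec2-54-235-59-210.compute-1.amazonaws.com", id: 2 }
        - { host: "ec2-54-83-161-83.compute-1.amazonaws.com", id: 3 }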
Just found this out on a deployment on Ubuntu 16.04.
I will definitely provide a PR fix ASAP by extracting the working task (from tarball.yml) out into common-config.yml:
- name: "Create zookeeper {{item}} directory."
file: path={{item}} state=directory owner=zookeeper group=zookeeper
tags: bootstrap
with_items:
- "{{data_dir}}"
- "{{log_dir}}"
Actually I'd like to do the same with this (not too old) task/feature (also in tarball.yml):
- name: Add zookeeper's bin dir to the PATH
copy: content="export PATH=$PATH:{{zookeeper_dir}}/bin" dest="/etc/profile.d/zookeeper_path.sh" mode=755
when: zookeeper_register_path_env
@ernestas-poskus ok for you to do this in the same PR?
PS: Finally, we would like to apply more cleanups in the role (separate PR!), especially renaming those vars that lack the zookeeper_ prefix.
Again the question is how to best keep backward compatibility (after renaming).
I'd propose adding an info block in the README, including how to map from new to old vars (rather than adding an automatic mapping in the default vars):
data_dir: zookeeper_data_dir
log_dir: zookeeper_log_dir
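For reference, the automatic-mapping alternative being argued against would look roughly like this in defaults/main.yml (the default paths are assumptions):

zookeeper_data_dir: "{{ data_dir | default('/var/lib/zookeeper') }}"
zookeeper_log_dir: "{{ log_dir | default('/var/log/zookeeper') }}"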
I use the 0.9.2 version installed with ansible-galaxy.
I am trying to create an image with Packer and then run the deploy stage when deploying servers.
I skip the deploy tag when creating the image and run only the deploy tag when starting servers.
Running the deploy tag results in the following error:
FAILED! => {"failed": true, "msg": "The conditional check 'etc_init.stat.exists == true' failed. The error was: error while evaluating conditional (etc_init.stat.exists == true): 'etc_init' is undefined\n\nThe error appears to have been in '/etc/ansible/roles/AnsibleShipyard.ansible-zookeeper/tasks/RedHat.yml': line 37, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Upstart script.\n ^ here\n"}
I checked, and the reason is that the etc_init task is not tagged with deploy, so it does not get executed when running with the deploy tag only. I changed it locally and the problem was gone. I had to add the tag to two tasks (links are to the release I am using):
https://github.com/AnsibleShipyard/ansible-zookeeper/blob/v0.9.2/tasks/RedHat.yml#L33
https://github.com/AnsibleShipyard/ansible-zookeeper/blob/v0.9.2/tasks/RedHat.yml#L44
I have noticed that the newer version does not perform that check for systemd, so it might be a good idea to remove the check for upstart as well; in the upstart case, I think the error will remain in newer versions too.
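For reference, a sketch of the local fix described above, adding the deploy tag to the stat task (task body as quoted elsewhere in these issues):

- name: Check if /etc/init exists
  stat: path=/etc/init/
  register: etc_init
  tags:
    - deploy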
@antonlindstrom @mhamrah How does the below work? Where does {{ zoo_id }} get its value from?
Template myid.j2 has the below
{{ zoo_id }}
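For reference, in the inventory examples elsewhere in these issues, zoo_id is defined as a per-host inventory variable, which is how the template resolves it:

[mesos_primaries]
ec2-54-204-214-172.compute-1.amazonaws.com zoo_id=1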
We don't use /opt (we use another special subdirectory under /usr/local instead), so we need this to be configurable.
I already fixed this locally and am ready to make a PR (after #34 gets merged).
An example would be here: https://github.com/AnsibleShipyard/ansible-nodejs/tree/master/tests
HOWEVER, the file integration-tests.yml should now be main.yml.
Why will become very clear in a day or two.
Add Debian and Fedora support
Need to update the ZooKeeper version to something other than 3.4.9. That version is unavailable, and the role fails when we run the playbook.
Please update to the latest (3.4.11) or provide a way to override the version.
Created a pull req: #74
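For reference, since defaults/main.yml builds the download URL from zookeeper_version (see the snippet near the top), overriding the variable at the play level should already work; a sketch:

- hosts: zookeeper
  become: yes
  roles:
    - role: AnsibleShipyard.ansible-zookeeper
      zookeeper_version: 3.4.11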
One of the tasks fails. I'm using the role like this:
- { role: 'ansible-zookeeper', zookeeper_hosts: "{{ groups.mesos_masters }}", tags: ['zookeeper'] }
TASK: [ansible-zookeeper | Overwrite default config file] *********************
fatal: [mesos-master] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'str object' has no attribute 'host'", 'failed': True}
The issue lies within groups.mesos_masters, which contains the following list: ['mesos-master', 'mesos-master-0c1', 'mesos-master-7e0'].
So it seems we have to call the role with different parameters.
Still investigating.
Edit: I changed the code in zoo.cfg.j2 from {{ server.host }} to {{ hostvars[server]['ansible_hostname'] }}.
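With that change, the server lines in zoo.cfg.j2 could look roughly like this (the 2888:3888 port pair is ZooKeeper's convention; the zoo_id lookup matches the inventory pattern above and is otherwise an assumption):

{% for server in zookeeper_hosts %}
server.{{ hostvars[server]['zoo_id'] }}={{ hostvars[server]['ansible_hostname'] }}:2888:3888
{% endfor %}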
ansible-galaxy install AnsibleShipyard.ansible-zookeeper
- downloading role 'ansible-zookeeper', owned by AnsibleShipyard
- downloading role from https://github.com/AnsibleShipyard/ansible-zookeeper/archive/v0.22.0.tar.gz
- extracting AnsibleShipyard.ansible-zookeeper to /home/user/.ansible/roles/AnsibleShipyard.ansible-zookeeper
- AnsibleShipyard.ansible-zookeeper (v0.22.0) was installed successfully
Hi,
I'm running the playbook.yml from ansible-mesos-playbook, which installs ansible-zookeeper as one of the roles. It runs through the RedHat.yml file, creating data folders and log folders, setting up the upstart script, etc. When it comes to running/starting the zookeeper service, it errors out as below. Any idea why this is?
TASK: [ansible-zookeeper | Update apt cache] **********************************
skipping: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Apt install required system packages.] *************
skipping: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Overwrite myid file.] ******************************
skipping: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Overwrite default config file] *********************
skipping: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Restart zookeeper] *********************************
skipping: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | file path=/opt/src state=directory] ****************
ok: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | file path={{zookeeper_dir}} state=directory] *******
ok: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Download zookeeper version.] ***********************
ok: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Install OS Packages] *******************************
ok: [svdpdac015.techlabs.accenture.com] => (item=libselinux-python)
TASK: [ansible-zookeeper | Unpack tarball.] ***********************************
ok: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | group name=zookeeper system=yes] *******************
ok: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | user name=zookeeper group=zookeeper system=yes] ****
ok: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Change ownership on zookeeper directory.] **********
ok: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Create zookeeper data folder.] *********************
ok: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Create zookeeper logs folder.] *********************
ok: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Upstart script.] ***********************************
ok: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Write myid file.] **********************************
ok: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Configure zookeeper] *******************************
changed: [svdpdac015.techlabs.accenture.com]
TASK: [ansible-zookeeper | Start zookeeper] ***********************************
failed: [svdpdac015.techlabs.accenture.com] => {"failed": true}
msg: zookeeper: unrecognized service
zookeeper: unrecognized service
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/root/playbook.retry
svdpdac015.techlabs.accenture.com : ok=21 changed=1 unreachable=0 failed=1
svdpdac016.techlabs.accenture.com : ok=6 changed=0 unreachable=0 failed=0
svdpdac017.techlabs.accenture.com : ok=6 changed=0 unreachable=0 failed=0
It seems the variable log_dir should be used instead. A PR has been created, BTW.
Thanks,
-cl
During my work on integrating molecule testing (see #60), I got the following ansible-lint warnings (which need to be fixed before molecule runs the actual tests):
➜ ansible-zookeeper git:(master) ✗ molecule verify
--> Executing ansible-lint...
[ANSIBLE0002] Trailing whitespace
/Users/lhoss/IdeaProjects/ansible-zookeeper/meta/main.yml:10
# the ones that apply to your role. If you don't see your
[ANSIBLE0002] Trailing whitespace
/Users/lhoss/IdeaProjects/ansible-zookeeper/meta/main.yml:116
[ANSIBLE0006] tar used in place of unarchive module
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:12
Task/Handler: Unpack tarball.
[ANSIBLE0011] All tasks should be named
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:16
Task/Handler: group name=zookeeper system=yes
[ANSIBLE0011] All tasks should be named
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:17
Task/Handler: user system=yes name=zookeeper group=zookeeper
[ANSIBLE0009] Octal file permissions must contain leading zero
/Users/lhoss/IdeaProjects/ansible-zookeeper/tasks/tarball.yml:42
Task/Handler: Add zookeeper's bin dir to the PATH
Note: Even though these warnings can sometimes be of a 'subjective' nature, I aim to follow them (on the roles I will gradually move to molecule).
I'd work on fixing them myself (but only later this week), unless somebody is faster :)
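A sketch of the fixes these warnings call for: naming the unnamed tasks, swapping tar for the unarchive module, and adding the leading zero to the mode (paths and vars are taken from the role as quoted elsewhere; the details are assumptions):

- name: Unpack tarball.
  unarchive:
    src: "/opt/src/zookeeper-{{ zookeeper_version }}.tar.gz"
    dest: /opt
    remote_src: yes

- name: Create zookeeper group.
  group: name=zookeeper system=yes

- name: Create zookeeper user.
  user: name=zookeeper group=zookeeper system=yes

- name: Add zookeeper's bin dir to the PATH
  copy: content="export PATH=$PATH:{{ zookeeper_dir }}/bin" dest=/etc/profile.d/zookeeper_path.sh mode=0755
  when: zookeeper_register_path_env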
[WARNING]: - ansibleshipyard.ansible-zookeeper was NOT installed successfully: Unable to compare role versions (v0.0.3, v0.0.4, v0.0.5, v0.0.6, v0.9.0, v0.9.1, v0.9.1.1,
v0.9.2, v0.10.0, v0.11.0, v0.12.0, v0.13.0, v0.14.0, v0.15.0, v0.16.0, v0.16.1, v0.16.2, v0.18.0, v0.19.0, v0.20.0, v0.21.0, v0.22.0, 0.23.0, v0.17.0) to determine the most
recent version due to incompatible version formats. Please contact the role author to resolve versioning conflicts, or specify an explicit role version to install.
A very useful ZooKeeper feature to avoid filling up your ZooKeeper data and log dirs too much:
A reasonable config to add in zoo.cfg would be for ex:
+autopurge.purgeInterval=24
+autopurge.snapRetainCount=10
I propose making this configurable by introducing 2 new role variables (keeping the current default of not using autopurge):
zookeeper_autopurge_purgeInterval=0
zookeeper_autopurge_snapRetainCount=10
Again, I'm ready to contribute this. Happy to hear comments before creating the PR.
@ernestas-poskus?
Can you start tagging releases in this role like you do for the others? Thanks!
In ansible-zookeeper/templates/zookeeper.service.j2
It might be a good idea to change the following:
[Unit]
Description=ZooKeeper
To:
[Unit]
Description=ZooKeeper
After=network.target
Wants=network.target
Everything works fine with a server that is already up and running, but when a server reboots, ZK starts too early, before the network is even available, leading to errors about not being able to resolve other cluster members' hostnames, or timeouts while trying to reach them.
In the template zoo.cfg.j2 there is no way of passing the autopurge configuration.
It can simply be added with:
{% if zookeeper_autopurge_purgeInterval > 0 %}
autopurge.purgeInterval={{ zookeeper_autopurge_purgeInterval }}
autopurge.snapRetainCount={{ zookeeper_autopurge_snapRetainCount }}
{% endif %}