geerlingguy / ansible-role-varnish
Ansible Role - Varnish HTTP accelerator
Home Page: https://galaxy.ansible.com/geerlingguy/varnish/
License: MIT License
There is a check on ansible_os_family that prevents this role from working on Ubuntu Bionic:
- name: Ensure Varnish services are started and enabled on startup.
  service:
    name: "{{ item }}"
    state: started
    enabled: true
  with_items: "{{ varnish_enabled_services | default([]) }}"
  when: >
    varnish_enabled_services and
    (ansible_os_family != 'Debian' and ansible_distribution_release != "xenial")
Should just be:
- name: Ensure Varnish services are started and enabled on startup.
  service:
    name: "{{ item }}"
    state: started
    enabled: true
  with_items: "{{ varnish_enabled_services | default([]) }}"
  when: >
    varnish_enabled_services and (ansible_os_family != 'Debian' or
    (ansible_os_family == 'Debian' and ansible_distribution_release != "xenial"))
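As a side note, the corrected condition can be simplified: by De Morgan's laws, "not Debian, or Debian but not Xenial" is equivalent to "not (Debian and Xenial)", so an equivalent (and arguably clearer) form would be:

```yaml
when: >
  varnish_enabled_services and
  not (ansible_os_family == 'Debian' and ansible_distribution_release == "xenial")
```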
When building a large number of Varnish servers last week, I noticed that one of the servers wasn't picking up my custom varnish_listen_port. As it turns out, running systemctl status varnish revealed that the systemd unit file being used was /lib/systemd/system/varnish.service.
This role currently stores a varnish unit file at /etc/systemd/system/varnish.service, but it looks like it would be more correct to store it at the /lib path. I'm going to do a test on a branch and see if it works the same; if so, I'll move the file there in this role's configuration, and then make sure the /etc file is symlinked to the /lib one.
Systemd configuration currently gets overwritten with a dnf update (on CentOS 8 at least)
https://varnish-cache.org/docs/6.2/tutorial/putting_varnish_on_port_80.html
It suggests adding a custom conf file containing the ExecStart line. Is it possible to implement this?
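A minimal sketch of that approach, following the varnish-cache.org tutorial's drop-in override pattern (the drop-in filename and the exact varnishd arguments here are illustrative):

```ini
# /etc/systemd/system/varnish.service.d/customexec.conf
# Drop-in override: survives package updates, because the packaged
# unit file itself is never modified.
[Service]
# An empty ExecStart= clears the packaged command before replacing it.
ExecStart=
ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,256m
```

After writing the drop-in, `systemctl daemon-reload` followed by `systemctl restart varnish` should pick it up.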
There are new repos starting from v6, with these instructions: https://packagecloud.io/varnishcache/varnish60lts/install#manual-deb
Hello,
Reload no longer works on Debian Stretch with Varnish 4.1.10 (and, I think, any version < 6.1).
In commit e0b2412, ExecReload was set to /usr/sbin/varnishreload instead of /usr/share/varnish/reload-vcl. With Varnish 4.1.10 on Debian Stretch, varnishreload does not exist.
I think varnishreload was added to the packages for version 6.1, so the ExecReload line should be:
ExecReload={% if varnish_version | version_compare('6.1', '<') %}/usr/share/varnish/reload-vcl{% else %}/usr/sbin/varnishreload{% endif %}
Thank you
The Varnish apt repositories lack builds for Utopic Unicorn (Ubuntu 14.10), so after obtaining the key, the build of the box fails.
I fixed this locally by simply installing the Varnish 4.0 version that is in the Ubuntu repositories.
Should I do this programmatically and submit a PR?
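A hedged sketch of the programmatic fallback, gated on ansible_distribution_release (the release check here is illustrative; a real PR would need to track which releases lack upstream builds):

```yaml
- name: Install Varnish from the distro repositories when no upstream build exists.
  apt:
    name: varnish
    state: present
  when: ansible_distribution_release == 'utopic'
```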
I had Varnish 5.0. I tried to purge Varnish from the VM, set varnish_version: "4.1" in config.yml, and ran vagrant provision, but I still get version 5.
Thanks!
After running the role, Varnish gets installed and is running, but it does not start on system boot. You must start it manually (service varnish start). The NCSA and log services do not start on boot either.
Apparently, varnish-cache.org's package does not enable them in the systemd config.
See a failed build with 4.1: https://travis-ci.org/geerlingguy/ansible-role-varnish/builds/83678271
What's new: https://www.varnish-cache.org/docs/trunk/whats-new/changes.html (no mention of the different CLI parameters though... need to dig more into it).
As the title says... 6.1 is not available on Debian 10.
At least in my Docker container build: geerlingguy/drupal-vm#1451 (comment)
Because of this, on a fresh Debian 9 installation of Varnish, the role's configuration for ports (and some other things) has no effect on the running instance of Varnish!
The test is a lie...
From a failed build:
TASK [role_under_test : Ensure Varnish is started and set to run on startup.] **
changed: [localhost]
RUNNING HANDLER [role_under_test : restart varnish] ****************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "/etc/init.d/varnish: 36: ulimit: error setting limit (Operation not permitted)\n/etc/init.d/varnish: 36: ulimit: error setting limit (Operation not permitted)\n"}
Locally, I can see that varnish is starting, but something's not working correctly with the restart:
$ docker exec --tty a21a930e env TERM=xterm ps -ax
PID TTY STAT TIME COMMAND
1 ? Ss 0:00 /sbin/init
960 ? Ss 0:00 /usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl
967 ? Sl 0:00 /usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl
I ran into this problem:
http://www.varnish-cache.org/docs/trunk/tutorial/putting_varnish_on_port_80.html
But I can't fix it with that guide.
It seems this role needs a patch.
OS: Ubuntu Server 16.04 LTS x86_64
See title. Should be relatively simple...
The version_compare filter was removed in Ansible 2.9 after being deprecated in 2.5. I need to update this role so it works correctly with Ansible 2.9.0 and later, and bump the minimum compatible version to 2.5.
First identified in failed Drupal VM run: https://travis-ci.org/geerlingguy/drupal-vm/jobs/606106643#L2184
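A sketch of the replacement, using the `version` test that superseded version_compare, applied to the ExecReload template line quoted in another issue above (the version boundary is that issue's assumption, not mine):

```jinja
ExecReload={% if varnish_version is version('6.1', '<') %}/usr/share/varnish/reload-vcl{% else %}/usr/sbin/varnishreload{% endif %}
```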
I get the following error trying to provision a CentOS 7 machine:
TASK [geerlingguy.varnish : Ensure Varnish services are started and enabled on startup.] ***
failed: [lsv3] (item=varnish) => {"changed": false, "item": "varnish", "msg": "Unable to start service varnish: Job for varnish.service failed because the control process exited with error code. See \"systemctl status varnish.service\" and \"journalctl -xe\" for details.\n"}
Stub issue. I'm looking at this now, and will write what I find here for future people.
Is it possible to configure the package state to latest, so that Varnish can be updated?
Following what you have outlined in your book:
[root@ansible1 lamp-infrastructure]# tree
.
├── configure.yml
├── index.php.j2
├── inventories
├── playbooks
│ ├── db
│ │ ├── main.yml
│ │ └── vars.yml
│ ├── memcached
│ │ ├── main.yml
│ │ └── vars.yml
│ ├── varnish
│ │ ├── main.yml
│ │ ├── templates
│ │ │ └── default.vcl.j2
│ │ └── vars.yml
│ └── www
│ ├── index.php.j2
│ ├── main.yml
│ └── vars.yml
├── provisioners
├── provision.yml
└── Vagrantfile
9 directories, 13 files
[root@ansible1 lamp-infrastructure]# ansible-playbook configure.yml
ERROR! variable files must contain either a dictionary of variables, or a list of dictionaries. Got: firewall_allowed_tcp_ports - "22" - "80" (<class 'ansible.parsing.yaml.objects.AnsibleUnicode'>)
[root@ansible1 lamp-infrastructure]#
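That error usually means the list in vars.yml lost its indentation, so YAML parsed the key and its items as one flat string. A sketch of the structure Ansible expects, assuming the variable name from the error message:

```yaml
# vars.yml
firewall_allowed_tcp_ports:
  - "22"
  - "80"
```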
Testing from a Google Cloud Compute instance - ansible1
No matter what I seem to do, Ansible won't copy my custom default.vcl file.
My Ansible playbook and custom default.vcl are included (just remove .txt).
It would be great to be able to specify the listening protocol of varnishd (-a switch).
A new variable would be necessary, varnish_listen_protocol, and the ExecStart line of the systemd service would be modified as follows:
varnish.service.j2 :
ExecStart=/usr/sbin/varnishd -a {{ varnish_listen_address }}:{{ varnish_listen_port }},{{ varnish_listen_protocol }} -T {{ varnish_admin_listen_host }}:{{ varnish_admin_listen_port }}{% if varnish_pidfile %} -P {{ varnish_pidfile }}{% endif %} -f {{ varnish_config_path }}/default.vcl -S {{ varnish_config_path }}/secret -s {{ varnish_storage }} {{ varnishd_extra_options }}
Also, in varnish.params.j2, a new option:
VARNISH_LISTEN_PROTOCOL={{ varnish_listen_protocol }}
5.x doesn't even have a Bionic release, so it would be better to default the role to 6.1.
After using the defaults and deploying to an Ubuntu 16.04 instance, I'm seeing:
# systemctl status varnish
...
/etc/systemd/system/varnish.service; disabled
And if I reboot, Varnish is not started after boot... therefore I have to manually start it. Maybe this is an issue with the service module and systemd, and the Varnish repo?
A linting issue:
TASK [geerlingguy.varnish : Add Varnish repository.] ***************************
changed: [192.168.2.2]
[WARNING]: Consider using yum module rather than running rpm
The file mode of varnish.service generates a warning in daemon.log:
Aug 10 11:20:21 debian systemd[1]: Configuration file /etc/systemd/system/varnish.service is marked executable. Please remove executable permission bits. Proceeding anyway.
The role sets mode 0655 when it should be 0644.
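A minimal sketch of the fix, assuming the role's template task looks roughly like this (the task name and paths are illustrative):

```yaml
- name: Copy Varnish systemd unit file into place.
  template:
    src: varnish.service.j2
    dest: /etc/systemd/system/varnish.service
    mode: '0644'  # non-executable, which silences the systemd warning
```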
The template file for varnish.service has the storage config hardcoded:
ExecStart=/usr/sbin/varnishd -a :{{ varnish_listen_port }} -T {{ varnish_admin_listen_host }}:{{ varnish_admin_listen_port }} -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
So it always uses malloc and 256 MB. The paths of default.vcl and secret are hardcoded too.
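A sketch of a parameterized version of that line, assuming variables named varnish_storage and varnish_config_path with defaults matching the current hardcoded behavior (both names are assumptions here, though they do appear in other comments in this thread):

```jinja
ExecStart=/usr/sbin/varnishd -a :{{ varnish_listen_port }} -T {{ varnish_admin_listen_host }}:{{ varnish_admin_listen_port }} -f {{ varnish_config_path | default('/etc/varnish') }}/default.vcl -S {{ varnish_config_path | default('/etc/varnish') }}/secret -s {{ varnish_storage | default('malloc,256m') }}
```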
Right now, with this role's templates, you can't pass in extra varnishd flags like -p http_max_hdr=128 (which, incidentally, I need to pass on one of my projects... therefore I'm going to fix this in a few minutes).
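One way to support this is a free-form variable appended to the ExecStart line; the varnishd_extra_options variable referenced in another comment's ExecStart template suggests this is roughly how it was solved. Usage would then look like:

```yaml
varnishd_extra_options: "-p http_max_hdr=128"
```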
Could you make it so that the TTL is configurable and not just 120 seconds?
This role does not fully support the -a parameter, which accepts a wide range of IP address and port formats. Instead, this role limits the listen address configuration to port customization only, by hard-coding the : prefix for the -a switch, forcing the default IP bind address to be used.
It should be possible to configure the listen IP in addition to the port. I would suggest not having separate settings for IP and port, since this would still only support one binding, but it's clear from the documentation that multiple bindings can be specified with comma (,) separators.
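A sketch of a single free-form variable covering these cases (the variable name is an assumption); the string would be passed straight through to -a:

```yaml
# Supports IP plus port, and comma-separated multiple bindings,
# exactly as varnishd's -a switch accepts them.
varnish_listen: "192.168.1.10:80,127.0.0.1:8080"
```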
For tracking, the -P parameter used by default seems to be broken on Ubuntu 14.04 systems, see: varnishcache/pkg-varnish-cache#72
When installing the role on Debian Jessie, the log rotation seems to have a bug; cron complains about the following:
/etc/cron.daily/logrotate:
error: error running non-shared postrotate script for /var/log/varnish/varnishncsa.log of '/var/log/varnish/varnishncsa.log '
run-parts: /etc/cron.daily/logrotate exited with return code 1
Looking at the configuration, it seems invoke-rc.d has some issue restarting the varnishncsa service. Simply updating it with:
/var/log/varnish/varnishncsa.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    postrotate
        if [ -d /run/systemd/system ]; then
            systemctl -q is-active varnishncsa.service || exit 0
        fi
-       /usr/sbin/invoke-rc.d varnishncsa reload > /dev/null
+       systemctl reload varnishncsa.service > /dev/null
    endscript
}
seems to solve this issue. Is it something you have already encountered?
OS: Ubuntu 16.04
The systemd Reload configuration calls a script that looks eerily like an old init.d script:
ExecReload=/usr/share/varnish/reload-vcl
This script in turn gets its params from /etc/default/varnish, which is not touched by this role.
When calling an ansible systemd reload handler (use case: update default.vcl but don't want to lose memory contents)
- name: reload varnish
  systemd: name=varnish state=reloaded
This fails because the admin host is defined as "localhost" in the untouched params file and as "127.0.0.1" in ExecStart.
I am happy to roll a patch for this.
For scenarios where someone just wants to drop in their own VCL template instead of the extremely simplistic default.vcl.j2 template included with the role, the role should allow the local path to the default VCL to be configured as a variable.
I'd like this in particular to make the deployment of Varnish on Drupal VM (per geerlingguy/drupal-vm#97) much simpler (no need for any extra copy/restart tasks in the playbook since the role will take care of everything).
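The variable that appears to have landed in the role for this is varnish_default_vcl_template_path (it is referenced in another issue in this thread); usage from a playbook would look something like:

```yaml
varnish_default_vcl_template_path: "{{ playbook_dir }}/templates/default.vcl.j2"
```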
Varnish has the ability to create a PID file when it starts, if you use the -P parameter when starting the service.
This enhancement implies adding a couple of variables (one to enable the use of the PID file and another for its path), and modifying the different service templates (varnish.j2 and varnish.service.j2).
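A sketch of how this could collapse to a single variable (the name varnish_pidfile does appear in another comment's ExecStart template; treating an empty value as "disabled" would avoid a separate boolean):

```yaml
# Set to a path to pass -P to varnishd; an empty string disables the PID file.
varnish_pidfile: /run/varnishd.pid
```

The template would then wrap the flag conditionally, e.g. `{% if varnish_pidfile %} -P {{ varnish_pidfile }}{% endif %}`.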
When installing on CentOS 8, I get:
No package pygpgme available.
Using the role with an Amazon Linux AMI, I get this error:
TASK [ansible-role-varnish : Ensure Varnish services are started enabled on startup.] ******************************************************************************************************************************************************************************************
failed: [172.20.30.251] (item=varnish) => {"failed": true, "item": "varnish", "msg": "Starting Varnish Cache: [FAILED]\r\n"}
Trying to start and stop the service manually I get:
Stopping Varnish Cache: [FAILED]
Starting Varnish Cache: [FAILED]
In my configuration I use Varnish 4.1
See similar: geerlingguy/ansible-role-apache#60
While setting up tests on Drupal VM (geerlingguy/drupal-vm#412), I found a few required packages that caused failures if not available, so I thought I'd take note of them for when you add the setup here:
logrotate
initscripts
TASK [geerlingguy.varnish : include_tasks] *************************************
skipping: [drupalvm]
TASK [geerlingguy.varnish : include_tasks] *************************************
included: /vagrant/provisioning/roles/geerlingguy.varnish/tasks/setup-Debian.yml for drupalvm
TASK [geerlingguy.varnish : Ensure APT HTTPS Transport is installed.] **********
changed: [drupalvm]
TASK [geerlingguy.varnish : Add packagecloud.io Varnish apt key.] **************
changed: [drupalvm]
TASK [geerlingguy.varnish : Add packagecloud.io Varnish apt repository.] *******
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: apt.cache.FetchFailedException: E:The repository 'https://packagecloud.io/varnishcache/varnish51/ubuntu bionic Release' does not have a Release file.
fatal: [drupalvm]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File "/tmp/ansible_ARMCpS/ansible_module_apt_repository.py", line 551, in \n main()\n File "/tmp/ansible_ARMCpS/ansible_module_apt_repository.py", line 543, in main\n cache.update()\n File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 505, in update\n raise FetchFailedException(e)\napt.cache.FetchFailedException: E:The repository 'https://packagecloud.io/varnishcache/varnish51/ubuntu bionic Release' does not have a Release file.\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
Hi, I am not able to get this to install on Ubuntu 20.04
Below error:
Ign:5 https://packagecloud.io/varnishcache/varnish64/ubuntu focal InRelease
Err:6 https://packagecloud.io/varnishcache/varnish64/ubuntu focal Release
404 Not Found [IP: 2600:1f1c:2e5:6900:4e24:4dad:908b:18c6 443]
Reading package lists... Done
E: The repository 'https://packagecloud.io/varnishcache/varnish64/ubuntu focal Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Hi
I was attempting to install Varnish on an Amazon Linux 2 AMI and it failed.
TASK [mwp.varnish : Ensure Varnish 6.1 is installed.] **********************************************************************************************************************************************************************************************************
fatal: [10.202.1.164]: FAILED! => {"changed": false, "msg": "Failure talking to yum: failure: repodata/repomd.xml from varnishcache_varnish61: [Errno 256] No more mirrors to try.\nhttps://packagecloud.io/varnishcache/varnish61/el/2/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found"}
The problem relates to:
varnish_yum_repo_baseurl: https://packagecloud.io/varnishcache/{{ varnish_packagecloud_repo }}/el/{{ ansible_distribution_major_version|int }}/$basearch
When using Amazon Linux 2, ansible_distribution_major_version is:
ansible tag_Name_${linuxdistro}_${role}_base_build_${date}_01 -m setup | grep ansible_distribution_major_version
"ansible_distribution_major_version": "2",
The role ran fine on a RHEL7 instance,
To get around the issue, I hard-coded the EL version for the moment.
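A sketch of that workaround as a playbook-level override, pinning the EL major version instead of deriving it from ansible_distribution_major_version (whether the packagecloud EL 7 builds actually work on Amazon Linux 2 is an assumption to verify):

```yaml
varnish_yum_repo_baseurl: "https://packagecloud.io/varnishcache/{{ varnish_packagecloud_repo }}/el/7/$basearch"
```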
The question, I suppose, is: would you recommend an alternative solution?
Kind regards
See: http://varnish.org/docs/5.0/whats-new/upgrading-5.0.html#whatsnew-upgrading-5-0
Still need to fix up a couple things for 4.1 as well, but it looks like 5.0 won't be as far-reaching as 4.0 was in terms of VCL changes.
Hi
you already added a variable for the default VCL, varnish_default_vcl_template_path.
It would be nice to have variables for all templated files, like varnish.service.j2 or varnish.params.j2.
Thanks for your work!
See example: https://travis-ci.org/geerlingguy/drupal-vm/jobs/652146023#L2202
TASK [geerlingguy.varnish : Ensure Varnish services are started enabled on startup (Xenial specific)] ***
[DEPRECATION WARNING]: evaluating [u'varnish'] as a bare variable, this
behaviour will go away and you might need to add |bool to the expression in the
future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature
will be removed in version 2.12. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
skipping: [localhost] => (item=varnish)
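A sketch of the fix for the bare-variable conditional, making the truthiness test explicit as the deprecation warning advises:

```yaml
when: varnish_enabled_services | default([]) | length > 0
```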
I was using this role after following the Highly Available Infrastructure cookbook in Ansible for DevOps; however, I was installing against CentOS 7 in Vagrant.
The Install Varnish step failed, as it requires redhat-rpm-config and gcc:
Error: Package: varnish-4.0.3-3.el7.x86_64 (epel)
Requires: redhat-rpm-config
Error: Package: varnish-4.0.3-3.el7.x86_64 (epel)
Requires: gcc
If I remove the disablerepo attribute (line 14 of tasks/setup-RedHat.yml), Varnish installs correctly. This does seem to be a CentOS 7 specific issue, as it installed without any errors on CentOS 6 with the repos disabled.
As the title says. This will allow things like using file/persistent storage instead of malloc, or customizing the malloc limit of varnish.
I'm getting the following:
TASK [geerlingguy.varnish : Add Varnish apt repository.] ***********************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: apt.cache.FetchFailedException: E:The repository 'https://repo.varnish-cache.org/debian stretch Release' does not have a Release file.
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_5C07Yh/ansible_module_apt_repository.py\", line 565, in <module>\n main()\n File \"/tmp/ansible_5C07Yh/ansible_module_apt_repository.py\", line 553, in main\n cache.update()\n File \"/usr/lib/python2.7/dist-packages/apt/cache.py\", line 464, in update\n raise FetchFailedException(e)\napt.cache.FetchFailedException: E:The repository 'https://repo.varnish-cache.org/debian stretch Release' does not have a Release file.\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 0}
Simple/quick fix would be to switch to the system package (which is 5.0.0 right now anyways). But I think I may be able to switch repos at some point and this problem will just go away.
Using /etc/default/varnish will not work for Debian 8.1, as it has switched to systemd. See this report: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=749272
Support for systemd should be added somehow.
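One hedged approach is to gate the sysvinit and systemd paths on Ansible's service-manager fact and template a unit file on systemd hosts; a minimal sketch (the task name, template name, and handler are illustrative):

```yaml
- name: Copy Varnish systemd unit file into place (systemd hosts only).
  template:
    src: varnish.service.j2
    dest: /etc/systemd/system/varnish.service
  when: ansible_service_mgr == 'systemd'
  notify: restart varnish
```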
When called from a playbook, Varnish failed to start; however, no error was reported, as Ansible failed to detect this.
https://github.com/geerlingguy/ansible-role-varnish/blob/master/tasks/main.yml#L32
I had to log in to the box and debug for a while, as service varnish start was just returning "fail" repeatedly. (I'm actually still not sure what the problem was.)
Right now, this role assumes all Varnish installations listen on a single host and port. Nevertheless, Varnish actually supports additional -a listening features: binding to a specific IP address as well as a port, and specifying the -a switch multiple times.
The last point is crucial: the -a switch can be specified multiple times to listen on multiple interfaces, but this role does not support this whatsoever.