jaymon / chef-cookbooks
Various Chef cookbooks
License: MIT License
I have a Vagrantfile like this:
"python" => {
  "common" => {
    "user" => "vagrant",
    "requirements" => [
      "pyt",
      "psycogreen==1.0",
      "psycopg2==2.8.5",
    ],
  },
  "environments" => {
    "py27" => {
      "version" => "2.7.17",
      "requirements" => [
        "gevent==1.2.1",
      ],
    },
    "py38" => {
      "version" => "3.8.3",
      "requirements" => [
        "gevent==20.6.2",
      ],
    },
  },
},
And it didn't seem to install all of the common => requirements dependencies correctly, so I should look into it
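As a first check, here is a minimal sketch (the merged_requirements helper is hypothetical, not the cookbook's actual code) of how I'd expect the common and per-environment requirements above to be combined before installing:

```ruby
# Hypothetical sketch: combine the shared "common" requirements with an
# environment's own requirements, the way the attributes above imply.
def merged_requirements(python_attrs, env_name)
  common = python_attrs.fetch("common", {}).fetch("requirements", [])
  env = python_attrs.fetch("environments", {}).fetch(env_name, {}).fetch("requirements", [])
  common + env
end
```

If an environment ends up with only its own requirements installed, the bug is probably wherever the cookbook does this merge.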
some quick thoughts I had open in a text file:
I think it might be worth adding a python cookbook that is basically the repo cookbook (which we would then deprecate) combined with pipenv support: if there is a requirements.txt, do one thing; if there is a Pipfile, activate it with pipenv; and so on.
When Chef runs the nginx cookbook it creates configurations based on the 'server' variable in the chef/environments/*.rb files.
If that server name changes, the file related to the previous domain is not deleted and causes nginx to fail on startup.
Suggested solution is to have chef clear out those files before creating the necessary ones.
From Nov 2017 issue in private repo by @sqpierce
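A minimal sketch of the suggested cleanup, assuming the generated configs live in a conf.d-style directory and are named after the server (both assumptions, and the helper name is mine):

```ruby
# Hypothetical sketch: delete any previously generated nginx server config
# that is no longer in the current set of server names, so a renamed server
# doesn't leave a stale file behind that breaks nginx on startup.
def prune_stale_confs(conf_dir, server_names)
  wanted = server_names.map { |s| "#{s}.conf" }
  Dir.glob(File.join(conf_dir, "*.conf")).each do |path|
    File.delete(path) unless wanted.include?(File.basename(path))
  end
end
```

This would run before the template resources write the current set of files.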
All of these are from PRIVATE_SERVER_ISSUE##49 from Aug - Dec 2016, I just wanted to have a record of this here
somewhere between our current chef version (12.7.2) and the latest version (12.13.11), chef switched chef-solo to use chef-client in standalone mode behind the scenes, and chef-client requires nodes to be set in the solo.rb file. The latest Vagrant handles this, but our deployment scripts will need to account for it at some point. I'll need to do a bit more investigating to figure out the best, repeatable way to handle this.
This was in a text file I had open:
for the new vagrant, and probably the new chef stuff, we evidently need a node_name in the solo file. This is what vagrant does automatically, so I can customize it for our boxes. It also means ops now needs a chef/nodes directory.
vagrant@vagrant:/tmp/vagrant-chef$ cat solo.rb
node_name "vagrant-9a7febe3"
file_cache_path "/var/chef/cache"
file_backup_path "/var/chef/backup"
cookbook_path ["/tmp/vagrant-chef/f2bd98ffb21596b0d00b027a13dad947/cookbooks"]
role_path "/tmp/vagrant-chef/b1a8f62622b0b409b1f7fc1e72fac096/roles"
log_level :debug
verbose_logging false
encrypted_data_bag_secret nil
environment_path "/tmp/vagrant-chef/2d33d24d6c96b256ccc0ab6334e28def/environments"
environment "dev"
http_proxy nil
http_proxy_user nil
http_proxy_pass nil
https_proxy nil
https_proxy_user nil
https_proxy_pass nil
no_proxy nil
vagrant@vagrant:/tmp/vagrant-chef$
vagrant@vagrant:/tmp/vagrant-chef$ cd /tmp
vagrant@vagrant:/tmp$ ls
apiserver-stats.sock install.sh.32494 install.sh.531 ssh-04jWwQp8b1 vagrant-chef vagrant-shell
vagrant@vagrant:/tmp$ sudo find / -name "vagrant-9a7febe3"
vagrant@vagrant:/tmp$ sudo find / -name "*vagrant-9a7febe3*"
/tmp/vagrant-chef/f2bd98ffb21596b0d00b027a13dad947/nodes/vagrant-9a7febe3.json
vagrant@vagrant:/tmp$ cat /tmp/vagrant-chef/f2bd98ffb21596b0d00b027a13dad947/nodes/vagrant-9a7febe3.json
{
"name": "vagrant-9a7febe3",
"chef_environment": "dev"
}
vagrant@vagrant:/tmp$
It looks like there doesn't need to be a node file for 12.13.37
but we do have to change the fab stuff to use chef-client command in local mode:
$ chef-client --config solo.rb --override-runlist "role[prod]" --environment prod --local-mode
That should allow us to upgrade chef. We can also switch Vagrant over to use the built-in vagrant stuff instead of our bootstrap script if we want.
These are some browser tabs I had open, which I now want to close; they are related to this change:
this is the actual chef config code
Turns out, for some reason, vagrant requires a nodes_path when using chef zero (chef-client --local-mode), so this is only a problem with vagrant-configured chef. The current solution to this problem is in server's Vagrantfile:
config.vm.provision :chef_zero do |chef|
  chef.version = env.get("CHEF_VERSION")
  chef.verbose_logging = false

  # overcome a bug in Vagrant that the devs don't think is a bug:
  # chef zero needs a nodes_path, so point it at a temp directory
  require 'tmpdir'
  node_path = ::File.join(::Dir.tmpdir, box_name)
  if !::Dir.exist?(node_path)
    ::Dir.mkdir(node_path)
  end
  chef.nodes_path = node_path
end
I'm not a huge fan of making a fake directory in temp and sharing it to the vagrant box, but it works.
if you have a config like this:
"<SERVICE_NAME>" => {
  "desc" => "...",
  "command" => "/bin/sleep 600",
  "count" => 0,
},
It won't create a <SERVICE_NAME>.target that works, because the generated file will have this:
Requires =
instead of:
Requires = <SERVICE_NAME>@1.service
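One way to guard against this, sketched with a hypothetical helper (the name and the skip-on-zero behavior are my suggestion, not existing cookbook code):

```ruby
# Hypothetical sketch: build the Requires= line for a <name>.target, returning
# nil when count is zero so the caller can skip writing the .target entirely
# instead of emitting a unit with an empty Requires=.
def target_requires(name, count)
  return nil if count < 1
  "Requires=" + (1..count).map { |i| "#{name}@#{i}.service" }.join(" ")
end
```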
Allow it to watch a file and run things when that file changes.
We now handle the git pull externally, which means things don't restart. We can solve this in one of two ways:
I like option 2
while closing PRIVATE_REPO_ISSUE#71 I saw that our config files already have timeouts, but when I queried Redis directly, timeout came back 0, which means it wasn't set. We should make sure the config files are being loaded correctly since we upgraded to 3.2.8
From private repo issue Mar 2017
I think 3.8, and possibly 3.7, need some packages in order to compile well enough for pip to update itself. I think just libffi-dev might be needed, but it might be nice to inspect the python version being installed and also install the correct python-dev version. The error I was getting was:
ModuleNotFoundError: No module named '_ctypes'
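A sketch of the version-aware package selection mentioned above. The helper name and the exact version cutoff are my assumptions; what I actually observed is that libffi-dev fixes the _ctypes error on 3.7+:

```ruby
# Hypothetical sketch: pick extra build packages based on the python version
# being installed; without libffi-dev, 3.7+ builds a python whose pip dies
# with: ModuleNotFoundError: No module named '_ctypes'
def python_build_packages(version)
  pkgs = []
  major, minor = version.split(".").map(&:to_i)
  pkgs << "libffi-dev" if major > 3 || (major == 3 && minor >= 7)
  pkgs
end
```

In the recipe these would feed ordinary package resources before the pyenv install runs.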
when everyone was banging on the test server testing the app, the server eventually got into a state where it was printing something like this in the logs:
*** uWSGI listen queue of socket "x.x.x.x:xxxxx" (fd: 3) full !!! (101/100) ***
This looks like a problem with the SOMAXCONN setting in Ubuntu and the --listen flag on the uWSGI server. You can read more about it:
http://comments.gmane.org/gmane.comp.python.wsgi.uwsgi.general/6829
http://stackoverflow.com/questions/12340047/uwsgi-your-server-socket-listen-backlog-is-limited-to-100-connections
And you can just read more about graceful reloading, which might be the best way to tackle this issue since when the server gets into this state it doesn't seem to ever get out of it until it is restarted.
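For reference, the two knobs involved (the values here are placeholders, not recommendations): the kernel's accept backlog limit and uWSGI's listen backlog, which must not exceed it:

```
# sysctl fragment (e.g. a file under /etc/sysctl.d/): raise the kernel accept backlog
net.core.somaxconn = 4096

# uWSGI: the --listen backlog must be <= net.core.somaxconn
# uwsgi --listen 4096 ...
```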
If you want to manually hose the server, you can run this:
#!/bin/bash
# http://serverfault.com/a/273241/190381
host="localhost"
host="148.251.132.12"
for x in {0..100}; do
curl -u GeQzZRzQ:8aBMuyFj2fwTfgW31kMK15Z8lWFfwnfz "http://$host:4000/test/idle?timeout=100" &
echo $x;
done
exit
search string: uWSGI listen queue of socket
from oct 2015 PRIVATE_REPO_ISSUE#17
We have a chicken/egg problem with new servers, our webserver cookbooks (nginx and uwsgi) don't actually start the servers until the end of the chef run to give chef time to install the code and get everything in place, etc.
This works great normally but doesn't work when Let's Encrypt expects a running server to create ssl certificates.
To get around this issue, we might try doing what standalone does and start a little mini server; we could use a chef ruby script to run a server in the root:
require 'webrick'
# root is the document root the server should serve from
s = WEBrick::HTTPServer.new({"BindAddress" => "0.0.0.0", "Port" => 80, "DocumentRoot" => root})
s.start
And then run the Let's Encrypt command to create the certificates, and then kill the server.
How the http recipe would do it is first it would check if requests were being answered on port 80, if they weren't, then it would fire up the server, otherwise it would just use the currently running server.
Moving this here from PRIVATE_REPO_ISSUE#61 from Dec 2016, but I think it might be outdated now:
it thinks it starts and stops; convert this to a task so you can just run start on it all the time
from May 2015 PRIVATE_REPO_ISSUE#10
I should make sure the switch to systemd does this and then close this issue
so you can restart the server on a repo cookbook git[name] change, for example. Right now you have to put the restart in the repo notifies block, but it makes sense to have a subscribes config block here also
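What the subscribes version could look like, sketched as a recipe fragment (the uwsgi service name and git[name] resource are stand-ins taken from the note above, not actual cookbook code):

```ruby
# Hypothetical recipe fragment: restart uwsgi whenever the repo's git
# resource updates, without the repo cookbook having to know about uwsgi.
service "uwsgi" do
  action :nothing
  subscribes :restart, "git[name]", :delayed
end
```

The difference from notifies is the direction: the consuming cookbook declares the dependency instead of the repo cookbook.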
For example, if you wanted the weekend to be different than monday-friday, you could do something like (instead of creating a whole new block):
"foo-cron" => {
  "command" => "/some/command",
  "schedule" => [
    "0 12 * * 1-5", # run at noon monday-friday
    "0 23 * * 0,6", # run at 11pm on the weekend
  ]
}
From December 2017 issue in private repo
I removed the ability to start and stop all configured services from uwsgi and daemon; it would be nice to put those back in at some point
pyenv-virtualenv supports environment variables that could be used to update both virtualenv and pip.
PIP_VERSION
VIRTUALENV_VERSION
Links
See https://github.com/voi-inc/ops/commit/0e0623ca7ddd5277e1571ead6ae01cc6664392b7
# SSL Certificates
certificate_names = Dir.glob("/opt/ops/certs/*.crt").map {|f| File.basename(f)}
certificate_names.each do |file|
  self["locations"]["users"]["root"].merge!({
    file => {
      "src" => self.in_ops("certs/") + file,
      "dest" => "/etc/ssl/certs/" + file,
      "mode" => "0664",
    }
  })
end

# SSL Certificate Keys
key_names = Dir.glob("/opt/ops/certs/*.key").map {|f| File.basename(f)}
key_names.each do |file|
  self["locations"]["users"]["root"].merge!({
    file => {
      "src" => self.in_ops("certs/") + file,
      "dest" => "/etc/ssl/private/" + file,
      "mode" => "0640",
      "group" => "ssl-cert",
    }
  })
end
From PRIVATE_REPO_ISSUE#31 by @fotopher in Mar 2016
Looks like chef has built-in support for manipulating files. This could maybe replace our custom conf file code.
chef-apply can run a single cookbook; it might be worth looking into for debugging/testing and for running things like our security cookbook.
built-in libraries might save us some work in places.
It might also be worth exploring PyChef to see if we can use it in places.
From PRIVATE_REPO_ISSUE#40 from May 2016
Error message was:
ubuntu 18.04 not supported by certbot-auto
Links:
the pip resource should default to whatever the system default python is (or be configurable), so you can set what you want and then have other cookbooks that rely on pip (like uwsgi) install the correct version
Let's say we have two subdomains:
And they both are pointing to box XX.XXX.XXX.XXX, but that box is only configured to answer requests for foo.example.com.
If you request https://bar.example.com nginx will actually answer the certificate with foo.example.com's ssl certificate:
$ curl -v "https://bar.example.com"
* Rebuilt URL to: https://bar.example.com/
* Trying XX.XXX.XXX.XXX...
...
* Connected to bar.example.com (XX.XXX.XXX.XXX) port 443 (#0)
...
* SSL connection using TLSv1.2 ...
...
* Server certificate:
* subject: CN=foo.example.com
...
* subjectAltName does not match bar.example.com
* SSL: no alternative certificate subject name matches target host name 'bar.example.com'
* stopped the pause stream!
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, Client hello (1):
curl: (51) SSL: no alternative certificate subject name matches target host name 'bar.example.com'
This seems like unexpected behavior, but I guess it makes sense: nginx is listening on port 443, so any request will get to nginx, and when nginx goes through its rules and can't find one to handle the request, it just uses the first ssl rule it has, because why not?
This is the best link I found on it: Nginx. How do I reject request to unlisted ssl virtual server?
And this is kind of a solution we could implement in the nginx/templates/default/server.conf.erb template:
if ($host != <%= @host %>) {
return 444;
#return 301 https://<%= @host %>$request_uri;
}
I guess the 444 return code is an Nginx-specific code that just means ignore the request, and Nginx will drop it even if you use curl's --insecure flag. The commented-out redirect also fails, because bar.example.com doesn't match the supplied ssl certificate, so certificate validation fails, but curl --insecure would return the 301 with the redirect.
We decided not to do anything about this because it seems more trouble than it would be worth and the fix has basically the same functionality as leaving everything alone, I just wanted to write all this down so we're aware of it in the future.
Currently, each virtual environment is unique, which means you can't update the python version of a given virtual environment; that's something that will need to be thought out.
Currently, in order to update you would have to completely rebuild the virtual environment and change its name, which might be annoying long term.
uWSGI had
start on ((local-filesystems and runlevel [2345]) or vagrant-mounted)
in the upstart scripts. The vagrant-mounted was there because otherwise, on vagrant reload, uwsgi would be started before Vagrant had mounted the shared directories, so uwsgi would error out because it couldn't find the uwsgi file if it was in a shared directory. This might not work in systemd, so verify everything is working and then this issue can be closed.
See also PRIVATE_REPO_ISSUE#65 Put vagrant-mounted back into uwsgi cookbook from Dec 2016
These are the cookbooks that need to be switched from Upstart:
These are the cookbooks that need to be audited because they reference a service in a recipe:
Adding users and databases to a postgres db expects that postgres db to be on the local box that is running the provision. It would be great to add support for adding users and databases for a remote box. I started this work in the postgres-remote branch.
Basically, my idea is to add a client key with a dict value holding configuration for a remote db server. It will default to localhost but can be configured to point to a remote box, and then the PostgresUser helper needs to be updated to set things like the host and port of the psql command.
The hiccup is the password. I need to be able to bypass the password prompt for remote boxes. I think the solution is to set a temp pgpass file and have psql use that to run the commands, so the password will be handled automatically.
But doing some research, I think I might be able to use the PGPASSWORD environment variable instead.
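A sketch of the non-interactive psql call via PGPASSWORD; the helper names and argument layout are mine, and PGPASSWORD is the documented postgres environment variable:

```ruby
# Hypothetical sketch: build a psql command line for a (possibly remote) host,
# and run it with the password supplied via the PGPASSWORD environment
# variable so there is no interactive prompt.
require "open3"

def psql_command(host:, user:, sql:, port: 5432)
  ["psql", "-h", host, "-p", port.to_s, "-U", user, "-tAc", sql]
end

def run_psql(password:, **kwargs)
  out, status = Open3.capture2({ "PGPASSWORD" => password }, *psql_command(**kwargs))
  raise "psql failed" unless status.success?
  out
end
```

Compared to a temp pgpass file, the environment variable avoids leaving anything on disk, though it is visible to the process that runs the command.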
the latest PGBouncer has SSL support now. With Topher working on turning on SSL for postgres, we should switch PGBouncer over to use ssl and ditch spiped for the PG database
https://pgbouncer.github.io/faq.html#how-to-use-ssl-connections-with-pgbouncer
This is from a March 2016 issue in a private repo, I'm moving it here
Right now package::update creates a temp file to decide if it should run again. Today I found out apt-get already does that with /var/lib/apt/periodic/update-success-stamp; you can check the modified date to see when it was last run (the file itself is empty).
It would also be cool if package::update could monitor /etc/apt/sources.list and /etc/apt/sources.list.d, and if those were changed since it was last run, it would go ahead and run apt-get update regardless of when it was last run. This could be done by keeping my sentinel file but adding values like how many files are in sources.list.d and the md5 hash of sources.list; if either of those are different, it would run again.
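A sketch of that staleness check. A simpler variant of the sentinel idea is to compare the sources' mtimes against the stamp file instead of storing counts and hashes; the paths are the real apt locations, but the logic is my assumption:

```ruby
# Hypothetical sketch: apt-get update is needed if the success stamp is
# missing or older than max_age, or if any sources file was modified after
# the last successful update.
def apt_update_needed?(stamp: "/var/lib/apt/periodic/update-success-stamp",
                       sources: ["/etc/apt/sources.list"] + Dir.glob("/etc/apt/sources.list.d/*.list"),
                       max_age: 86_400,
                       now: Time.now)
  return true unless File.exist?(stamp)
  last = File.mtime(stamp)
  return true if now - last > max_age
  sources.any? { |f| File.exist?(f) && File.mtime(f) > last }
end
```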
http://manpages.ubuntu.com/manpages/trusty/man1/add-apt-repository.1.html
Error will look like this:
Expected process to exit with [0], but received '1'
out: ---- Begin output of cp -R "/var/chef/cache/uwsgi-2.0.18/uwsgi-2.0.18"/* "/opt/uwsgi" ----
out: STDOUT:
out: STDERR: cp: cannot create regular file '/opt/uwsgi/uwsgi': Text file busy
out: ---- End output of cp -R "/var/chef/cache/uwsgi-2.0.18/uwsgi-2.0.18"/* "/opt/uwsgi" ----
out: Ran cp -R "/var/chef/cache/uwsgi-2.0.18/uwsgi-2.0.18"/* "/opt/uwsgi" returned 1
The solution, to get the provision to work, is to stop the server:
$ sudo systemctl stop <SERVER NAME>
But it would be better to just stop the server in the chef run if it sees it needs to upgrade.
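That "stop it in the chef run" idea could look like this recipe fragment (the service name and binary path are placeholders from the error above, not actual cookbook code):

```ruby
# Hypothetical recipe fragment: stop the running service before copying the
# new uwsgi binary over the old one, so cp doesn't hit "Text file busy".
service "<SERVER NAME>" do
  action :stop
  only_if { ::File.exist?("/opt/uwsgi/uwsgi") } # only when upgrading in place
end
```

The copy resource would then notify the service to start again once the new binary is in place.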
http://redis.io/topics/encryption
Now that PGBouncer has SSL support, we only need Redis SSL support to ditch Spiped. This is just so we can remember to check up on Redis ssl progress.
Redis might finally have SSL support
see also:
From a March 2016 and November 2019 comment in a private repo, I'm moving it here
Vagrant boxes are getting out of sync, but prod boxes aren't. I think it's because the vagrant boxes use different NTP servers than the prod boxes. Add all of these:
server ntp.ubuntu.com
server 0.north-america.pool.ntp.org
server 1.north-america.pool.ntp.org
server 2.north-america.pool.ntp.org
server 3.north-america.pool.ntp.org
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
server pool.ntp.org
from feb 2016 PRIVATE_REPO_ISSUE#20
Many times I want to install Python 2.7.* or 3.8.*. It would be great to have some basic semver support, or even a way to specify a regex or something, so you could do something like:
$ pyenv install --list | grep "^\s*2.7.*"
and take the last matching line. So specifying:
"version" => "2.7.*"
would get checked against the output of pyenv install --list, and the last matching value would be the version that is installed.
If I wanted to use semver syntax, Gem::Version might be useful for finding the latest version:
Gem::Version.new('0.3.2') < Gem::Version.new('0.10.1')
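A sketch of the resolution step, using Gem::Version for the "last matching" comparison; the helper name and the glob-to-regex handling are mine:

```ruby
# Hypothetical sketch: resolve a "2.7.*" style pattern against the list of
# versions pyenv knows about, returning the newest match (Gem::Version sorts
# 2.7.10 above 2.7.9, which a plain string sort would not).
require "rubygems"

def resolve_version(pattern, available)
  re = /\A#{Regexp.escape(pattern).gsub('\*', '.*')}\z/
  available.map(&:strip).grep(re).max_by { |v| Gem::Version.new(v) }
end

resolve_version("2.7.*", ["2.7.9", "2.7.17", "3.8.3"]) # => "2.7.17"
```

The `available` list would come from parsing `pyenv install --list` output.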
This is the best overview of the different challenge types and their pros and cons
If we need the certs for a server that can serve files, we should be able to configure a path for Let's Encrypt so it can add the filepath /.well-known/acme-challenge/<TOKEN> to validate the domain without having to restart the server.
In order to do wildcard certificates, we have to do dns challenges:
This challenge asks you to prove that you control the DNS for your domain name by putting a specific value in a TXT record under that domain name. It is harder to configure than HTTP-01, but can work in scenarios that HTTP-01 can’t. It also allows you to issue wildcard certificates.
This should be an option if you use a DNS provider that integrates with Let's Encrypt: DNS providers who easily integrate with Let's Encrypt DNS validation.
I will have to look into how the automation is handled and add configuration hooks so the cookbook can set it up.
Right now logrotate uses either :merge or :set, and I think these should be combined to basically just act like merge. The idea is that if you want to replace existing configuration, you should just set the values to empty or nil.
Also, right now the logrotate recipe writes to a temp file, but it would be better to just add a to_s method to the supporting library class and then use Chef's file resource instead.
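A sketch of what that to_s could render, using a made-up minimal config class (the real supporting library class obviously has more to it):

```ruby
# Hypothetical sketch: a to_s that renders a logrotate stanza, so the recipe
# can hand the string to Chef's file resource via its content property
# instead of writing a temp file.
class LogrotateConf
  def initialize(path, opts)
    @path = path
    @opts = opts
  end

  # a value of true renders a bare directive, anything else renders "key value"
  def to_s
    body = @opts.map { |k, v| v == true ? "  #{k}" : "  #{k} #{v}" }.join("\n")
    "#{@path} {\n#{body}\n}\n"
  end
end
```

In the recipe this becomes roughly `file(path) { content conf.to_s }`.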
It would be cool to be able to set a paths value like this:
"python" => {
  "environments" => {
    "backend38" => {
      "version" => "3.8",
      "user" => "<USERNAME>",
      "paths" => [
        "/SOME/DIRECTORY/PATH"
      ]
    },
  }
},
And at each of the specified paths it would create:
/SOME/DIRECTORY/PATH/.python-version
With the contents:
backend38
Only if there isn't a .python-version file that already exists.
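The behavior described above, sketched as a plain helper (the names are mine, and in the cookbook this would presumably map over the configured environments):

```ruby
# Hypothetical sketch: write the environment name into .python-version at
# each configured path, but never clobber a file that already exists.
def write_python_version(env_name, paths)
  paths.each do |dir|
    file = File.join(dir, ".python-version")
    next if File.exist?(file)
    File.write(file, env_name + "\n")
  end
end
```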