geerlingguy / ansible-for-kubernetes
660 stars · 47 watchers · 306 forks · 107 KB

Ansible and Kubernetes examples from Ansible for Kubernetes Book

Home Page: https://www.ansibleforkubernetes.com

License: MIT License

Go 13.04% Shell 79.98% Dockerfile 2.51% Ruby 4.47%
ansible kubernetes devops book automation python go infrastructure scalability

ansible-for-kubernetes's People

Contributors

geerlingguy


ansible-for-kubernetes's Issues

[section_request] Use with k8s-compatible software like k3s

A reader asked the following:

Wanted to recommend maybe adding a chapter or two on k3s and Rio? Maybe Rancher?

It might be good to at least give an example of use with k3s, since it's slightly easier to deploy in some environments, and is lighter weight.

I haven't looked at Rio at all, but depending on its features it could be interesting. I used Rancher a little a couple of years ago, so I'm guessing a lot has changed there, but it seems to be more of a 'service mesh' type tool, and I'm not 100% sure where Ansible would fit into that scenario (maybe just managing Rancher).
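For reference, k3s installs as a single binary via its official install script at https://get.k3s.io, so an Ansible-driven setup can be quite short. A minimal sketch (hypothetical; the creates path assumes k3s's default install location):

```yaml
# Hypothetical sketch: install k3s on a host via its official install script.
- name: Install k3s using the official install script.
  shell: curl -sfL https://get.k3s.io | sh -
  args:
    creates: /usr/local/bin/k3s  # assumption: default binary location, makes the task idempotent
```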

Alternative for virtualbox

All the examples assume Vagrant will use VirtualBox.
However, Debian has decided against including VirtualBox in its version 10 (https://wiki.debian.org/VirtualBox , https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=794466).

The solution is fairly simple: make Vagrant use libvirt instead, so I think it's worth at least a mention.

https://computingforgeeks.com/using-vagrant-with-libvirt-on-linux/
https://stackoverflow.com/questions/42155213/using-vagrant-to-set-up-a-vm-with-kvm-qemu-without-virtualbox
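For context, a minimal Vagrantfile targeting the vagrant-libvirt provider might look like the following sketch (the box name and resource values are assumptions; the plugin is installed with `vagrant plugin install vagrant-libvirt`):

```ruby
# Hypothetical sketch: a Vagrantfile using libvirt/KVM instead of VirtualBox.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/debian10"  # assumption: any libvirt-compatible box works here
  config.vm.provider :libvirt do |lv|
    lv.memory = 2048
    lv.cpus = 2
  end
end
```

Bringing the VM up with `vagrant up --provider=libvirt` (or exporting `VAGRANT_DEFAULT_PROVIDER=libvirt`) then avoids VirtualBox entirely.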

[section_request] AWS EKS cluster with CloudFormation

Just adding an issue here so I can better track any follow-up work. I'm trying to finish up this section (corresponding to the cluster-aws-eks example in this repository) in chapter 5 this week, and I like having an issue open to commit changes against.

Hash mismatch when downloading docker

Hi, I'm in chapter 4 of this book. I have created three VMs using Vagrant and downloaded the source code from this repo for chapter 4. When running ansible-playbook -i inventory main.yml I see the following error:

[screenshot of the hash-mismatch error]

I couldn't copy and paste from my terminal window, sorry. This is the only task the playbook fails on; the other steps are successful.

[section_request] Operator SDK examples

See the Ansible User Guide for the Operator SDK. (Looks like the docs link might be changing to https://sdk.operatorframework.io/docs/).

I would like to have a very basic Ansible-based Operator walkthrough (similar in style to how you generate a memcached operator using Operator SDK), and the hello-go app should be easy enough for this use case.

Later on I could make a hello-go-advanced app that deals with a database or some other persistent data, and has some sort of scheduled backup-to-S3 functionality, that sort of thing, and the operator could evolve into an 'advanced hello go operator' in a separate example.

I could also walk through how a more advanced Operator like one of the following is built:

[section_request] Drupal + MariaDB Ansible-based operators working in tandem

I have a Drupal Operator and a MariaDB Operator, both of which can function independently without issue. But for one example, I want to show how two operators can work together to build a more scalable system, streamlining the maintenance of each independent component (database and application), especially since most organizations may be able to use something like a database operator (MariaDB) across multiple unrelated projects if they generalize their operators.

I'm still working on some features to make this example more applicable to real-world use cases (e.g. allowing clustering in the MariaDB operator—as of this comment it's currently a single-pod-with-PV setup, which is not truly HA or HP).

But it would be good to have an example of how Ansible can deploy two operators (both built with Ansible as well!) to run a robust application in Kubernetes.

Typo 'specilizes'

In the first sentence in the section "Scaling with Ansible's k8s_scale module" in Ch.2.

Initial Go 'Hello World' example

This example will be used for an extremely brief introduction to the Go language, early in the book, and will be built upon for one or two other examples down the line. I would like to have something extremely lightweight and understandable, and have a simple test.

[section request] multiple k8s clusters management using ansible

Hi Jeff,

I work on several clusters at the same time, and I think one of the difficulties is that while they all use the same playbooks, there are some subtle differences because they target different environments (say development, staging, production) or perhaps different locations (Europe, USA, Asia).
It requires a particular organization of the manifests to deploy each cluster's specific differences.

I think that could be interesting.
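One common pattern for this (a sketch, not the book's prescription) is a per-environment inventory tree, with shared playbooks and per-environment group_vars carrying the subtle differences:

```text
inventories/
  development/
    hosts
    group_vars/
      all.yml        # e.g. cluster endpoint, replica counts for dev
  staging/
    hosts
    group_vars/
      all.yml
  production/
    hosts
    group_vars/
      all.yml
```

Running `ansible-playbook -i inventories/production main.yml` then selects the environment while the playbook itself stays identical.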

Possibly wrong description in task

In Ch.3 section "Running a local container registry" (very interesting section btw!) there's a task that looks like this:

tasks:
  - name: Ensure the Docker registry container is present.
    docker_image:
      name: '{{ registry_image }}'
      source: pull

But I think the name should be "Ensure the Docker registry image is present."

Typo "it's" in Ch.2

In the section "Ansible 101" there's "known for it's simplicity" where it should be "its".

(BTW hope these super minor things are OK with you - I am loving the book and how much I'm learning).

Add CI coverage for cluster-aws-eks example

As the title says, add CI coverage (as much as possible) to the cluster-aws-eks example.

Since it relies on CloudFormation templates, maybe the main thing to do would be to validate them, run ansible-lint on the playbook, and maybe something like --syntax-check. Otherwise I'd have to wire up a live AWS account to Travis CI, and I'm loath to do that, especially when clusters cost $0.20/hour!

Issue in Chapter 4 - Building K8s clusters with Ansible

Not sure if it's a problem more people are having, but:

  • I reach the section "Running the cluster build playbook"
  • I run the playbook ansible-playbook -i inventory main.yml
  • The playbook can't connect to any of the VMs using the IPs 192.168.7.x

This is what I get in verbose from ansible:

<192.168.7.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ben/.vagrant.d/insecure_private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o ControlPath=/home/ben/.ansible/cp/236294e04e 192.168.7.4 '/bin/sh -c '"'"'echo ~vagrant && sleep 0'"'"''
<192.168.7.2> (255, b'', b'Received disconnect from 192.168.7.2 port 22:2: Too many authentication failures\r\nDisconnected from 192.168.7.2 port 22\r\n')
fatal: [kube1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Received disconnect from 192.168.7.2 port 22:2: Too many authentication failures\r\nDisconnected from 192.168.7.2 port 22",
"unreachable": true
}
...

The only way I could make it work was to record the hosts in the ~/.ssh/config file:

Host 192.168.7.2
Hostname 192.168.7.2
User vagrant
IdentityFile ~/.vagrant.d/insecure_private_key
IdentitiesOnly yes
Port 22
....

I'm unsure whether it's a problem with my local config, or maybe my Vagrant version.
Anyway, I thought I'd share it; hopefully it helps someone with the same issue.
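An alternative that avoids editing ~/.ssh/config is to set the equivalent options in the Ansible inventory itself; a sketch (values assumed from the Vagrant defaults above). The key fix for 'Too many authentication failures' is usually IdentitiesOnly=yes, which stops ssh from offering every key loaded in the agent before the Vagrant key:

```yaml
# Hypothetical inventory vars replicating the ~/.ssh/config workaround.
all:
  vars:
    ansible_user: vagrant
    ansible_ssh_private_key_file: ~/.vagrant.d/insecure_private_key
    ansible_ssh_common_args: "-o IdentitiesOnly=yes"
```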

Super interesting book by the way 👍
Config:

Ubuntu 18.04.4
Vagrant 2.2.6
Virtualbox 6.0.18
ansible 2.9.6
python version = 3.6.9

Extraneous 'a' in Ch3

In Ch.3 section "Building images using Ansible without a Dockerfile" there's an extraneous indefinite article 'a':

But most real-world apps require a more complexity.

requirements.yml style question

requirements.yml needs to be updated to your new Galaxy structure to include k8s:

from: - name: geerlingguy.kubernetes
to: - name: geerlingguy.k8s.kubernetes

Typo 'containter' in Ch.1

In section "Running Hello Go in Docker", there's "containter" in the Information note:

This book will mostly use Docker in its examples due to its ongoing popularity, but know there are other containter runtimes worth investigating

hello-go fails to run in alpine

Chapter 1, page 6, building the Docker container. The first line in the Dockerfile is:

FROM golang:1-alpine as build

With the command
docker build -t hello-go .

Output
Error parsing reference: "golang:1-alpine as build" is not a valid repository/tag: invalid reference format
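That particular parse error is typically a sign the local Docker engine predates multi-stage builds: the `AS <stage>` syntax requires Docker 17.05 or newer, so upgrading Docker usually resolves it. For reference, a minimal multi-stage sketch (illustrative only, not necessarily the book's exact Dockerfile):

```dockerfile
# The AS-stage syntax below needs Docker >= 17.05 (multi-stage build support).
FROM golang:1-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /hello-go .

# Second stage: copy only the compiled binary into a small runtime image.
FROM alpine:3
COPY --from=build /hello-go /usr/local/bin/hello-go
ENTRYPOINT ["hello-go"]
```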

[section_request] Testing Kubernetes with Molecule, Kind, and Ansible

I was just thinking: I don't have a chapter dedicated to testing Kubernetes infrastructure configurations, but that's one thing that's actually easier to do with Molecule + Ansible than (IMO) with most other solutions, especially in CI. I already have things working in GitHub Actions and Travis CI, so I could do a chapter that introduces:

  • Using Kind to create a lightweight Kubernetes environment locally or in CI.
  • Using Molecule to test Ansible playbooks and roles.
  • Using Molecule to create a Kind cluster and test k8s resources on it with Ansible (similar to how I use it for this book's playbooks in some cases, and for tower-operator and other K8s projects).
  • Using Molecule in GitHub Actions (and/or Travis CI) to run tests using CI for your project.
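A rough molecule.yml sketch for that setup might look like the following (the section names are Molecule's standard ones; using the delegated driver and creating the Kind cluster from a create playbook is one possible arrangement, not a confirmed recipe):

```yaml
# Hypothetical molecule/default/molecule.yml using the delegated driver;
# a create.yml playbook would run 'kind create cluster' and destroy.yml would delete it.
dependency:
  name: galaxy
driver:
  name: delegated
platforms:
  - name: kind-test-cluster
provisioner:
  name: ansible
verifier:
  name: ansible
```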

Prepare for Ansible 2.10 and new Kubernetes Collection

I've been working on the Ansible Kubernetes Collection for a bit, and in Ansible 2.10, it will become the source of the k8s-related modules and plugins.

As such, there are a few parts of the book (and probably a number of playbooks) which should be tweaked to include the consumption and use of the kubernetes.core collection (formerly community.kubernetes).

Even though some of the modules that were in Ansible <= 2.9 may be included in a new Ansible distribution and work similarly to how they did in older versions, all the examples in the book should directly consume the kubernetes.core collection, to minimize surprises.

There may be other things that need to change, e.g. consuming the new aws collection (when it becomes available) for the AWS-related examples—and same goes for GCP and any other non-core collections.
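Concretely, consuming the collection might look like this sketch (the version constraint is an assumption, not a tested pin):

```yaml
# requirements.yml — hypothetical pin of the Kubernetes collection.
collections:
  - name: kubernetes.core
    version: ">=1.0.0"  # assumption: pick whatever release the book tests against
```

Playbooks would then call modules by their fully qualified names, e.g. kubernetes.core.k8s, to avoid ambiguity with the older built-in modules.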

Use k8s 'wait' parameter instead of separate 'kubectl wait' task

I think I may have originally worked on the example in chapter 4 when the wait parameter was not in Ansible core's k8s module, and so I elected to use a separate kubectl wait command to wait for the hello-k8s pods to be in the 'Ready' state.

The wait parameter is simpler and should be used instead of the separate command invocation.
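In sketch form, the replacement might look like the following (the manifest path and timeout are assumptions):

```yaml
# Hypothetical: deploy and wait in a single task via the k8s module's wait support,
# replacing a separate 'kubectl wait' command task.
- name: Deploy hello-k8s and wait for it to become ready.
  k8s:
    state: present
    src: files/hello-k8s.yml  # assumption: path to the deployment manifest
    wait: true
    wait_timeout: 300
```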

CI test for local bare metal K8s cluster with Docker (in Docker) failing because of Travis CI AUFS problem

Related: geerlingguy/raspberry-pi-dramble#166

Symptom:

TASK [geerlingguy.kubernetes : Configure Flannel networking.] ******************
544failed: [kube1] (item=kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml) => {"ansible_loop_var": "item", "changed": false, "cmd": ["kubectl", "apply", "-f", "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml"], "delta": "0:00:01.071768", "end": "2019-12-16 01:18:21.672106", "item": "kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml", "msg": "non-zero return code", "rc": 1, "start": "2019-12-16 01:18:20.600338", "stderr": "unable to recognize \"https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml\": Get https://192.168.7.2:6443/api?timeout=32s: dial tcp 192.168.7.2:6443: connect: connection refused

Deeper diagnosis from kubelet's logs:

error creating aufs mount

Note that the related issue linked at the top of this post was not a problem until some time recently. Maybe the Travis CI platform changed a bit?

One thing to consider would be reinstalling a newer version of Docker since the version shipped in the python environment is like 18.06 or something...?

Port is variable in Chapter 4 - Building K8s clusters with Ansible

Not sure where the port is defined for the hello-kubernetes page service, but in my case it's different from the one in the book (port 30297 instead of 30606):

This time it works! With the two networking adjustments, we've fixed our Kubernetes cluster. If you open a browser window, you should even be able to see the hello-kubernetes page by accessing the NodePort on any of the three servers, e.g. http://192.168.7.3:30606 .

Maybe you can add a final task at the end of test-deployment.yml to avoid confusion:

- name: Print out the kubernetes-hello page URL.
  debug:
    msg: "http://{{ ansible_host }}:{{ port }}/"
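For context, Kubernetes assigns a random NodePort from the 30000–32767 range unless the Service spec pins one, which is why different readers see different ports. A pinned-port sketch (resource names and ports are assumptions):

```yaml
# Hypothetical Service pinning the NodePort so it matches the book's text.
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: NodePort
  selector:
    app: hello-kubernetes
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30606  # without this line, Kubernetes picks a random port in 30000-32767
```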

Typos 'primatives' and 'you' in Ch.4

In Ch.4 section "Building a local Kubernetes cluster on VMs" there's the following:

The linked project walks you through the primatives, and holds you hand ...

Issue with backports.ssl-match-hostname for Python docker module

This might be too specific an error but it happened to me. In Ch.3 section "docker_image module" we start with the playbook for building the Docker image, using the docker_image module.

Running the playbook at this point (before moving on to the next section) resulted in an error for me:

"msg": "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on penguin's Python /usr/bin/python. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter, for example via `pip install docker` or `pip install docker-py` (Python 2.6). The error was: No module named ssl_match_hostname"    

I finally solved this with a slightly extreme install of the backports.ssl-match-hostname module as described in this SO solution: https://stackoverflow.com/a/51071841/384366

It might be only me that experienced this, but I'm flagging it just in case it affects others and you want to put a note in this section.

spotted few typos

I read the first version, and spotted a place where Kubernetes was written "kuberenetes" and Ansible was written "anible".
Sorry, I forgot to note which chapter it was.

Support: cleanest way to assign more memory to kube1

In Chapter 4 of A4K8s, you describe the Vagrantfile used to create a local Virtualbox-based cluster.

I'm using the configuration to study a number of topics, including Istio. This will require more memory on kube1 than the default 2Gi. As currently written, all 3 hosts get their memory in the first config block:

  config.vm.provider :virtualbox do |v|
    v.memory = 2048
    v.cpus = 2
    v.linked_clone = true
    v.customize ['modifyvm', :id, '--audio', 'none']
  end

I'd like to add a symbol (say, :memory) to the boxes array for memory, potentially just for kube1.

What's the cleanest way to change the script so that I can have 4Gi memory for kube1, but the default 2Gi for kube2 and kube3?
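One way (a sketch, assuming a boxes array shaped like the book's) is to add an optional :memory key per box and set memory in a per-machine provider block, which overrides the global one:

```ruby
# Hypothetical sketch: per-box memory override with a 2 GB default.
boxes = [
  { name: "kube1", ip: "192.168.7.2", memory: 4096 },
  { name: "kube2", ip: "192.168.7.3" },
  { name: "kube3", ip: "192.168.7.4" },
]

Vagrant.configure("2") do |config|
  boxes.each do |box|
    config.vm.define box[:name] do |node|
      node.vm.network :private_network, ip: box[:ip]
      node.vm.provider :virtualbox do |v|
        # Per-machine provider settings override the global provider block.
        v.memory = box.fetch(:memory, 2048)
      end
    end
  end
end
```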

Warn users that the `k8s_info` module was `k8s_facts` before Ansible 2.9

Thanks a lot for the great read! I'm not sure if this should be in the scope of the book, since this might be an issue with other modules as well, but just thought I'd mention it. Chapter 4 uses the k8s_info module to gather service facts, which was called k8s_facts before Ansible 2.9.

I'm executing the playbooks from a repo with GitLab CI rather than locally, using the ansible/ansible-runner:latest image from https://hub.docker.com/r/ansible/ansible-runner. I'm not sure this is the greatest idea (I thought I'd eventually use the added runner capabilities), but anyway - the latest runner image is currently still at Ansible 2.8 - so the deployment test in Chapter 4 was failing with ERROR! no action detected in task.
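For readers hitting this, the task itself is unchanged apart from the module name; a sketch (the service name and namespace are assumptions):

```yaml
# This module is k8s_info in Ansible >= 2.9; the same module was k8s_facts in 2.8 and earlier.
- name: Gather facts about the hello-go service.
  k8s_info:
    api_version: v1
    kind: Service
    name: hello-go
    namespace: default
  register: hello_go_service
```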

Add versions to requirements.yml examples

In a production Ansible configuration, it is best practice to pin the desired versions of the Galaxy roles that are downloaded to support the playbooks. I recommend adding versions to the roles in the requirements.yml files in the examples as you get closer to the 1.0 release.
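In practice that just means adding a version key per role; a sketch (the version numbers are placeholders, not tested releases):

```yaml
# requirements.yml with hypothetical pinned role versions.
roles:
  - name: geerlingguy.kubernetes
    version: "5.0.2"  # placeholder version, not a tested pin
  - name: geerlingguy.docker
    version: "2.9.0"  # placeholder version, not a tested pin
```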

Specific kubectl describe command for pod missing in Chapter 1

From a reader's LeanPub comment:

Hi Jeff, Chapter 1 - Hello World! page 11 Right at the bottom of the page you've got a kubectl describe command that continues onto the second page, but it's missing part of the command, I think it should have kubectl describe pod.

[erratum] Possible Typo. Crate -> Create.

Book version 0.3.
Chapter 2: Automation brings DevOps bliss
Section: Managing Minikube
Page: number 27.
Third paragraph.

Current sentence, with the assumed error:
"Crate a new directory hello-go-automation (next to the hello-go directory) with the same inventory file as used in the previous example, and add a main.yml playbook."

Proposed correction:
"Create a new directory hello-go-automation (next to the hello-go directory) with the same inventory file as used in the previous example, and add a main.yml playbook."

[section_request] Tower/AWX webhooks (with GitLab) for GitOps

I had a note to do this elsewhere, so I figured I'd put it in the official repo. Basically, one chapter that I might or might not work on pre-1.0 is a GitOps example that uses Ansible Tower (AWX really, since I want people to be able to run it on their own without having to get a license) to build a container, test it, push it to a registry, then update the application deployment if all goes well any time a push to master happens in the GitLab repository.

This example necessarily has a lot of moving parts, so I'm kind of nervous I might not be able to build it out in CI (so it would be an example I'd have to manually integrate every now and then and update outside of just code), but it's a valuable example because there are many use cases where using Tower/AWX (now part of Ansible Automation Platform) outside of the cluster might be important (especially for integration with organizational automation management/RBAC), and a tool like ArgoCD or even GitLab's own pipelines might not be a great fit.

My idea (partially inspired by this post on GitLab + Tower + IIS deployment) is to have:

  1. A Kubernetes cluster with an app (like hello-go) deployed to it.
  2. A GitLab instance running with an access token in AWX and a repo with a webhook configured for an AWX job.
  3. An AWX instance running with a kubeconfig to be able to manage the cluster, and a job that can:
    1. Check out code from the GitLab repo.
    2. Build a new container.
    3. Run the container and run a test against it (e.g. curl).
    4. If successful, push the container image to a registry (GitLab's?).
    5. If successful, update the Kubernetes deployment with the new image.
       • (Wait for that deployment to update using the k8s module's wait parameter.)
    6. Run the same test against it (e.g. curl).

Reference to `minikube_status` as command

In Ch.2 section "Managing Minikube", there's:

"If there is no output from the minikube_status command"

I think that this perhaps should be minikube status, without the underscore, as minikube_status is the register rather than the command.
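For clarity, the pattern in question looks roughly like this (a sketch based on the quoted text, not the book's verbatim playbook):

```yaml
# 'minikube status' is the command; 'minikube_status' is only the registered variable.
- name: Check Minikube's status.
  command: minikube status
  register: minikube_status
  changed_when: false
  failed_when: false
```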

[section_request] deploy WAR to OpenShift 4

One ops team takes care of the corporate OpenShift clusters, yet tens to hundreds of Java dev teams want to use them. Please describe a portable workflow for an application development pipeline.

s/you/your in Chapter 2

Chapter 2, Managing Kubernetes with Ansible/Managing Minikube:
“(at least as of this writing—you could write you own!)” should be “(at least as of this writing—you could write your own!)”

ePub Dark Icons on Dark Background

Hello, I noticed something while reading this in Google Books, so I just wanted to mention it. It's not a big deal, but I figured I'd let you know anyway, especially since the link to the issue tracker is on that page.

The tip icons (at least) are invisible when reading on dark background like the screenshot below.

Perhaps it would be good to outline them? Even making them a dim grey might be sufficient.

[screenshots: the tip icons are invisible against the dark background]

Ambiguous location for relative directory name for docker build command

In Ch.2, section "Building container images in Minikube with Ansible", the command to build the image if it's not already built is:

docker build -t {{ image_name }} ../hello-go

This is fine, but I had to double-check that I was in the right place, and couldn't find any clear indication of where in the directory tree I should be. The ../hello-go path assumes I'm in a directory parallel to hello-go/, and I wasn't. Perhaps it's worth being more explicit?
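One hedged way to make the location unambiguous is to anchor the path to Ansible's built-in playbook_dir variable, so the build context no longer depends on the shell's working directory (a sketch, assuming the book's directory layout):

```yaml
# Hypothetical: resolve the build context relative to the playbook file itself.
- name: Build the hello-go image if it is not already built.
  command: docker build -t {{ image_name }} {{ playbook_dir }}/../hello-go
```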

Mirantis misspelled in Chapter 1

In this paragraph

Docker vs. Podman: Docker users wonder about the future of Docker CE and moby, the engine that runs Docker containers. Events like the sale of ‘Docker Enterprise’ to Marantis in 2019 did nothing to quell fears about Docker’s future, and many developers who rely on containers for their application deployment have been seeking alternative container builders and runtimes.

The correct spelling is Mirantis
