Deploy AWS EKS via a Jenkins job using terraform. The idea here is to easily deploy EKS to AWS, specifying some settings via pipeline parameters.
`eksctl` has come along since I wrote this repo, and it is now my preferred way of deploying EKS, so I created an `eksctl`-based deployment here. I am still maintaining this repo, but have moved most of the docs to my `eksctl` repo (to save duplication).
EC2 instances are used as EKS workers via a node group. An autoscaling group is defined so the number of EC2 instances can be scaled up and down.
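As a rough illustration (not this repo's exact code), a managed node group with scaling limits looks something like the following in Terraform; the cluster, IAM role and subnets it references are assumed to be defined elsewhere:

```hcl
# Sketch of a managed node group; resource names and values are illustrative.
# aws_eks_cluster.demo, aws_iam_role.eks_nodes and aws_subnet.demo are assumed
# to be defined elsewhere in the Terraform code.
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.demo.name
  node_group_name = "eks-demo-workers"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = aws_subnet.demo[*].id
  instance_types  = ["m4.large"]

  # The underlying autoscaling group is sized from this block.
  scaling_config {
    desired_size = 3
    max_size     = 3
    min_size     = 1
  }
}
```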
This is based on the eks-getting-started example in the terraform-provider-aws GitHub repo.
Terraform docs are here.
AWS docs on EKS are here.
Some changes to the aws provider example:
- A lot of the settings have been moved to Terraform variables, so we can pass them in as Jenkins parameters (see the sketch after this list):
  - `aws_region`: the region to deploy to (default `eu-west-1`).
  - `cluster-name`: see below (default `demo`).
  - `vpc-network`: network part of the VPC; you can have a different network for each of your VPC/EKS clusters (default `10.0.x.x`).
  - `vpc-subnets`: number of subnets/AZs (default 3).
  - `inst-type`: type of instance to deploy as the worker nodes (default `m4.large`).
  - `num-workers`: number of workers to deploy (default 3).
- The cluster name has been changed from `terraform-eks-demo` to `eks-<your-name>`; this means multiple EKS instances can be deployed, using different names, from the same Jenkins pipeline. There does not seem to be any point in including `terraform` (or even `tf`) in the naming; how it is deployed is irrelevant IMHO.
- The security group providing access to the k8s API has been adapted so you can pass CIDR addresses to it, letting you customise how it can be accessed. The provider example got your public IP from http://ipv4.icanhazip.com/; you are welcome to continue using this!
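As a sketch of how the variables above might be declared (names and defaults follow the list above; the actual variables.tf in this repo may differ slightly):

```hcl
# Illustrative variables.tf matching the Jenkins parameters described above.
variable "aws_region" {
  description = "Region to deploy the cluster into"
  default     = "eu-west-1"
}

variable "cluster-name" {
  description = "Cluster is named eks-<cluster-name>"
  default     = "demo"
}

variable "vpc-network" {
  description = "Network part of the VPC (e.g. 10.0.x.x)"
  default     = "10.0"
}

variable "vpc-subnets" {
  description = "Number of subnets/AZs to create"
  default     = 3
}

variable "inst-type" {
  description = "Instance type for the worker nodes"
  default     = "m4.large"
}

variable "num-workers" {
  description = "Number of worker nodes to deploy"
  default     = 3
}
```

The Jenkins job can then pass its parameters straight through on the command line, e.g. `terraform apply -var cluster-name=$CLUSTER_NAME -var num-workers=$NUM_WORKERS ...`.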
Jenkins needs the following Linux commands, which can either be installed via the Linux package manager or, in the case of `terraform`, downloaded:
- terraform (0.12.x)
- jq
- kubectl
The pipeline uses a terraform workspace for each cluster name, so you should be safe deploying multiple clusters via the same Jenkins job. Obviously state is maintained in the Jenkins job workspace (see To do below).
Several roles are required, which can be confusing, so I have documented them here in simple terms.
Since EKS manages the Kubernetes control plane and its infrastructure, there are no masters visible in EKS. When you enter `kubectl get nodes` you will just see the worker nodes, which are implemented either via autoscaling groups (old method) or via node groups (new in EKS 1.14). With other Kubernetes platforms, this command will also show the master nodes. Note that as well as using node groups, you can now use Fargate, which also shows up as worker nodes in the `kubectl get nodes` output.
I am just going to discuss the roles required with Kubernetes 1.17 on EKS.
Required roles:
- Cluster service role: this is associated with the cluster (and its creation). It allows the Kubernetes control plane to manage AWS resources on behalf of the cluster. The policy `AmazonEKSClusterPolicy` has all the required permissions, so it is best to use that (unless you require a custom setup). The service `eks.amazonaws.com` needs to be able to assume this role (trust relationship). We also attach the policy `AmazonEKSVPCResourceController` to the role, to allow security groups for pods (a new EKS 1.17 feature; see this for details).
- Node worker role (or, specifically, node group role): this allows worker nodes to be created for the cluster via an auto scaling group (ASG). The more modern node group replaces the older method of having to create all the resources manually in AWS (ASG, launch configuration, etc.). There are three policies that are typically used (interestingly these have not changed since node groups were introduced):
  - `AmazonEKSWorkerNodePolicy`
  - `AmazonEKS_CNI_Policy`
  - `AmazonEC2ContainerRegistryReadOnly`
It appears that having the `aws-auth` ConfigMap in place allows nodes to be added to the cluster automatically.
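For reference, a minimal Terraform sketch of the two roles above might look like the following (resource and role names are illustrative, not necessarily what this repo uses):

```hcl
# Hypothetical cluster service role: trusts eks.amazonaws.com.
resource "aws_iam_role" "eks_cluster" {
  name = "eks-demo-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "cluster_policy" {
  role       = aws_iam_role.eks_cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role_policy_attachment" "vpc_resource_controller" {
  role       = aws_iam_role.eks_cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
}

# Hypothetical node group role: trusts ec2.amazonaws.com (the worker instances).
resource "aws_iam_role" "eks_nodes" {
  name = "eks-demo-node-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Attach the three standard node group policies.
resource "aws_iam_role_policy_attachment" "node_policies" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
  ])

  role       = aws_iam_role.eks_nodes.name
  policy_arn = each.value
}
```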
I tried to keep it simple as it is a proof of concept/example. It probably needs these enhancements:
Storing the Terraform state remotely (e.g. in an S3 bucket) is the recommended method, as keeping the state in the workspace of the Jenkins job is a bad idea! See the Terraform docs for this. You can probably add a Jenkins parameter for the bucket name, and get the Jenkins job to construct the backend config for the state before running terraform.
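A minimal backend block for S3 state might look like this (bucket and key names are placeholders; backend blocks cannot interpolate variables, so the Jenkins job would supply real values via `terraform init -backend-config=...`):

```hcl
# Hypothetical remote state config; real bucket/key would come from Jenkins parameters.
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "eks/terraform.tfstate"
    region = "eu-west-1"
  }
}
```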
Similar to state, locking (e.g. via a DynamoDB table) ensures that multiple concurrent runs of terraform cannot happen. See the Terraform docs for this. Again you might wish to pass the DynamoDB table name as a Jenkins parameter.
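Locking can then be added to the same (hypothetical) backend block by pointing it at a DynamoDB table:

```hcl
# Same illustrative backend, extended with a DynamoDB table for state locking.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "eks/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"
  }
}
```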