julianpistorius / kubeadm-terraform

This project forked from terraref/kubeadm-terraform

Terraform plan that uses data-8's kubeadm bootstrap for Kubernetes on OpenStack

License: BSD 3-Clause "New" or "Revised" License

Languages: HCL 88.25%, Shell 11.75%

kubeadm-terraform's Introduction

Kubeadm Bootstrapper with Terraform for OpenStack

This package enhances the data-8 kubeadm bootstrap process by bringing in Terraform to provision the network, hosts, and storage for a Kubernetes cluster in OpenStack. It is largely based on the brilliantly simple article by Andrea Zonca of the San Diego Supercomputer Center: Deploy scalable JupyterHub with Kubernetes on Jetstream

How to Build a Cluster

Check out a copy of this repo and cd into the top directory.

Configure Variables

You will need to set some of the variables found in variables.tf. The best way to do this is to create a .tfvars file in the configs directory; that directory is listed in .gitignore to make this easy. Entries in .tfvars files take the form name = "value".

Most of the variables are self-explanatory, but here is a summary with some detail on the less obvious ones.

Variable: Description
env_name: Root name for this cluster; used to name nodes and networks.
pubkey: Path to a public key file which will be used to generate the key pair.
privkey: Path to the corresponding private key file which will be used to access the hosts.
master_flavor: Name of the OpenStack instance flavor to use for the master node.
image: Name of the OS image used to initialize master nodes. So far, this has been tested only on Ubuntu 16.
worker_flavor: Name of the OpenStack instance flavor to use for the worker nodes.
storage_flavor: Name of the OpenStack instance flavor to use for the storage nodes.
external_network_id: ID of the network that has the gateway to the internet.
pool_name: Name of the pool to which the floating IPs belong (usually the external network's name).
availability_zone: Name of the OpenStack availability zone where the hosts should be provisioned.
worker_count: Number of workers to provision.
worker_ips_count: Number of workers that should be assigned an external IP address.
docker_volume_size: All nodes will have external block storage attached to use as the Docker storage base (/var/lib/docker); size of these volumes in GB.
storage_node_count: Number of nodes to optionally provision to host Ceph shared storage. This needs to be an even number.
storage_node_volume_size: Size of the storage attached to each storage node, in GB.
dns_nameservers: A list of IP addresses of DNS name servers available to the new subnet.
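As an illustration, a minimal .tfvars file might look like the following sketch. Every value here is a placeholder; the file name, flavors, image name, and IDs are assumptions you must replace with values from your own OpenStack project.

```hcl
# configs/mycluster.tfvars -- example values only; adjust for your cloud.
env_name            = "demo-cluster"
pubkey              = "~/.ssh/id_rsa.pub"
privkey             = "~/.ssh/id_rsa"
master_flavor       = "m1.medium"
worker_flavor       = "m1.large"
storage_flavor      = "m1.medium"
image               = "Ubuntu 16.04"
external_network_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
pool_name           = "public"
availability_zone   = "nova"
worker_count        = 3
worker_ips_count    = 1
docker_volume_size  = 50
storage_node_count  = 2
storage_node_volume_size = 100
dns_nameservers     = ["8.8.8.8", "8.8.4.4"]
```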

Initialize Terraform

This recipe uses Terraform to provision the network and hosts and to execute the steps that set up your cluster. You will need to install Terraform on your local machine.

Terraform uses a plug-in architecture. You will need to instruct it to download and install the plugins used in this setup.

In the root directory of this repo execute the following command:

% terraform init

Set your OpenStack Credentials

Terraform makes use of the OpenStack environment variables set by the RC script that you can download from your OpenStack portal. Download this file and source it in your shell; it will prompt you for your password.
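As a quick sanity check before running Terraform, you can verify that the standard OS_* variables are present in your shell. This snippet is illustrative only; the variable names follow the usual OpenStack RC file conventions.

```shell
# Report any of the standard OpenStack credential variables that are unset.
for var in OS_AUTH_URL OS_USERNAME OS_PASSWORD OS_PROJECT_NAME; do
  if [ -z "$(printenv "$var")" ]; then
    echo "missing: $var"
  fi
done
```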

Install jq

The step that obtains the join token for the workers to connect to the Kubernetes master requires the jq JSON processor. Please ensure that it is installed on the host where you are executing the terraform command.
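To illustrate the kind of extraction involved (the JSON below is made up for the example; the actual token retrieval is handled by the recipe's scripts), jq pulls a single field out of a JSON document like this:

```shell
# Extract one field from a JSON document with jq (-r prints the raw string).
echo '{"token": "abcdef.0123456789abcdef", "expires": "23h"}' | jq -r .token
```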

Build the Cluster

Now comes the easy part. To build your Kubernetes cluster, just issue this command in the root folder of this repo:

% terraform apply -var-file="configs/<<your .tfvars file>>"

Using the Cluster

You now have a running cluster with the Helm Tiller installed. To interact with it, log into your master node and run kubectl commands from there.

External IP Addresses

The master node, as well as the specified number of worker nodes, will have external IP addresses assigned. For your convenience, these nodes are all labeled external_ip=true, so you can select them with kubectl get nodes -l external_ip=true.

NFS Provisioner

If you configured a storage node, it will be provisioned to run the NFS provisioner. This will run a lightweight NFS server in your cluster for persistent volume claim support.
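For example, a workload could request storage from the provisioner with a claim along these lines. This is a sketch: the storageClassName shown here is an assumption, so check which class was actually created in your cluster with kubectl get storageclass before using it.

```yaml
# Hypothetical PVC; the storageClassName depends on how the NFS
# provisioner was deployed in your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
```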

Resizing the Cluster

Terraform makes this easy. Just adjust the value of worker_count in your .tfvars file and reissue the

% terraform apply -var-file="configs/<<your .tfvars file>>"

command. Terraform will figure out what needs to change and run exactly the required steps.

If you reduce the number of worker nodes it will remove them from the cluster before deleting the underlying compute instance.

You can expand storage by adding new storage hosts. Reducing the number of storage nodes, however, is not expected to work.

Destroying the Cluster

If you want to release the resources allocated to your cluster, you can destroy the cluster with the terraform command:

% terraform destroy

kubeadm-terraform's People

Contributors: bengalewsky, bodom0015, craig-willis
