vmware-archive / tgik

Official repository for TGI Kubernetes (TGIK)!

License: Apache License 2.0


tgik's Issues

Episode idea: When to use managed Kubernetes infrastructure

We've been having some internal discussions about when to use managed Kubernetes infrastructure vs. rolling your own clusters. There are pros and cons to both approaches. Would love to hear Kris / Joe discuss when to use one vs. the other.

Suggestion - Use HackMD-it Browser Extension for Editing Show Notes

@jbeda @kris-nova @castrojo Glad to see all the HackMD love in the latest episodes! I've recently gone HackMD crazy (I use it for everything) and thought I'd suggest the HackMD-it browser extension that adds an Edit on HackMD button to Markdown files on GitHub (and GitLab).

HackMD-it for Chrome
HackMD-it for Firefox
HackMD-it issue tracker

Bonus link: CodiMD Helm Chart, the free-software version of HackMD, which has been great for my on-prem notes!

Not sure if this issue was a good way to leave TGIK "suggestion box" feedback, but feel free to close.

Enjoy!

Episode Idea: Ingress Controller Face Off

  • NGINX - The most commonly used.
  • Traefik - The one I'm most interested in. It's supposedly almost as fast as NGINX and throws Let's Encrypt into the mix too.
  • Others...

If that's too much content for one show, I'd definitely like to see a Traefik episode; a minimal example of the Ingress resource these controllers all consume is sketched below.
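
A minimal sketch (the host, service name, and ingress class value here are placeholders; the class annotation is what selects the controller):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"  # or "nginx"; picks which controller serves this Ingress
spec:
  rules:
  - host: example.mydomain.tld
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 80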

Episode idea: Knative

Would be super cool to get a walkthrough of Knative, along with your thoughts on the technology stack being used.
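
To give a feel for what such a walkthrough would cover, here is a rough sketch of a Knative Service manifest in the serving.knative.dev/v1 shape; the name and image follow the upstream hello-world sample rather than anything in this repo:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go  # sample image from the Knative docs
        env:
        - name: TARGET
          value: "TGIK"

Knative turns this single object into the underlying Deployment, autoscaling, and routing pieces, which is most of what makes the stack interesting to walk through.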

Episode 042: Tips and Tricks with Ballerina and Kubernetes

After building a .bal file with @kubernetes:* annotations, the Ballerina platform creates a Docker image as specified. I see that @kris-nova pushes the image to Docker Hub so it can be pulled by the Kubernetes cluster.

If you are using minikube as a local Kubernetes cluster, here is a tip:
Minikube can use the Docker image as a local artifact without pushing it to a remote registry first.
This is done by pointing your Docker client CLI at minikube itself:

eval $(minikube docker-env)

Also, make sure imagePullPolicy is not set to Always, or simply omit it, since the default value with Ballerina is IfNotPresent.
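
For reference, this is roughly where that policy lands in the manifest Ballerina generates (file name and values below are illustrative):

# kubernetes/hello-kubernetes_deployment.yaml (excerpt)
spec:
  template:
    spec:
      containers:
      - name: hello-kubernetes
        image: hello-kubernetes:latest
        imagePullPolicy: IfNotPresent  # anything other than Always lets the locally built image be used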

Briefly

minikube start
eval $(minikube docker-env)
ballerina build hello-kubernetes.bal
kubectl apply -f kubernetes/

@kris-nova Please let me know if I should open a pull request, and where (inside the README or in another file)?

Episode idea: logging/observability

It would be great to dive into the mechanics of collecting logs or other telemetry from applications in Kubernetes. There's been a good bit of discussion in the past about this (e.g. kubernetes/kubernetes#24677, kubernetes/kubernetes#42718), and even today the docs discuss several possible setups. It's easy to get confused!

It might also be worthwhile to explore how logging agents can get pod metadata from the Kubernetes API (e.g., Fluentd, Honeycomb) -- it's a pretty interesting application of the Kubernetes API's Watch mechanism.
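
As a taste of that Watch mechanism, a logging agent effectively does the equivalent of the following, just programmatically through a client library (the namespace here is made up):

# stream pod add/update/delete events for a namespace
kubectl get pods --namespace my-app --watch --output wide

# or hit the watch endpoint of the API directly
kubectl get --raw "/api/v1/namespaces/my-app/pods?watch=true"

Each event carries the full pod object, including the labels and annotations an agent uses to enrich log lines.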

TGIK Suggestion for Kubebuilder

Kubebuilder is a framework for building Kubernetes APIs using custom resource definitions (CRDs). It provides powerful libraries and tools to simplify building and publishing Kubernetes APIs from scratch.

It is a subproject under SIG API Machinery. More details can be found here.
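
For anyone who wants to poke at it before an episode, the basic workflow looks roughly like this (the domain, group, and kind below are just examples):

kubebuilder init --domain example.com
kubebuilder create api --group webapp --version v1 --kind Guestbook
make install   # generate and install the CRDs into the current cluster
make run       # run the controller locally against your kubeconfig

From there you fill in the generated types and the Reconcile loop.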

Timestamp descriptions for Episode 38

I started a timer right as the episode started, so these should be accurate to within a few seconds.

0:00 Hello and welcome
01:45 We’re streaming from Linux!
03:02 News and other exciting things in Kubernetes
05:25 100th TGIK star on github!
06:44 Operator Metering Framework now available
08:26 Microsoft + Github = Empowering Developers
09:17 Amazon EKS goes GA
12:35 Happy Birthday Kubernetes! (Helm comes up in here somewhere)

15:02 Let’s start: Installing Kata on Ubuntu on AWS, let’s dive in. (More Helm in here.)

20:55 Ok let’s dive in for real, into the console we go!
27:48 We’ve installed docker, let’s install kata-containers now
34:10 Ok let’s run our first kata container example, but first, what is OCI?
37:07 Let’s run our first kata container
41:42 Can’t get docker to use the runtime, let’s do a kata-check
45:45 Our cloud provider needs to support nested virtualization and AWS does not, so let’s try Azure.
50:47 Let’s look at what building it from source looks like
54:00 Let’s check on the Azure instance
59:05 Ok let’s install Kata in Azure
1:00:28 Configure Docker to use Kata
1:02:10 Run busybox
1:03:33 Rerun kata-check, same problem as in AWS
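
For reference, the 1:00:28 step (configuring Docker to use Kata) usually boils down to registering the runtime in /etc/docker/daemon.json and restarting Docker; the runtime path below is the common default and may differ per install:

{
  "runtimes": {
    "kata-runtime": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}

Then restart Docker and test it:

sudo systemctl restart docker
docker run --rm --runtime kata-runtime busybox uname -r   # should report the guest kernel, not the host's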

Episode 043: kops toolbox template clarification/example

kops toolbox template

@kris-nova, I wanted to clarify what I meant about kops toolbox being useful during the episode, because I was unable to express myself 200 characters at a time.

I was specifically referring to the template function of the toolbox.

Creating clusters with the command line is fine to start with but if you want to deploy the same cluster topology into different environments (for example: dev, stage or production) the templating function is really handy.

For my clusters I created a cluster_template.yaml similar to the one below (it's a bastion topology, so nodes are not exposed on the internet):

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: {{ .myClusterName }}.{{ .dnsZone }}
spec:
  # sts:AssumeRole is needed for kube2iam
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["sts:AssumeRole"],
          "Resource": ["*"]
        }
      ]
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: {{ .kopsStateStore }}/{{ .myClusterName }}.{{ .dnsZone }}
  encryptionConfig: false # until we figure out how to make `kops create encryptionconfig -f config.yaml` work (config file format unknown)
  etcdClusters:
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}a
      name: a
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}b
      name: b
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}c
      name: c
    enableEtcdTLS: {{ .etcdTLS }}
    name: main
    version: {{ .etcdVersion }}
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}a
      name: a
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}b
      name: b
    - encryptedVolume: true
      instanceGroup: master-{{ .awsRegion }}c
      name: c
    enableEtcdTLS: {{ .etcdTLS }}
    name: events
    version: {{ .etcdVersion }}
  iam:
    allowContainerRegistry: true
    legacy: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: {{ .kubernetesVersion }}
  masterInternalName: api.internal.{{ .myClusterName }}.{{ .dnsZone }}
  masterPublicName: api.{{ .myClusterName }}.{{ .dnsZone }}
  networkCIDR: {{ .myNetworkCIDR }}
  networking:
    calico: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: {{ .myNetworkPrefix }}.32.0/19
    name: {{ .awsRegion }}a
    type: Private
    zone: {{ .awsRegion }}a
  - cidr: {{ .myNetworkPrefix }}.64.0/19
    name: {{ .awsRegion }}b
    type: Private
    zone: {{ .awsRegion }}b
  - cidr: {{ .myNetworkPrefix }}.96.0/19
    name: {{ .awsRegion }}c
    type: Private
    zone: {{ .awsRegion }}c
  - cidr: {{ .myNetworkPrefix }}.0.0/22
    name: utility-{{ .awsRegion }}a
    type: Utility
    zone: {{ .awsRegion }}a
  - cidr: {{ .myNetworkPrefix }}.4.0/22
    name: utility-{{ .awsRegion }}b
    type: Utility
    zone: {{ .awsRegion }}b
  - cidr: {{ .myNetworkPrefix }}.8.0/22
    name: utility-{{ .awsRegion }}c
    type: Utility
    zone: {{ .awsRegion }}c
  topology:
    bastion:
      bastionPublicName: bastion.{{ .myClusterName }}.{{ .dnsZone }}
    dns:
      type: Public
    masters: private
    nodes: private

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{ .myClusterName }}.{{ .dnsZone }}
  name: bastions
spec:
  image: {{ .bastionAwsAmiId }}
  machineType: {{ .bastionSize }}
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: bastions
  role: Bastion
  subnets:
  - utility-{{ .awsRegion }}a
  - utility-{{ .awsRegion }}b
  - utility-{{ .awsRegion }}c

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{ .myClusterName }}.{{ .dnsZone }}
  name: master-{{ .awsRegion }}a
spec:
  image: {{ .baseAwsAmiId }}
  machineType: {{ .masterSize }}
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-{{ .awsRegion }}a
  role: Master
  subnets:
  - {{ .awsRegion }}a

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{ .myClusterName }}.{{ .dnsZone }}
  name: master-{{ .awsRegion }}b
spec:
  image: {{ .baseAwsAmiId }}
  machineType: {{ .masterSize }}
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-{{ .awsRegion }}b
  role: Master
  subnets:
  - {{ .awsRegion }}b

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{ .myClusterName }}.{{ .dnsZone }}
  name: master-{{ .awsRegion }}c
spec:
  image: {{ .baseAwsAmiId }}
  machineType: {{ .masterSize }}
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-{{ .awsRegion }}c
  role: Master
  subnets:
  - {{ .awsRegion }}c

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{ .myClusterName }}.{{ .dnsZone }}
  name: nodes
spec:
  image: {{ .baseAwsAmiId }}
  machineType: {{ .nodeSize }}
  maxSize: {{ .nodesMaxCount }}
  minSize: {{ .nodesMinCount }}
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - {{ .awsRegion }}a
  - {{ .awsRegion }}b
  - {{ .awsRegion }}c

(NOTE: we have to use custom AMIs because the base image is CentOS 7 and, on top of that, the bastion has 2FA; this might not be necessary for everyone)

Then you'd need values files to fill in the template above.

Example:

dev_vars.yaml:

baseAwsAmiId: ami-XXXXX
bastionAwsAmiId: ami-YYYYY
awsRegion: eu-west-1
bastionSize: t2.micro
myClusterName: k8sdev
dnsZone: mydomain.tld
etcdTLS: false
etcdVersion: 3.1.11
kopsStateStore: s3://mycompany-kops-state-store
kubernetesVersion: 1.9.6
masterSize: t2.medium
myNetworkCIDR: 172.21.0.0/16
myNetworkPrefix: 172.21
nodeSize: t2.medium
nodesMinCount: 3
nodesMaxCount: 5

stage_vars.yaml:

baseAwsAmiId: ami-ZZZZZ
bastionAwsAmiId: ami-WWWWW
awsRegion: us-east-2
bastionSize: t2.micro
myClusterName: k8sstage
dnsZone: mydomain.tld
etcdTLS: true
etcdVersion: 3.1.11
kopsStateStore: s3://mycompany-kops-state-store
kubernetesVersion: 1.9.8
masterSize: m4.large
myNetworkCIDR: 172.22.0.0/16
myNetworkPrefix: 172.22
nodeSize: m4.large
nodesMinCount: 2
nodesMaxCount: 5

Then to generate the final cluster spec file:

kops toolbox template --values <environment>_vars.yaml --template cluster_template.yaml --output new_cluster.yaml

And apply the changes:

kops replace -f new_cluster.yaml
kops update cluster $NAME # review and re-run the command with --yes
kops rolling-update cluster $NAME # review and re-run with --yes. This will reboot nodes one at a time

Episode idea: Juju for deploying (on) Kubernetes

Juju is a tool by Canonical for deploying and managing applications on public clouds, Kubernetes, bare metal, or MAAS. It can be driven from the command line or a GUI, and it can both deploy Kubernetes itself and deploy applications onto Kubernetes.

Homepage: https://jujucharms.com/
Kubernetes deployment: https://jujucharms.com/canonical-kubernetes/ (the Canonical distribution; pretty standard, only packaged for Ubuntu I think)
Try out / deploy to public cloud: https://jujucharms.com/new/
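
The basic flow is short; roughly (the cloud name is an example and assumes credentials are already configured):

juju bootstrap aws                 # stand up a Juju controller on the chosen cloud
juju deploy canonical-kubernetes   # deploy the Canonical Kubernetes bundle
juju status                        # watch the units come up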

Adding a description to episode directory names

Currently, to find a particular episode, for example MetaController, we either have to look through each folder just to discover that 018 is the MetaController episode, or go to the TGIK YouTube channel and scroll until we find the episode named MetaController.
The problem will only get worse as the number of episodes grows, say to 200+.
Can we add a one-word description after the directory's ordinal name, for example 018-MetaControllers? This would not even break the sequence; the rename is sketched below.
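
The rename itself would be trivial and keeps history, e.g. (directory and description are illustrative):

git mv 018 018-metacontroller
git commit -m "Add description to episode 018 directory name"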

Helm 3

Let's check out Helm 3 once it's out.
