
kenzanlabs / kubernetes-ci-cd


This project forked from mschmidt712/kubernetes-ci-cd


https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/5/set-cicd-pipeline-kubernetes-part-1-overview


kubernetes-ci-cd's Introduction

Linux.com Kubernetes CI/CD Blog Series by Kenzan

The kubernetes-ci-cd project is Kenzan's crossword puzzle application that runs as several containers in Kubernetes (we call it the Kr8sswordz Puzzle). It showcases Kubernetes features such as spinning up multiple pods and running a load test at scale. It also features Jenkins running in its own container, with a Jenkinsfile script that demonstrates how Kubernetes can be integrated into a full CI/CD pipeline.

To get it up and running, see the following week-by-week Linux.com blog posts, or simply follow the directions below.

Linux.com Part 1

Linux.com Part 2

Linux.com Part 3

Linux.com Part 4

To generate this readme: node readme.js

Prerequisites

  • Install VirtualBox

https://www.virtualbox.org/wiki/Downloads

  • Install the latest versions of Docker, Minikube, and Kubectl

https://docs.docker.com/docker-for-mac/install/
https://github.com/kubernetes/minikube/releases
https://kubernetes.io/docs/tasks/tools/install-kubectl/

  • Install Helm

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh; chmod 700 get_helm.sh; ./get_helm.sh

  • Clone this repository
  • To ensure you are starting with a clean slate, delete any previous minikube contexts.

minikube stop; minikube delete; sudo rm -rf ~/.minikube; sudo rm -rf ~/.kube

Tutorial Steps

Part 1

Step1

Start up the Kubernetes cluster with Minikube, giving it some extra resources.

minikube start --memory 8000 --cpus 2 --kubernetes-version v1.11.0

Step2

Enable the Minikube add-ons Heapster and Ingress.

minikube addons enable heapster; minikube addons enable ingress

Step3

View the Minikube Dashboard, a web UI for managing deployments.

minikube service kubernetes-dashboard --namespace kube-system

Step4

Deploy the public nginx image from Docker Hub into a pod. Nginx is an open source web server; its image will automatically be downloaded from Docker Hub if it's not available locally.

kubectl run nginx --image nginx --port 80

Step5

Create a K8s Service for the deployment. This will expose the nginx pod so you can access it with a web browser.

kubectl expose deployment nginx --type NodePort --port 80

Step6

Launch a web browser to test the service. The nginx welcome page displays, which means the service is up and running.

minikube service nginx

Step7

Delete the nginx deployment and service you created.

kubectl delete service nginx

kubectl delete deployment nginx

Step8

Set up the cluster registry by applying a .yaml manifest file.

kubectl apply -f manifests/registry.yaml

Step9

Wait for the registry to finish deploying using the following command. Note that this may take several minutes.

kubectl rollout status deployments/registry

Step10

View the registry user interface in a web browser.

minikube service registry-ui

Step11

Let’s make a change to an HTML file in the cloned project. Open the /applications/hello-kenzan/index.html file in your favorite text editor. (For example, you could use nano by running the command 'nano applications/hello-kenzan/index.html' in a separate terminal). Change some text inside one of the <p> tags. For example, change “Hello from Kenzan!” to “Hello from Me!”. Save the file.
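If you prefer a non-interactive edit, the same change can be made with sed. The snippet below demonstrates the substitution on a one-line stand-in (the real index.html has more content around the <p> tag):

```shell
# Simulate the edit on a stand-in line; on the real file you would use sed -i
echo '<p>Hello from Kenzan!</p>' | sed 's/Hello from Kenzan!/Hello from Me!/'
# -> <p>Hello from Me!</p>
```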

Step12

Now let’s build an image, giving it a special name that points to our local cluster registry.

docker build -t 127.0.0.1:30400/hello-kenzan:latest -f applications/hello-kenzan/Dockerfile applications/hello-kenzan
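The "special name" matters because Docker routes pushes by the image reference itself: everything before the first slash is treated as the registry host. A quick illustrative sketch of that split, in pure shell:

```shell
ref="127.0.0.1:30400/hello-kenzan:latest"
echo "registry host: ${ref%%/*}"   # the cluster registry address
echo "repo and tag:  ${ref#*/}"    # the repository name and tag
```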

Step13

We’ve built the image, but before we can push it to the registry, we need to set up a temporary proxy. By default the Docker client can only push to HTTP (not HTTPS) via localhost. To work around this, we’ll set up a Docker container that listens on 127.0.0.1:30400 and forwards to our cluster. First, build the image for our proxy container.

docker build -t socat-registry -f applications/socat/Dockerfile applications/socat

Step14

Now run the proxy container from the newly created image. (Note that you may see some errors; this is normal as the commands are first making sure there are no previous instances running.)

docker stop socat-registry; docker rm socat-registry; docker run -d -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" --name socat-registry -p 30400:5000 socat-registry
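Conceptually, the container just runs a socat relay from port 5000 (published on 127.0.0.1:30400) to the registry's NodePort inside the cluster, along the lines of socat TCP4-LISTEN:5000,fork,reuseaddr TCP4:$REG_IP:$REG_PORT. The sketch below only shows how the two environment variables compose into the forward target (the IP is a stand-in for the real minikube ip output):

```shell
REG_IP=192.168.99.100   # stand-in; the real value comes from `minikube ip`
REG_PORT=30400
echo "socat TCP4-LISTEN:5000,fork,reuseaddr TCP4:${REG_IP}:${REG_PORT}"
```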

Step15

With our proxy container up and running, we can now push our hello-kenzan image to the local repository.

docker push 127.0.0.1:30400/hello-kenzan:latest

Step16

The proxy’s work is done, so you can go ahead and stop it.

docker stop socat-registry

Step17

With the image in our cluster registry, the last thing to do is apply the manifest to create and deploy the hello-kenzan pod based on the image.

kubectl apply -f applications/hello-kenzan/k8s/manual-deployment.yaml

Step18

Launch a web browser and view the service.

minikube service hello-kenzan

Step19

Delete the hello-kenzan deployment and service you created. We are going to keep the registry deployment in our cluster as we will need it for the next few parts in our series.

kubectl delete service hello-kenzan

kubectl delete deployment hello-kenzan

Part 2

Step1

First, let's build the Jenkins Docker image we'll use in our Kubernetes cluster.

docker build -t 127.0.0.1:30400/jenkins:latest -f applications/jenkins/Dockerfile applications/jenkins

Step2

Once again we'll need to set up the Socat Registry proxy container to push images, so let's build it. Feel free to skip this step if the socat-registry image already exists from Part 1 (to check, run docker images).

docker build -t socat-registry -f applications/socat/Dockerfile applications/socat

Step3

Run the proxy container from the image.

docker stop socat-registry; docker rm socat-registry; docker run -d -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" --name socat-registry -p 30400:5000 socat-registry

Step4

With our proxy container up and running, we can now push our Jenkins image to the local repository.

docker push 127.0.0.1:30400/jenkins:latest

Step5

The proxy’s work is done, so you can go ahead and stop it.

docker stop socat-registry

Step6

Deploy Jenkins, which we’ll use to create our automated CI/CD pipeline. It will take the pod a minute or two to roll out.

kubectl apply -f manifests/jenkins.yaml; kubectl rollout status deployment/jenkins

Step7

Open the Jenkins UI in a web browser.

minikube service jenkins

Step8

Display the Jenkins admin password with the following command, and right-click to copy it.

kubectl exec -it `kubectl get pods --selector=app=jenkins --output=jsonpath={.items..metadata.name}` -- cat /var/jenkins_home/secrets/initialAdminPassword

Step9

Switch back to the Jenkins UI. Paste the Jenkins admin password in the box and click Continue. Click Install suggested plugins. Plugins have actually been pre-downloaded during the Jenkins image build, so this step should finish fairly quickly.

Step10

Create an admin user and credentials, and click Save and Continue. (Make sure to remember these credentials as you will need them for repeated logins.) On the Instance Configuration page, click Save and Finish. On the next page, click Restart (if it appears to hang for some time on restarting, you may have to refresh the browser window). Log in to Jenkins.

Step11

Before we create a pipeline, we first need to provision the Kubernetes Continuous Deploy plugin with a kubeconfig file that will allow access to our Kubernetes cluster. In Jenkins on the left, click on Credentials, select the Jenkins store, then Global credentials (unrestricted), and Add Credentials on the left menu.

Step12

The following values must be entered precisely as indicated:

  • Kind: Kubernetes configuration (kubeconfig)
  • ID: kenzan_kubeconfig
  • Kubeconfig: From a file on the Jenkins master
  • File path: /var/jenkins_home/.kube/config

Finally, click OK.

Step13

We now want to create a new pipeline for use with our Hello-Kenzan app. Back on Jenkins home, on the left, click New Item. Enter the item name as "Hello-Kenzan Pipeline", select Pipeline, and click OK.

Step14

Under the Pipeline section at the bottom, change the Definition to be Pipeline script from SCM.

Step15

Change the SCM to Git. Change the Repository URL to be the URL of your forked Git repository, such as https://github.com/[GIT USERNAME]/kubernetes-ci-cd. Click Save. On the left, click Build Now to run the new pipeline.

Step16

After all pipeline stages are colored green as complete, view the Hello-Kenzan application.

minikube service hello-kenzan

Step17

Push a change to your fork. Run the job again. View the changes.

minikube service hello-kenzan

Part 3

Step1

Initialize Helm. This will install Tiller (Helm's server) into our Kubernetes cluster.

helm init --wait --debug; kubectl rollout status deploy/tiller-deploy -n kube-system

Step2

We will deploy the etcd operator onto the cluster using a Helm Chart.

helm install stable/etcd-operator --version 0.8.0 --name etcd-operator --debug --wait

Step3

Deploy the etcd cluster and K8s Services for accessing the cluster.

  • kubectl create -f manifests/etcd-cluster.yaml
  • kubectl create -f manifests/etcd-service.yaml

Step4

The crossword application is a multi-tier application whose services depend on each other. We will create three K8s Services so that the applications can communicate with one another.

kubectl apply -f manifests/all-services.yaml

Step5

Now we're going to walk through an initial build of the monitor-scale application.

docker build -t 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD` -f applications/monitor-scale/Dockerfile applications/monitor-scale
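Note that the tag here is the short Git commit SHA rather than latest, so each build is uniquely addressable in the registry. A sketch of how that tag is derived, using a fixed example hash instead of a live repository:

```shell
SHA=0123456789abcdef       # stand-in for a full commit hash
TAG=${SHA:0:7}             # `git rev-parse --short HEAD` typically yields 7 chars
echo "127.0.0.1:30400/monitor-scale:${TAG}"
# -> 127.0.0.1:30400/monitor-scale:0123456
```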

Step6

Once again we'll need to set up the Socat Registry proxy container to push the monitor-scale image to our registry, so let's build it. Feel free to skip this step if the socat-registry image already exists from Part 2 (to check, run docker images).

docker build -t socat-registry -f applications/socat/Dockerfile applications/socat

Step7

Run the proxy container from the newly created image.

docker stop socat-registry; docker rm socat-registry; docker run -d -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" --name socat-registry -p 30400:5000 socat-registry

Step8

Push the monitor-scale image to the registry.

docker push 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD`

Step9

The proxy’s work is done, so go ahead and stop it.

docker stop socat-registry

Step10

Open the registry UI and verify that the monitor-scale image is in our local registry.

minikube service registry-ui

Step11

Monitor-scale lets us scale our puzzle app up and down through the Kr8sswordz UI, so we'll need to do some RBAC work to grant monitor-scale the proper rights.

kubectl apply -f manifests/monitor-scale-serviceaccount.yaml

Step12

Create the monitor-scale deployment and the Ingress defining the hostname by which this service will be accessible to the other services.

sed 's#127.0.0.1:30400/monitor-scale:$BUILD_TAG#127.0.0.1:30400/monitor-scale:'`git rev-parse --short HEAD`'#' applications/monitor-scale/k8s/deployment.yaml | kubectl apply -f -
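The sed call simply swaps the $BUILD_TAG placeholder in the manifest for the current commit SHA before piping the result to kubectl. Here is the substitution in isolation, using a one-line stand-in for the manifest and a fixed example SHA:

```shell
# Single quotes keep $BUILD_TAG literal; '#' is just an alternate sed delimiter
echo 'image: 127.0.0.1:30400/monitor-scale:$BUILD_TAG' \
  | sed 's#monitor-scale:$BUILD_TAG#monitor-scale:abc1234#'
# -> image: 127.0.0.1:30400/monitor-scale:abc1234
```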

Step13

Wait for the monitor-scale deployment to finish.

kubectl rollout status deployment/monitor-scale

Step14

View pods to see the monitor-scale pod running.

kubectl get pods

Step15

View services to see the monitor-scale service.

kubectl get services

Step16

View ingress rules to see the monitor-scale ingress rule.

kubectl get ingress

Step17

View deployments to see the monitor-scale deployment.

kubectl get deployments

Step18

We will run a script to bootstrap the puzzle and mongo services, creating Docker images and storing them in the local registry. The puzzle.sh script runs through the same build, proxy, push, and deploy steps we just ran through manually for both services.

scripts/puzzle.sh
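Conceptually the script repeats the manual cycle once per service; a minimal sketch of that structure (the real puzzle.sh contains the actual docker build/push and kubectl apply commands):

```shell
# Illustrative only: one build/proxy/push/deploy pass per service
for svc in puzzle mongo; do
  echo "building and deploying: $svc"
done
```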

Step19

Check to see if the puzzle and mongo services have been deployed.

  • kubectl rollout status deployment/puzzle
  • kubectl rollout status deployment/mongo

Step20

Bootstrap the kr8sswordz frontend web application. This script follows the same build, proxy, push, and deploy steps that the other services followed.

scripts/kr8sswordz-pages.sh

Step21

Check to see if the frontend has been deployed.

kubectl rollout status deployment/kr8sswordz

Step22

Check to see that all the pods are running.

kubectl get pods

Step23

Start the web application in your default browser. You may have to refresh your browser so that the puzzle appears properly.

minikube service kr8sswordz

Part 4

Step1

Enter the following command to open the Jenkins UI in a web browser. Log in to Jenkins using the username and password you previously set up.

minikube service jenkins

Step2

We’ll want to create a new pipeline for the puzzle service that we previously deployed. On the left in Jenkins, click New Item.

Step3

Enter the item name as "Puzzle-Service", click Pipeline, and click OK.

Step4

Under the Build Triggers section, select Poll SCM. For the Schedule, enter the string H/5 * * * *, which will poll the Git repo for changes every 5 minutes.

Step5

In the Pipeline section, change the Definition to "Pipeline script from SCM". Set the SCM property to Git. Set the Repository URL to your forked repo (created in Part 2), such as https://github.com/[GIT USERNAME]/kubernetes-ci-cd.git. Set the Script Path to applications/puzzle/Jenkinsfile.

Step6

When you are finished, click Save. On the left, click Build Now to run the new pipeline. This will rebuild the image from the registry, and redeploy the puzzle pod. You should see it successfully run through the build, push, and deploy steps in a few minutes.

Step7

View the Kr8sswordz application.

minikube service kr8sswordz

Step8

Spin up several instances of the puzzle service by moving the slider to the right and clicking Scale. For reference, click the Submit button, noting that the hit does not yet register as white on the puzzle service instances.

Step9

Edit applications/puzzle/common/models/crossword.js in your favorite text editor (for example, you can use nano by running the command 'nano applications/puzzle/common/models/crossword.js' in a separate terminal). You'll see a commented section on lines 42-43 that indicates to uncomment a specific line. Uncomment line 43 by deleting the forward slashes and save the file.

Step10

Commit and push the change to your forked Git repo.

Step11

In Jenkins, open up the Puzzle-Service pipeline and wait until it triggers a build. It should trigger every 5 minutes.

Step12

After it triggers, observe how the puzzle services disappear in the Kr8sswordz Puzzle app, and how new ones take their place.

Step13

Try clicking Submit to test that hits now register as white.

Automated Scripts to Run Tutorial

If you need to walk through the steps in the tutorial again (or more quickly), we’ve provided npm scripts that automate running the same commands for each part of the tutorial.

  • Install NodeJS.
  • Install the scripts.
    • cd ~/kubernetes-ci-cd
    • npm install

Begin the desired section:

  • npm run part1
  • npm run part2
  • npm run part3
  • npm run part4

LICENSE

Copyright 2017 Kenzan, LLC http://kenzan.com

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

kubernetes-ci-cd's People

Contributors

davidzuluagae, dzuluagae, evan-kenzan, gmenoiaa, justint294, mark-at-kenzan, mdurandjr, moondev, mschmidt712, patrickthesailorman, skpullano, thisgeek



kubernetes-ci-cd's Issues

Issues With "manifests/registry.yaml".

Hi,
I have made the changes as suggested, but a new issue popped up. Please help.

kubectl apply -f manifests/registry.yaml
persistentvolume/registry unchanged
persistentvolumeclaim/registry-claim unchanged
service/registry unchanged
service/registry-ui unchanged
unable to recognize "manifests/registry.yaml": no matches for kind "Deployment" in version "extensions/v1"
Error from server (Invalid): error when creating "manifests/registry.yaml": Deployment.apps "registry-deployment" is invalid: spec.template.spec.containers: Required value

=====

Solved: To fix this issue the following changes were required to "manifests/registry.yaml".

apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry-deployment
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:

minikube service registry-ui - opens browser and now loads content after these changes.

Patrick

Originally posted by @mchalep in #124 (comment)

Jenkins & kubectl

Provide Jenkins kubectl with access to the kubernetes cluster, presetting the context in kube config file.

Service account secrets must be dynamically referenced from the Jenkins deployment

Add option to delete docker image from local registry

Hi!
Thank you for the great tutorial!!! I noticed there is no option to delete Docker images after pushing them to the local registry. I found how it is supposed to work here: https://github.com/byrnedo/docker-reg-tool. I can list docker images in the hello-kenzan local repository and see all tags via the command line, but am unable to delete them. After reading the Docker registry documentation, I found that the registry container needs to be run with the following env: REGISTRY_STORAGE_DELETE_ENABLED=true.
The same issue was raised here: distribution/distribution#1573. The suggestion was to add this env into the .../manifests/registry.yaml file. So I tried adding the following under

apiVersion: extensions/v1beta1
kind: Deployment

.....

env:
- name: REGISTRY_STORAGE_DELETE_ENABLED
  value: true

But this does not work... The command kubectl apply -f manifests/registry.yaml fails:

persistentvolume/registry unchanged
persistentvolumeclaim/registry-claim unchanged
service/registry unchanged
service/registry-ui unchanged
Error from server (BadRequest): error when creating "manifests/registry.yaml": Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true}],"ima|..., bigger context ...|"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":true}],"image":"registry:2","name":"registry","port|...

Could you please help me add docker deletion option into manifests/registry.yaml file?
Thank you in advance,
Igor

Socat container

Provide Dockerfile for the Socat service, used for proxying insecure private docker registries in localhost.

Jenkins issue(s)

If you run the tutorial currently, the plugins won't work. I don't know if Dockerfiles allow a latest tag, but I changed /applications/jenkins/Dockerfile to use 2.187.

BTW. take a note of https://plugins.jenkins.io/git if you're using a private git server.

etcd not registering as thirdpartyresource

I'm a complete kubernetes n00b so please bear with me. As I'm working on Part 3, the scripts/etcd.sh script runs:
echo "installing etcd operator"
kubectl create -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/deployment.yaml
kubectl rollout status -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/deployment.yaml
until kubectl get thirdpartyresource cluster.etcd.coreos.com
do
echo "waiting for operator"
sleep 2
done

I get the following error message:
Error from server (NotFound): thirdpartyresources.extensions "cluster.etcd.coreos.com" not found
waiting for operator

Running kubectl get thirdpartyresource returns No resources found. Doesn't matter how long I wait for something to register.

I found a similar walk-through for etcd at IBM's Developer Site that references a pinned version of etcd-operator.

  • IBM pinned version = quay.io/coreos/etcd-operator:v0.2.6
  • Manifest from coreos = quay.io/coreos/etcd-operator:v0.5.0

If I use the manifest from that link, I get a thirdpartyresource result after about 30s. Is this a result of something changing with etcd?

OS X Socat issues

OS X
docker build -t socat-registry -f applications/socat/Dockerfile applications/socat
unable to prepare context: path "applications/socat" not found

Stuck on step 8 to apply kubectl apply -f manifests/registry.yaml

Hi I am stuck on step 8:

kubectl apply -f manifests/registry.yaml

I am getting following error:
The Deployment "registry" is invalid:

* spec.selector: Required value
* spec.template.metadata.labels: Invalid value: map[string]string{"app":"registry", "name":"registry"}: `selector` does not match template `labels`

please help

kubectl apply -f manifests/registry.yaml fails - manifest API version too old for latest kubectl.

Hi,
When I try to run the following: kubectl apply -f manifests/registry.yaml

I get the following:
error: unable to recognize "manifests/registry.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"

I read extensions/v1beta1 is incorrect and should be apps/v1 since Kubernetes 1.9. I am running the latest version - 1.16 and changing this still fails and outputs: Deployment.spec): missing required field "selector"

It appears that this file is out of date also and would be very grateful if this could be updated. I am new to Kubernetes and would like to continue following the web article currently available.

Cheers

Patrick

Cannot complete deploy in Jenkins - hello-Kenzan fails at deployment stage

Hi,

I have the following error at the end of the pipeline deployment in Jenkins.

Starting Kubernetes deployment
Loading configuration: /var/jenkins_home/workspace/Hello-Kenzan Pipeline/applications/hello-kenzan/k8s/deployment.yaml

ERROR: ERROR: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://192.168.99.100:8443/apis/extensions/v1beta1/namespaces/default/deployments. Message: the server could not find the requested resource.

I have checked the deployment script and the relevant parts have been updated in the deployment section, i.e. extensions/v1beta1 replaced with apps/v1 and the selector section added.

I've spent a lot of time on this and have not been able to solve. Please could I have some advice.
I am new to Kubernetes and Jenkins.

Thanks

Patrick

update all instances of socat

Socat registry is used multiple times for different purposes across the tutorial. Make sure all socat calls point to the locally built image instead of chadmoon/socat-registry

Error found in Part 3 step 20

Hello,

Thank you for your amazing work.

I followed the steps and met an error at Part 3 Step 20 when running scripts/kr8sswordz-pages.sh:

$ scripts/kr8sswordz-pages.sh
Sending build context to Docker daemon  1.871MB
Step 1/8 : FROM node:7
 ---> d9aed20b68a4
Step 2/8 : RUN mkdir -p /usr/src/app
 ---> Using cache
 ---> a1ee5cff53bc
Step 3/8 : WORKDIR /usr/src/app
 ---> Using cache
 ---> 3e7deb14225a
Step 4/8 : COPY package.json gulpfile.js yarn.lock /usr/src/app/
 ---> Using cache
 ---> 5e54d165b750
Step 5/8 : RUN yarn --pure-lockfile
 ---> Running in d200d2f2c446
yarn install v0.24.4
[1/4] Resolving packages...
[2/4] Fetching packages...
**warning [email protected]: The platform "linux" is incompatible with this module.**
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
**warning "[email protected]" has incorrect peer dependency "react@^15.4.0-0".
warning "[email protected]" has incorrect peer dependency "react-dom@^15.4.0-0".**
[4/4] Building fresh packages...
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
**error /usr/src/app/node_modules/node-sass: Command failed.
Exit code: 135
Command: sh
Arguments: -c node scripts/build.js
Directory: /usr/src/app/node_modules/node-sass
Output:
Binary found at /usr/src/app/node_modules/node-sass/vendor/linux-x64-51/binding.node
Testing binary
Bus error (core dumped)**
The command '/bin/sh -c yarn --pure-lockfile' returned a non-zero code: 1
socat-registry
socat-registry
ee1316b038d64bce9bcf49264cfee5e274a7d55afe7197898149ff94106ee6d6
5 second sleep to make sure the registry is ready
The push refers to repository [127.0.0.1:30400/kr8sswordz]
An image does not exist locally with the tag: 127.0.0.1:30400/kr8sswordz
socat-registry
service/kr8sswordz unchanged
deployment.extensions/kr8sswordz unchanged
ingress.extensions/kr8sswordz unchanged

Any Ideas?

micro-services versioning

Evaluate the need for versioning deployed applications through the sed commands seen in monitor-scale, and update accordingly

Error creating proxy with socat docker image

When running following command

docker stop socat-registry; docker rm socat-registry; docker run -d -e "REGIP=`minikube ip`" --name socat-registry -p 30400:5000 chadmoon/socat:latest bash -c "socat TCP4-LISTEN:5000,fork,reuseaddr TCP4:`minikube ip`:30400"

the docker container dies and no socat container is left running.
Workaround:
Run minikube ip on the command line and replace the TCP4:`minikube ip`:30400 part with TCP4:IP_RETURNED_IN_CMD:30400

privileged private registry

Private docker registry deployment requires privileged access in order to Read/Write Docker.sock shared as a volume from the host

Temporary workaround:
chmod 666 /var/run/docker.sock directly on host

Enable CORS

Research if it's necessary to enable CORS between Puzzle and Kr8sswordz microservices

Update Readme Part 4

Given the different technical modifications made to the codebase, update the readme to accurately instruct steps in Part 4

Command line examples missing "\"

Some of the commands embedded in the tutorial are not really well formatted for cutting and running on a Linux machine. For example, I think this:

sudo docker stop socat-registry && sudo docker rm socat-registry

is better than:

docker stop socat-registry; docker rm socat-registry;

Also, this:

docker stop socat-registry; docker rm socat-registry; \
docker run -d -e "REGIP=`minikube ip`" --name socat-registry \
  -p 30400:5000 chadmoon/socat:latest bash -c "socat \

runs better than:

docker stop socat-registry; docker rm socat-registry; 
docker run -d -e "REGIP=`minikube ip`" --name socat-registry 
  -p 30400:5000 chadmoon/socat:latest bash -c "socat 

Git commit is picked up but not deployed

In part 2, I made the change and it seems to acknowledge the commit, but the end result is not deployed. Is it the use of cache? Is it the deprecated non-block-argument stages?

> git config core.sparsecheckout # timeout=10
 > git checkout -f 318729588edd96bfdf3196ee4acf97d954b74201
Commit message: "sop"
 > git rev-list --no-walk 318729588edd96bfdf3196ee4acf97d954b74201 # timeout=10
[Pipeline] sh
+ git rev-parse --short HEAD
[Pipeline] readFile
[Pipeline] stage (Build)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Build
Proceeding
[Pipeline] sh
+ docker build -t 127.0.0.1:30400/hello-kenzan:3187295 -f applications/hello-kenzan/Dockerfile applications/hello-kenzan
Sending build context to Docker daemon  71.68kB

Step 1/4 : FROM nginx:latest
 ---> e445ab08b2be
Step 2/4 : COPY index.html /usr/share/nginx/html/index.html
 ---> Using cache
 ---> 95d5adcb9539
Step 3/4 : COPY DockerFileEx.jpg /usr/share/nginx/html/DockerFileEx.jpg
 ---> Using cache
 ---> 03dea6c43215
Step 4/4 : EXPOSE 80
 ---> Using cache
 ---> 73e8c6de9c98
Successfully built 73e8c6de9c98
Successfully tagged 127.0.0.1:30400/hello-kenzan:3187295
[Pipeline] stage (Push)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Push
Proceeding
[Pipeline] sh
+ docker push 127.0.0.1:30400/hello-kenzan:3187295
The push refers to repository [127.0.0.1:30400/hello-kenzan]
f64c0bb711b7: Preparing
b12d9f14ac14: Preparing
fe6a7a3b3f27: Preparing
d0673244f7d4: Preparing
d8a33133e477: Preparing
b12d9f14ac14: Layer already exists
fe6a7a3b3f27: Layer already exists
f64c0bb711b7: Layer already exists
d8a33133e477: Layer already exists
d0673244f7d4: Layer already exists
3187295: digest: sha256:a4c07e4d0e0f7dcb66cb8522aca0a8165104d3f2bd95a67bedf28b017e0df716 size: 1364
[Pipeline] stage (Deploy)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Deploy
Proceeding
[Pipeline] kubernetesDeploy
Starting Kubernetes deployment
Loading configuration: /var/jenkins_home/workspace/Hello-Kenzan/applications/hello-kenzan/k8s/deployment.yaml
Applied Service: Service(apiVersion=v1, kind=Service, metadata=ObjectMeta(annotations=null, clusterName=null, creationTimestamp=2019-08-05T06:12:47Z, deletionGracePeriodSeconds=null, deletionTimestamp=null, finalizers=[], generateName=null, generation=null, initializers=null, labels={app=hello-kenzan}, name=hello-kenzan, namespace=default, ownerReferences=[], resourceVersion=20935, selfLink=/api/v1/namespaces/default/services/hello-kenzan, uid=1ff0857f-dba9-4fcf-84a8-2d38493b0484, additionalProperties={}), spec=ServiceSpec(clusterIP=10.98.30.145, externalIPs=[], externalName=null, externalTrafficPolicy=Cluster, healthCheckNodePort=null, loadBalancerIP=null, loadBalancerSourceRanges=[], ports=[ServicePort(name=null, nodePort=31666, port=80, protocol=TCP, targetPort=IntOrString(IntVal=80, Kind=null, StrVal=null, additionalProperties={}), additionalProperties={})], selector={app=hello-kenzan, tier=hello-kenzan}, sessionAffinity=None, type=NodePort, additionalProperties={}), status=ServiceStatus(loadBalancer=LoadBalancerStatus(ingress=[], additionalProperties={}), additionalProperties={}), additionalProperties={})
Applied Deployment: Deployment(apiVersion=extensions/v1beta1, kind=Deployment, metadata=ObjectMeta(annotations=null, clusterName=null, creationTimestamp=2019-08-05T06:12:47Z, deletionGracePeriodSeconds=null, deletionTimestamp=null, finalizers=[], generateName=null, generation=8, initializers=null, labels={app=hello-kenzan}, name=hello-kenzan, namespace=default, ownerReferences=[], resourceVersion=22441, selfLink=/apis/extensions/v1beta1/namespaces/default/deployments/hello-kenzan, uid=185e0ca3-4b18-42ad-a6b4-0294d3db5072, additionalProperties={}), spec=DeploymentSpec(minReadySeconds=null, paused=null, progressDeadlineSeconds=2147483647, replicas=1, revisionHistoryLimit=2147483647, rollbackTo=null, selector=LabelSelector(matchExpressions=[], matchLabels={app=hello-kenzan, tier=hello-kenzan}, additionalProperties={}), strategy=DeploymentStrategy(rollingUpdate=null, type=Recreate, additionalProperties={}), template=PodTemplateSpec(metadata=ObjectMeta(annotations=null, clusterName=null, creationTimestamp=null, deletionGracePeriodSeconds=null, deletionTimestamp=null, finalizers=[], generateName=null, generation=null, initializers=null, labels={app=hello-kenzan, tier=hello-kenzan}, name=null, namespace=default, ownerReferences=[], resourceVersion=null, selfLink=null, uid=null, additionalProperties={}), spec=PodSpec(activeDeadlineSeconds=null, affinity=null, automountServiceAccountToken=null, containers=[Container(args=[], command=[], env=[], envFrom=[], image=127.0.0.1:30400/hello-kenzan:jenkins-Hello-Kenzan-6, imagePullPolicy=IfNotPresent, lifecycle=null, livenessProbe=null, name=hello-kenzan, ports=[ContainerPort(containerPort=80, hostIP=null, hostPort=null, name=hello-kenzan, protocol=TCP, additionalProperties={})], readinessProbe=null, resources=ResourceRequirements(limits=null, requests=null, additionalProperties={}), securityContext=null, stdin=null, stdinOnce=null, terminationMessagePath=/dev/termination-log, terminationMessagePolicy=File, tty=null, 
volumeMounts=[], workingDir=null, additionalProperties={})], dnsPolicy=ClusterFirst, hostAliases=[], hostIPC=null, hostNetwork=null, hostPID=null, hostname=null, imagePullSecrets=[], initContainers=[], nodeName=null, nodeSelector=null, restartPolicy=Always, schedulerName=default-scheduler, securityContext=PodSecurityContext(fsGroup=null, runAsNonRoot=null, runAsUser=null, seLinuxOptions=null, supplementalGroups=[], additionalProperties={}), serviceAccount=null, serviceAccountName=null, subdomain=null, terminationGracePeriodSeconds=30, tolerations=[], volumes=[], additionalProperties={}), additionalProperties={}), additionalProperties={}), status=DeploymentStatus(availableReplicas=1, collisionCount=null, conditions=[DeploymentCondition(lastTransitionTime=2019-08-05T10:15:44Z, lastUpdateTime=2019-08-05T10:15:44Z, message=Deployment has minimum availability., reason=MinimumReplicasAvailable, status=True, type=Available, additionalProperties={})], observedGeneration=7, readyReplicas=1, replicas=1, unavailableReplicas=null, updatedReplicas=1, additionalProperties={}), additionalProperties={})
Loading configuration: /var/jenkins_home/workspace/Hello-Kenzan/applications/hello-kenzan/k8s/manual-deployment.yaml
Applied Service: Service(apiVersion=v1, kind=Service, metadata=ObjectMeta(annotations=null, clusterName=null, creationTimestamp=2019-08-05T06:12:47Z, deletionGracePeriodSeconds=null, deletionTimestamp=null, finalizers=[], generateName=null, generation=null, initializers=null, labels={app=hello-kenzan}, name=hello-kenzan, namespace=default, ownerReferences=[], resourceVersion=20935, selfLink=/api/v1/namespaces/default/services/hello-kenzan, uid=1ff0857f-dba9-4fcf-84a8-2d38493b0484, additionalProperties={}), spec=ServiceSpec(clusterIP=10.98.30.145, externalIPs=[], externalName=null, externalTrafficPolicy=Cluster, healthCheckNodePort=null, loadBalancerIP=null, loadBalancerSourceRanges=[], ports=[ServicePort(name=null, nodePort=31666, port=80, protocol=TCP, targetPort=IntOrString(IntVal=80, Kind=null, StrVal=null, additionalProperties={}), additionalProperties={})], selector={app=hello-kenzan, tier=hello-kenzan}, sessionAffinity=None, type=NodePort, additionalProperties={}), status=ServiceStatus(loadBalancer=LoadBalancerStatus(ingress=[], additionalProperties={}), additionalProperties={}), additionalProperties={})
Applied Deployment: Deployment(apiVersion=extensions/v1beta1, kind=Deployment, metadata=ObjectMeta(annotations=null, clusterName=null, creationTimestamp=2019-08-05T06:12:47Z, deletionGracePeriodSeconds=null, deletionTimestamp=null, finalizers=[], generateName=null, generation=9, initializers=null, labels={app=hello-kenzan}, name=hello-kenzan, namespace=default, ownerReferences=[], resourceVersion=22452, selfLink=/apis/extensions/v1beta1/namespaces/default/deployments/hello-kenzan, uid=185e0ca3-4b18-42ad-a6b4-0294d3db5072, additionalProperties={}), spec=DeploymentSpec(minReadySeconds=null, paused=null, progressDeadlineSeconds=2147483647, replicas=1, revisionHistoryLimit=2147483647, rollbackTo=null, selector=LabelSelector(matchExpressions=[], matchLabels={app=hello-kenzan, tier=hello-kenzan}, additionalProperties={}), strategy=DeploymentStrategy(rollingUpdate=null, type=Recreate, additionalProperties={}), template=PodTemplateSpec(metadata=ObjectMeta(annotations=null, clusterName=null, creationTimestamp=null, deletionGracePeriodSeconds=null, deletionTimestamp=null, finalizers=[], generateName=null, generation=null, initializers=null, labels={app=hello-kenzan, tier=hello-kenzan}, name=null, namespace=default, ownerReferences=[], resourceVersion=null, selfLink=null, uid=null, additionalProperties={}), spec=PodSpec(activeDeadlineSeconds=null, affinity=null, automountServiceAccountToken=null, containers=[Container(args=[], command=[], env=[], envFrom=[], image=127.0.0.1:30400/hello-kenzan:latest, imagePullPolicy=Always, lifecycle=null, livenessProbe=null, name=hello-kenzan, ports=[ContainerPort(containerPort=80, hostIP=null, hostPort=null, name=hello-kenzan, protocol=TCP, additionalProperties={})], readinessProbe=null, resources=ResourceRequirements(limits=null, requests=null, additionalProperties={}), securityContext=null, stdin=null, stdinOnce=null, terminationMessagePath=/dev/termination-log, terminationMessagePolicy=File, tty=null, volumeMounts=[], workingDir=null, 
additionalProperties={})], dnsPolicy=ClusterFirst, hostAliases=[], hostIPC=null, hostNetwork=null, hostPID=null, hostname=null, imagePullSecrets=[], initContainers=[], nodeName=null, nodeSelector=null, restartPolicy=Always, schedulerName=default-scheduler, securityContext=PodSecurityContext(fsGroup=null, runAsNonRoot=null, runAsUser=null, seLinuxOptions=null, supplementalGroups=[], additionalProperties={}), serviceAccount=null, serviceAccountName=null, subdomain=null, terminationGracePeriodSeconds=30, tolerations=[], volumes=[], additionalProperties={}), additionalProperties={}), additionalProperties={}), status=DeploymentStatus(availableReplicas=null, collisionCount=null, conditions=[DeploymentCondition(lastTransitionTime=2019-08-05T10:24:58Z, lastUpdateTime=2019-08-05T10:24:58Z, message=Deployment does not have minimum availability., reason=MinimumReplicasUnavailable, status=False, type=Available, additionalProperties={})], observedGeneration=8, readyReplicas=null, replicas=null, unavailableReplicas=null, updatedReplicas=null, additionalProperties={}), additionalProperties={})
Finished Kubernetes deployment
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

Monitor-scale app serviceAccount

The monitor-scale app provides scaling functionality to the user by interacting with the k8s cluster through kubectl, for which it requires a context (user + cluster) to be preconfigured.

Update the manifests to include the creation of a monitor-scale serviceAccount and its corresponding roleBinding with the minimum access rights. The monitor-scale deployment would then run as that serviceAccount.

Make the serviceAccount token accessible to the pod by specifying automountServiceAccountToken: true in the serviceAccount manifest.

Update the application build process to configure the kubectl context based on the newly created serviceAccount.
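A minimal sketch of such a manifest (names, resources, and verbs are assumptions; the real rights should be scoped to whatever API calls monitor-scale actually makes):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitor-scale
  namespace: default
# Make the token available to the pod, as described above.
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: monitor-scale
  namespace: default
rules:
  # Assumed minimum: read pods, scale deployments.
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "deployments/scale"]
    verbs: ["get", "list", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: monitor-scale
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: monitor-scale
subjects:
  - kind: ServiceAccount
    name: monitor-scale
    namespace: default
```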

etcd job

Update the monitor-scale app to create the pod-list etcd directory on startup, and remove the etcd Job.

Unable to pull puzzle images from local repository

I found that I am unable to pull images from the repository, although I can access the repository portal and other images pull fine from the local repository. It is strange; here is the log:

bryanleekw@nb1:~$ kubectl describe pod puzzle-85f88bffcf-x5n59
Name: puzzle-85f88bffcf-x5n59
Namespace: default
Node: minikube/192.168.99.100
Start Time: Fri, 19 Jan 2018 10:05:23 +0800
Labels: app=puzzle
pod-template-hash=4194469979
tier=puzzle
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"puzzle-85f88bffcf","uid":"0c562b6a-fc56-11e7-b1ed-080027302ed5",...
Status: Running
IP: 172.17.0.13
Controlled By: ReplicaSet/puzzle-85f88bffcf
Containers:
puzzle:
Container ID: docker://e51549e596ecbd437d88edc5920f45661a8eb79299c8a80d076f79a7d4e28719
Image: 127.0.0.1:30400/puzzle:8a7b5cb
Image ID: docker-pullable://127.0.0.1:30400/puzzle@sha256:118e62bb5f0177f9e2972be1a58974a4625592065c0521d1917dd15aa63ba600
Port: 3000/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Sat, 20 Jan 2018 20:47:26 +0800
Finished: Sat, 20 Jan 2018 20:50:01 +0800
Ready: False
Restart Count: 19
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4f4ds (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-4f4ds:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4f4ds
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations:
Events:
Type Reason Age From Message


Normal SandboxChanged 10h kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal SuccessfulMountVolume 10h kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4f4ds"
Normal SuccessfulMountVolume 10h kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4f4ds"
Warning Failed 10h (x2 over 10h) kubelet, minikube Failed to pull image "127.0.0.1:30400/puzzle:8a7b5cb": rpc error: code = Unknown desc = Error response from daemon: Get http://127.0.0.1:30400/v2/: dial tcp 127.0.0.1:30400: getsockopt: connection refused
Normal BackOff 10h (x2 over 10h) kubelet, minikube Back-off pulling image "127.0.0.1:30400/puzzle:8a7b5cb"
Normal Pulling 10h (x3 over 10h) kubelet, minikube pulling image "127.0.0.1:30400/puzzle:8a7b5cb"
Normal Pulled 10h kubelet, minikube Successfully pulled image "127.0.0.1:30400/puzzle:8a7b5cb"
Normal Created 10h kubelet, minikube Created container
Normal Started 10h kubelet, minikube Started container
Normal Killing 10h kubelet, minikube Killing container with id docker://puzzle:FailedPostStartHook
Warning FailedPostStartHook 10h (x2 over 10h) kubelet, minikube
Warning BackOff 10h (x26 over 10h) kubelet, minikube Back-off restarting failed container
Warning FailedSync 10h (x43 over 10h) kubelet, minikube Error syncing pod
Normal SuccessfulMountVolume 10h kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4f4ds"
Normal SandboxChanged 10h kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal Pulling 10h kubelet, minikube pulling image "127.0.0.1:30400/puzzle:8a7b5cb"
Warning Failed 10h kubelet, minikube Failed to pull image "127.0.0.1:30400/puzzle:8a7b5cb": rpc error: code = Unknown desc = Error response from daemon: Get http://127.0.0.1:30400/v2/: dial tcp 127.0.0.1:30400: getsockopt: connection refused
Warning FailedSync 10h kubelet, minikube Error syncing pod
Normal SuccessfulMountVolume 10h kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4f4ds"
Warning Failed 10h kubelet, minikube Failed to pull image "127.0.0.1:30400/puzzle:8a7b5cb": rpc error: code = Unknown desc = Error response from daemon: Get http://127.0.0.1:30400/v2/: dial tcp 127.0.0.1:30400: getsockopt: connection refused
Normal Pulling 9h (x3 over 10h) kubelet, minikube pulling image "127.0.0.1:30400/puzzle:8a7b5cb"
Normal Pulled 9h (x2 over 10h) kubelet, minikube Successfully pulled image "127.0.0.1:30400/puzzle:8a7b5cb"
Normal Created 9h (x2 over 10h) kubelet, minikube Created container
Normal Started 9h (x2 over 10h) kubelet, minikube Started container
Normal Killing 9h (x2 over 10h) kubelet, minikube Killing container with id docker://puzzle:FailedPostStartHook
Warning BackOff 9h (x4 over 10h) kubelet, minikube Back-off restarting failed container
Warning FailedSync 9h (x7 over 10h) kubelet, minikube Error syncing pod
Warning FailedPostStartHook 9h (x3 over 10h) kubelet, minikube
Warning FailedPreStopHook 9h kubelet, minikube
Normal SuccessfulMountVolume 19m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4f4ds"
Normal SandboxChanged 19m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal Pulling 19m kubelet, minikube pulling image "127.0.0.1:30400/puzzle:8a7b5cb"
Normal SuccessfulMountVolume 17m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4f4ds"
Warning Failed 17m (x2 over 17m) kubelet, minikube Failed to pull image "127.0.0.1:30400/puzzle:8a7b5cb": rpc error: code = Unknown desc = Error response from daemon: Get http://127.0.0.1:30400/v2/: dial tcp 127.0.0.1:30400: getsockopt: connection refused
Normal BackOff 16m (x2 over 17m) kubelet, minikube Back-off pulling image "127.0.0.1:30400/puzzle:8a7b5cb"
Normal Pulling 16m (x3 over 17m) kubelet, minikube pulling image "127.0.0.1:30400/puzzle:8a7b5cb"
Normal Pulled 16m kubelet, minikube Successfully pulled image "127.0.0.1:30400/puzzle:8a7b5cb"
Normal Created 16m kubelet, minikube Created container
Normal Started 16m kubelet, minikube Started container
Warning FailedPreStopHook 16m kubelet, minikube
Warning BackOff 15m (x3 over 15m) kubelet, minikube Back-off restarting failed container
Warning FailedPostStartHook 10m (x2 over 16m) kubelet, minikube
Warning FailedSync 7m (x34 over 17m) kubelet, minikube Error syncing pod
Normal Killing 2m (x3 over 15m) kubelet, minikube Killing container with id docker://puzzle:FailedPostStartHook

Jenkins serviceAccount

Whether Jenkins uses the kubernetes plugin or kubectl to interact with the cluster, it requires a context (user + cluster) to be preconfigured for its execution.

Update the manifests to include the creation of a Jenkins serviceAccount and its corresponding roleBinding with admin access rights. The Jenkins deployment would then run as that serviceAccount.

Make the serviceAccount token accessible to the pod by specifying automountServiceAccountToken: true in the serviceAccount manifest.

Use a kubectl initContainer to create the /.kube/config file with the required context, for the Jenkins pod to use.
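A sketch of what that initContainer could run (context and user names are assumptions; the token and CA paths are the standard in-pod serviceAccount mount):

```shell
# Build a kubeconfig from the serviceAccount token that Kubernetes mounts
# into every pod (paths are the standard serviceaccount mount point).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-cluster local \
  --server=https://kubernetes.default.svc \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl config set-credentials jenkins --token="$TOKEN"
kubectl config set-context jenkins --cluster=local --user=jenkins
kubectl config use-context jenkins
```

The resulting ~/.kube/config can then be shared with the Jenkins container via an emptyDir volume.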

Project Dead?

Why is there no work being done on any of the issues or PRs? If the owner cannot continue updating, could we possibly pass this on to someone who can consistently check PRs and fix issues as k8s and other components are updated? I am trying to do this project, which is really awesome, but out-of-date manifests and other issues make it difficult. I'd be willing to help review if someone is needed to do that.

Thanks!

Andrew

Jenkins Dockerfile

Replace the referenced Jenkins image with a Dockerfile that is inspectable and manually buildable.

Update Readme Part 3

Given the technical modifications made to the codebase, update the readme so it accurately describes the steps in Part 3.

Jenkins fails at deploy

Everything has worked up until Jenkins. For some reason it fails on deploy and I can't figure out why.

Starting Kubernetes deployment
Loading configuration: /var/jenkins_home/workspace/Hello-Kenzan Pipeline/applications/hello-kenzan/k8s/deployment.yaml
ERROR: ERROR: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get]  for kind: [Service]  with name: [hello-kenzan]  in namespace: [default]  failed.
hudson.remoting.ProxyException: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get]  for kind: [Service]  with name: [hello-kenzan]  in namespace: [default]  failed.
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71)

Update Readme Part 1

Given the technical modifications made to the codebase, update the readme so it accurately describes the steps in Part 1.

Update k8s versions

Update the tutorial to the latest versions of Kubernetes (v1.11) and Minikube (v0.28)
Make all necessary changes to make the tutorial work across all parts.

Include Apache 2.0 license

As part of the kenzanlabs initiative of ensuring all repositories follow the expected licensing guidelines for Kenzan, an Apache 2.0 license should be included within this project repo.

Including a license will require:

kubectl apply -f manifests/registry.yaml fails

Hi,

When I try to run the following: kubectl apply -f manifests/registry.yaml

I get the following:
error: unable to recognize "manifests/registry.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"

I read that extensions/v1beta1 is incorrect and should be apps/v1 since Kubernetes 1.9. I am running the latest version (1.16), and changing this still fails and outputs: ValidationError(Deployment.spec): missing required field "selector"
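For reference, a Deployment moved to apps/v1 must declare a selector that matches the pod template labels; a minimal sketch (name, image, and labels assumed for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  # Required in apps/v1, and it must match the template labels below.
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          ports:
            - containerPort: 5000
```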

I had a go at updating the yaml file, which I thought had fixed the problem but when performing the next step which is: minikube service registry-ui

The web page opens but the service isn't running - Unable to connect
Firefox can’t establish a connection to the server at 192.168.99.101:30371.

It appears that this file is also out of date, and I would be very grateful if it could be updated. I am new to Kubernetes and would like to continue following the web article currently available.

Cheers

Patrick

security problems

This software generates a lot of outgoing traffic. It looks like that's coming from the Jenkins pod, possibly from the plugins. The Kr8sswordz app has many severe security problems too.

Update Readme Part 2

Given the technical modifications made to the codebase, update the readme so it accurately describes the steps in Part 2.

the server doesn't have a resource type "thirdpartyresource"

Does anyone know how to solve this problem? Kubernetes keeps printing the following message:

the server doesn't have a resource type "thirdpartyresource"
waiting for operator

My script waits for Kubernetes to get the resource from CoreOS:
until kubectl get thirdpartyresource cluster.etcd.coreos.com
do
echo "waiting for operator"
sleep 2
done
but unfortunately, it runs infinitely. Any thoughts?
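For what it's worth, the ThirdPartyResource API was removed in Kubernetes 1.8 in favor of CustomResourceDefinition, so on newer clusters the loop above can never succeed. With a newer etcd operator the wait loop would poll for the CRD instead (CRD name assumed from the CoreOS etcd operator):

```shell
# Wait for the etcd operator's CustomResourceDefinition instead of the
# removed ThirdPartyResource (CRD name assumed from the CoreOS etcd operator).
until kubectl get crd etcdclusters.etcd.database.coreos.com
do
  echo "waiting for operator"
  sleep 2
done
```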

I have forked the latest version already.

I had an issue.

On Part 2:

I clicked the 'Build Now' button in Jenkins, and it succeeded; all stages turned green (build, push, deploy).
But the hello-kenzan deployment hit an error.
The new docker image tag was '387ecb8',
but 'applications/hello-kenzan/k8s/deployment.yaml' still has image: 127.0.0.1:30400/hello-kenzan:latest.

error message :
Failed to pull image "127.0.0.1:30400/hello-kenzan:latest": rpc error: code = Unknown desc = Error response from daemon: manifest for 127.0.0.1:30400/hello-kenzan:latest not found

hello-kenzan:latest was not found.
How can I modify latest to ?
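One common fix (a hedged sketch; the tag is taken from the issue above, and in a real pipeline it would come from the build) is to have the pipeline rewrite the manifest's image tag before applying it, rather than deploying :latest:

```shell
# Substitute the freshly pushed tag into the manifest before kubectl apply.
# In the real pipeline TAG would come from e.g. $(git rev-parse --short HEAD);
# here we use the tag from the issue and a stand-in manifest line to illustrate.
TAG=387ecb8
printf '        image: 127.0.0.1:30400/hello-kenzan:latest\n' > /tmp/deployment.yaml
sed -i "s|hello-kenzan:latest|hello-kenzan:${TAG}|" /tmp/deployment.yaml
cat /tmp/deployment.yaml   # now references hello-kenzan:387ecb8
# kubectl apply -f /tmp/deployment.yaml   # then apply the rewritten manifest
```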

Part2 Step 4 - Jenkins Offline

After the Jenkins deployment I am unable to 'Install Suggested Plugins', as the Jenkins server is offline. On further investigation, the Jenkins pod is unable to reach any external IP addresses. This is only the case when running minikube with --kubernetes-version v1.6.0 as instructed. When running minikube with v1.8.0, the Jenkins pod deploys and there is no issue connecting to the plugins URL, though after that I'm unable to continue, as the remainder of the tutorial is run on v1.6.0.

I've investigated any networking differences between v1.6.0 and v1.8.0 to no avail.

Privileged Jenkins deployment

The Jenkins deployment requires privileged access in order to read/write docker.sock, shared as a volume from the host.

Temporary workaround:
Run chmod 666 /var/run/docker.sock directly on the host.
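A sketch of the privileged alternative (container and volume names assumed), which avoids the chmod workaround by running the Jenkins container with enough rights to use the mounted socket:

```yaml
# Fragment of the Jenkins pod spec (names assumed for illustration).
spec:
  containers:
    - name: jenkins
      securityContext:
        privileged: true
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
```

Privileged mode grants broad host access, so the serviceAccount/RBAC approach discussed elsewhere in these issues is the safer long-term direction.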

scripts/etcd.sh correct?

I ran the etcd.sh file, but it failed many times. I checked that the etcd operator, the etcd cluster service, and the etcd service should be created. However, it seems the script only creates the operator and pods. Where can I check that the etcd cluster is working?
Does anyone have experience with this?

################################

# Install etcd

################################
echo "installing etcd operator"
kubectl create -f manifests/deployment.yaml
kubectl rollout status -f manifests/deployment.yaml

until kubectl get thirdpartyresource cluster.etcd.coreos.com
do
echo "waiting for operator"
sleep 2
done

################################

# create etcd cluster service

################################
echo "pausing for 10 seconds for operator to settle"
sleep 10

kubectl create -f manifests/example-etcd-cluster.yaml

echo "installing etcd cluster service"
kubectl create -f manifests/service.json

################################

# etcd cluster

################################
echo "waiting for etcd cluster to turnup"

until kubectl get pod example-etcd-cluster-0002
do
echo "waiting for etcd cluster to turnup"
sleep 2
done
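To answer the question of where to check cluster health: one way (pod label and name assumed from the example manifest; etcdctl v2 syntax) is to exec into a member pod once it is up:

```shell
# List the member pods created by the operator (label assumed),
# then ask etcd itself for its view of cluster health.
kubectl get pods -l app=etcd
kubectl exec example-etcd-cluster-0002 -- etcdctl cluster-health
```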

Part 1 Step 10 of tutorial fails if nano is not installed

Maybe use $EDITOR from shell environment instead?

Part 1 Step: 10
Let’s make a change to an HTML file in the cloned project. Running the command below will open applications/hello-kenzan/index.html in the nano text editor. Change some text inside one of the tags. For example, change “Hello from Kenzan!” to “Hello from Me!”.
When you’re done, press Ctrl+X to close the file, type Y to save the changes, and press Enter to confirm the filename.
nano applications/hello-kenzan/index.html
Press enter to run the above command for the step. Yes
/bin/sh: 1: nano: not found

/home/jeremiah/GitHub/kubernetes-ci-cd/node_modules/rx-lite/rx.lite.js:81
throw e;
^

Error: Command failed: nano applications/hello-kenzan/index.html
at checkExecSyncError (child_process.js:481:13)
at execSync (child_process.js:521:13)
at AnonymousObserver._onNext (/home/jeremiah/GitHub/kubernetes-ci-cd/start.js:16:13)
at AnonymousObserver.Rx.AnonymousObserver.AnonymousObserver.next (/home/jeremiah/GitHub/kubernetes-ci-cd/node_modules/rx/dist/rx.js:1828:12)
at AnonymousObserver.Rx.internals.AbstractObserver.AbstractObserver.onNext (/home/jeremiah/GitHub/kubernetes-ci-cd/node_modules/rx/dist/rx.js:1762:31)
at Subject.onNext (/home/jeremiah/GitHub/kubernetes-ci-cd/node_modules/rx/dist/rx.js:5998:19)
at Subject.tryCatcher (/home/jeremiah/GitHub/kubernetes-ci-cd/node_modules/rx/dist/rx.js:63:31)
at AutoDetachObserverPrototype.next (/home/jeremiah/GitHub/kubernetes-ci-cd/node_modules/rx/dist/rx.js:5883:51)
at AutoDetachObserver.Rx.internals.AbstractObserver.AbstractObserver.onNext (/home/jeremiah/GitHub/kubernetes-ci-cd/node_modules/rx/dist/rx.js:1762:31)
at AutoDetachObserver.tryCatcher (/home/jeremiah/GitHub/kubernetes-ci-cd/node_modules/rx/dist/rx.js:63:31)

npm ERR! Linux 4.9.0-3-amd64
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "part1"
npm ERR! node v7.10.0
npm ERR! npm v4.2.0
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] part1: node start.js part1.yml
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] part1 script 'node start.js part1.yml'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the kubernetes-ci-cd package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node start.js part1.yml
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs kubernetes-ci-cd
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls kubernetes-ci-cd
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR! /home/jeremiah/.npm/_logs/2017-06-19T14_08_19_867Z-debug.log
