
woozymasta / kube-dump

304 stars · 10 watchers · 61 forks · 366 KB

Backup a Kubernetes cluster as a yaml manifest

Home Page: https://kube-dump.woozymasta.ru

License: GNU General Public License v3.0

Shell 98.21% Dockerfile 1.79%
kubernetes backup-script k8s kubectl backup-tool backup bash

kube-dump's People

Contributors

achurak, davidalger, foreversunyao, viacheslave, woozymasta


kube-dump's Issues

Support busybox?

In busybox ash, realpath has no '--canonicalize-missing' option; removing that flag makes the script work. Will you support and test busybox ash in the future?
Otherwise, this tool is very nice.
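A minimal stand-in for `realpath --canonicalize-missing` could be sketched like this for busybox ash. It is a hypothetical helper (not part of kube-dump) that only absolutizes the path; it does not resolve symlinks or `..` components:

```shell
# Absolutize a path that may not exist yet, without requiring GNU realpath.
canonicalize_missing() {
  case "$1" in
    /*) printf '%s\n' "$1" ;;                  # already absolute
    *)  printf '%s/%s\n' "${PWD%/}" "$1" ;;    # prefix current directory
  esac
}

canonicalize_missing "some/missing/dir"
```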

can't run in k8s with ServiceAccount/Rolebinding

I wanted to run kube-dump directly inside a container in a k8s cluster, but I always get an authorization error from the k8s API (HTTP 401).

Therefore, I created a service account and a role binding to the cluster-admin role. Executing any kubectl command inside the container works without any problems.
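The service account and binding described above might look like this (the names here are illustrative, not taken from the issue):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dump
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-dump-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kube-dump
  namespace: default
```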

What I don't understand is why the script reads the key when it never uses it in the curl command; using only the cert is not enough for authorization.

A better way to implement that check would be
(https://github.com/WoozyMasta/kube-dump/blob/master/kube-dump#L242):

_api_code=$(
  curl --fail --location --output /dev/null --write-out '%{http_code}\n' \
    --cacert "$kube_api_ca" --header "Authorization: Bearer $kube_api_token" \
    --silent "https://$kube_api/livez"
)

if [ "$_api_code" == "200" ]

With these changes it finally works.

Flag -f, --force-remove doesn't work

First run
kube-dump ns -n default -d kdump/

Second run
kube-dump ns -n default -d kdump/ -f

Error:

Warning:  Destination kdump/ directory will be removed
find: failed to delete 'kdump/default': Directory is not empty
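One hypothetical workaround until the flag is fixed is to clear the destination directory with `rm` instead of relying on the script's cleanup. The `${var:?}` expansion aborts if the variable is empty, guarding against an accidental `rm -rf /*`:

```shell
# Simulate a previous dump, then force-clear the destination directory.
destination_dir="kdump"                         # illustrative path
mkdir -p "$destination_dir/default"
touch "$destination_dir/default/app_deployments.yaml"

rm -rf -- "${destination_dir:?}/"*              # empty the dir, keep it
ls -A "$destination_dir"                        # prints nothing: dir is empty
```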

Make fully root-less

Make it possible to run entirely without root, with an arbitrary (offset) UID/GID.

It will also be necessary to verify that it works on OpenShift (the resource list could be extended when OpenShift is detected).

Exclude secrets

Nice script:)

Is there a way to exclude resource kind=secrets from the dump?

works on macOS too

This script works on macOS too via Homebrew, assuming all the other dependencies are already installed.
To get realpath it needs:

brew install coreutils

Flags not working

I am running the tool locally. I have tried the following commands:

./kube-dump dump -d /Users/andrea/Tools/dump
./kube-dump dump --namespaced-resources deployment --destination-dir /Users/andrea/Tools/dump
./kube-dump dump-namespaces -d /Users/andrea/Tools/dump
./kube-dump ns -r deployment -d /Users/andrea/Tools/dump

but it seems to ignore all the flags: it always downloads all the resources and saves them in the default directory (data).
Am I doing something wrong?
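The `/Users/...` paths suggest macOS, and one possible cause of silently ignored flags there is the getopt implementation: the script parses its options with util-linux (GNU) getopt's long-option syntax, which the BSD getopt shipped with macOS does not understand. A quick way to check whether the getopt on your PATH handles long options (on macOS, `brew install gnu-getopt` provides one):

```shell
# util-linux getopt echoes the normalized options back; BSD getopt cannot
# parse the -l (long options) specification this way.
args=$(getopt -l destination:,namespaced-resources: -o d:r: -- \
         -d /Users/andrea/Tools/dump -r deployment)
echo "$args"
```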

does not work on Ubuntu

$ ./kube-dump all
./kube-dump: line 21: $'\r': command not found
./kube-dump: line 23: $'\r': command not found
./kube-dump: line 25: syntax error near unexpected token `$'{\r''
./kube-dump: line 25: `log () {
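The `$'\r'` errors mean the script was saved with Windows (CRLF) line endings, typically introduced by a checkout with Git's autocrlf enabled. Stripping the carriage returns fixes it; `dos2unix kube-dump` works too. A sketch with GNU sed:

```shell
# Simulate a CRLF-damaged script, then strip the trailing carriage returns.
printf 'line one\r\nline two\r\n' > demo-crlf.sh
sed -i 's/\r$//' demo-crlf.sh
```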

Docker container unable to build archive

Hi,

It seems the Docker container is unable to build the archive. I've built my own image with `set -x` enabled in the kube-dump script.

$ docker run --user 1000:1001 --tty --interactive --rm   --volume $HOME/.kube:/.kube --volume $HOME/dump/docker/:/dump   jeroen0494/kube-dump:1.1.2 ns -n default -d /dump --kube-config /.kube/config -a --archive-type gz
++ pwd
+ working_dir=/
++ date +%Y.%m.%d_%H-%M
+ timestamp=2022.08.23_11-37
+ '[' -f //.env ']'
+ [[ ns =~ ^(dump|all|dump-namespaces|ns|dump-cluster|cls)$ ]]
+ mode=ns
++ getopt -l namespaces:,namespaced-resources:,cluster-resources: -l kube-config:,kube-context:,kube-cluster:,kube-insecure-tls -l help,silent,destination:,force-remove,detailed,output-by-type,flat -l git-commit,git-push,git-branch:,git-commit-user:,git-commit-email: -l git-remote-name:,git-remote-url: -l archivate,archive-rotate-days:,archive-type: -o n:,r:,k:,h,s,d:,f,c,p,b:,a -- -n default -d /dump --kube-config /.kube/config -a --archive-type gz
+ args=' -n '\''default'\'' -d '\''/dump'\'' --kube-config '\''/.kube/config'\'' -a --archive-type '\''gz'\'' --'
+ eval set -- ' -n '\''default'\'' -d '\''/dump'\'' --kube-config '\''/.kube/config'\'' -a --archive-type '\''gz'\'' --'
++ set -- -n default -d /dump --kube-config /.kube/config -a --archive-type gz --
+ '[' 10 -ge 1 ']'
+ case "$1" in
+ namespaces+=default,
+ shift
+ shift
+ '[' 8 -ge 1 ']'
+ case "$1" in
+ destination_dir=/dump
+ shift
+ shift
+ '[' 6 -ge 1 ']'
+ case "$1" in
+ kube_config=/.kube/config
+ shift
+ shift
+ '[' 4 -ge 1 ']'
+ case "$1" in
+ archivate=true
+ shift
+ '[' 3 -ge 1 ']'
+ case "$1" in
+ archive_type=gz
+ shift
+ shift
+ '[' 1 -ge 1 ']'
+ case "$1" in
+ shift
+ break
+ [[ -n '' ]]
+ : ''
+ : ''
+ : ''
+ : ''
+ : /.kube/config
+ : ''
+ : ''
+ : ''
+ : ''
+ : ''
+ : ''
+ : ''
+ : ''
+ : ''
+ : ''
+ : true
+ : ''
+ : gz
+ require kubectl jq yq
+ for command in "$@"
++ command -v kubectl
+ '[' -x /usr/bin/kubectl ']'
+ for command in "$@"
++ command -v jq
+ '[' -x /usr/bin/jq ']'
+ for command in "$@"
++ command -v yq
+ '[' -x /usr/bin/yq ']'
+ '[' '' == true ']'
+ '[' true == true ']'
+ '[' gz == xz ']'
+ '[' true == true ']'
+ '[' gz == gzip ']'
+ '[' true == true ']'
+ '[' gz == bzip2 ']'
+ '[' -n /.kube/config ']'
+ k_args+=("--kubeconfig=$kube_config")
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' '' == true ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ kubectl config current-context --kubeconfig=/.kube/config
++ kubectl config current-context --kubeconfig=/.kube/config
+ context=acc-jri-ot
+ '[' -n '' ']'
+ kubectl cluster-info --kubeconfig=/.kube/config
+ '[' -z default, ']'
+ namespaces=default,
+ '[' -z '' ']'
++ kubectl api-resources --namespaced=true --output=name --kubeconfig=/.kube/config
++ tr '\n' ' '
+ namespaced_resources='bindings configmaps endpoints events limitranges persistentvolumeclaims pods podtemplates replicationcontrollers resourcequotas secrets serviceaccounts services controllerrevisions.apps daemonsets.apps deployments.apps replicasets.apps statefulsets.apps localsubjectaccessreviews.authorization.k8s.io horizontalpodautoscalers.autoscaling cronjobs.batch jobs.batch leases.coordination.k8s.io endpointslices.discovery.k8s.io events.events.k8s.io addresspools.metallb.io bfdprofiles.metallb.io bgpadvertisements.metallb.io bgppeers.metallb.io communities.metallb.io ipaddresspools.metallb.io l2advertisements.metallb.io pods.metrics.k8s.io ingresses.networking.k8s.io networkpolicies.networking.k8s.io poddisruptionbudgets.policy rolebindings.rbac.authorization.k8s.io roles.rbac.authorization.k8s.io volumesnapshots.snapshot.storage.k8s.io csistoragecapacities.storage.k8s.io '
+ '[' -z '' ']'
++ kubectl api-resources --namespaced=false --output=name --kubeconfig=/.kube/config
++ tr '\n' ' '
+ cluster_resources='componentstatuses namespaces nodes persistentvolumes mutatingwebhookconfigurations.admissionregistration.k8s.io validatingwebhookconfigurations.admissionregistration.k8s.io customresourcedefinitions.apiextensions.k8s.io apiservices.apiregistration.k8s.io tokenreviews.authentication.k8s.io selfsubjectaccessreviews.authorization.k8s.io selfsubjectrulesreviews.authorization.k8s.io subjectaccessreviews.authorization.k8s.io certificatesigningrequests.certificates.k8s.io flowschemas.flowcontrol.apiserver.k8s.io prioritylevelconfigurations.flowcontrol.apiserver.k8s.io nodes.metrics.k8s.io ingressclasses.networking.k8s.io runtimeclasses.node.k8s.io podsecuritypolicies.policy clusterrolebindings.rbac.authorization.k8s.io clusterroles.rbac.authorization.k8s.io priorityclasses.scheduling.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io csidrivers.storage.k8s.io csinodes.storage.k8s.io storageclasses.storage.k8s.io volumeattachments.storage.k8s.io '
++ cat
+ cluster_jq_filter='  del(
    .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration",
    .metadata.annotations."control-plane.alpha.kubernetes.io/leader",
    .metadata.uid,
    .metadata.selfLink,
    .metadata.resourceVersion,
    .metadata.creationTimestamp,
    .metadata.generation
  )'
++ cat
+ namespaced_jq_filter='  del(
    .metadata.annotations."autoscaling.alpha.kubernetes.io/conditions",
    .metadata.annotations."autoscaling.alpha.kubernetes.io/current-metrics",
    .metadata.annotations."control-plane.alpha.kubernetes.io/leader",
    .metadata.annotations."deployment.kubernetes.io/revision",
    .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration",
    .metadata.annotations."kubernetes.io/service-account.uid",
    .metadata.annotations."pv.kubernetes.io/bind-completed",
    .metadata.annotations."pv.kubernetes.io/bound-by-controller",
    .metadata.finalizers,
    .metadata.managedFields,
    .metadata.creationTimestamp,
    .metadata.generation,
    .metadata.resourceVersion,
    .metadata.selfLink,
    .metadata.uid,
    .spec.clusterIP,
    .spec.progressDeadlineSeconds,
    .spec.revisionHistoryLimit,
    .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt",
    .spec.template.metadata.creationTimestamp,
    .spec.volumeName,
    .spec.volumeMode,
    .status
  )'
+ '[' '' == true ']'
+ destination_dir=/dump
++ realpath /dump --canonicalize-missing
+ destination_dir=/dump
+ '[' '!' -d /dump ']'
+ '[' '' == true ']'
+ '[' '' == true ']'
+ success 'Dump data in' /dump directory ''
+ '[' '' == true ']'
+ '[' -t 1 ']'
+ printf '%s \e[1;36m%s\e[m %s\n' 'Dump data in' /dump directory ''
Dump data in /dump directory
  
+ score=1
+ score=0
+ [[ ns =~ ^(dump|all|dump-namespaces|ns)$ ]]
+ for ns in ${namespaces//,/ }
+ kubectl get ns default --kubeconfig=/.kube/config
+ destination_namespace_dir=/dump/default
+ '[' -d /dump/default ']'
+ heading 'Dump namespace' default
+ '[' '' == true ']'
+ '[' -t 1 ']'
+ printf '%s \e[1;34m%s\e[m %s\n%-15s%-30s%s\n' 'Dump namespace' default started STATE RESOURCE NAME
Dump namespace default started
STATE          RESOURCE                      NAME
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_bindings
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get bindings --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_configmaps
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get configmaps --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ '[' -z kube-root-ca.crt ']'
+ '[' configmaps == secret ']'
+ msg-start configmaps kube-root-ca.crt
+ '[' '' == true ']'
+ '[' -t 1 ']'
+ printf '\e[1;33m%-15s\e[m%-30s%s\n' Processing configmaps kube-root-ca.crt
Processing     configmaps                    kube-root-ca.crt
+ destination_resource_name=kube-root-ca.crt_configmaps.yaml
+ kubectl --namespace=default get --output=json configmaps kube-root-ca.crt --kubeconfig=/.kube/config
+ jq --exit-status --compact-output --monochrome-output --raw-output --sort-keys '  del(
    .metadata.annotations."autoscaling.alpha.kubernetes.io/conditions",
    .metadata.annotations."autoscaling.alpha.kubernetes.io/current-metrics",
    .metadata.annotations."control-plane.alpha.kubernetes.io/leader",
    .metadata.annotations."deployment.kubernetes.io/revision",
    .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration",
    .metadata.annotations."kubernetes.io/service-account.uid",
    .metadata.annotations."pv.kubernetes.io/bind-completed",
    .metadata.annotations."pv.kubernetes.io/bound-by-controller",
    .metadata.finalizers,
    .metadata.managedFields,
    .metadata.creationTimestamp,
    .metadata.generation,
    .metadata.resourceVersion,
    .metadata.selfLink,
    .metadata.uid,
    .spec.clusterIP,
    .spec.progressDeadlineSeconds,
    .spec.revisionHistoryLimit,
    .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt",
    .spec.template.metadata.creationTimestamp,
    .spec.volumeName,
    .spec.volumeMode,
    .status
  )'
+ yq eval --prettyPrint --no-colors --exit-status -
+ msg-end configmaps kube-root-ca.crt
+ '[' '' == true ']'
+ '[' -t 1 ']'
Success        configmaps                    kube-root-ca.crt
+ read -r name
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_endpoints
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get endpoints --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ '[' -z kubernetes ']'
+ '[' endpoints == secret ']'
+ msg-start endpoints kubernetes
+ '[' '' == true ']'
+ '[' -t 1 ']'
+ printf '\e[1;33m%-15s\e[m%-30s%s\n' Processing endpoints kubernetes
Processing     endpoints                     kubernetes
+ destination_resource_name=kubernetes_endpoints.yaml
+ kubectl --namespace=default get --output=json endpoints kubernetes --kubeconfig=/.kube/config
+ jq --exit-status --compact-output --monochrome-output --raw-output --sort-keys '  del(
    .metadata.annotations."autoscaling.alpha.kubernetes.io/conditions",
    .metadata.annotations."autoscaling.alpha.kubernetes.io/current-metrics",
    .metadata.annotations."control-plane.alpha.kubernetes.io/leader",
    .metadata.annotations."deployment.kubernetes.io/revision",
    .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration",
    .metadata.annotations."kubernetes.io/service-account.uid",
    .metadata.annotations."pv.kubernetes.io/bind-completed",
    .metadata.annotations."pv.kubernetes.io/bound-by-controller",
    .metadata.finalizers,
    .metadata.managedFields,
    .metadata.creationTimestamp,
    .metadata.generation,
    .metadata.resourceVersion,
    .metadata.selfLink,
    .metadata.uid,
    .spec.clusterIP,
    .spec.progressDeadlineSeconds,
    .spec.revisionHistoryLimit,
    .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt",
    .spec.template.metadata.creationTimestamp,
    .spec.volumeName,
    .spec.volumeMode,
    .status
  )'
+ yq eval --prettyPrint --no-colors --exit-status -
+ msg-end endpoints kubernetes
+ '[' '' == true ']'
+ '[' -t 1 ']'
Success        endpoints                     kubernetes
+ read -r name
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_events
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get events --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_limitranges
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get limitranges --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_persistentvolumeclaims
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get persistentvolumeclaims --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_pods
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get pods --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_podtemplates
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get podtemplates --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_replicationcontrollers
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get replicationcontrollers --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_resourcequotas
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get resourcequotas --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_secrets
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get secrets --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ '[' -z default-token-dk5tw ']'
+ '[' secrets == secret ']'
+ msg-start secrets default-token-dk5tw
+ '[' '' == true ']'
+ '[' -t 1 ']'
+ printf '\e[1;33m%-15s\e[m%-30s%s\n' Processing secrets default-token-dk5tw
Processing     secrets                       default-token-dk5tw
+ destination_resource_name=default-token-dk5tw_secrets.yaml
+ kubectl --namespace=default get --output=json secrets default-token-dk5tw --kubeconfig=/.kube/config
+ jq --exit-status --compact-output --monochrome-output --raw-output --sort-keys '  del(
    .metadata.annotations."autoscaling.alpha.kubernetes.io/conditions",
    .metadata.annotations."autoscaling.alpha.kubernetes.io/current-metrics",
    .metadata.annotations."control-plane.alpha.kubernetes.io/leader",
    .metadata.annotations."deployment.kubernetes.io/revision",
    .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration",
    .metadata.annotations."kubernetes.io/service-account.uid",
    .metadata.annotations."pv.kubernetes.io/bind-completed",
    .metadata.annotations."pv.kubernetes.io/bound-by-controller",
    .metadata.finalizers,
    .metadata.managedFields,
    .metadata.creationTimestamp,
    .metadata.generation,
    .metadata.resourceVersion,
    .metadata.selfLink,
    .metadata.uid,
    .spec.clusterIP,
    .spec.progressDeadlineSeconds,
    .spec.revisionHistoryLimit,
    .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt",
    .spec.template.metadata.creationTimestamp,
    .spec.volumeName,
    .spec.volumeMode,
    .status
  )'
+ yq eval --prettyPrint --no-colors --exit-status -
+ msg-end secrets default-token-dk5tw
+ '[' '' == true ']'
+ '[' -t 1 ']'
Success        secrets                       default-token-dk5tw
+ read -r name
+ '[' -z mount-options ']'
+ '[' secrets == secret ']'
+ msg-start secrets mount-options
+ '[' '' == true ']'
+ '[' -t 1 ']'
+ printf '\e[1;33m%-15s\e[m%-30s%s\n' Processing secrets mount-options
Processing     secrets                       mount-options
+ destination_resource_name=mount-options_secrets.yaml
+ kubectl --namespace=default get --output=json secrets mount-options --kubeconfig=/.kube/config
+ jq --exit-status --compact-output --monochrome-output --raw-output --sort-keys '  del(
    .metadata.annotations."autoscaling.alpha.kubernetes.io/conditions",
    .metadata.annotations."autoscaling.alpha.kubernetes.io/current-metrics",
    .metadata.annotations."control-plane.alpha.kubernetes.io/leader",
    .metadata.annotations."deployment.kubernetes.io/revision",
    .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration",
    .metadata.annotations."kubernetes.io/service-account.uid",
    .metadata.annotations."pv.kubernetes.io/bind-completed",
    .metadata.annotations."pv.kubernetes.io/bound-by-controller",
    .metadata.finalizers,
    .metadata.managedFields,
    .metadata.creationTimestamp,
    .metadata.generation,
    .metadata.resourceVersion,
    .metadata.selfLink,
    .metadata.uid,
    .spec.clusterIP,
    .spec.progressDeadlineSeconds,
    .spec.revisionHistoryLimit,
    .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt",
    .spec.template.metadata.creationTimestamp,
    .spec.volumeName,
    .spec.volumeMode,
    .status
  )'
+ yq eval --prettyPrint --no-colors --exit-status -
+ msg-end secrets mount-options
+ '[' '' == true ']'
+ '[' -t 1 ']'
Success        secrets                       mount-options
+ read -r name
+ '[' -z velero-restic-credentials ']'
+ '[' secrets == secret ']'
+ msg-start secrets velero-restic-credentials
+ '[' '' == true ']'
+ '[' -t 1 ']'
+ printf '\e[1;33m%-15s\e[m%-30s%s\n' Processing secrets velero-restic-credentials
Processing     secrets                       velero-restic-credentials
+ destination_resource_name=velero-restic-credentials_secrets.yaml
+ kubectl --namespace=default get --output=json secrets velero-restic-credentials --kubeconfig=/.kube/config
+ jq --exit-status --compact-output --monochrome-output --raw-output --sort-keys '  del(
    .metadata.annotations."autoscaling.alpha.kubernetes.io/conditions",
    .metadata.annotations."autoscaling.alpha.kubernetes.io/current-metrics",
    .metadata.annotations."control-plane.alpha.kubernetes.io/leader",
    .metadata.annotations."deployment.kubernetes.io/revision",
    .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration",
    .metadata.annotations."kubernetes.io/service-account.uid",
    .metadata.annotations."pv.kubernetes.io/bind-completed",
    .metadata.annotations."pv.kubernetes.io/bound-by-controller",
    .metadata.finalizers,
    .metadata.managedFields,
    .metadata.creationTimestamp,
    .metadata.generation,
    .metadata.resourceVersion,
    .metadata.selfLink,
    .metadata.uid,
    .spec.clusterIP,
    .spec.progressDeadlineSeconds,
    .spec.revisionHistoryLimit,
    .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt",
    .spec.template.metadata.creationTimestamp,
    .spec.volumeName,
    .spec.volumeMode,
    .status
  )'
+ yq eval --prettyPrint --no-colors --exit-status -
+ msg-end secrets velero-restic-credentials
+ '[' '' == true ']'
+ '[' -t 1 ']'
Success        secrets                       velero-restic-credentials
+ read -r name
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_serviceaccounts
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get serviceaccounts --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ '[' -z default ']'
+ '[' serviceaccounts == secret ']'
+ msg-start serviceaccounts default
+ '[' '' == true ']'
+ '[' -t 1 ']'
+ printf '\e[1;33m%-15s\e[m%-30s%s\n' Processing serviceaccounts default
Processing     serviceaccounts               default
+ destination_resource_name=default_serviceaccounts.yaml
+ kubectl --namespace=default get --output=json serviceaccounts default --kubeconfig=/.kube/config
+ jq --exit-status --compact-output --monochrome-output --raw-output --sort-keys '  del(
    .metadata.annotations."autoscaling.alpha.kubernetes.io/conditions",
    .metadata.annotations."autoscaling.alpha.kubernetes.io/current-metrics",
    .metadata.annotations."control-plane.alpha.kubernetes.io/leader",
    .metadata.annotations."deployment.kubernetes.io/revision",
    .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration",
    .metadata.annotations."kubernetes.io/service-account.uid",
    .metadata.annotations."pv.kubernetes.io/bind-completed",
    .metadata.annotations."pv.kubernetes.io/bound-by-controller",
    .metadata.finalizers,
    .metadata.managedFields,
    .metadata.creationTimestamp,
    .metadata.generation,
    .metadata.resourceVersion,
    .metadata.selfLink,
    .metadata.uid,
    .spec.clusterIP,
    .spec.progressDeadlineSeconds,
    .spec.revisionHistoryLimit,
    .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt",
    .spec.template.metadata.creationTimestamp,
    .spec.volumeName,
    .spec.volumeMode,
    .status
  )'
+ yq eval --prettyPrint --no-colors --exit-status -
+ msg-end serviceaccounts default
+ '[' '' == true ']'
+ '[' -t 1 ']'
Success        serviceaccounts               default
+ read -r name
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_services
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get services --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ '[' -z kubernetes ']'
+ '[' services == secret ']'
+ msg-start services kubernetes
+ '[' '' == true ']'
+ '[' -t 1 ']'
+ printf '\e[1;33m%-15s\e[m%-30s%s\n' Processing services kubernetes
Processing     services                      kubernetes
+ destination_resource_name=kubernetes_services.yaml
+ kubectl --namespace=default get --output=json services kubernetes --kubeconfig=/.kube/config
+ jq --exit-status --compact-output --monochrome-output --raw-output --sort-keys '  del(
    .metadata.annotations."autoscaling.alpha.kubernetes.io/conditions",
    .metadata.annotations."autoscaling.alpha.kubernetes.io/current-metrics",
    .metadata.annotations."control-plane.alpha.kubernetes.io/leader",
    .metadata.annotations."deployment.kubernetes.io/revision",
    .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration",
    .metadata.annotations."kubernetes.io/service-account.uid",
    .metadata.annotations."pv.kubernetes.io/bind-completed",
    .metadata.annotations."pv.kubernetes.io/bound-by-controller",
    .metadata.finalizers,
    .metadata.managedFields,
    .metadata.creationTimestamp,
    .metadata.generation,
    .metadata.resourceVersion,
    .metadata.selfLink,
    .metadata.uid,
    .spec.clusterIP,
    .spec.progressDeadlineSeconds,
    .spec.revisionHistoryLimit,
    .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt",
    .spec.template.metadata.creationTimestamp,
    .spec.volumeName,
    .spec.volumeMode,
    .status
  )'
+ yq eval --prettyPrint --no-colors --exit-status -
+ msg-end services kubernetes
+ '[' '' == true ']'
+ '[' -t 1 ']'
Success        services                      kubernetes
+ read -r name
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_controllerrevisions.apps
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get controllerrevisions.apps --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_daemonsets.apps
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get daemonsets.apps --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_deployments.apps
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get deployments.apps --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_replicasets.apps
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get replicasets.apps --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_statefulsets.apps
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get statefulsets.apps --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_localsubjectaccessreviews.authorization.k8s.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get localsubjectaccessreviews.authorization.k8s.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_horizontalpodautoscalers.autoscaling
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get horizontalpodautoscalers.autoscaling --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_cronjobs.batch
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get cronjobs.batch --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_jobs.batch
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get jobs.batch --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_leases.coordination.k8s.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get leases.coordination.k8s.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_endpointslices.discovery.k8s.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get endpointslices.discovery.k8s.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ '[' -z kubernetes ']'
+ '[' endpointslices.discovery.k8s.io == secret ']'
+ msg-start endpointslices.discovery.k8s.io kubernetes
+ '[' '' == true ']'
+ '[' -t 1 ']'
+ printf '\e[1;33m%-15s\e[m%-30s%s\n' Processing endpointslices.discovery.k8s.io kubernetes
Processing     endpointslices.discovery.k8s.iokubernetes
+ destination_resource_name=kubernetes_endpointslices.discovery.k8s.io.yaml
+ kubectl --namespace=default get --output=json endpointslices.discovery.k8s.io kubernetes --kubeconfig=/.kube/config
+ jq --exit-status --compact-output --monochrome-output --raw-output --sort-keys '  del(
    .metadata.annotations."autoscaling.alpha.kubernetes.io/conditions",
    .metadata.annotations."autoscaling.alpha.kubernetes.io/current-metrics",
    .metadata.annotations."control-plane.alpha.kubernetes.io/leader",
    .metadata.annotations."deployment.kubernetes.io/revision",
    .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration",
    .metadata.annotations."kubernetes.io/service-account.uid",
    .metadata.annotations."pv.kubernetes.io/bind-completed",
    .metadata.annotations."pv.kubernetes.io/bound-by-controller",
    .metadata.finalizers,
    .metadata.managedFields,
    .metadata.creationTimestamp,
    .metadata.generation,
    .metadata.resourceVersion,
    .metadata.selfLink,
    .metadata.uid,
    .spec.clusterIP,
    .spec.progressDeadlineSeconds,
    .spec.revisionHistoryLimit,
    .spec.template.metadata.annotations."kubectl.kubernetes.io/restartedAt",
    .spec.template.metadata.creationTimestamp,
    .spec.volumeName,
    .spec.volumeMode,
    .status
  )'+ 
yq eval --prettyPrint --no-colors --exit-status -
+ msg-end endpointslices.discovery.k8s.io kubernetes
+ '[' '' == true ']'
+ '[' -t 1 ']'
Success        endpointslices.discovery.k8s.iokubernetesintslices.discovery.k8s.io kubernetes
+ read -r name
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_events.events.k8s.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get events.events.k8s.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_addresspools.metallb.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get addresspools.metallb.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_bfdprofiles.metallb.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get bfdprofiles.metallb.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_bgpadvertisements.metallb.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get bgpadvertisements.metallb.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_bgppeers.metallb.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get bgppeers.metallb.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_communities.metallb.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get communities.metallb.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_ipaddresspools.metallb.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get ipaddresspools.metallb.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_l2advertisements.metallb.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get l2advertisements.metallb.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_pods.metrics.k8s.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get pods.metrics.k8s.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_ingresses.networking.k8s.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get ingresses.networking.k8s.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_networkpolicies.networking.k8s.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get networkpolicies.networking.k8s.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_poddisruptionbudgets.policy
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get poddisruptionbudgets.policy --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_rolebindings.rbac.authorization.k8s.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get rolebindings.rbac.authorization.k8s.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_roles.rbac.authorization.k8s.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get roles.rbac.authorization.k8s.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_volumesnapshots.snapshot.storage.k8s.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get volumesnapshots.snapshot.storage.k8s.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ for resource in ${namespaced_resources//,/ }
+ destination_resource_dir=/dump/default
+ '[' '' == true ']'
+ destination_suffix=_csistoragecapacities.storage.k8s.io
+ '[' '' == true ']'
+ '[' '' == true ']'
+ read -r name
++ kubectl --namespace=default get csistoragecapacities.storage.k8s.io --output=custom-columns=NAME:.metadata.name --no-headers --kubeconfig=/.kube/config
+ success Namespace default 'resources dump completed' ''
+ '[' '' == true ']'
+ '[' -t 1 ']'
+ printf '%s \e[1;36m%s\e[m %s\n' Namespace default 'resources dump completed' ''
Namespace default resources dump completed
  
+ score=1
+ [[ ns =~ ^(dump|all|dump-cluster|cls)$ ]]
+ '[' '' == true ']'
+ '[' '' == true ']'
+ '[' true == true ']'
+ '[' gz == xz ']'
+ '[' gz == gz ']'
+ _compress=--gzip
+ '[' gz == bz2 ']'
+ '[' -n --gzip ']'
+ _archive=/dump/dump_2022.08.23_11-37.tar.gz
+ tar --create --file=/dump/dump_2022.08.23_11-37.tar.gz --absolute-names --gzip --exclude-vcs '--exclude=*.tar' '--exclude=*.tar.xz' '--exclude=*.tar.gz' '--exclude=*.tar.bz2' --directory= dump
tar: : Cannot open: No such file or directory
tar: Error is not recoverable: exiting now

It seems the archives are being created, but kube-dump is unable to either find them after creation or write to them. Running the container without the --user option doesn't change the observed behaviour.

$ ls -l docker/
total 0
drwxr-xr-x 1 jrijken jrijken 504 aug 23 13:34 default
-rw-r--r-- 1 jrijken jrijken   0 aug 23 13:35 dump_2022.08.23_11-34.tar.gz
-rw-r--r-- 1 jrijken jrijken   0 aug 23 13:38 dump_2022.08.23_11-37.tar.gz

When using the bash script directly on the host system I don't seem to have this problem.
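For what it's worth, the empty `--directory=` visible in the trace is enough to reproduce the broken archive on its own; a minimal sketch under that assumption (paths are illustrative):

```shell
# Reproduce the failure mode from the trace: a destination variable that
# was never set makes tar's --directory flag expand to an empty string.
mkdir -p /tmp/kd-demo/dump
touch /tmp/kd-demo/dump/file.yaml

dir=""   # stand-in for the unset destination directory
tar --create --file=/tmp/kd-demo/bad.tar --absolute-names \
    --directory="$dir" dump 2>&1 | head -n 1   # fails like in the report

# Validating the variable before calling tar avoids the 0-byte archive:
dir=/tmp/kd-demo
if [ -n "$dir" ] && [ -d "$dir" ]; then
  tar --create --file=/tmp/kd-demo/good.tar --directory="$dir" dump
fi
tar --list --file=/tmp/kd-demo/good.tar
```

The second call lists `dump/file.yaml`, confirming the archive is populated once the directory argument is non-empty.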

Kube-dump fails

Hello,

kube-dump fails to populate the YAML files.
The connection to the cluster works;
in fact it can see all resources,
but the files are empty and there is a series of failures.
Whether the kubeconfig is passed to the command or set in the environment,
the result is the same.

./kube-dump all

Dump data in /root/data directory

Dump namespace alert-dispatcher started
STATE RESOURCE NAME
Failed configmaps kube-root-ca.crt
Failed endpoints publisher
Failed events alert-publisher-7db85b76d8-kwgf5.17051a2a89149f67
Failed events alert-publisher-7db85b76d8-kwgf5.17051a2a91a3af76
Failed events alert-publisher-7db85b76d8-kwgf5.17051a2a91a3cd0f

I have yq and jq binaries in /bin
I run those commands from an ubuntu machine,

How can I troubleshoot further?
What could be the reasons for these failures?

thanks in advance

Bash autocomplete

Implement autocomplete in Bash for:

  • commands
  • flags
  • paths
  • kubernetes resources
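A static completion script would cover the first two items; the command and flag names below are assumptions taken from the docs, and resource/path completion would need live kubectl calls on top:

```shell
# Minimal bash completion sketch for kube-dump. Command and flag names
# are assumed from the README, not generated from the tool itself.
_kube_dump_complete() {
  local cur=${COMP_WORDS[COMP_CWORD]}
  local cmds="all dump dump-namespaces ns dump-cluster cls"
  local flags="-n --namespaces -d --destination-dir -a --archivate --kube-config --help"
  if [ "$COMP_CWORD" -eq 1 ]; then
    # First word: complete the subcommand
    COMPREPLY=($(compgen -W "$cmds" -- "$cur"))
  else
    # Later words: complete flags (paths fall through via -o default)
    COMPREPLY=($(compgen -W "$flags" -- "$cur"))
  fi
}
complete -o default -F _kube_dump_complete kube-dump
```

Sourcing this in `~/.bashrc` makes `kube-dump d<Tab>` offer `dump`, `dump-namespaces`, and `dump-cluster`.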

Cannot restrict dumps to a single namespace

I'm trying to dump namespaced resources from a single namespace with the following command

$ kube-dump ns -n recsapi
Dump data directory /Users/agooch/tmp/data created
Dump data in /Users/agooch/tmp/data directory

Dump namespace admin-console started
STATE          RESOURCE                      NAME
Success        configmaps                    kube-root-ca.crt
Success        secrets                       default-token-jgsgt
<ctrl-c>
$

The admin-console namespace is the first namespace in my cluster when sorted alphabetically, so it appears to be acting on all namespaces.

I've reviewed the help docs, but can find nothing to indicate I'm using it incorrectly. Has anyone experienced this? Am I doing it wrong?

Many thanks in advance.
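For anyone debugging this, the selection behaviour one would expect from `-n` looks roughly like the sketch below (variable names and the namespace list are illustrative; this is not kube-dump's actual code):

```shell
# Expected -n/--namespaces filtering: only dump namespaces that appear
# in the comma-separated request list.
requested="recsapi"                                  # value of -n
all_ns="admin-console default kube-system recsapi"   # stand-in for `kubectl get ns`

selected=""
for ns in $all_ns; do
  case ",$requested," in
    *",$ns,"*) selected="$selected$ns " ;;           # keep only requested ones
  esac
done
echo "selected: $selected"
```

If the tool iterated `all_ns` directly instead of `selected`, it would show exactly the reported behaviour of starting with the alphabetically first namespace.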

Kube-dump Not working as expected

Hello All,

I ran kube-dump as a Kubernetes pod in my cluster and configured it to store the dumps both on a persistent disk and in a GitHub repo.

I currently do not know the state of the job, as it is not showing any meaningful log information. Please see the attached image.

Thanks
[image: kube-dump]

canonicalize-missing: No such file or directory

Hi
Thanks for this wonderful tool. I have CentOS 7.8 and am trying to run kube-dump 1.0 in Docker:

docker run --interactive --volume /home/kubernetesadmin/.kube:/.kube --volume /home/kubernetesadmin/dump:/dump woozymasta/kube-dump:1.0 dump-namespaces -n dev -d /dump --kube-config /.kube/config

and get:
realpath: --canonicalize-missing: No such file or directory

On version 1.0.0 or earlier everything is fine.

kube-dump not pushing to git with ssh git url

Deployed on Kubernetes via pod-sa-git-key.yaml.
kube-dump does not push when the git remote is set to
ssh://corp.git:252/some/link.git

git clone ssh://corp.git:252/some/link.git from the pod works
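One thing worth checking: the URL above has no user part, and git accepts two different SSH syntaxes that are easy to mix up (host, port, and path below are illustrative):

```shell
# URL form: user and port go directly in the ssh:// URL.
set -e
tmp=$(mktemp -d)
cd "$tmp" && git init --quiet .
git remote add origin ssh://git@corp.example.com:252/some/link.git
git remote get-url origin

# scp-like form (host:path) has NO port field; a non-default port must
# go into ~/.ssh/config instead:
#   Host corp
#     HostName corp.example.com
#     Port 252
#     User git
# then: git remote add backup corp:some/link.git
```

If the clone works interactively but the push from kube-dump fails, comparing the remote URL and the SSH user resolved inside the pod is a good first step.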

Maybe change --absolute-names to -P

Hi,
Trying to use the tool under macOS (which ships bsdtar by default) gives the following error when using the archive options:

tar: Option --absolute-names is not supported
Usage:
List: tar -tf
Extract: tar -xf
Create: tar -cf [filenames...]
Help: tar --help

The shorthand flag for absolute names on both GNU tar and BSD tar is -P; however, BSD tar doesn't support the long option name --absolute-names. Alternatively, GNU tar can be installed with Homebrew (brew install gnu-tar) and tar replaced with gtar inside kube-dump. So either -P or gtar is needed on macOS for the archive options to work.
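A small portability sketch combining both suggestions (file paths are illustrative):

```shell
# Prefer GNU tar when available (Homebrew installs it as gtar on macOS);
# -P is the shorthand for --absolute-names accepted by GNU and BSD tar.
if command -v gtar >/dev/null 2>&1; then TAR=gtar; else TAR=tar; fi

printf 'demo\n' > /tmp/kd-abs.txt
"$TAR" -cPf /tmp/kd-abs.tar /tmp/kd-abs.txt   # -P keeps the leading slash
"$TAR" -tf /tmp/kd-abs.tar                    # member listed as /tmp/kd-abs.txt
```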

Thanks

tar issue

kube-dump all -d kdump/ -a
Error at the end of export:

tar: `.': replaced with empty member names
tar: stat failed with error: No such file or directory
tar: Exiting with failure status due to previous errors

All files are present but the tar archive is broken; its size is only 10 KB.
v1.0.1

Error on GIT

Hello guys, I found an issue when using git push:

grep: unrecognized option: quiet

I added this option in the Dockerfile and it worked:

apk add --no-cache --upgrade grep
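The root cause appears to be that the BusyBox grep shipped in Alpine images can reject GNU long options; the POSIX short form works everywhere, so either form of the fix is viable:

```shell
# BusyBox grep (Alpine's default) may not recognise --quiet, but the
# POSIX short option -q is universally supported:
printf 'pushed to remote\n' | grep -q 'pushed' && echo 'match found'

# If GNU long options are really needed, install GNU grep instead,
# which is what the Dockerfile fix above does:
#   apk add --no-cache grep
```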

kube_api_token not being filled

I was trying to implement this script in a custom Docker container based on Alpine.
Everything seemed all right, but when I run the container as a job it gets stuck at startup, receiving 401s from the k8s API.
After some digging in the container and executing the script myself, it turned out that the API token was somehow never filled. After filling the token variable by hand with a cat and running the health check with the same command, it now succeeds.
I do not know why exactly, but filling it with kube_api_token=$(</token/path) does not work, while kube_api_token=$(cat /token/path) does work.
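The likely explanation is that `$(<file)` is a bash extension: in POSIX sh (including the BusyBox ash used by Alpine) the substitution runs only a redirection, produces no output, and the variable ends up empty. A quick demonstration (token path is illustrative):

```shell
# $(<file) reads a file without forking cat -- but only in bash.
printf 'secret-token\n' > /tmp/kd-token

bash -c 'token=$(</tmp/kd-token); echo "bash: [$token]"'     # token is filled
sh   -c 'token=$(cat /tmp/kd-token); echo "sh:   [$token]"'  # portable spelling
```

So switching the script (or the issue's workaround) to `$(cat ...)` makes it work under any /bin/sh.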

My Dockerfile for reference;

FROM woozymasta/kube-dump:1.0

# Add 'kubedump' user
ARG user=kubedump
ARG group=kubedump
ARG uid=1000
ARG gid=1000
ARG home_dir=/home/kubedump

ENV HOME ${home_dir}

RUN apk upgrade --no-cache \
    && addgroup --system ${group} --gid ${gid} \
    && adduser --system --uid ${uid} --home "$HOME" --ingroup ${group} --shell /bin/ash ${user} \
    && mkdir ${home_dir}/.ssh \
    && chmod 0700 ${home_dir}/.ssh \
    && chown -R ${user}:${group} ${home_dir} \
    && mv /kube-dump /usr/bin/kube-dump \
    && chmod a+x /usr/bin/kube-dump

# Switch context
WORKDIR ${home_dir}
USER ${user}

ENTRYPOINT [ "kube-dump" ]

Parameter --kube-insecure-tls doesn't work

Hey,
we're running an RKE cluster with a self-signed cert which causes the issue:
Unable to connect to the server: x509: certificate signed by unknown authority

But even if I specify the parameter --kube-insecure-tls the error appears. On kubectl the parameter --insecure-skip-tls-verify prevents that error.

Used kubectl version: v1.18.2
Used kube-dump version: v1.0.4
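For the option to be effective, the skip-verify intent has to reach every client the script shells out to, and kubectl and curl spell it differently. The wiring below is a sketch of that idea, not kube-dump's actual implementation:

```shell
# Propagate an insecure-TLS setting to both kubectl and curl invocations.
kube_insecure_tls=true   # stand-in for the --kube-insecure-tls flag

k_flag="" c_flag=""
if [ "$kube_insecure_tls" = "true" ]; then
  k_flag="--insecure-skip-tls-verify"   # kubectl's spelling
  c_flag="--insecure"                   # curl's spelling (short form: -k)
fi
echo "kubectl get namespaces $k_flag"
echo "curl $c_flag https://kube-api.example:6443/healthz"
```

If the flag only reaches one of the two clients, the x509 error from the other would still surface, which matches the reported behaviour.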

Replace realpath with realpath()

The script relies on the existence of "realpath". That does not exist on macOS, and there is no ready-made package that provides realpath.

However, it seems this could be replaced by a combination of readlink/dirname/basename, like this:

realpath() (
  # Resolve "$1" to an absolute path by following symlink chains; runs in
  # a subshell so the cd calls don't change the caller's directory.
  OURPWD=$PWD
  cd "$(dirname "$1")"
  TARGET=$(basename "$1")
  LINK=$(readlink "$TARGET")
  while [ "$LINK" ]; do
    # Follow the link that was just read, not the original "$1", so
    # chains where a link points at a differently named file resolve.
    cd "$(dirname "$LINK")"
    TARGET=$(basename "$LINK")
    LINK=$(readlink "$TARGET")
  done
  REALPATH="$PWD/$TARGET"
  cd "$OURPWD"
  echo "$REALPATH"
)

(from https://stackoverflow.com/questions/3572030/bash-script-absolute-path-with-os-x/18443300#18443300)
