yashbhutwala / kubectl-df-pv

kubectl plugin - giving admins df (disk free) like utility for persistent volumes

License: Apache License 2.0

Topics: krew-plugin, kubectl-plugin, kubernetes, kubectl, df, persistent-volumes, persistent-volume-claims

kubectl-df-pv's Introduction

kubectl df-pv

A kubectl plugin to see df for persistent volumes.

Requirements

☑ kube-apiserver has the api/v1/nodes/ endpoint enabled

☑ Appropriate RBAC. This utility is meant for a cluster-admin-like user; specifically, you need a service account with enough RBAC privileges to access api/v1/nodes/ from the kube-apiserver (see the access check after this list).

☑ A storage provisioner that populates PV metrics in a compatible manner (see what's been tested below)
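
A quick way to sanity-check the first two requirements (a sketch, not part of the plugin; substitute a real node name) is to confirm you can read the kubelet stats summary through the API server, which is the endpoint df-pv consumes:

kubectl auth can-i get nodes/proxy          # RBAC for the node proxy subresource
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary" | head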

Quick Start

Installation

Via Krew

curl https://krew.sh/df-pv | bash
# . ~/.bashrc   # run if you use bash shell
# . ~/.zshrc    # run if you use zsh shell

From source

cd $GOPATH/src/github.com/
mkdir -p yashbhutwala
cd yashbhutwala/
git clone [email protected]:yashbhutwala/kubectl-df-pv.git
cd kubectl-df-pv/
make install
df-pv --help

Via Release Binary

macOS

download_path="./df-pv"
version="v0.2.2"
curl --fail -Lo $download_path "https://github.com/yashbhutwala/kubectl-df-pv/releases/download/${version}/kubectl-df-pv_${version}_darwin_amd64.tar.gz"
chmod +x $download_path
mv $(download_path) /some-dir-in-your-PATH/df-pv

Usage

kubectl df-pv

example output

Flags

> kubectl df-pv --help

df-pv emulates Unix style df for persistent volumes w/ ability to filter by namespace

It autoconverts all "sizes" to IEC values (see: https://en.wikipedia.org/wiki/Binary_prefix and https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory)

It colors the values based on "severity" [red: > 75% (too high); yellow: < 25% (too low); green: >= 25 and <= 75 (OK)]

Usage:
  df-pv [flags]

Flags:
  -h, --help               help for df-pv
  -n, --namespace string   if present, the namespace scope for this CLI request (default is all namespaces)
  -v, --verbosity string   log level; one of [info, debug, trace, warn, error, fatal, panic] (default "info")

Other useful commands

enable trace logging, but output to a file

df-pv -v trace 2> trace.log
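
As a worked example of the IEC autoconversion described in the help text (a sketch using POSIX awk, not part of the plugin), the same byte count reads differently under binary (Gi) and decimal (GB) prefixes:

echo 1073741824 | awk '{printf "%.2fGi  %.2fGB\n", $1/(1024^3), $1/(1000^3)}'
# prints: 1.00Gi  1.07GB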

Tested

Works on

GKE (kubernetes.io/gce-pd dynamic provisioner; both with ssd and standard)

kubeadm-configured bare-metal cluster (rook ceph block dynamic provisioner using script)

Does not work due to storage provisioner

kind (rancher/local-path-provisioner dynamic provisioner)

minikube (gcr.io/k8s-minikube/storage-provisioner minikube-hostpath dynamic provisioner)

TODO

[ ] EKS

[ ] AKS

TODO Features

Yet to be completed

☒ sort-by flag

☒ exclude namespaces

☒ only show a specific colored result ("red", "yellow", "green")

Completed

☑ df for all Persistent Volumes in the cluster

☑ human readable output as default (using IEC format)

☑ color based on usage [red: > 75% (too high); yellow: < 25% (too low); green: >= 25 and <= 75 (OK)]

☑ print PV name

☑ print volume mount name

Motivation

Have you ever wondered, "How much free disk space do all my PVs have?" Me too! That's why I built this plugin! I have always wanted a quick way to see the disk usage of my Persistent Volumes (similar to df or du in Unix). It turns out I'm not the only one: there have been many upstream Kubernetes issues opened again and again about this, even some KEPs and PRs. I have compiled some of the issues and KEPs that I've seen in the wild here:

Issues

"this feature is needed .. !!!!" - @halradaideh

"There was a plan to implement this for 1.7 but we ran out of time."

"I need to know to which Pod(s) a PVC is being mounted to, without having to go through each Pod one by one checking the Volume mounts and the PersistentVolumeClaim."

KEPs

"PVC should show how much of the available capacity is used vs available"

"Expose storage metrics to end users"

"Display PVC capacity usage using kubectl"... "Has this been continued somewhere? It would be super useful"

"exposing storage metrics to users"... "Status? Was it done?"

Other relevant/rabbit-hole links

"Volume metrics exposed in /stats/summary not available in /metrics"

interesting metric PR

something similar to du in metrics

client-go issue about kubelet api by @DirectXMan12

kubelet stats api

kubelet client

metrics-server client

blog about kubectl printers and columns

cli-runtime tableprinter usage

kubectl using cli-runtime; sorting is still not implemented

kubectl-df-pv's People

Contributors

volatus, yashbhutwala, ybstaked


kubectl-df-pv's Issues

Add a flag for the output to convert the resulting values into MB/GB/TB if possible

It's already an awesome plug-in (one of my favorite ones)!
I think converting the bytes into MB/GB/TB shouldn't be really complex but will have a huge benefit for the users.

One more cosmetic thing:
Add a line break after "Searching PVCs" so that the output starts on a new line.

I saw the colour definitions in the code in root.go (50%/75%); it would be great to have that. Please activate it!

TIA

Cheers

FATA[0000] unable to run root command: accepts at most 0 arg(s), received 1

kubectl krew install df-pv

Updated the local copy of plugin index.
Installing plugin: df-pv
Installed plugin: df-pv
\
 | Use this plugin:
 |      kubectl df-pv
 | Documentation:
 |      https://github.com/yashbhutwala/kubectl-df-pv
/
WARNING: You installed plugin "df-pv" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.

kubectl df-pv

INFO[2021-07-19T16:32:50Z] Either no volumes found in namespace/s: 'all' or the storage provisioner used for the volumes does not publish metrics to kubelet 

kubectl df-pv all

Error: accepts at most 0 arg(s), received 1
Usage:
  df-pv [flags]

Flags:
      --as string                      Username to impersonate for the operation
      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --cache-dir string               Default HTTP cache directory (default "/home/apatsev/.kube/http-cache")
      --certificate-authority string   Path to a cert file for the certificate authority
      --client-certificate string      Path to a client certificate file for TLS
      --client-key string              Path to a client key file for TLS
      --cluster string                 The name of the kubeconfig cluster to use
      --context string                 The name of the kubeconfig context to use
  -d, --disable-color                  boolean flag for disabling colored output
  -h, --help                           help for df-pv
      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.
  -n, --namespace string               If present, the namespace scope for this CLI request
      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
  -s, --server string                  The address and port of the Kubernetes API server
      --tls-server-name string         Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used
      --token string                   Bearer token for authentication to the API server
      --user string                    The name of the kubeconfig user to use
  -v, --verbosity string               log level; one of [info, debug, trace, warn, error, fatal, panic] (default "info")

FATA[0000] unable to run root command: accepts at most 0 arg(s), received 1
run df-pv root command
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.InitAndExecute
        /home/runner/work/kubectl-df-pv/kubectl-df-pv/pkg/df-pv/root.go:30
main.main
        /home/runner/work/kubectl-df-pv/kubectl-df-pv/cmd/df-pv/main.go:10
runtime.main
        /opt/hostedtoolcache/go/1.14.7/x64/src/runtime/proc.go:203
runtime.goexit
        /opt/hostedtoolcache/go/1.14.7/x64/src/runtime/asm_amd64.s:1373 
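
For reference, df-pv accepts no positional arguments; all namespaces is the default scope, and a single namespace is selected with the -n flag shown in the usage above:

kubectl df-pv                   # all namespaces (the default)
kubectl df-pv -n my-namespace   # scope to one namespace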

Output can be misleading without decimals

We have some PVs with 4362206093312 bytes.

This corresponds to 3.97 tebibytes, which this tool outputs as 3Ti

kubectl get pvc shows 4Ti, and GCP shows 4096GB for this volume.

"df" can show decimals, I propose this tool should do the same

PR incoming for this
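
The arithmetic behind the report (a sketch with POSIX awk, not part of the tool): 4362206093312 bytes is about 3.97 TiB, which integer truncation renders as 3Ti:

echo 4362206093312 | awk '{printf "%.2fTi\n", $1 / (1024^4)}'
# prints: 3.97Ti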

Is it possible to display the REQUESTS & CAPACITY of each PVC?

Thanks @yashbhutwala for such an amazing util. It is helping us greatly with our performance analysis.

I've a query on this.

Currently on the usage front, it displays the SIZE,USED,AVAILABLE,%USED,IUSED,IFREE,%IUSED columns.
Is it possible to display the REQUESTS & CAPACITY of each PVC?
I use the command below to get those details, run yours separately, and then combine both.
kubectl get pvc --all-namespaces -o custom-columns="NAMESPACE:.metadata.namespace,NAME:metadata.name,REQUESTS:spec.resources.requests.storage,CAPACITY:status.capacity.storage"

While I do that, I was thinking it would be great to get those via your kubectl-df-pv.
I'm not sure if such a feature currently exists.
Thanks!

Feature request: Select PVCs by label

Hi,
nice plugin, thanks for that!

It would be nice to have the ability to limit the PVC usage query to a specific subset by using labels,
e.g. kubectl df-pv -l app=name
to query disk usage only for the PVCs used by a specific app.

Have you been thinking about it?
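
Until such a flag exists, a rough workaround (a sketch assuming bash process substitution and that PVC names appear verbatim in the table) is to filter df-pv's output by the PVCs matching the label:

kubectl get pvc -l app=name -o name   # the PVCs carrying the label
kubectl df-pv | grep -F -f <(kubectl get pvc -l app=name -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')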

Support kubectl standard flags

This is an awesome project! Thank you for working on this much needed feature!

I installed this plugin via:

kubectl krew install df-pv

I am unable to use the standard kubectl flags to switch the context and the namespace and I get this error:

$ kubectl --context $kubecontext df-pv                                                                                                        
Error: flags cannot be placed before plugin name: --context

I work with multiple clusters at a time, so setting the kube context each time would be time-consuming.

about metrics

Hi,

For some PVs, the results are strange. I see the whole storage capacity for the last 3 PVs instead of the PV capacity.
The first uses the rook-ceph-rbd SC, and the last 3 the rook-cephfs SC. Do you think it's related to df-pv or CSI?

 NAMESPACE  PVC NAME                                   PV NAME                                   POD NAME                                                         VOLUME MOUNT NAME  SIZE    USED   AVAILABLE  %USED  IUSED  IFREE                 %IUSED
 sandbox    data-drive-preprod-mariadb-galera-0        pvc-5cb1551b-8958-4c4f-8298-42c8c09ab896  drive-preprod-mariadb-galera-0                                   data               1014Mi  695Mi  318Mi      68.63  304    523984                0.06
 sandbox    data-drive-preprod-mariadb-tooling-backup  pvc-6ccbd327-3f9f-4c5a-a70b-3575c19d502b  drive-preprod-mariadb-tooling-restore-shell-c5c585478-wxk5m      mariadb            21Gi    500Mi  21Gi       2.27   33290  18446744073709551615  100.00
 sandbox    drive-preprod-xxx-nextcloud-ncdata         pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0  bckp-nextcloud-preprod-xxx-basic-volume-bckp-restore-shellcfgjx  source             21Gi    500Mi  21Gi       2.27   33290  18446744073709551615  100.00
 sandbox    drive-preprod-xxx-nextcloud-ncdata         pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0  drive-preprod-xxx-nextcloud-7b877b4d78-w6cg8                     data               21Gi    500Mi  21Gi       2.27   33290  18446744073709551615  100.00

use int64

this tool is very useful!

INodesFree int `json:"inodesFree"` // TODO: check if inodes is int or int64?

=> response: int64 :)

> kubectl df-pv
  Searching for PVCs ⠴json: cannot unmarshal number 18446744073709551615 into Go struct field Volume.inodesFree of type int
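
For context, 18446744073709551615 is 2^64 - 1, the maximum uint64; it overflows Go's int and int64 alike, so the field actually needs uint64. The raw value can be inspected in the kubelet stats that df-pv reads (a sketch; substitute a real node name, and note it assumes the JSON has no space after the colon, as Go's encoding/json emits):

kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary" | grep -o '"inodesFree":[0-9]*'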

Nothing is displayed for minikube PVCs

I tried to test via

minikube start
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 2Mi
EOF

which automatically creates a PV, see kubectl get pv:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
pvc-803939a0-13ca-11ea-b74d-0800279f5fff   2Mi        RWO            Delete           Bound    default/block-pvc     standard                2m57s

However, when running kubectl df-pv nothing is displayed. (This matches the "Does not work due to storage provisioner" note above: minikube's hostpath provisioner does not publish volume metrics to the kubelet.)

Add feature to resize PVC

If a PVC is running out of space, we usually need to expand the PVC size.

We can do it automatically by installing external plugins like pvc-autoresizer or Kubernetes-Volume-Autoscaler.
But if we don't, we need to edit the PVC to change the size parameter manually.
How about bringing that manual process into this tool?

Something like this: kubectl df-pv resize <pvc-name> 20Gi
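
For reference, the manual process today is a one-line patch (this only works when the StorageClass sets allowVolumeExpansion: true; <pvc-name> and 20Gi are placeholders):

kubectl patch pvc <pvc-name> -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'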

Error: %IUSED > 100


kubectl df-pv

PV NAME PVC NAME NAMESPACE NODE NAME POD NAME VOLUME MOUNT NAME SIZE USED AVAILABLE %USED IUSED IFREE %IUSED
pvc-c3fafdc4-85ca-46e9-9c9c-6bfb4688f441 grafana-pvc-fs monitoring k8s-myhome-node-02 grafana-69d969b8db-f57sk grafana-storage 1024Mi 8Mi 1016Mi 0.78 6 18446744073709551615 120.00

Feature request: customize columns

Is it possible to set the output to show only specific columns?


I don't need all of those columns, and I want to see each row on one line.

RFE: Disable color when output is not a tty

I wanted to include the output of kubectl df-pv in an email, but df-pv outputs color codes even when stdout is not a tty device. It would be great to either autodetect this situation, or at least have a --no-color option.
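
Two workarounds until autodetection lands: the -d/--disable-color flag visible in the flags listings elsewhere on this page, or stripping the ANSI escapes generically (a sketch assuming GNU sed):

kubectl df-pv --disable-color > report.txt
kubectl df-pv | sed 's/\x1b\[[0-9;]*m//g' > report.txt   # strip color codes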

optional no color and light color scheme

Nice plugin!
Unfortunately, it is impossible to read on a light or white background.

It would be nice to have options and/or a configuration file
where one could switch to a no-color scheme (and optionally to a color scheme for light backgrounds).


Print info message when no volumes are found

Currently, when no volumes are found, the command does not display anything, so it's unclear whether it succeeded.

It would be better if the command printed an informative message, similar to what kubectl get pods does:

No resources found in default namespace.

Sort Output

There is no flag that sorts the output in ascending or descending order.

Will work on adding this

current actor interrupted with error

Hi, Thanks for this amazing plugin.

While using this, the plugin throws the error below for one namespace alone. I have around 5+ namespaces, and only one of them produces the issue. Not sure why, though.

Error:

INFO[2020-12-08T10:58:38Z] TODO goroutine error handling; current actor was interrupted with: failed to get stats from node: resource name may not be empty
INFO[2020-12-08T10:58:38Z] TODO goroutine error handling; current actor was interrupted with: failed to get stats from node: resource name may not be empty
I1208 10:58:39.384456    1995 request.go:621] Throttling request took 1.089050551s, request: GET:https://k8s-apiserver.cluster.local:8443/api/v1/nodes/node20/proxy/stats/summary
INFO[2020-12-08T10:58:47Z] TODO goroutine error handling; current actor was interrupted with: failed to get stats from node: resource name may not be empty
INFO[2020-12-08T10:58:47Z] TODO goroutine error handling; current actor was interrupted with: failed to get stats from node: resource name may not be empty

With trace verbosity I could see that a few metrics are being collected for some PVCs, but I'm not sure why it gets stuck after that and stops responding.

Very much appreciate your support on this.

~/.kube/config shouldn't be required

I just upgraded the df-pv plugin from v0.1.5 to v0.2.4 ...

Unfortunately, the file ~/.kube/config is now required!
This file only exists after a successful login. If you're logged in via a kubeconfig (export KUBECONFIG=/path/to/kubeconfig), this file doesn't exist and the plugin crashes with the following error message:

oc df-pv

Error: error getting output slice: unable to build config from flags: stat /root/.kube/config: no such file or directory
Usage:
  df-pv [flags]

Flags:
  -h, --help               help for df-pv
  -n, --namespace string   if present, the namespace scope for this CLI request (default is all namespaces)
  -v, --verbosity string   log level; one of [info, debug, trace, warn, error, fatal, panic] (default "info")

FATA[2020-07-13T15:48:40+02:00] unable to run root command: stat /root/.kube/config: no such file or directory
unable to build config from flags
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.GetSliceOfOutputRowPVC
/home/runner/work/kubectl-df-pv/kubectl-df-pv/pkg/df-pv/root.go:442
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.runRootCommand
/home/runner/work/kubectl-df-pv/kubectl-df-pv/pkg/df-pv/root.go:70
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.setupRootCommand.func1
/home/runner/work/kubectl-df-pv/kubectl-df-pv/pkg/df-pv/root.go:53
github.com/spf13/cobra.(*Command).execute
/home/runner/work/kubectl-df-pv/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
/home/runner/work/kubectl-df-pv/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:950
github.com/spf13/cobra.(*Command).Execute
/home/runner/work/kubectl-df-pv/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:887
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.InitAndExecute
/home/runner/work/kubectl-df-pv/kubectl-df-pv/pkg/df-pv/root.go:30
main.main
/home/runner/work/kubectl-df-pv/kubectl-df-pv/cmd/df-pv/main.go:10
runtime.main
/opt/hostedtoolcache/go/1.14.4/x64/src/runtime/proc.go:203
runtime.goexit
/opt/hostedtoolcache/go/1.14.4/x64/src/runtime/asm_amd64.s:1373
error getting output slice
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.runRootCommand
/home/runner/work/kubectl-df-pv/kubectl-df-pv/pkg/df-pv/root.go:72
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.setupRootCommand.func1
/home/runner/work/kubectl-df-pv/kubectl-df-pv/pkg/df-pv/root.go:53
github.com/spf13/cobra.(*Command).execute
/home/runner/work/kubectl-df-pv/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
/home/runner/work/kubectl-df-pv/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:950
github.com/spf13/cobra.(*Command).Execute
/home/runner/work/kubectl-df-pv/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:887
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.InitAndExecute
/home/runner/work/kubectl-df-pv/kubectl-df-pv/pkg/df-pv/root.go:30
main.main
/home/runner/work/kubectl-df-pv/kubectl-df-pv/cmd/df-pv/main.go:10
runtime.main
/opt/hostedtoolcache/go/1.14.4/x64/src/runtime/proc.go:203
runtime.goexit
/opt/hostedtoolcache/go/1.14.4/x64/src/runtime/asm_amd64.s:1373
run df-pv root command
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.InitAndExecute
/home/runner/work/kubectl-df-pv/kubectl-df-pv/pkg/df-pv/root.go:30
main.main
/home/runner/work/kubectl-df-pv/kubectl-df-pv/cmd/df-pv/main.go:10
runtime.main
/opt/hostedtoolcache/go/1.14.4/x64/src/runtime/proc.go:203
runtime.goexit
/opt/hostedtoolcache/go/1.14.4/x64/src/runtime/asm_amd64.s:1373

This file shouldn't be required ... it worked in v0.1.5 without any issues! :(
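
As a stopgap, other help output on this page lists a --kubeconfig flag, which sidesteps the hard-coded ~/.kube/config lookup:

kubectl df-pv --kubeconfig "$KUBECONFIG"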

Installation and running

I installed using krew and also as per the macOS instructions.

  1. With krew
% kubectl df-pv -n ngfw-saas-stg
+ kubectl df-pv -n ngfw-saas-stg
Error: error getting output slice: Get "https://<url>:443/api/v1/namespaces/ngfw-saas-stg/pods": dial tcp <ip>: i/o timeout
Usage:
  df-pv [flags]

with the macOS installation

% kubectl df-pv
+ kubectl df-pv
error: unknown command "df-pv" for "kubectl"

I followed the installation commands as-is.

x509: certificate has expired

x509: certificate has expired or is not yet valid: current time 2021-04-26T10:18:22+02:00 is after 2021-04-07T03:12:06Z

Could you please update?

thx

Mateusz

Installation issue

➜ kubectl df-pv               
'Tipz:' k df-pv
Error: unknown command "df-pv" for "kubectl"
Run 'kubectl --help' for usage.

➜ which df-pv
~/.go/bin/df-pv

I can run the df-pv command directly from the CLI and it works correctly, but not via kubectl.

thanks,
Dmitry
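
kubectl discovers plugins by executable name: invoking kubectl df-pv runs a binary named kubectl-df_pv found on PATH (a dash inside the plugin name maps to an underscore in the filename). A sketch of the likely fix, assuming ~/.go/bin is on PATH:

ln -s ~/.go/bin/df-pv ~/.go/bin/kubectl-df_pv
kubectl plugin list   # should now show kubectl-df_pv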

duplicate PV

When a PV is used by X pods, it appears X times.

$ k get pvc
NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data-drive-preprod-mariadb-galera-0         Bound    pvc-5cb1551b-8958-4c4f-8298-42c8c09ab896   1Gi        RWO            rook-ceph-block   35h
data-drive-preprod-mariadb-tooling-backup   Bound    pvc-6ccbd327-3f9f-4c5a-a70b-3575c19d502b   500Mi      RWX            rook-cephfs       57d
drive-preprod-xxx-nextcloud-ncdata          Bound    pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0   1Gi        RWX            rook-cephfs       57d

$ k df-pv
 NAMESPACE  PVC NAME                                   PV NAME                                   POD NAME                                                         VOLUME MOUNT NAME  SIZE    USED   AVAILABLE  %USED  IUSED  IFREE                 %IUSED
 sandbox    data-drive-preprod-mariadb-galera-0        pvc-5cb1551b-8958-4c4f-8298-42c8c09ab896  drive-preprod-mariadb-galera-0                                   data               1014Mi  704Mi  309Mi      69.51  304    523984                0.06
 sandbox    data-drive-preprod-mariadb-tooling-backup  pvc-6ccbd327-3f9f-4c5a-a70b-3575c19d502b  drive-preprod-mariadb-tooling-restore-shell-c5c585478-wxk5m      mariadb            21Gi    516Mi  21Gi       2.34   33296  18446744073709551615  100.00
 sandbox    drive-preprod-xxx-nextcloud-ncdata         pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0  bckp-nextcloud-preprod-xxx-basic-volume-bckp-restore-shellcfgjx  source             21Gi    516Mi  21Gi       2.34   33296  18446744073709551615  100.00
 sandbox    drive-preprod-xxx-nextcloud-ncdata         pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0  drive-preprod-xxx-nextcloud-7b877b4d78-w6cg8                     data               21Gi    516Mi  21Gi       2.34   33296  18446744073709551615  100.00

installation error

I installed from source and received the following output after running df-pv.

I am using macOS, hence the Cellar GOPATH.

go version
go version go1.15.5 darwin/amd64

error output:

Error: error getting output slice: failed to list nodes: Get "http://localhost:8080/api/v1/nodes": dial tcp [::1]:8080: connect: connection refused
Usage:
  df-pv [flags]

Flags:
      --as string                      Username to impersonate for the operation
      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --cache-dir string               Default cache directory (default "/Users/rong/.kube/cache")
      --certificate-authority string   Path to a cert file for the certificate authority
      --client-certificate string      Path to a client certificate file for TLS
      --client-key string              Path to a client key file for TLS
      --cluster string                 The name of the kubeconfig cluster to use
      --context string                 The name of the kubeconfig context to use
  -d, --disable-color                  boolean flag for disabling colored output
  -h, --help                           help for df-pv
      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.
  -n, --namespace string               If present, the namespace scope for this CLI request
      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
  -s, --server string                  The address and port of the Kubernetes API server
      --tls-server-name string         Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used
      --token string                   Bearer token for authentication to the API server
      --user string                    The name of the kubeconfig user to use
  -v, --verbosity string               log level; one of [info, debug, trace, warn, error, fatal, panic] (default "info")

FATA[2020-12-08T09:52:06+02:00] unable to run root command: Get "http://localhost:8080/api/v1/nodes": dial tcp [::1]:8080: connect: connection refused
failed to list nodes
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.GetSliceOfOutputRowPVC
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv/root.go:349
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.runRootCommand
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv/root.go:76
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.setupRootCommand.func1
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv/root.go:56
github.com/spf13/cobra.(*Command).execute
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:950
github.com/spf13/cobra.(*Command).Execute
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:887
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.InitAndExecute
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv/root.go:32
main.main
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/cmd/df-pv/main.go:10
runtime.main
	${CELLAR_GOPATH}/src/runtime/proc.go:204
runtime.goexit
	${CELLAR_GOPATH}/src/runtime/asm_amd64.s:1374
error getting output slice
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.runRootCommand
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv/root.go:78
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.setupRootCommand.func1
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv/root.go:56
github.com/spf13/cobra.(*Command).execute
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:950
github.com/spf13/cobra.(*Command).Execute
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/vendor/github.com/spf13/cobra/command.go:887
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.InitAndExecute
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv/root.go:32
main.main
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/cmd/df-pv/main.go:10
runtime.main
	${CELLAR_GOPATH}/src/runtime/proc.go:204
runtime.goexit
	${CELLAR_GOPATH}/src/runtime/asm_amd64.s:1374
run df-pv root command
github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv.InitAndExecute
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/pkg/df-pv/root.go:32
main.main
	${BASE_GOPATH}/src/github.com/yashbhutwala/kubectl-df-pv/cmd/df-pv/main.go:10
runtime.main
	${CELLAR_GOPATH}/src/runtime/proc.go:204
runtime.goexit
	${CELLAR_GOPATH}/src/runtime/asm_amd64.s:1374
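
The localhost:8080 address is client-go's fallback when no kubeconfig can be found, so the from-source build is simply not seeing any cluster configuration. Pointing it at a kubeconfig explicitly (using the --kubeconfig flag from the help output above) should get past this:

df-pv --kubeconfig ~/.kube/config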

Display PV/PVCs that are not connected to any POD

Here is an example of a PV/PVC that exists in the Kubernetes state, yet is not used (and no longer exists among the AWS volumes):

➜ k get pvc -n metrics   
NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
grafana-persistent-storage             Bound    pvc-1395291c-d89b-11e9-8a64-0a4976158cfe   1Gi        RWO            gp2            398d

➜ k describe pv pvc-1395291c-d89b-11e9-8a64-0a4976158cfe           
Name:              pvc-1395291c-d89b-11e9-8a64-0a4976158cfe
Labels:            failure-domain.beta.kubernetes.io/region=eu-central-1
                   failure-domain.beta.kubernetes.io/zone=eu-central-1c
Annotations:       kubernetes.io/createdby: aws-ebs-dynamic-provisioner
                   pv.kubernetes.io/bound-by-controller: yes
                   pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      gp2
Status:            Bound
Claim:             metrics/grafana-persistent-storage
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          1Gi
Node Affinity:     
  Required Terms:  
    Term 0:        failure-domain.beta.kubernetes.io/zone in [eu-central-1c]
                   failure-domain.beta.kubernetes.io/region in [eu-central-1]
Message:           
Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   aws://eu-central-1c/vol-0b92b7db07b87b3e8
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
Events:         <none>

➜ aws ec2 describe-volumes --volume-ids vol-0b92b7db07b87b3e8

An error occurred (InvalidVolume.NotFound) when calling the DescribeVolumes operation: The volume 'vol-0b92b7db07b87b3e8' does not exist.

It would be nice to see output for these PVs/PVCs, with the POD column empty.

It ran on my AWS EKS without any issues.

Just wanted to let you know that it worked great on my AWS EKS cluster. Really helped solve a problem I was having. My setup is pretty simple: WSL 2.0 Ubuntu 20.04 with the AWS CLI and kubectl installed. I just use "aws eks update-kubeconfig" to set my connection to the cluster, and standard kubectl commands work fine.

