mtougeron / k8s-pvc-tagger
A utility to tag volumes based on a Kubernetes PVC annotation
License: Apache License 2.0
When I try to create a volume with a Name tag, the tagger just ignores that tag and moves on to the next one.
Is this on purpose? If so, can you change it? This is a very valuable tag that we need.
Is your feature request related to a problem? Please describe.
It would be great to automatically tag volumes with the same tags (or a subset of the tags) of the instance the EBS volume is attached to. This would make it easier to handle tags related to, for example, cost allocation, business department, and so on.
Figure out the proper way to watch multiple, specified namespaces at the same time.
Mock & test the logic in watchForPersistentVolumeClaims that the informer calls
Is your feature request related to a problem? Please describe.
It would be nice to have a way to tag the PV with the tags provided.
Describe the solution you'd like
When --annotate-pv is enabled, the PV will be annotated with the tags (patch the PersistentVolume).
Describe alternatives you've considered
N/A
Additional context
N/A
I can work on a PR if needed
Need to make the UpdateFunc handle when a tag has been removed from the annotation.
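One way to sketch this: diff the old and new annotation values that the informer's UpdateFunc receives, and collect the keys that disappeared. This is a minimal stdlib sketch; the function name and the JSON-map annotation format are assumptions for illustration, not the tool's actual API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// removedTags returns the keys present in the old annotation's JSON tag map
// but absent from the new one; these are the tags an UpdateFunc would need
// to delete from the underlying volume. (Hypothetical helper for illustration.)
func removedTags(oldAnnotation, newAnnotation string) ([]string, error) {
	var oldTags, newTags map[string]string
	if err := json.Unmarshal([]byte(oldAnnotation), &oldTags); err != nil {
		return nil, err
	}
	if err := json.Unmarshal([]byte(newAnnotation), &newTags); err != nil {
		return nil, err
	}
	var removed []string
	for k := range oldTags {
		if _, ok := newTags[k]; !ok {
			removed = append(removed, k)
		}
	}
	return removed, nil
}

func main() {
	removed, _ := removedTags(`{"team":"dc","env":"prod"}`, `{"team":"dc"}`)
	fmt.Println(removed)
}
```

The UpdateFunc would then issue untag calls for the returned keys, in addition to the existing create/update tagging logic.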
Figure out a way to do integration testing against AWS clusters when approved in the PR, e.g., the /ok-to-test sort of thing.
Describe the bug
When the EBS tagger is running and you add a new EBS volume that it should monitor, the pod crashes.
Expected behavior
The tagger should discover the new EBS volume, check whether it needs to be tagged, and tag it.
Error log
E1209 09:18:32.383947 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 65 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1cc9be0, 0x305f4e0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x86
panic(0x1cc9be0, 0x305f4e0)
/usr/local/go/src/runtime/panic.go:965 +0x1b9
main.processPersistentVolumeClaim(0xc00034cf78, 0xc000000004, 0xc0006c5c28, 0x1, 0x1, 0x0)
/build/kubernetes.go:253 +0x677
main.watchForPersistentVolumeClaims.func1(0x1f890a0, 0xc00034cf78)
/build/kubernetes.go:102 +0x24c
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777 +0xc2
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00005ff60)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006c5f60, 0x22c56c0, 0xc00038e330, 0x1c6d901, 0xc000640000)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00005ff60, 0x3b9aca00, 0x0, 0x1, 0xc000640000)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005da380)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771 +0x95
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000598c30, 0xc000614910)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x65
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1a24ab7]
goroutine 65 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x109
panic(0x1cc9be0, 0x305f4e0)
/usr/local/go/src/runtime/panic.go:965 +0x1b9
main.processPersistentVolumeClaim(0xc00034cf78, 0xc000000004, 0xc0006c5c28, 0x1, 0x1, 0x0)
/build/kubernetes.go:253 +0x677
main.watchForPersistentVolumeClaims.func1(0x1f890a0, 0xc00034cf78)
/build/kubernetes.go:102 +0x24c
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777 +0xc2
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00005ff60)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006c5f60, 0x22c56c0, 0xc00038e330, 0x1c6d901, 0xc000640000)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00005ff60, 0x3b9aca00, 0x0, 0x1, 0xc000640000)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005da380)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771 +0x95
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000598c30, 0xc000614910)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x65
Describe the bug
Any time I create a pod with a new pvc, the k8s-pvc-tagger pod crashes:
E1207 21:21:37.388710 1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 121 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1c1ace0?, 0x32f11e0})
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0003ca000?})
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:49 +0x75
panic({0x1c1ace0, 0x32f11e0})
/usr/local/go/src/runtime/panic.go:838 +0x207
main.processPersistentVolumeClaim(0xc0004d08f0)
/build/kubernetes.go:367 +0x31f
main.watchForPersistentVolumeClaims.func1({0x1f14a20?, 0xc0004d08f0})
/build/kubernetes.go:114 +0x1f8
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:232
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:818 +0xaf
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00051ae98?)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00051af38?, {0x235c4e0, 0xc0004284b0}, 0x1, 0xc0005a60c0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc00051af88?)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005d6600?)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:812 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x85
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1935b1f]
goroutine 121 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0003ca000?})
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:56 +0xd8
panic({0x1c1ace0, 0x32f11e0})
/usr/local/go/src/runtime/panic.go:838 +0x207
main.processPersistentVolumeClaim(0xc0004d08f0)
/build/kubernetes.go:367 +0x31f
main.watchForPersistentVolumeClaims.func1({0x1f14a20?, 0xc0004d08f0})
/build/kubernetes.go:114 +0x1f8
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:232
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:818 +0xaf
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00051ae98?)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00051af38?, {0x235c4e0, 0xc0004284b0}, 0x1, 0xc0005a60c0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc00051af88?)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005d6600?)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:812 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x85
I don't see any useful debugging info there, so I'm not sure how to debug.
Additional context
Here's how k8s-pvc-tagger was deployed via Terraform and Helm:
resource "helm_release" "k8s-pvc-tagger" {
  name             = "k8s-pvc-tagger"
  namespace        = "k8s-pvc-tagger"
  create_namespace = true
  repository       = "https://mtougeron.github.io/helm-charts"
  chart            = "k8s-pvc-tagger"
  version          = "2.0.1"

  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = aws_iam_role.foo.arn
  }

  set {
    name  = "serviceAccount.name"
    value = "k8s-pvc-tagger-sa"
  }
}
(plus of course the respective IAM service policy and role)
Our pods are also created via Helm. Here's their PVC definition, which seems to trigger the crash:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  annotations:
    volume.beta.kubernetes.io/storage-class: gp2
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
    k8s-pvc-tagger/tags: |
      {
        "foo/bu": "dc",
        "foo/consumer": "bar",
        "foo/expiry": "9999-01-01",
        "foo/created_by": "helm",
        "foo/environment": "unknown"
      }
  labels:
    managed-by: helm
    foo/bu: dc
    foo/consumer: bar
    foo/stage: unknown
    foo/expiry: "9999-01-01"
    foo/created_by: helm
    foo/environment: unknown
    app: {{ .Release.Name }}
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistentVolumeClaim.requestedSize | default "60Gi" }}
This error seems awfully similar to #37
Set up a cluster using the aws-ebs-csi-driver and test/update accordingly.
Describe the bug
Getting the following error after a PVC has been deleted and recreated:
E0117 23:07:54.796497 1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 69 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1c1ace0?, 0x32f11e0})
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00013b800?})
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:49 +0x75
panic({0x1c1ace0, 0x32f11e0})
/usr/local/go/src/runtime/panic.go:838 +0x207
main.processPersistentVolumeClaim(0xc0001694d0)
/build/kubernetes.go:367 +0x31f
main.watchForPersistentVolumeClaims.func1({0x1f14a20?, 0xc0001694d0})
/build/kubernetes.go:114 +0x1f8
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:232
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:818 +0xaf
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x10000000011?)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000523f38?, {0x235c4e0, 0xc000216060}, 0x1, 0xc000734000)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc000523f88?)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005c4500?)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:812 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x85
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1935b1f]
To Reproduce
We are currently using k8s-pvc-tagger:v1.0.1.
We installed the k8s-pvc-tagger, which worked fine for a while. We installed the EBS CSI driver afterwards.
Previously, the PVCs were created by the kubernetes.io/aws-ebs storage provisioner (PVC annotation: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs).
We recreated one of the PVCs and it got provisioned by the ebs-csi-driver this time (PVC annotation: volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com).
The k8s-pvc-tagger started crashing afterwards.
Describe the bug
First of all, thank you for this! It works great, with one minor issue I noticed when using the helm chart.
When setting watchNamespace to a namespace different from where the helm-generated manifests are, the tagger pods can't access the PVCs. For example, I deploy the resources in ns1 and set watchNamespace: "monitoring". This results in errors in the tagger pod logs:
k8s-aws-ebs-tagger-bdd94dd85-t8xqk k8s-aws-ebs-tagger E0930 01:04:41.988660 1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:serviceaccount:ns1:k8s-aws-ebs-tagger" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "monitoring"
I believe it just lacks rolebindings in the target namespaces in watchNamespace.
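A sketch of the missing RBAC: a Role plus RoleBinding in each watched namespace, granting the chart's service account read access to PVCs. The resource names and the service account name/namespace here are assumptions matching the example above, not what the chart actually renders.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: k8s-aws-ebs-tagger
  namespace: monitoring   # each namespace listed in watchNamespace
rules:
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k8s-aws-ebs-tagger
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: k8s-aws-ebs-tagger
subjects:
  - kind: ServiceAccount
    name: k8s-aws-ebs-tagger
    namespace: ns1        # where the chart's service account lives
```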
To Reproduce
Steps to reproduce the behavior:
Set watchNamespace: monitoring in the chart values and deploy the chart in ns1.
Expected behavior
k8s-aws-ebs-tagger pods should be able to access all PVCs in the target namespaces in watchNamespace.
Additional context
I'm not a Go developer, but it looks like this is purely helm-chart-related, which I am familiar with. I could open a PR for this if needed.
Problem description
Whenever k8s-pvc-tagger is deployed, there might already be many untagged PVCs (and dynamically provisioned PVs / EBS / EFS volumes).
Solution
If a flag (e.g. tag-existing-resources) is set, the tagger lists all the PVCs and tries to tag the underlying EBS/EFS volumes.
Is your feature request related to a problem? Please describe.
Let's say that, for billing purposes, I would like to enforce some mandatory tags such as customer, billing-id, department, etc. It would be nice to report (using events + Prometheus) the PVCs which do not fit a "standard".
Describe the solution you'd like
A --mandatory-tags flag which contains a JSON of the mandatory tags + their format.
Describe alternatives you've considered
N/A
Additional context
N/A
I can work on a PR if needed
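The check itself could be as simple as comparing a PVC's tag map against the required keys. A minimal stdlib sketch, where the function name and the idea of feeding `required` from a hypothetical --mandatory-tags flag are assumptions:

```go
package main

import "fmt"

// missingMandatoryTags reports which required keys are absent from a PVC's
// tag map. A hypothetical --mandatory-tags flag could supply `required`,
// and a non-empty result could drive a Kubernetes event plus a Prometheus
// counter. (Illustrative helper, not the tool's actual API.)
func missingMandatoryTags(tags map[string]string, required []string) []string {
	var missing []string
	for _, k := range required {
		if _, ok := tags[k]; !ok {
			missing = append(missing, k)
		}
	}
	return missing
}

func main() {
	tags := map[string]string{"customer": "acme"}
	fmt.Println(missingMandatoryTags(tags, []string{"customer", "billing-id", "department"}))
}
```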
Describe the bug
When creating a new EBS volume (EKS v1.23) using the in-tree provisioner, the request is redirected to the CSI driver, which creates the PV object with volumeID: vol-xxxxxx without the aws:// prefix. This is expected according to this. However, k8s-pvc-tagger fails to parse the PV object since the regex doesn't cover that case.
I think that the regex can be changed to something like: ^(?:aws:\/\/\w{2}-\w{4,9}-\d\w\/){0,1}(vol-\w+){1}$
Expected behavior
k8s-pvc-tagger should be able to tag the EBS volume.
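The proposed pattern can be checked against both volume ID formats with a quick stdlib test. This sketch uses the exact regex suggested above; the helper name is an assumption for illustration:

```go
package main

import (
	"fmt"
	"regexp"
)

// Proposed pattern from the issue: matches both the legacy in-tree form
// "aws://<az>/vol-..." and the bare CSI form "vol-...".
var volumeIDRe = regexp.MustCompile(`^(?:aws:\/\/\w{2}-\w{4,9}-\d\w\/){0,1}(vol-\w+){1}$`)

// parseVolumeID extracts the bare vol- ID from either format, reporting
// whether the input matched at all. (Hypothetical helper for illustration.)
func parseVolumeID(raw string) (string, bool) {
	m := volumeIDRe.FindStringSubmatch(raw)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	for _, id := range []string{
		"aws://us-west-2a/vol-0123456789abcdef0", // in-tree provisioner form
		"vol-0123456789abcdef0",                  // CSI driver form
	} {
		fmt.Println(parseVolumeID(id))
	}
}
```

Both inputs yield the same bare volume ID, which is what the tagger needs for the EC2 tagging call.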
Hi,
I have a problem: I can't use the aws-ebs-tagger/tags annotation. This is my PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    aws-ebs-tagger/tags: {"me": "someone else", "another tag": "some value"}
  name: foo
spec:
  storageClassName: gp3
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
error: error validating "pvc.yaml": error validating data: ValidationError(PersistentVolumeClaim.metadata.annotations.aws-ebs-tagger/tags): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false
It is the same as shown here:
https://grepmymind.com/introducing-the-k8s-aws-ebs-tagger-3ec2502cf40e
Maybe something has changed?
Here's the doc:
https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
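The validation error says the annotation value must be a string, but unquoted `{...}` is parsed by YAML as a map. Quoting the JSON (or using a block scalar, as the Terraform example earlier in this page does) should satisfy the API server. A minimal sketch of the corrected metadata:

```yaml
metadata:
  name: foo
  annotations:
    # Annotation values must be strings: quote the JSON so YAML
    # does not parse it as a map.
    aws-ebs-tagger/tags: '{"me": "someone else", "another tag": "some value"}'
```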
Is your feature request related to a problem? Please describe.
The tool currently tags PVCs created in AWS using the EBS storage class; we need functionality to tag PVCs created using the EFS storage class.
Describe the solution you'd like
Using the existing informer that is watching all PVCs, add a condition for EFS, fetch the EFS access point details from the persistent volume, and tag that access point using the same annotation logic.
Describe alternatives you've considered
Rename the repo to k8s-aws-pvc-tagger for generic use.
Additional context
I have made changes in my fork of the project.
Track the following:
Are there others?
Is your feature request related to a problem? Please describe.
Yes; in my case, the Prometheus Operator will only read ServiceMonitor resources labeled with a specific label.
Describe the solution you'd like
Add support for specifying labels in the helm chart.
Describe alternatives you've considered
Manually add those labels.
Need to mock & test the AWS tagging calls so that #23 can be fully tested
Set up automatic helm chart publishing. Consider something like https://github.com/ricoberger/vault-secrets-operator/blob/master/.github/workflows/helm.yaml or https://medium.com/@stefanprodan/automate-helm-chart-repository-publishing-with-github-actions-and-pages-8a374ce24cf4
Should I auto-update the appVersion when a new tag has been created?
Set up sigstore/cosign for the container images
https://github.blog/2021-12-06-safeguard-container-signing-capability-actions/
Refactor project from k8s-aws-ebs-tagger to k8s-pvc-tagger so that it can be used for multiple PVC types and used for multiple cloud providers. While this rebranding/renaming sucks, it's better to do it now than later.
k8s_pvc_tagger
k8s-pvc-tagger
Set up a repo for helm chart publishing. Automation (#7) will come later.
Allow setting a tag that uses tpl vars to substitute values from a label or annotation, e.g., "mytag": "{{ metadata.namespace }}-{{ labels.app }}" or something like that.
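Go's text/template could handle the substitution. A minimal stdlib sketch; the field names (.Namespace, .Labels) and helper names are assumptions for illustration, not a proposal for the tool's exact template syntax:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// pvcMeta is a tiny stand-in for the PVC metadata a tag template
// would be rendered against. (Illustrative, not the tool's types.)
type pvcMeta struct {
	Namespace string
	Labels    map[string]string
}

// renderTagTemplate expands a tag value template against PVC metadata.
func renderTagTemplate(tmpl string, meta pvcMeta) (string, error) {
	t, err := template.New("tag").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, meta); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	meta := pvcMeta{Namespace: "monitoring", Labels: map[string]string{"app": "grafana"}}
	out, _ := renderTagTemplate(`{{ .Namespace }}-{{ index .Labels "app" }}`, meta)
	fmt.Println(out)
}
```

With this approach, a tag value of `{{ .Namespace }}-{{ index .Labels "app" }}` would render to something like `monitoring-grafana` for that PVC.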