mtougeron / k8s-pvc-tagger

A utility to tag volumes based on a Kubernetes PVC annotation

License: Apache License 2.0

Languages: Dockerfile 1.56%, Go 95.80%, Mustache 2.64%
Topics: kubernetes, aws, aws-ebs, k8s

k8s-pvc-tagger's People

Contributors

dependabot[bot], dinhkim, dol3y, khartahk, mberga14, mtougeron, wadhwakabir, yurrriq

k8s-pvc-tagger's Issues

Can't create default tags

When I try to create a volume with a Name tag, the tagger just ignores that key and moves on to the next tag.
Is this on purpose? If so, can you change it? This is a very valuable tag that we need.

Copy tags from EC2 instance

Is your feature request related to a problem? Please describe.
It would be great to automatically tag volumes with the same tags (or a subset of the tags) of the instance the EBS volume is attached to. This would make it easier to handle tags related to, for example, cost allocation or business department.
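
A rough sketch of what this could look like with aws-sdk-go (v1) follows; the function name, the tag-key filter, and the error handling are illustrative only and are not part of the project.

// Sketch: copy selected tags from the EC2 instance a volume is attached to.
// Assumes aws-sdk-go v1; copyInstanceTags and the key filter are illustrative.
package main

import (
    "strings"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func copyInstanceTags(svc *ec2.EC2, volumeID string, keys []string) error {
    // Find the instance the volume is attached to.
    vols, err := svc.DescribeVolumes(&ec2.DescribeVolumesInput{
        VolumeIds: []*string{aws.String(volumeID)},
    })
    if err != nil || len(vols.Volumes) == 0 || len(vols.Volumes[0].Attachments) == 0 {
        return err // unattached volume: nothing to copy
    }
    instanceID := vols.Volumes[0].Attachments[0].InstanceId

    // Read the instance's tags.
    out, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{
        InstanceIds: []*string{instanceID},
    })
    if err != nil || len(out.Reservations) == 0 {
        return err
    }

    // Keep only the requested subset (e.g. cost-allocation tags).
    var tags []*ec2.Tag
    for _, t := range out.Reservations[0].Instances[0].Tags {
        for _, k := range keys {
            if strings.EqualFold(aws.StringValue(t.Key), k) {
                tags = append(tags, t)
            }
        }
    }
    if len(tags) == 0 {
        return nil
    }

    // Apply the copied tags to the volume.
    _, err = svc.CreateTags(&ec2.CreateTagsInput{
        Resources: []*string{aws.String(volumeID)},
        Tags:      tags,
    })
    return err
}

func main() {
    sess := session.Must(session.NewSession())
    _ = copyInstanceTags(ec2.New(sess), "vol-0123456789abcdef0", []string{"cost-center", "department"})
}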

Annotate PVs

Is your feature request related to a problem? Please describe.
It would be nice to have a way to tag the PV with the provided tags as well.

Describe the solution you'd like

Describe alternatives you've considered
N/A

Additional context
N/A

I can work on a PR if needed.
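
A minimal sketch of what applying the tags to the bound PV could look like with client-go; labelPersistentVolume is a hypothetical helper, not the project's code, and it simply copies the tag map onto the PV as labels.

// Sketch: apply the tags from the PVC annotation to the bound PV as labels.
// Illustrative only; uses plain client-go calls.
package tagger

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

func labelPersistentVolume(client kubernetes.Interface, pvName string, tags map[string]string) error {
    ctx := context.TODO()
    pv, err := client.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
    if err != nil {
        return err
    }
    if pv.Labels == nil {
        pv.Labels = map[string]string{}
    }
    for k, v := range tags {
        pv.Labels[k] = v
    }
    _, err = client.CoreV1().PersistentVolumes().Update(ctx, pv, metav1.UpdateOptions{})
    return err
}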

EBS Tagger crashes whenever we add a new non-EBS PVC

Describe the bug
When the EBS tagger is running and you add a new EBS volume that it should monitor, the pod crashes.

Expected behavior
The tagger should discover the new EBS volume, check whether it needs to be tagged, and tag it if so.

Error log

E1209 09:18:32.383947       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 65 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1cc9be0, 0x305f4e0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x86
panic(0x1cc9be0, 0x305f4e0)
    /usr/local/go/src/runtime/panic.go:965 +0x1b9
main.processPersistentVolumeClaim(0xc00034cf78, 0xc000000004, 0xc0006c5c28, 0x1, 0x1, 0x0)
    /build/kubernetes.go:253 +0x677
main.watchForPersistentVolumeClaims.func1(0x1f890a0, 0xc00034cf78)
    /build/kubernetes.go:102 +0x24c
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777 +0xc2
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00005ff60)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006c5f60, 0x22c56c0, 0xc00038e330, 0x1c6d901, 0xc000640000)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00005ff60, 0x3b9aca00, 0x0, 0x1, 0xc000640000)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005da380)
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771 +0x95
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000598c30, 0xc000614910)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x65
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1a24ab7]
goroutine 65 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x109
panic(0x1cc9be0, 0x305f4e0)
    /usr/local/go/src/runtime/panic.go:965 +0x1b9
main.processPersistentVolumeClaim(0xc00034cf78, 0xc000000004, 0xc0006c5c28, 0x1, 0x1, 0x0)
    /build/kubernetes.go:253 +0x677
main.watchForPersistentVolumeClaims.func1(0x1f890a0, 0xc00034cf78)
    /build/kubernetes.go:102 +0x24c
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777 +0xc2
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00005ff60)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006c5f60, 0x22c56c0, 0xc00038e330, 0x1c6d901, 0xc000640000)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00005ff60, 0x3b9aca00, 0x0, 0x1, 0xc000640000)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005da380)
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771 +0x95
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000598c30, 0xc000614910)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x65

Crash when adding any new PVC

Describe the bug
Any time I create a pod with a new PVC, the k8s-pvc-tagger pod crashes:

E1207 21:21:37.388710       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 121 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1c1ace0?, 0x32f11e0})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0003ca000?})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:49 +0x75
panic({0x1c1ace0, 0x32f11e0})
	/usr/local/go/src/runtime/panic.go:838 +0x207
main.processPersistentVolumeClaim(0xc0004d08f0)
	/build/kubernetes.go:367 +0x31f
main.watchForPersistentVolumeClaims.func1({0x1f14a20?, 0xc0004d08f0})
	/build/kubernetes.go:114 +0x1f8
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:232
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:818 +0xaf
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00051ae98?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00051af38?, {0x235c4e0, 0xc0004284b0}, 0x1, 0xc0005a60c0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc00051af88?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005d6600?)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:812 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x85
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1935b1f]

goroutine 121 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0003ca000?})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:56 +0xd8
panic({0x1c1ace0, 0x32f11e0})
	/usr/local/go/src/runtime/panic.go:838 +0x207
main.processPersistentVolumeClaim(0xc0004d08f0)
	/build/kubernetes.go:367 +0x31f
main.watchForPersistentVolumeClaims.func1({0x1f14a20?, 0xc0004d08f0})
	/build/kubernetes.go:114 +0x1f8
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:232
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:818 +0xaf
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00051ae98?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00051af38?, {0x235c4e0, 0xc0004284b0}, 0x1, 0xc0005a60c0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc00051af88?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005d6600?)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:812 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x85

I don't see any useful debugging info there, so I'm not sure how to debug.

Additional context

Here's how k8s-pvc-tagger was deployed via Terraform and Helm:

resource "helm_release" "k8s-pvc-tagger" {
  name = "k8s-pvc-tagger"
  namespace = "k8s-pvc-tagger"
  create_namespace = true
  repository = "https://mtougeron.github.io/helm-charts"
  chart = "k8s-pvc-tagger"
  version = "2.0.1"
  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = aws_iam_role.foo.arn
  }

  set {
    name  = "serviceAccount.name"
    value = "k8s-pvc-tagger-sa"
  }

}

(plus of course the respective IAM service policy and role)

Our pods are also created via Helm. Here's their PVC definition, which seems to trigger the crash:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  annotations:
    volume.beta.kubernetes.io/storage-class: gp2
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
    k8s-pvc-tagger/tags: |
      {
        "foo/bu": "dc",
        "foo/consumer": "bar",
        "foo/expiry": "9999-01-01",
        "foo/created_by": "helm",
        "foo/environment": "unknown"
      }
  labels:
    managed-by: helm
    foo/bu: dc
    foo/consumer: bar
    foo/stage: unknown
    foo/expiry: "9999-01-01"
    foo/created_by: helm
    foo/environment: unknown
    app: {{ .Release.Name }}
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistentVolumeClaim.requestedSize | default "60Gi" }}

This error seems awfully similar to #37

k8s-pvc-tagger crashing

Describe the bug
Getting the following error after a PVC has been deleted and recreated:

E0117 23:07:54.796497       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 69 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1c1ace0?, 0x32f11e0})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00013b800?})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:49 +0x75
panic({0x1c1ace0, 0x32f11e0})
	/usr/local/go/src/runtime/panic.go:838 +0x207
main.processPersistentVolumeClaim(0xc0001694d0)
	/build/kubernetes.go:367 +0x31f
main.watchForPersistentVolumeClaims.func1({0x1f14a20?, 0xc0001694d0})
	/build/kubernetes.go:114 +0x1f8
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:232
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:818 +0xaf
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x10000000011?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000523f38?, {0x235c4e0, 0xc000216060}, 0x1, 0xc000734000)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc000523f88?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005c4500?)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:812 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x85
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1935b1f]

To Reproduce
We are currently using k8s-pvc-tagger:v1.0.1.
We installed k8s-pvc-tagger and it worked fine for a while; we installed the EBS CSI driver afterwards.
Previously, the PVCs were created by the kubernetes.io/aws-ebs storage provisioner (PVC annotation: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs).
We then recreated one of the PVCs, and this time it was provisioned by the EBS CSI driver (PVC annotation: volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com).

k8s-pvc-tagger started crashing after that.
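
The actual fix is not shown in this issue, but all three crash reports panic inside processPersistentVolumeClaim on a nil pointer right after a PVC add event. Below is a guess at the kind of defensive guard that would avoid it, assuming the crash comes from an unbound PVC or a CSI-provisioned PV; it is illustrative, not the project's patch.

// Sketch of a nil guard for EBS-backed PVs; an assumption about the failure
// mode, not the actual fix.
package tagger

import (
    corev1 "k8s.io/api/core/v1"
)

// volumeIDFromPV returns the EBS volume ID for a PV, or "" when the PV is not
// an EBS-backed volume (or not populated yet), instead of dereferencing nil.
func volumeIDFromPV(pv *corev1.PersistentVolume) string {
    if pv == nil {
        return ""
    }
    switch {
    case pv.Spec.AWSElasticBlockStore != nil:
        // in-tree provisioner: kubernetes.io/aws-ebs
        return pv.Spec.AWSElasticBlockStore.VolumeID
    case pv.Spec.CSI != nil && pv.Spec.CSI.Driver == "ebs.csi.aws.com":
        // CSI provisioner: ebs.csi.aws.com
        return pv.Spec.CSI.VolumeHandle
    default:
        return ""
    }
}

// shouldSkip reports whether a PVC is not yet safe to process: a freshly
// created PVC may not be bound to a PV at the time the add event fires.
func shouldSkip(pvc *corev1.PersistentVolumeClaim) bool {
    return pvc == nil || pvc.Spec.VolumeName == "" || pvc.Status.Phase != corev1.ClaimBound
}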

Cannot access PVCs in namespaces in watchNamespace

Describe the bug
First of all, thank you for this! Works great with a minor issue I noticed when using the helm chart.

When setting watchNamespace to a namespace different from the one where the Helm-generated manifests are deployed, the tagger pods can't access the PVCs. For example, I deploy the resources in ns1 and set watchNamespace: "monitoring". This results in errors in the tagger pod logs:

k8s-aws-ebs-tagger-bdd94dd85-t8xqk k8s-aws-ebs-tagger E0930 01:04:41.988660       1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:serviceaccount:ns1:k8s-aws-ebs-tagger" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "monitoring"

I believe it just lacks RoleBindings in the target namespaces listed in watchNamespace.

To Reproduce
Steps to reproduce the behavior:

  1. Set watchNamespace: monitoring in chart values.
  2. Deploy resources to ns1.
  3. Print the logs of the k8s-aws-ebs-tagger pods and see the error above.

Expected behavior
k8s-aws-ebs-tagger pods should be able to access all PVCs in target namespaces in watchNamespace.

Additional context
I'm not a Go developer, but this looks like it is purely Helm chart-related, which I am familiar with. I could open a PR for this if needed.

Tag existing AWS resources

Problem description
Whenever k8s-pvc-tagger is deployed, there may already be many untagged PVCs (and dynamically provisioned PVs / EBS volumes / EFS file systems).

Solution
If a flag (e.g. tag-existing-resources) is set, the tagger lists all the PVCs and tries to tag the underlying EBS/EFS.
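
A minimal sketch of such a startup pass using client-go, assuming a hypothetical --tag-existing-resources flag and reusing whatever handler the informer already calls (processPVC is a placeholder for that existing logic):

// Sketch of a one-shot pass over existing PVCs at startup; illustrative only.
package tagger

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

func tagExistingPVCs(client kubernetes.Interface, namespace string, processPVC func(*corev1.PersistentVolumeClaim)) error {
    // namespace == "" lists PVCs across all namespaces.
    pvcs, err := client.CoreV1().PersistentVolumeClaims(namespace).List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        return err
    }
    for i := range pvcs.Items {
        processPVC(&pvcs.Items[i])
    }
    return nil
}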

Mandatory tags

Is your feature request related to a problem? Please describe.
Let's say that, for billing purposes, I would like to enforce some mandatory tags such as customer, billing-id, department, etc. It would be nice to report (using events + Prometheus) the PVCs that do not meet that standard.

Describe the solution you'd like

  • Add a CLI parameter --mandatory-tags that contains a JSON list of the mandatory tags and their expected format (a rough sketch follows this issue)
  • When a PVC is handled and does not include the mandatory tags, an event is emitted in its namespace (this is just a warning; the other tags are still added)

Describe alternatives you've considered
N/A

Additional context
N/A

I can work on a PR if needed
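
A small sketch of the proposed check; the --mandatory-tags flag name and its JSON shape come from the issue, and the event/metric reporting is reduced to a print statement here.

// Sketch of a mandatory-tags check behind a hypothetical --mandatory-tags flag.
package main

import (
    "encoding/json"
    "flag"
    "fmt"
)

// missingTags returns the mandatory keys that are absent from the PVC's tags.
func missingTags(tags map[string]string, mandatory []string) []string {
    var missing []string
    for _, key := range mandatory {
        if _, ok := tags[key]; !ok {
            missing = append(missing, key)
        }
    }
    return missing
}

func main() {
    raw := flag.String("mandatory-tags", `["customer","billing-id","department"]`, "JSON list of required tag keys")
    flag.Parse()

    var mandatory []string
    if err := json.Unmarshal([]byte(*raw), &mandatory); err != nil {
        panic(err)
    }

    // Tags parsed from a PVC's k8s-pvc-tagger/tags annotation, for example.
    pvcTags := map[string]string{"customer": "acme"}
    if m := missingTags(pvcTags, mandatory); len(m) > 0 {
        // In the real controller this would become a Warning event on the PVC's
        // namespace and a Prometheus counter increment; here we just print.
        fmt.Printf("PVC is missing mandatory tags: %v\n", m)
    }
}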

Parsing AWS volumeID is failing

Describe the bug
When creating a new EBS volume (EKS v1.23) using the in-tree provisioner, the request is redirected to the CSI driver, which creates the PV object with volumeID: vol-xxxxxx, without the aws:// prefix. This is expected according to the linked documentation, but k8s-pvc-tagger fails to parse the PV object since its regex doesn't cover that case.
I think the regex can be changed to something like: ^(?:aws:\/\/\w{2}-\w{4,9}-\d\w\/){0,1}(vol-\w+){1}$
To Reproduce
Steps to reproduce the behavior:

  1. Create a new PVC using the in-tree provisioner
  2. Watch the k8s-pvc-tagger logs

Expected behavior
k8s-pvc-tagger should be able to tag the EBS volume.
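
A quick check of the proposed pattern in Go; the escaped forward slashes from the issue are unnecessary in Go's regexp syntax and have been dropped.

// The pattern accepts both the in-tree form "aws://us-east-1a/vol-..." and the
// bare CSI form "vol-...", capturing the bare volume ID in either case.
package main

import (
    "fmt"
    "regexp"
)

var volumeIDRe = regexp.MustCompile(`^(?:aws://\w{2}-\w{4,9}-\d\w/)?(vol-\w+)$`)

func parseVolumeID(raw string) (string, bool) {
    m := volumeIDRe.FindStringSubmatch(raw)
    if m == nil {
        return "", false
    }
    return m[1], true
}

func main() {
    for _, id := range []string{
        "aws://us-east-1a/vol-0123456789abcdef0", // in-tree provisioner format
        "vol-0123456789abcdef0",                  // EBS CSI driver format
    } {
        vol, ok := parseVolumeID(id)
        fmt.Println(vol, ok) // both print the bare volume ID and true
    }
}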

Can't use aws-ebs-tagger/tags annotation: got "map", expected "string"

Hi,
I have a problem: I can't use the aws-ebs-tagger/tags annotation. This is my PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    aws-ebs-tagger/tags: {"me": "someone else", "another tag": "some value"}
  name: foo
spec:
  storageClassName: gp3
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

error: error validating "pvc.yaml": error validating data: ValidationError(PersistentVolumeClaim.metadata.annotations.aws-ebs-tagger/tags): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false

It is just like the example in this article:
https://grepmymind.com/introducing-the-k8s-aws-ebs-tagger-3ec2502cf40e
Maybe something changed?

Here are the Kubernetes docs on annotations:
https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
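
The error comes from the Kubernetes API itself, not from the tagger: annotation values must be plain strings, so the JSON has to be quoted (or written as a YAML block scalar, as in the PVC shown in the earlier crash report). The tagger can then unmarshal the string, roughly as in this illustrative sketch (not the project's code):

// Why the manifest fails: the annotation value must already be a string before
// the tagger ever sees it; parsing that string is then a plain json.Unmarshal.
package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    // What the API server stores once the annotation value is a proper string,
    // e.g.  aws-ebs-tagger/tags: '{"me": "someone else", "another tag": "some value"}'
    annotation := `{"me": "someone else", "another tag": "some value"}`

    tags := map[string]string{}
    if err := json.Unmarshal([]byte(annotation), &tags); err != nil {
        fmt.Println("invalid tags annotation:", err)
        return
    }
    fmt.Println(tags)
}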

Tag EFS access Points in aws for PVC created using EFS storage class

Is your feature request related to a problem? Please describe.
The tool currently tags PVCs that are created in AWS using an EBS storage class; we need the functionality to tag PVCs created using an EFS storage class.

Describe the solution you'd like
Using the existing informer that is watching all PVCs, add a condition for EFS, fetch the EFS access point details from the PersistentVolume, and tag that access point using the same annotation logic.

Describe alternatives you've considered
Rename the repo to k8s-aws-pvc-tagger for generic use.

Additional context
I have made the changes in my fork of the project.
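
A rough sketch of the EFS side using aws-sdk-go (v1); with the EFS CSI driver the PV's volumeHandle typically has the form fs-...::fsap-..., and both the parsing and the tag set here are illustrative rather than the fork's actual changes.

// Sketch: tag the EFS access point referenced by a PV's CSI volumeHandle.
package main

import (
    "strings"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/efs"
)

func tagAccessPoint(svc *efs.EFS, volumeHandle string, tags map[string]string) error {
    // The last colon-separated field is the access point ID, if present.
    parts := strings.Split(volumeHandle, ":")
    apID := parts[len(parts)-1]
    if !strings.HasPrefix(apID, "fsap-") {
        return nil // not an access-point-backed volume; nothing to tag
    }

    var efsTags []*efs.Tag
    for k, v := range tags {
        efsTags = append(efsTags, &efs.Tag{Key: aws.String(k), Value: aws.String(v)})
    }
    _, err := svc.TagResource(&efs.TagResourceInput{
        ResourceId: aws.String(apID),
        Tags:       efsTags,
    })
    return err
}

func main() {
    sess := session.Must(session.NewSession())
    _ = tagAccessPoint(efs.New(sess), "fs-0123456789abcdef0::fsap-0123456789abcdef0",
        map[string]string{"team": "storage"})
}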

Setup prometheus metrics

Track the following:

  • tags actions (add and remove)
  • PVC processed
  • PVC ignored

Are there others?
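
A sketch of counters for the three items above using client_golang; the metric names are illustrative (the refactor issue further down renames the prefix to k8s_pvc_tagger), and the real controller would increment them from its handlers.

// Sketch: Prometheus counters for tag actions, processed PVCs, and ignored PVCs.
package main

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
    actionsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
        Name: "k8s_pvc_tagger_actions_total",
        Help: "Tag actions performed, by action (add or remove).",
    }, []string{"action"})

    pvcProcessedTotal = promauto.NewCounter(prometheus.CounterOpts{
        Name: "k8s_pvc_tagger_pvc_processed_total",
        Help: "PVCs processed.",
    })

    pvcIgnoredTotal = promauto.NewCounter(prometheus.CounterOpts{
        Name: "k8s_pvc_tagger_pvc_ignored_total",
        Help: "PVCs ignored.",
    })
)

func main() {
    // Example increments; the controller would call these from its handlers.
    pvcProcessedTotal.Inc()
    actionsTotal.WithLabelValues("add").Inc()
    pvcIgnoredTotal.Inc()

    http.Handle("/metrics", promhttp.Handler())
    _ = http.ListenAndServe(":8080", nil)
}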

Add support to specify labels for ServiceMonitor

Is your feature request related to a problem? Please describe.
Yes; in my case, the Prometheus Operator only reads ServiceMonitor resources labeled with a specific label.

Describe the solution you'd like
Add support to specify labels in the helm chart

Describe alternatives you've considered
Manually add those labels.

Refactor project from k8s-aws-ebs-tagger to k8s-pvc-tagger

Refactor project from k8s-aws-ebs-tagger to k8s-pvc-tagger so that it can be used for multiple PVC types and used for multiple cloud providers. While this rebranding/renaming sucks, it's better to do it now than later.

  • document timeline (maybe mid-July?)
  • rename project
  • rename metrics to k8s_pvc_tagger
  • rename annotations to k8s-pvc-tagger
  • support legacy annotations & metrics for 2 releases
  • add storage class label to metrics
  • update documentation
  • create new docker hub repo
  • create new github artifact repo (is this possible?)
  • create deprecation plan for legacy docker/github image repos

Allow templated tags

Allow setting a tag that uses template variables to substitute values from a label or annotation, e.g. "mytag": "{{ metadata.namespace }}-{{ labels.app }}" or something like that.
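
A minimal sketch of how such templated values could be rendered with Go's text/template; the field names (.Namespace, .Labels.app) differ slightly from the issue's pseudo-syntax and are illustrative only.

// Sketch: render a tag value from a PVC's namespace and labels.
package main

import (
    "bytes"
    "fmt"
    "text/template"
)

type tagContext struct {
    Namespace string
    Labels    map[string]string
}

func renderTag(tmpl string, ctx tagContext) (string, error) {
    t, err := template.New("tag").Parse(tmpl)
    if err != nil {
        return "", err
    }
    var buf bytes.Buffer
    if err := t.Execute(&buf, ctx); err != nil {
        return "", err
    }
    return buf.String(), nil
}

func main() {
    ctx := tagContext{Namespace: "monitoring", Labels: map[string]string{"app": "grafana"}}
    out, err := renderTag(`{{ .Namespace }}-{{ .Labels.app }}`, ctx)
    if err != nil {
        panic(err)
    }
    fmt.Println(out) // monitoring-grafana
}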
