zakkg3 / clustersecret

Kubernetes ClusterSecret operator.

License: Apache License 2.0


clustersecret's Introduction

ClusterSecret

Badges: CI | Docker Repository on Quay | Artifact Hub | CII Best Practices | License | Kubernetes v1.24.15 | v1.25.11 | v1.26.6 | v1.27.3

Kubernetes ClusterSecret

clustersecret.io

Cluster wide secrets

The ClusterSecret operator makes sure all matching namespaces have the secret available and up to date.

  • New namespaces that match the pattern also receive the secret.
  • Any change to the ClusterSecret updates all related secrets, including changes to the match pattern.
  • Deleting the ClusterSecret also deletes its "child" secrets (all cloned secrets).

Full documentation available at https://clustersecret.io

Clustersecret diagram


Here is what it looks like:

kind: ClusterSecret
apiVersion: clustersecret.io/v1
metadata:
  namespace: clustersecret
  name: default-wildcard-certificate
matchNamespace:
  - prefix_ns-*
  - anothernamespace
avoidNamespaces:
  - supersecret-ns
data:
  tls.crt: BASE64
  tls.key: BASE64

Use cases

Use it for certificates, registry pull credentials, and so on.

When you need a secret in more than one namespace, you have to:

1- Get the secret from the origin namespace.
2- Edit the secret with the new namespace.
3- Re-create the secret in the new namespace.

This could be done with one command:

kubectl get secret <secret-name> -n <source-namespace> -o yaml \
| sed s/"namespace: <source-namespace>"/"namespace: <destination-namespace>"/\
| kubectl apply -n <destination-namespace> -f -

ClusterSecret automates this. It keeps track of any modification to your secret and also reacts to new namespaces.

Installation

Requirements

The current version, 0.0.10, is tested on Kubernetes >= 1.19 up to 1.27.3. For ARM architectures, use the <tag>_arm32 tag.

For older Kubernetes (<1.19), use image tag 0.0.6 in your Helm values file.

Install

Using the official helm chart

helm repo add clustersecret https://charts.clustersecret.io/
helm install clustersecret clustersecret/cluster-secret --version 0.4.0 -n clustersecret --create-namespace

With just kubectl

Clone the repo and apply the manifests:

cd ClusterSecret
kubectl apply -f ./yaml

Quick start:

Create a ClusterSecret object like the one above, or use the example in yaml/Object_example/obj.yaml, and apply it to your cluster: kubectl apply -f yaml/Object_example/obj.yaml

The ClusterSecret operator will pick it up and create the secret in every matching namespace: those matching matchNamespace but not matching any avoidNamespaces regular expression.

You can specify multiple matching or non-matching regular expressions. By default it matches all namespaces, the same as defining matchNamespace = *.
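As a rough sketch of this selection logic (assuming the entries are treated as Python regular expressions matched from the start of the namespace name; the function name and signature here are illustrative, not the operator's actual API):

```python
import re

def select_namespaces(all_namespaces, match_patterns=None, avoid_patterns=None):
    """Return namespaces matching any matchNamespace pattern and no
    avoidNamespaces pattern. Illustrative sketch only; the real
    operator's matching code may differ in details."""
    match_patterns = match_patterns or ['.*']   # default: match everything
    avoid_patterns = avoid_patterns or []
    matched = [ns for ns in all_namespaces
               if any(re.match(p, ns) for p in match_patterns)]
    # drop namespaces that match any avoid pattern
    return [ns for ns in matched
            if not any(re.match(p, ns) for p in avoid_patterns)]

print(select_namespaces(['ns-a', 'ns-b', 'other'], ['ns-.*']))
# ['ns-a', 'ns-b']
```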

Get the clustersecrets

$> kubectl get csec -n clustersecret
NAME            TYPE
global-secret

Minimal example

apiVersion: clustersecret.io/v1
kind: ClusterSecret
metadata:
  name: global-secret
  namespace: my-fav-namespace
data:
  username: MTIzNDU2Cg==
  password: Nzg5MTAxMTIxMgo=
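As with native Kubernetes Secrets, values under data must be base64-encoded. For instance, the username above is just the string 123456 (plus a trailing newline) encoded, which is quick to check in Python:

```python
import base64

# encode a plaintext value for use in the `data` field
encoded = base64.b64encode(b'123456\n').decode()
print(encoded)  # MTIzNDU2Cg==

# decode a stored value back to plaintext
print(base64.b64decode('MTIzNDU2Cg==').decode())
```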

Images

Images are built and pushed on tag ('git tag') with GitHub Actions. You can find them here:

https://quay.io/repository/clustersecret/clustersecret

Default architectures:

The following architectures:

  • linux/386
  • linux/amd64

are available under the image tag quay.io/clustersecret/clustersecret:0.0.10

Alternative architectures:

Known bugs:

  • check the issues tab

Roadmap:

TO-DO: enable super linter -> DISABLE_ERRORS

Support

If you need support, start with the troubleshooting guide: run the operator in debug mode. You can open issues and we will try to address them.

That said, if you have questions or just want to establish contact, reach out one way or another: https://flag5.com || nico at flag5.com


clustersecret's People

Contributors

adamency, axel7083, cybergrind, eugst, lvijnck, m00nyone, mefjush, mig4ng, mkjmdski, natalyamyraan, negative-creep90, nicolas-geniteau, oneilsh, patrungel, sammaranth, savf, snyk-bot, zakkg3


clustersecret's Issues

ClusterSecret stops working over time

ClusterSecret does not seem to pick up new matching namespaces after initial deployment: I deployed ClusterSecret to an existing cluster and added the ClusterSecret my-namespace/wildcard-tls, which references the source secret my-namespace/wildcard-tls and matches namespaces with '.*customer.*'.

On initial deployment I could see the secret deployed to the namespaces, and new namespaces were detected correctly yesterday. However, when I deployed other namespaces today, the secret was no longer copied to the new ones, even though they match the criteria.

Any advice on how to figure out what's going wrong would be great.

ClusterSecret v0.0.8
Cloud: Microsoft Azure
Kubernetes Version: 1.22.6

Secrets not updating when modified secrets are deployed

I'm using the following code to set secret in demo namespace:

apiVersion: clustersecret.io/v1
kind: ClusterSecret
metadata:
  name: secret-identity
matchNamespace:
  - demo
data:
  MESSAGE: SSdtIEJhdG1hbg==

But when I modify the MESSAGE value and redeploy using kubectl apply -f demo-secret.yml, the secrets are not updated. I need to delete the secret and redeploy for the changes to take effect.

Outdated api versions are used in manifests for rbac, crd

Outdated api versions are used in manifests for rbac, crd

rbac
rbac.authorization.k8s.io/v1beta1 => rbac.authorization.k8s.io/v1
crd
apiextensions.k8s.io/v1beta1 => apiextensions.k8s.io/v1

There's no problem changing the api version for rbac.

But after changing the api version of the crd I get the error:

The CustomResourceDefinition "clustersecrets.clustersecret.io" is invalid: spec.versions[0].schema.openAPIV3Schema: Required value: schemas are required
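For reference, apiextensions.k8s.io/v1 requires a structural schema for every served version, which v1beta1 did not. A minimal sketch of the kind of version entry that satisfies the "schemas are required" error (the field values below are illustrative and would need to match the operator's actual CRD in the yaml folder):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clustersecrets.clustersecret.io
spec:
  group: clustersecret.io
  scope: Namespaced
  names:
    plural: clustersecrets
    singular: clustersecret
    kind: ClusterSecret
    shortNames: [csec]
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # accept arbitrary fields, as the v1beta1 CRD implicitly did
          x-kubernetes-preserve-unknown-fields: true
```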

Still can't seem to sync new namespaces

I'm using 0.0.5, and it still seems like new namespaces can't be synced. I'm using K3s on bare metal. When I add a new namespace to the ClusterSecret resource that was created after ClusterSecret, I get the following errors in the log:

[2020-11-06 07:31:27,430] kopf.reactor.queuein [ERROR ] functools.partial(<function process_resource_event at 0x7efcf39a5d40>, lifecycle=<function asap at 0x7efcf3bc2710>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7efcf25b3b10>, settings=OperatorSettings(logging=LoggingSettings(), posting=PostingSettings(enabled=True, level=20), watching=WatchingSettings(server_timeout=None, client_timeout=None, connect_timeout=None, reconnect_backoff=0.1), batching=BatchingSettings(worker_limit=None, idle_timeout=5.0, batch_window=0.1, exit_timeout=2.0), execution=ExecutionSettings(executor=<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efcf25b3650>, _max_workers=None), background=BackgroundSettings(cancellation_polling=60, instant_exit_timeout=None, instant_exit_zero_time_cycles=10), persistence=PersistenceSettings(finalizer='kopf.zalando.org/KopfFinalizerMarker', progress_storage=<kopf.storage.progress.SmartProgressStorage object at 0x7efcf25b37d0>, diffbase_storage=<kopf.storage.diffbase.AnnotationsDiffBaseStorage object at 0x7efcf2611fd0>)), memories=<kopf.structs.containers.ResourceMemories object at 0x7efcf26172d0>, resource=Resource(group='', version='v1', plural='namespaces'), event_queue=<Queue at 0x7efcf2617710 maxsize=0 _getters[1] tasks=82>) failed with an exception. Ignoring the event. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 187, in worker
    await processor(raw_event=raw_event, replenished=replenished)
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 204, in process_resource_event
    replenished=replenished,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 227, in apply_reaction_outcomes
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 67, in patch_obj
    raise_for_status=True,
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
    headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.43.0.1:443/api/v1/namespaces/elk/status')

Secret is dumped in logs on error

Hello!

I noticed that, in case of error, the pod apparently dumps the whole secret in the logs.
I'm not sure this is desired behavior, since we're handling secrets.

This is the line responsible for the log:

logger.error(f'Can not create a secret, it is base64 encoded? data: {data}')

I also couldn't help but notice that this behavior is also present in another two instances:

logger.error(f'Data keys with ValueFrom error: {data.keys()} len {len(data.keys())}')

logger.error (f'ERROR reading data from remote secret = {data}')

Plus another one, though this one is debug-only:

logger.debug(f'Going to create with data: {data}')

I'm not sure that dumping the secret is a good approach.
I think we could discuss dumping the secrets in the logs only at debug-level verbosity (and maybe with an alert when starting the pod).

The best approach in my opinion would be the same one the KubeAPI adopted: redacting the secret
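One way to do that, sketched here as a hypothetical helper (not the operator's actual code), is to mask all values before they reach the logger, so error messages still show which keys exist without leaking their contents:

```python
def redacted(data):
    """Return a copy of a secret's data dict with every value masked,
    suitable for interpolation into log messages."""
    return {key: '***REDACTED***' for key in data}

# hypothetical usage at the error site:
# logger.error(f'Can not create a secret, is it base64 encoded? data: {redacted(data)}')
print(redacted({'tls.crt': 'LS0tLS1CRUdJTg==', 'tls.key': 'LS0tLS1CRUdJTg=='}))
# {'tls.crt': '***REDACTED***', 'tls.key': '***REDACTED***'}
```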

Add helm

Would be nice to have a helm chart for it

ClusterSecret shows error on startup

Hi

Great idea on creating ClusterSecret CRD. I'm trying to get it work but I get a bunch of errors.

Environment: Linux, Debian
K8s: Latest version of k3s, running on Docker.
RBAC: Yes

I suspect this has to do with the permissions granted to the CRD, but I don't know how to fix or debug this.

Here is the debugs.

[2020-06-27 04:34:49,685] kopf.reactor.activit [INFO    ] Initial authentication has been initiated.
[2020-06-27 04:34:49,686] kopf.activities.auth [INFO    ] Activity 'login_via_pykube' succeeded.
[2020-06-27 04:34:49,689] kopf.activities.auth [INFO    ] Activity 'login_via_client' succeeded.
[2020-06-27 04:34:49,689] kopf.reactor.activit [INFO    ] Initial authentication has finished.
[2020-06-27 04:34:49,698] kopf.engines.peering [WARNING ] Default peering object not found, falling back to the standalone mode.
[2020-06-27 04:34:49,814] kopf.objects         [INFO    ] [None/default] Handler 'namespace_watcher' succeeded.
[2020-06-27 04:34:49,814] kopf.objects         [INFO    ] [None/default] All handlers succeeded for creation.
[2020-06-27 04:34:49,817] kopf.objects         [INFO    ] [None/kube-system] Handler 'namespace_watcher' succeeded.
[2020-06-27 04:34:49,818] kopf.objects         [INFO    ] [None/kube-system] All handlers succeeded for creation.
[2020-06-27 04:34:49,820] kopf.objects         [INFO    ] [None/kube-public] Handler 'namespace_watcher' succeeded.
[2020-06-27 04:34:49,821] kopf.objects         [INFO    ] [None/kube-public] All handlers succeeded for creation.
[2020-06-27 04:34:49,823] kopf.objects         [INFO    ] [None/kube-node-lease] Handler 'namespace_watcher' succeeded.
[2020-06-27 04:34:49,823] kopf.objects         [INFO    ] [None/kube-node-lease] All handlers succeeded for creation.
[2020-06-27 04:34:49,825] kopf.objects         [INFO    ] [None/dev-services] Handler 'namespace_watcher' succeeded.
[2020-06-27 04:34:49,825] kopf.objects         [INFO    ] [None/dev-services] All handlers succeeded for creation.
[2020-06-27 04:34:49,827] kopf.objects         [INFO    ] [None/home-services] Handler 'namespace_watcher' succeeded.
[2020-06-27 04:34:49,827] kopf.objects         [INFO    ] [None/home-services] All handlers succeeded for creation.
[2020-06-27 04:34:49,833] kopf.objects         [INFO    ] [None/clustersecret] Handler 'namespace_watcher' succeeded.
[2020-06-27 04:34:49,833] kopf.objects         [INFO    ] [None/clustersecret] All handlers succeeded for creation.
[2020-06-27 04:34:49,851] kopf.reactor.queuein [ERROR   ] functools.partial(<function process_resource_event at 0x7fb3091fab00>, lifecycle=<function asap at 0x7fb309419dd0>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7fb307e84a50>, settings=OperatorSettings(logging=LoggingSettings(), posting=PostingSettings(enabled=True, level=20), watching=WatchingSettings(server_timeout=None, cl
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 187, in worker
    await processor(raw_event=raw_event, replenished=replenished)
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 204, in process_resource_event
    replenished=replenished,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 227, in apply_reaction_outcomes
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 59, in patch_obj
    raise_for_status=True,
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
    headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.43.0.1:443/api/v1/namespaces/dev-services')
[2020-06-27 04:34:49,853] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message="Handler 'namespace_watcher' succeeded.".
[2020-06-27 04:34:49,855] kopf.reactor.queuein [ERROR   ] functools.partial(<function process_resource_event at 0x7fb3091fab00>, lifecycle=<function asap at 0x7fb309419dd0>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7fb307e84a50>, settings=OperatorSettings(logging=LoggingSettings(), posting=PostingSettings(enabled=True, level=20), watching=WatchingSettings(server_timeout=None, cl
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 187, in worker
    await processor(raw_event=raw_event, replenished=replenished)
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 204, in process_resource_event
    replenished=replenished,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 227, in apply_reaction_outcomes
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 59, in patch_obj
    raise_for_status=True,
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
    headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.43.0.1:443/api/v1/namespaces/default')
[2020-06-27 04:34:49,855] kopf.reactor.queuein [ERROR   ] functools.partial(<function process_resource_event at 0x7fb3091fab00>, lifecycle=<function asap at 0x7fb309419dd0>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7fb307e84a50>, settings=OperatorSettings(logging=LoggingSettings(), posting=PostingSettings(enabled=True, level=20), watching=WatchingSettings(server_timeout=None, cl
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 187, in worker
    await processor(raw_event=raw_event, replenished=replenished)
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 204, in process_resource_event
    replenished=replenished,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 227, in apply_reaction_outcomes
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 59, in patch_obj
    raise_for_status=True,
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
    headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.43.0.1:443/api/v1/namespaces/kube-system')
[2020-06-27 04:34:49,856] kopf.reactor.queuein [ERROR   ] functools.partial(<function process_resource_event at 0x7fb3091fab00>, lifecycle=<function asap at 0x7fb309419dd0>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7fb307e84a50>, settings=OperatorSettings(logging=LoggingSettings(), posting=PostingSettings(enabled=True, level=20), watching=WatchingSettings(server_timeout=None, cl
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 187, in worker
    await processor(raw_event=raw_event, replenished=replenished)
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 204, in process_resource_event
    replenished=replenished,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 227, in apply_reaction_outcomes
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 59, in patch_obj
    raise_for_status=True,
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
    headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.43.0.1:443/api/v1/namespaces/kube-public')
[2020-06-27 04:34:49,856] kopf.reactor.queuein [ERROR   ] functools.partial(<function process_resource_event at 0x7fb3091fab00>, lifecycle=<function asap at 0x7fb309419dd0>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7fb307e84a50>, settings=OperatorSettings(logging=LoggingSettings(), posting=PostingSettings(enabled=True, level=20), watching=WatchingSettings(server_timeout=None, cl
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 187, in worker
    await processor(raw_event=raw_event, replenished=replenished)
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 204, in process_resource_event
    replenished=replenished,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 227, in apply_reaction_outcomes
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 59, in patch_obj
    raise_for_status=True,
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
    headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.43.0.1:443/api/v1/namespaces/kube-node-lease')   
[2020-06-27 04:34:49,857] kopf.reactor.queuein [ERROR   ] functools.partial(<function process_resource_event at 0x7fb3091fab00>, lifecycle=<function asap at 0x7fb309419dd0>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7fb307e84a50>, settings=OperatorSettings(logging=LoggingSettings(), posting=PostingSettings(enabled=True, level=20), watching=WatchingSettings(server_timeout=None, cl
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 187, in worker
    await processor(raw_event=raw_event, replenished=replenished)
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 204, in process_resource_event
    replenished=replenished,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 227, in apply_reaction_outcomes
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 59, in patch_obj
    raise_for_status=True,
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
    headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.43.0.1:443/api/v1/namespaces/home-services')
[2020-06-27 04:34:49,858] kopf.reactor.queuein [ERROR   ] functools.partial(<function process_resource_event at 0x7fb3091fab00>, lifecycle=<function asap at 0x7fb309419dd0>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7fb307e84a50>, settings=OperatorSettings(logging=LoggingSettings(), posting=PostingSettings(enabled=True, level=20), watching=WatchingSettings(server_timeout=None, cl
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 187, in worker
    await processor(raw_event=raw_event, replenished=replenished)
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 204, in process_resource_event
    replenished=replenished,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 227, in apply_reaction_outcomes
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 59, in patch_obj
    raise_for_status=True,
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
    headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.43.0.1:443/api/v1/namespaces/clustersecret')
[2020-06-27 04:34:49,860] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message='All handlers succeeded for creation.'.
[2020-06-27 04:34:49,861] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message="Handler 'namespace_watcher' succeeded.".
[2020-06-27 04:34:49,863] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message='All handlers succeeded for creation.'.
[2020-06-27 04:34:49,864] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message="Handler 'namespace_watcher' succeeded.".
[2020-06-27 04:34:49,866] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message='All handlers succeeded for creation.'.
[2020-06-27 04:34:49,867] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message="Handler 'namespace_watcher' succeeded.".
[2020-06-27 04:34:49,868] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message='All handlers succeeded for creation.'.
[2020-06-27 04:34:49,870] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message="Handler 'namespace_watcher' succeeded.".
[2020-06-27 04:34:49,871] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message='All handlers succeeded for creation.'.
[2020-06-27 04:34:49,873] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message="Handler 'namespace_watcher' succeeded.".
[2020-06-27 04:34:49,874] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message='All handlers succeeded for creation.'.
[2020-06-27 04:34:49,875] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message="Handler 'namespace_watcher' succeeded.".
[2020-06-27 04:34:49,877] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message='All handlers succeeded for creation.'.
[2020-06-27 04:34:49,878] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message="Handler 'namespace_watcher' succeeded.".
[2020-06-27 04:34:49,880] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message='All handlers succeeded for creation.'.
[2020-06-27 04:34:49,881] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message="Handler 'namespace_watcher' succeeded.".
[2020-06-27 04:34:49,882] kopf.clients.events  [WARNING ] Failed to post an event. Ignoring and continuing. Status: 403. Message: Forbidden. Event: type='Normal', reason='Logging', message='All handlers succeeded for creation.'.
[2020-06-27 04:44:26,527] kopf.reactor.queuein [ERROR   ] functools.partial(<function process_resource_event at 0x7fb3091fab00>, lifecycle=<function asap at 0x7fb309419dd0>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7fb307e84a50>, settings=OperatorSettings(logging=LoggingSettings(), posting=PostingSettings(enabled=True, level=20), watching=WatchingSettings(server_timeout=None, cl
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 187, in worker
    await processor(raw_event=raw_event, replenished=replenished)
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 204, in process_resource_event
    replenished=replenished,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 227, in apply_reaction_outcomes
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 59, in patch_obj
    raise_for_status=True,
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
    headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.43.0.1:443/apis/clustersecret.io/v1/namespaces/clustersecret/clustersecrets/test-cluster-secret')   

The last block is what happened when I tried to create a ClusterSecret. But running kubectl get secrets -A doesn't show any new secrets.
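The repeated 403s on /api/v1/namespaces/... suggest the operator's ServiceAccount lacks RBAC permissions to patch namespaces and post events. A hedged sketch of the kind of ClusterRole rules that would be needed (names and rule details are illustrative; the repo's yaml/ manifests are the authoritative source):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clustersecret-role   # illustrative name
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create"]
  - apiGroups: ["clustersecret.io"]
    resources: ["clustersecrets"]
    verbs: ["get", "list", "watch", "patch"]
```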

Synchronisation on startup

Hi,

I'm not sure, but it seems like the CRD handler doesn't attempt to update/sync any existing ClusterSecret to all the matching namespaces when the handler starts.

Questions:
1 - Is this the expected behavior?
2 - If yes, then what would be the correct workaround?

Thanks

Rewrite in Go

Hello! You list "rewrite in Go" on the roadmap (https://clustersecret.io/roadmap/).

Would you mind if I try that out? It sounds like fun.

Also, are you looking for a new maintainer? I could help with that, if you wish.

Allow secrets to be copied from approved source namespace only

Could it be possible to limit the source Secret or ConfigMap used by ClusterSecret to a namespace? For example, ClusterSecret would copy a source Secret to other namespaces only if it comes from a specific, approved namespace.

This would be for security reasons: you wouldn't want any user with access to a namespace to apply the correct annotations and copy secrets to other namespaces (spamming other namespaces).

I'm assuming this would have to be enabled in the controller via the values files.

Couldn't find this feature in the latest release.

kopf._core.reactor.o: Not enough permissions to watch for resources

Hello,

Creating an issue here based on the suggestion from the slack channel.

We recently deployed the Cluster Secret Operator v0.0.9 to one of our GKE clusters and I see a couple of issues.

At the start of the pod logs I see:

kopf._core.reactor.o [WARNING ] Not enough permissions to watch for resources: changes (creation/deletion/updates) will not be noticed; the resources are only refreshed on operator restarts.

and later after the sync happens, I see an error:

Patching failed with inconsistencies: (('remove', ('status', 'namespace_watcher'), {'syncedns':['ns1', 'ns2']}, None),)

We used the helm chart for the deployment and default values.

Any help is appreciated.

Best Regards,
Ravi

Wrong API version in the documentation ?

The README's example uses apiVersion: v2 which gives the following error:

error: unable to recognize ".\[[REDACTED]].yaml": no matches for kind "ClusterSecret" in version "v2"

The example objects in the yaml folder use apiVersion: clustersecret.io/v1, which works.
Shouldn't the README be updated?

Bad array iterator

Hi.

Using the following configuration, the secret gets deployed to namespaces that are part of the avoidNamespaces list:

matchNamespace:
  - '.*'
avoidNamespaces:
  - 'default'
  - 'auth'
  - 'cert-manager'
  - 'kube-system'
  - 'kube-public'

In the following loop, the list is modified while it is being iterated, so some elements are skipped because the iterator's position shifts:

    # purge
    for ns in matchedns:
        if ns in avoidedns:
            matchedns.remove(ns)

    return matchedns
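A common fix (a minimal sketch, not the project's actual patch) is to build a new list instead of mutating the one being iterated:

```python
def purge(matchedns, avoidedns):
    # Filtering into a new list avoids mutating `matchedns` mid-iteration,
    # which is what makes the original loop skip adjacent elements.
    return [ns for ns in matchedns if ns not in avoidedns]


# Adjacent avoided namespaces are all removed:
print(purge(['default', 'auth', 'cert-manager', 'my-app'],
            ['default', 'auth', 'cert-manager']))  # ['my-app']
```

Iterating over a copy (`for ns in list(matchedns): ...`) would also work; the comprehension is just the more idiomatic form.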

Source from secret

There are many cases where the secret data is generated by a third party tool that you cannot alter to make use of this CRD. Is there a way to have the contents be "the contents of X secret in Y namespace" ? The operator would then watch the source secret for changes and sync it to all the namespaces that the ClusterSecret crd specifies?

Can't sync new namespaces

Hello.
I took the latest version of the operator, https://github.com/zakkg3/ClusterSecret/blob/fef0f305577540d51c404505ae879d749547e6c2/src/handlers.py, and tried to deploy it. All went well; I created a ClusterSecret:

---
apiVersion: clustersecret.io/v1
kind: ClusterSecret
metadata:
  name: clustersecret-dev-autobp-tls-cert
  namespace: clustersecret
matchNamespace:
  - "clustersecret*"
avoidNamespaces:
  - kube-*
data:
  tls.crt: some_cert
  tls.key: some_key

Secrets were created in two namespaces: clustersecret and clustersecret2. But when I tried to create a new namespace, clustersecret3, I got errors in the operator:

[2020-09-04 08:16:44,580] kopf.objects         [DEBUG   ] [None/clustersecret3] Handler 'namespace_watcher' is invoked.
[2020-09-04 08:16:44,580] kopf.objects         [DEBUG   ] [None/clustersecret3] New namespace created: clustersecret3 re-syncing
[2020-09-04 08:16:44,581] kopf.objects         [ERROR   ] [None/clustersecret3] Handler 'namespace_watcher' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 125, in invoke
    result = await fn(*args, **kwargs)  # type: ignore
  File "/src/handlers.py", line 149, in namespace_watcher
    matcheddns = v['body']['status']['create_fn']['syncedns']
KeyError: 'create_fn'
[2020-09-04 08:16:44,583] kopf.objects         [DEBUG   ] [None/clustersecret3] Patching with: {'metadata': {'annotations': {'kopf.zalando.org/namespace_watcher': '{"started": "2020-09-04T08:16:44.580408", "delayed": "2020-09-04T08:17:44.582888", "retries": 1, "success": false, "failure": false, "message": "\'create_fn\'"}'}}, 'status': {'kopf': {'progress': {'namespace_watcher': {'started': '2020-09-04T08:16:44.580408', 'stopped': None, 'delayed': '2020-09-04T08:17:44.582888', 'retries': 1, 'success': False, 'failure': False, 'message': "'create_fn'"}}}}}
[2020-09-04 08:16:44,610] kopf.reactor.queuein [ERROR   ] functools.partial(<function process_resource_event at 0x7f9ebab9ed40>, lifecycle=<function asap at 0x7f9ebadfe050>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7f9eb9526f10>, settings=OperatorSettings(logging=LoggingSettings(), posting=PostingSettings(enabled=True, level=20), watching=WatchingSettings(server_timeout=None, client_timeout=None, connect_timeout=None, reconnect_backoff=0.1), batching=BatchingSettings(worker_limit=None, idle_timeout=5.0, batch_window=0.1, exit_timeout=2.0), execution=ExecutionSettings(executor=<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f9eb98c1350>, _max_workers=None), background=BackgroundSettings(cancellation_polling=60, instant_exit_timeout=None, instant_exit_zero_time_cycles=10), persistence=PersistenceSettings(finalizer='kopf.zalando.org/KopfFinalizerMarker', progress_storage=<kopf.storage.progress.SmartProgressStorage object at 0x7f9eb98c1490>, diffbase_storage=<kopf.storage.diffbase.AnnotationsDiffBaseStorage object at 0x7f9eb98d9bd0>)), memories=<kopf.structs.containers.ResourceMemories object at 0x7f9eb98d9e90>, resource=Resource(group='', version='v1', plural='namespaces'), event_queue=<Queue at 0x7f9eb98e1150 maxsize=0 _getters[1] tasks=11>) failed with an exception. Ignoring the event.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 187, in worker
    await processor(raw_event=raw_event, replenished=replenished)
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 204, in process_resource_event
    replenished=replenished,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/processing.py", line 227, in apply_reaction_outcomes
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 67, in patch_obj
    raise_for_status=True,
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
    headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.220.128.1:443/api/v1/namespaces/clustersecret3/status')
[2020-09-04 08:16:44,713] kopf.objects         [DEBUG   ] [None/clustersecret3] Creation event: {'kind': 'Namespace', 'apiVersion': 'v1', 'metadata': {'name': 'clustersecret3', 'selfLink': '/api/v1/namespaces/clustersecret3', 'uid': '9ea022c1-4546-4b02-a711-c83519953bbf', 'resourceVersion': '68088144', 'creationTimestamp': '2020-09-04T08:16:44Z', 'annotations': {'kopf.zalando.org/namespace_watcher': '{"started": "2020-09-04T08:16:44.580408", "delayed": "2020-09-04T08:17:44.582888", "retries": 1, "success": false, "failure": false, "message": "\'create_fn\'"}'}}, 'spec': {'finalizers': ['kubernetes']}, 'status': {'phase': 'Active'}}
[2020-09-04 08:16:44,714] kopf.objects         [DEBUG   ] [None/clustersecret3] Sleeping for 59.868061 seconds for the delayed handlers.
[2020-09-04 08:16:45,346] kopf.objects         [DEBUG   ] [None/clustersecret3] Sleeping was interrupted by new changes, 59.237326397079286 seconds left.
[2020-09-04 08:16:45,449] kopf.objects         [DEBUG   ] [None/clustersecret3] Creation event: {'kind': 'Namespace', 'apiVersion': 'v1', 'metadata': {'name': 'clustersecret3', 'selfLink': '/api/v1/namespaces/clustersecret3', 'uid': '9ea022c1-4546-4b02-a711-c83519953bbf', 'resourceVersion': '68088159', 'creationTimestamp': '2020-09-04T08:16:44Z', 'annotations': {'kopf.zalando.org/namespace_watcher': '{"started": "2020-09-04T08:16:44.580408", "delayed": "2020-09-04T08:17:44.582888", "retries": 1, "success": false, "failure": false, "message": "\'create_fn\'"}', 'logging.csp.vmware.com/fluentd-status': ''}}, 'spec': {'finalizers': ['kubernetes']}, 'status': {'phase': 'Active'}}
[2020-09-04 08:16:45,452] kopf.objects         [DEBUG   ] [None/clustersecret3] Sleeping for 59.130856 seconds for the delayed handlers.

Please help me fix this error. Thank you.

Immutable ClusterSecrets

Support ClusterSecrets that create immutable Secrets.

This might require extra CustomResourceDefinition support for controlling field-level mutability; I haven't checked.

enhancement: adding admission webhook to validate data while applying

In the current scenario, applying a ClusterSecret that is misconfigured (for example, valueFrom referencing a secret that does not exist) still deploys the ClusterSecret; the error only shows up as an event.

If we add an admission webhook, we could reject the resource at admission time, preventing users from deploying a bad configuration.

deleting basic config

When deleting a basic config (with no matchNamespace key), the operator gets stuck with this error:
for matchns in matchNamespace:
TypeError: 'NoneType' object is not iterable
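A minimal sketch of a defensive fix, assuming the handler reads matchNamespace straight from the CR body (the helper name is hypothetical): fall back to an empty list when the key is absent, so the loop simply does nothing.

```python
def get_match_namespaces(body):
    # `body` is the ClusterSecret resource as a dict; the key may be
    # missing entirely or explicitly null, so `or []` covers both cases.
    return body.get('matchNamespace') or []


# No matchNamespace key: iterates zero times instead of raising TypeError.
for matchns in get_match_namespaces({'metadata': {'name': 'basic-config'}}):
    print(matchns)  # never reached for this body
```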

Option to add suffix in MatchNamespace/avoidNamespaces

I have namespaces that have the format <app_name>-.
I would like to apply an environment-specific certificate, so I set matchNamespace to '*-qa'.
It fails with the error below:

[ERROR ] [clustersecret/tls-secret] Handler 'create_fn' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/kopf/_core/actions/execution.py", line 283, in execute_handler_once
    result = await invoke_handler(
  File "/usr/local/lib/python3.9/site-packages/kopf/_core/actions/execution.py", line 378, in invoke_handler
    result = await invocation.invoke(
  File "/usr/local/lib/python3.9/site-packages/kopf/_core/actions/invocation.py", line 117, in invoke
    result = await fn(**kwargs)  # type: ignore
  File "/src/handlers.py", line 73, in create_fn
    matchedns = get_ns_list(logger,body,v1)
  File "/src/csHelper.py", line 34, in get_ns_list
    if re.match(matchns, ns.metadata.name):
  File "/usr/local/lib/python3.9/re.py", line 191, in match
    return _compile(pattern, flags).match(string)
  File "/usr/local/lib/python3.9/re.py", line 304, in _compile
    p = sre_compile.compile(pattern, flags)
  File "/usr/local/lib/python3.9/sre_compile.py", line 764, in compile
    p = sre_parse.parse(p, flags)
  File "/usr/local/lib/python3.9/sre_parse.py", line 948, in parse
    p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
  File "/usr/local/lib/python3.9/sre_parse.py", line 443, in _parse_sub
    itemsappend(_parse(source, state, verbose, nested + 1,
  File "/usr/local/lib/python3.9/sre_parse.py", line 668, in _parse
    raise source.error("nothing to repeat",
re.error: nothing to repeat at position 0
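The cause: matchNamespace entries are fed to re.match as regular expressions, not shell globs, so a leading * has "nothing to repeat". Two workarounds, sketched with made-up namespace names:

```python
import fnmatch
import re

names = ['payments-qa', 'payments-prod', 'billing-qa']

# Option 1: write a valid regex -- '.*-qa' instead of the glob '*-qa'.
qa_re = [n for n in names if re.match('.*-qa', n)]

# Option 2: translate the shell-style glob into a regex first.
pattern = fnmatch.translate('*-qa')
qa_glob = [n for n in names if re.match(pattern, n)]

print(qa_re)    # ['payments-qa', 'billing-qa']
print(qa_glob)  # ['payments-qa', 'billing-qa']
```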

How to remove ClusterSecret

When version 0.0.8 came out I changed my configuration but made a mistake in the config. I tried to remove the ClusterSecret but it got stuck. The status of the ClusterSecret is "Resource scheduled for deletion" and I see

2022-04-16 08:38:30,485] kopf.objects [INFO ] [clustersecret/aws-ecr-secret] Re Syncing secret aws-ecr-secret in ns cattle-fleet-system
--
[2022-04-16 08:38:30,502] kopf.objects [ERROR ] [clustersecret/aws-ecr-secret] Handler 'on_field_data/data' failed with an exception. Will retry.
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/kopf/_core/actions/execution.py", line 283, in execute_handler_once
result = await invoke_handler(
File "/usr/local/lib/python3.9/site-packages/kopf/_core/actions/execution.py", line 378, in invoke_handler
result = await invocation.invoke(
File "/usr/local/lib/python3.9/site-packages/kopf/_core/actions/invocation.py", line 140, in invoke
await asyncio.shield(future) # slightly expensive: creates tasks
File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/src/handlers.py", line 62, in on_field_data
response = v1.replace_namespaced_secret(name,ns,body)
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 27733, in replace_namespaced_secret
return self.replace_namespaced_secret_with_http_info(name, namespace, body, **kwargs) # noqa: E501
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 27836, in replace_namespaced_secret_with_http_info
return self.api_client.call_api(
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 399, in request
return self.rest_client.PUT(url,
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/rest.py", line 285, in PUT
return self.request("PUT", url,
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/rest.py", line 234, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (400)
Reason: Bad Request


HTTP response headers: HTTPHeaderDict({'Audit-Id': 'ID', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': 'UID', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'UID', 'Date': 'Sat, 16 Apr 2022 07:33:17 GMT', 'Content-Length': '405'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Secret in version \"v1\" cannot be handled as a Secret: v1.Secret.Data: base64Codec: invalid input, error found in #10 byte of ...|lueFrom\": {\"secretKe|..., bigger context ...|{\"apiVersion\": \"v1\", \"data\": {\"valueFrom\": {\"secretKeyRef\": {\"name\": \"global-secret\", \"names|...","reason":"BadRequest","code":400}

in the clustersecret deployment's logs. I tried to remove the namespace; it was difficult, but I managed. After I created the namespace again with new configs, I saw the old ClusterSecret again.

I created a ClusterSecret with a new name and it seems to work, but I want to delete the old one.

Feature: Use ownerReference on secrets

Setting ownerReference on the secrets, pointing at the source ClusterSecret custom resource, adds a coupling that is useful when just looking at the YAML of a secret; it would also let the operator offload deletion of child secrets to the built-in Kubernetes garbage collector when the parent ClusterSecret is removed.

The ownerReference field can also be added to existing objects, so the secrets won't have to be recreated when adding this feature, only updated.
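A sketch of what the reference could look like, using plain dicts rather than the kubernetes client models (the helper is hypothetical). One caveat: Kubernetes disallows cross-namespace owner references, so a Namespaced ClusterSecret could only own secrets in its own namespace; the CRD would likely need to be cluster-scoped for this to work everywhere.

```python
def owner_reference_for(cluster_secret):
    # `cluster_secret` is the parent CR body as a dict; the fields below
    # follow the standard Kubernetes ownerReference schema.
    return {
        'apiVersion': cluster_secret['apiVersion'],
        'kind': cluster_secret['kind'],
        'name': cluster_secret['metadata']['name'],
        'uid': cluster_secret['metadata']['uid'],
        # Let the built-in garbage collector delete the child secret
        # when the parent ClusterSecret is deleted.
        'controller': True,
        'blockOwnerDeletion': True,
    }


cs = {
    'apiVersion': 'clustersecret.io/v1',
    'kind': 'ClusterSecret',
    'metadata': {'name': 'default-wildcard-certificate', 'uid': 'abc-123'},
}
ref = owner_reference_for(cs)  # goes into secret.metadata.ownerReferences
```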

ClusterSecret doesn't create secret across all namespaces

Documentation says that
"You can specify multiple matching or non-matching RegExp. By default it will match all, same as defining matchNamespace = *"

I'm using the following ClusterSecret manifest to create secret in all namespaces

kind: ClusterSecret
apiVersion: clustersecret.io/v1
metadata:
  name: regcred-secret
avoidNamespaces:
  - default
data:
  <some secret here>

kubectl -n test apply -f clustersecret.yaml

and pod throws the error
Handler 'create_fn' failed with an exception

Full error here
https://pastebin.com/SeEFsUUe

Can't delete ClusterSecret

I have a ClusterSecret I added to the default namespace; however, I can't delete it. kubectl delete csec re-creds --force just hangs forever.

security: cluster secret side effects

Security concern

Resource scope

The CustomResourceDefinition ClusterSecret is Namespaced. This is probably not the best choice, since namespaces provide a mechanism for isolating groups of resources within a single cluster.[1]

Resource side effect

Any user in any namespace with authorization to create the custom resource would be able to create secrets in all Kubernetes namespaces. This is a security concern, since options like "REPLACE_EXISTING" have been implemented recently and could potentially create issues with the system namespaces.

Possible solution

The first would be to set the scope of the CustomResourceDefinition to Cluster.[2] This would allow only administrators to manage those resources and prevent a namespaced resource from having side effects on others.

Footnotes

  1. Namespaces in Kubernetes ↩

  2. Creating a CustomResourceDefinition ↩

bug: nothing to repeat

Using default helm chart with the following

helm install cluster-secret ./charts/cluster-secret -n cluster-secret --create-namespace

I get this error when creating a ClusterSecret

[2023-09-03 09:53:11,282] kopf.objects         [ERROR   ] [example-1/simple-cluster-secret] Handler 'create_fn' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/kopf/_core/actions/execution.py", line 283, in execute_handler_once
    result = await invoke_handler(
  File "/usr/local/lib/python3.9/site-packages/kopf/_core/actions/execution.py", line 378, in invoke_handler
    result = await invocation.invoke(
  File "/usr/local/lib/python3.9/site-packages/kopf/_core/actions/invocation.py", line 117, in invoke
    result = await fn(**kwargs)  # type: ignore
  File "/src/handlers.py", line 114, in create_fn
    matchedns = get_ns_list(logger,body,v1)
  File "/src/csHelper.py", line 66, in get_ns_list
    if re.match(matchns, ns.metadata.name):
  File "/usr/local/lib/python3.9/re.py", line 191, in match
    return _compile(pattern, flags).match(string)
  File "/usr/local/lib/python3.9/re.py", line 304, in _compile
    p = sre_compile.compile(pattern, flags)

Secrets are not updated in new namespaces for a modified clustersecret object

Hi,

I think I found a bug. When I create a new namespace, it receives the secrets automatically. But when I add/update/delete data in the ClusterSecret, the new namespace doesn't apply those changes. If I delete the clustersecret pod so that it gets re-created, then when I change data in the ClusterSecret, the new namespace does apply the changes.

I am going to describe step by step to reproduce it.

  1. Create the clustersecret file, and apply it.
apiVersion: clustersecret.io/v1
kind: ClusterSecret
metadata:
  namespace: clustersecret
  name: default-wildcard-certificate
matchNamespace:
  - prefix-ns-*
  - anothernamespace
avoidNamespaces:
  - supersecret-ns
data:
  username: MTIzNDU2Cg==
  password: Nzg5MTAxMTIxMgd=
  2. Create a namespace, for example, called prefix-ns-test1.
$ kubectl get secret -n prefix-ns-test1 default-wildcard-certificate -o yaml
apiVersion: v1
data:
  password: Nzg5MTAxMTIxMgc=
  username: MTIzNDU2Cg==
kind: Secret
metadata:
  creationTimestamp: "2022-09-27T11:28:22Z"
  name: default-wildcard-certificate
  namespace: prefix-ns-test1
  resourceVersion: "2310"
  uid: 2a86735a-9894-4276-8ea7-907e60cb5f15
type: Opaque


Possible bug noted in the clustersecret pod log:
[2022-09-27 11:28:22,965] kopf.objects [WARNING ] [prefix-ns-test1] Patching failed with inconsistencies: (('remove', ('status', 'namespace_watcher'), {'syncedns': ['prefix-ns-test1']}, None),)

  3. Replace the password key with password2 in the ClusterSecret file, and apply the changes.
  4. Get the secret data in namespace prefix-ns-test1; it doesn't seem updated.
$ kubectl get secret -n prefix-ns-test1 default-wildcard-certificate -o yaml
apiVersion: v1
data:
  password: Nzg5MTAxMTIxMgc=
  username: MTIzNDU2Cg==
kind: Secret
metadata:
  creationTimestamp: "2022-09-27T11:28:22Z"
  name: default-wildcard-certificate
  namespace: prefix-ns-test1
  resourceVersion: "2310"
  uid: 2a86735a-9894-4276-8ea7-907e60cb5f15
type: Opaque
  5. Delete the clustersecret pod.
  6. Get the secret data in namespace prefix-ns-test1; it still doesn't seem updated.
  7. Replace password2 with password3 in the ClusterSecret file, and apply the changes.
  8. Get the secret data in namespace prefix-ns-test1; now it is updated.
$ kubectl get secret -n prefix-ns-test1 default-wildcard-certificate -o yaml
apiVersion: v1
data:
  password3: Nzg5MTAxMTIxMgc=
  username: MTIzNDU2Cg==
kind: Secret
metadata:
  creationTimestamp: "2022-09-27T11:28:22Z"
  name: default-wildcard-certificate
  namespace: prefix-ns-test1
  resourceVersion: "2849"
  uid: 2a86735a-9894-4276-8ea7-907e60cb5f15
type: Opaque

This happens with every newly created namespace.

Hope it helps.

Handler 'create_fn' and 'on_delete' failed on deleting clustersecret

Errors occur when deleting a ClusterSecret, and it cannot be deleted, even with --force.

command:

$ kubectl delete -f tls-my-cert.yaml
> clustersecret.clustersecret.io "tls-my-cert" deleted
(hang)
2020-09-01 06:22:53,762] kopf.objects         [ERROR   ] [clustersecret/tls-my-cert] Handler 'create_fn' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 125, in invoke
    result = await fn(*args, **kwargs)  # type: ignore
  File "/src/handlers.py", line 44, in create_fn
    matchedns = get_ns_list(logger,body,v1)
  File "/src/handlers.py", line 83, in get_ns_list
    if re.match(matchns, ns.metadata.name):
  File "/usr/local/lib/python3.7/re.py", line 175, in match
    return _compile(pattern, flags).match(string)
  File "/usr/local/lib/python3.7/re.py", line 288, in _compile
    p = sre_compile.compile(pattern, flags)
  File "/usr/local/lib/python3.7/sre_compile.py", line 764, in compile
    p = sre_parse.parse(p, flags)
  File "/usr/local/lib/python3.7/sre_parse.py", line 924, in parse
    p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
  File "/usr/local/lib/python3.7/sre_parse.py", line 420, in _parse_sub
    not nested and not items))
  File "/usr/local/lib/python3.7/sre_parse.py", line 645, in _parse
    source.tell() - here + len(this))
re.error: nothing to repeat at position 0
[2020-09-01 06:23:08,195] kopf.objects         [ERROR   ] [clustersecret/tls-my-cert] Handler 'on_delete' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 148, in invoke
    await asyncio.shield(future)  # slightly expensive: creates tasks
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/src/handlers.py", line 8, in on_delete
    syncedns = body['status']['create_fn']['syncedns']
KeyError: 'create_fn'
[2020-09-01 06:24:08,351] kopf.objects         [ERROR   ] [clustersecret/tls-my-cert] Handler 'on_delete' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 148, in invoke
    await asyncio.shield(future)  # slightly expensive: creates tasks
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/src/handlers.py", line 8, in on_delete
    syncedns = body['status']['create_fn']['syncedns']
KeyError: 'create_fn'
[2020-09-01 06:25:08,523] kopf.objects         [ERROR   ] [clustersecret/tls-my-cert] Handler 'on_delete' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 148, in invoke
    await asyncio.shield(future)  # slightly expensive: creates tasks
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/src/handlers.py", line 8, in on_delete
    syncedns = body['status']['create_fn']['syncedns']
KeyError: 'create_fn'

after restart the pod

[2020-09-01 06:32:48,788] kopf.reactor.activit [INFO    ] Initial authentication has been initiated.
[2020-09-01 06:32:48,789] kopf.activities.auth [INFO    ] Activity 'login_via_pykube' succeeded.
[2020-09-01 06:32:48,790] kopf.activities.auth [INFO    ] Activity 'login_via_client' succeeded.
[2020-09-01 06:32:48,790] kopf.reactor.activit [INFO    ] Initial authentication has finished.
[2020-09-01 06:32:48,798] kopf.engines.peering [WARNING ] Default peering object not found, falling back to the standalone mode.
[2020-09-01 06:32:48,940] kopf.objects         [ERROR   ] [clustersecret/tls-my-cert] Handler 'on_delete' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 148, in invoke
    await asyncio.shield(future)  # slightly expensive: creates tasks
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/src/handlers.py", line 8, in on_delete
    syncedns = body['status']['create_fn']['syncedns']
KeyError: 'create_fn'
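A defensive sketch of the lookup (not necessarily the fix that landed): chaining .get() returns an empty list instead of raising KeyError when create_fn never recorded its status, so on_delete can still finish.

```python
def get_synced_namespaces(body):
    # `status.create_fn.syncedns` only exists if create_fn succeeded at
    # least once; default to [] so deletion can proceed regardless.
    return body.get('status', {}).get('create_fn', {}).get('syncedns', [])


print(get_synced_namespaces({}))  # []
print(get_synced_namespaces(
    {'status': {'create_fn': {'syncedns': ['ns1', 'ns2']}}}))  # ['ns1', 'ns2']
```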

By the way, the kind: ClusterSecret field is missing from the first example in the README.

Not working on ARM architecture

Could you build the docker image also for ARM architecture? Currently it doesn't run on my Raspberry Pi.
:-(

kubectl -n clustersecret logs clustersecret-5f98958756-mxs8b
standard_init_linux.go:228: exec user process caused: exec format error

Error when installing

Minikube v1.27.0 running Kubernetes v1.25.0 on containerd 1.6.8

Installing with kubectl apply -f ./yaml returned this:

namespace/clustersecret created
serviceaccount/clustersecret-account created
clusterrole.rbac.authorization.k8s.io/clustersecret-role-cluster created
role.rbac.authorization.k8s.io/clustersecret-role-namespaced created
clusterrolebinding.rbac.authorization.k8s.io/clustersecret-rolebinding-cluster created
rolebinding.rbac.authorization.k8s.io/clustersecret-rolebinding-namespaced created
customresourcedefinition.apiextensions.k8s.io/clustersecrets.clustersecret.io created
Error from server (BadRequest): error when creating "yaml/02_deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.matchLabels"

This error persisted until I ran this version of the deployment manifest, with the offending "spec.matchLabels" removed:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: clustersecret
  namespace: clustersecret
  labels:
    app: clustersecret
spec:
    selector:
      matchLabels:
        app: clustersecret
    template:
      metadata:
        labels:
          app: clustersecret
      spec:
        serviceAccountName: clustersecret-account
        # imagePullSecrets:
        # - name: regcred
        containers:
        - name: clustersecret
          image: flag5/clustersecret:0.0.8-beta
          # imagePullPolicy: Always
          # Uncomment next lines for debug:
          # command:
          #   - "kopf"
          #   - "run"
          #   - "-A"
          #   - "/src/handlers.py"
          #   - "--verbose"

The operator works after this, and new ClusterSecrets are cloned. I wonder if it's a bug, or am I missing something?

No support for stringData

Secrets support stringData, meaning you don't have to Base64 everything, but ClusterSecret doesn't seem to.
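For reference, the server-side behaviour of stringData is essentially base64-encoding the values into data; a sketch of how an operator might normalize it before creating child secrets (the helper name is hypothetical):

```python
import base64


def normalize_string_data(string_data):
    # Encode each plain-text value the way the API server folds
    # `stringData` into `data` for regular Secrets.
    return {key: base64.b64encode(value.encode('utf-8')).decode('ascii')
            for key, value in string_data.items()}


print(normalize_string_data({'username': 'admin'}))  # {'username': 'YWRtaW4='}
```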

ClusterSecret doesn't apply secret in newly created namespaces

Use case
TLS wildcard certificate which needs to be available in all, including newly created, namespaces.

Issue
I've applied the yaml files as instructed on an RBAC-enabled AKS cluster.
When the clustersecret pod starts it does apply the ClusterSecret on all namespaces.
But when a namespace is created, the ClusterSecret isn't applied to this namespace.

It looks like it is missing the namespace-creation events, but I don't have a clue how to debug this.

Workaround
Obviously not desired: delete the pod, and during startup it applies secrets to all namespaces matching the given pattern.

Expected behaviour
When a new namespace is created the ClusterSecret is applied to this namespace as well.

yaml

apiVersion: clustersecret.io/v1
type: kubernetes.io/tls
metadata:
  namespace: clustersecret
  name: wildcard-dev-x
matchNamespace:
  - "content"
  - ".*"
data:
  tls.crt: X
  tls.key: X

Logs
In the output of the ClusterSecret pod I only find ERROR similar to this one:

[2021-01-11 19:06:20,450] kopf.objects         [ERROR   ] [None/content-admin-backend-22957138-staging] Handler 'namespace_watcher' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 125, in invoke
    result = await fn(*args, **kwargs)  # type: ignore
  File "/src/handlers.py", line 172, in namespace_watcher
    return {'syncedns': ns_new_list}
UnboundLocalError: local variable 'ns_new_list' referenced before assignment

Let me know if you need more details (and how to retrieve them).
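The traceback suggests ns_new_list is only assigned inside a branch that is skipped for this namespace. A sketch of the shape of the fix (the handler structure here is hypothetical, simplified to exact-name matching): initialize the list before any conditional so the return statement is always defined.

```python
def namespace_watcher_sketch(cluster_secrets, new_namespace):
    # Initialize up front; if no ClusterSecret matches the new namespace,
    # we return an empty list instead of raising UnboundLocalError.
    ns_new_list = []
    for cs in cluster_secrets:
        if new_namespace in cs.get('matchNamespace', []):
            ns_new_list.append(new_namespace)
    return {'syncedns': ns_new_list}


print(namespace_watcher_sketch([], 'content-admin'))  # {'syncedns': []}
```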

Use a smaller base image

Hi

In base-image/Dockerfile, there's a reference to python:3.7. This works, but it produces large images with many unused dependencies.

Provided there are no caveats, it would be useful to switch to python:3.7-slim.

Thank you.

Error "UnboundLocalError: local variable 'ns_new_list' referenced before assignment" in log (with 0.0.6)

Because of #16, I'm using version 0.0.6 with the same kind cluster as in #16. After kubectl apply -f ./yaml I see the following errors in the log. Things look to be working though...

$ kubectl logs -n clustersecret clustersecret-5f98958756-xw45w
[2021-12-07 17:04:53,762] kopf.reactor.activit [INFO    ] Initial authentication has been initiated.
[2021-12-07 17:04:53,765] kopf.activities.auth [INFO    ] Activity 'login_via_pykube' succeeded.
[2021-12-07 17:04:53,766] kopf.activities.auth [INFO    ] Activity 'login_via_client' succeeded.
[2021-12-07 17:04:53,767] kopf.reactor.activit [INFO    ] Initial authentication has finished.
[2021-12-07 17:04:53,781] kopf.engines.peering [WARNING ] Default peering object not found, falling back to the standalone mode.
[2021-12-07 17:04:53,892] kopf.objects         [ERROR   ] [None/clustersecret] Handler 'namespace_watcher' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 125, in invoke
    result = await fn(*args, **kwargs)  # type: ignore
  File "/src/handlers.py", line 176, in namespace_watcher
    return {'syncedns': ns_new_list}
UnboundLocalError: local variable 'ns_new_list' referenced before assignment
[2021-12-07 17:04:53,900] kopf.objects         [ERROR   ] [None/default] Handler 'namespace_watcher' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 125, in invoke
    result = await fn(*args, **kwargs)  # type: ignore
  File "/src/handlers.py", line 176, in namespace_watcher
    return {'syncedns': ns_new_list}
UnboundLocalError: local variable 'ns_new_list' referenced before assignment
[2021-12-07 17:04:53,904] kopf.objects         [ERROR   ] [None/kube-node-lease] Handler 'namespace_watcher' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 125, in invoke
    result = await fn(*args, **kwargs)  # type: ignore
  File "/src/handlers.py", line 176, in namespace_watcher
    return {'syncedns': ns_new_list}
UnboundLocalError: local variable 'ns_new_list' referenced before assignment
[2021-12-07 17:04:53,908] kopf.objects         [ERROR   ] [None/kube-public] Handler 'namespace_watcher' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 125, in invoke
    result = await fn(*args, **kwargs)  # type: ignore
  File "/src/handlers.py", line 176, in namespace_watcher
    return {'syncedns': ns_new_list}
UnboundLocalError: local variable 'ns_new_list' referenced before assignment
[2021-12-07 17:04:53,912] kopf.objects         [ERROR   ] [None/kube-system] Handler 'namespace_watcher' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 125, in invoke
    result = await fn(*args, **kwargs)  # type: ignore
  File "/src/handlers.py", line 176, in namespace_watcher
    return {'syncedns': ns_new_list}
UnboundLocalError: local variable 'ns_new_list' referenced before assignment
[2021-12-07 17:04:53,917] kopf.objects         [ERROR   ] [None/local-path-storage] Handler 'namespace_watcher' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 125, in invoke
    result = await fn(*args, **kwargs)  # type: ignore
  File "/src/handlers.py", line 176, in namespace_watcher
    return {'syncedns': ns_new_list}
UnboundLocalError: local variable 'ns_new_list' referenced before assignment
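For reference, this class of error normally means the variable is only ever assigned inside a conditional branch, so the `return` at line 176 can see it unbound. A hedged, simplified sketch of the likely fix (the names and handler signature here are illustrative stand-ins, not the actual handlers.py code):

```python
# Hypothetical, simplified sketch of the fix -- not the actual
# handlers.py implementation. The key point: bind ns_new_list before
# any conditional logic, so the final return can never raise
# UnboundLocalError when no ClusterSecret matches the event.
def namespace_watcher(logger, matched_clustersecrets=()):
    ns_new_list = []  # always bound, even when nothing matches
    for cs_name in matched_clustersecrets:
        # only extended when a ClusterSecret matches the new namespace
        logger.info(f'syncing {cs_name} into the new namespace')
        ns_new_list.append(cs_name)
    return {'syncedns': ns_new_list}
```

With the initializer in place, the "nothing matched" case simply reports an empty `syncedns` list instead of crashing, which matches the observed behavior that things keep working despite the retries.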

CRD is defined in templates

The CRD in the Helm chart lives under the templates directory. This means Helm does not ensure the CRD is installed before other resources, so a chart that includes ClusterSecret as a sub-chart cannot create resources of type ClusterSecret, because the CRD does not exist at template-apply time.

Would it please be possible to move the CRD definition into the crds/ directory and publish a new version?

Thanks.

On creation, the data field watcher throws an error.

[2020-05-25 19:30:37,534] kopf.objects         [ERROR   ] [default/global-secret] Handler 'on_field_data/data' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 266, in execute_handler_once
    lifecycle=lifecycle,  # just a default for the sub-handlers, not used directly.
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 363, in invoke_handler
    **kwargs,
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 148, in invoke
    await asyncio.shield(future)  # slightly expensive: creates tasks
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/src/handlers.py", line 20, in on_field_data
    syncedns = body['status']['create_fn']['syncedns']
  File "/usr/local/lib/python3.7/site-packages/kopf/structs/dicts.py", line 231, in __getitem__
    return resolve(self._src, self._path + (item,))
  File "/usr/local/lib/python3.7/site-packages/kopf/structs/dicts.py", line 67, in resolve
    result = result[key]
KeyError: 'status'
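The traceback suggests the handler indexes `body['status']` before the status subresource exists, which is always the case on a freshly created object (create_fn has not written its result yet). A hedged sketch of a defensive lookup; the key names mirror the traceback, but the surrounding code is assumed:

```python
# Hypothetical sketch, not the actual handlers.py code: on a freshly
# created object the 'status' subresource does not exist yet, so chain
# .get() calls with empty-dict defaults instead of indexing directly.
def get_synced_ns(body):
    return body.get('status', {}).get('create_fn', {}).get('syncedns', [])
```

On a brand-new object this returns an empty list instead of raising `KeyError: 'status'`, and the handler can treat "no namespaces synced yet" as the normal initial state.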

Forbidden errors

Hello,

First of all thanks for this great initiative.
I'm struggling to make it work, the pod is constantly failing with 403 errors, of two kinds:

The RBAC setup worked fine, but there still seems to be a permissions issue I'm not able to solve.

Thanks in advance for your help.

Can't apply master's ./yaml/01_crd.yaml using 1.19 or 1.21

I've created a brand-new v1.21.1 cluster with kind, which I installed with brew:

$ kind create cluster                                                      130 ↵
Deleting cluster "kind" ...
kind delete cluster  0.07s user 0.09s system 120% cpu 0.129 total
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

When I try to apply ./yaml, it starts with 00_rbac.yaml and then 01_crd.yaml. But 01_crd.yaml fails as shown below:

$ kubectl apply -f 00_rbac.yaml 
namespace/clustersecret created
serviceaccount/clustersecret-account created
clusterrole.rbac.authorization.k8s.io/clustersecret-role-cluster created
role.rbac.authorization.k8s.io/clustersecret-role-namespaced created
clusterrolebinding.rbac.authorization.k8s.io/clustersecret-rolebinding-cluster created
rolebinding.rbac.authorization.k8s.io/clustersecret-rolebinding-namespaced created

$ kubectl apply -f 01_crd.yaml 
The CustomResourceDefinition "clustersecrets.clustersecret.io" is invalid: spec.versions[0].schema.openAPIV3Schema: Required value: schemas are required

It looks like this problem was introduced in commit dab8b48.

$ git log -1 01_crd.yaml
commit dab8b484d430e67ca50ecab1a1d5211fe605e0dc
Author: Nico <[email protected]>
Date:   Wed Nov 24 21:51:58 2021 +0100

    Update object definitions up to k8s 1.21.4

$ git checkout dab8b484d430e67ca50ecab1a1d5211fe605e0dc^
Note: switching to 'dab8b484d430e67ca50ecab1a1d5211fe605e0dc^'.
You are in 'detached HEAD' state. Bla bla bla

$ kubectl apply -f 01_crd.yaml                    
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/clustersecrets.clustersecret.io created

I've also tried with a 1.22.2 cluster (which is newer than the 1.21.4 mentioned in the log for dab8b48) and it fails with the same error.
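For context: apiextensions.k8s.io/v1 made spec.versions[].schema.openAPIV3Schema mandatory, whereas v1beta1 allowed omitting it, which matches the error above. A minimal hedged sketch of what the v1 CRD would need (the permissive schema below is an assumption, not the project's actual field definitions):

```yaml
# Hypothetical minimal v1 CRD sketch -- not the repo's 01_crd.yaml.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clustersecrets.clustersecret.io
spec:
  group: clustersecret.io
  scope: Namespaced
  names:
    plural: clustersecrets
    singular: clustersecret
    kind: ClusterSecret
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:   # required in v1, optional in v1beta1
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

Accepting unknown fields via `x-kubernetes-preserve-unknown-fields` keeps the CRD permissive like the old v1beta1 one; a proper structural schema for matchNamespace/avoidNamespaces/data would be the stricter fix.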

Add labels in the metadata of the created secrets

Is it possible to add labels to the created secret, please? I thought that adding labels inside the ClusterSecret manifest would also add them to the generated secret, but I have tried it and it doesn't work.

My assumption is that they could be added here, if I am correct:

```python
    for ns in syncedns:
        logger.info(f'Re Syncing secret {name} in ns {ns}')
        metadata = {'name': name, 'namespace': ns, 'labels': labels}  # HERE
        api_version = 'v1'
        kind = 'Secret'
        data = new
        body = client.V1Secret(
            api_version=api_version,
            data=data,
            kind=kind,
            metadata=metadata,
            type=secret_type
        )
        response = v1.replace_namespaced_secret(name, ns, body)
        logger.debug(response)
```
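To make the suggestion concrete, here is a hedged sketch of how the labels could be read from the ClusterSecret object and carried into each cloned secret's metadata (a pure-Python illustration with plain dicts; the real handler receives its body from kopf):

```python
# Hypothetical sketch: propagate labels from the ClusterSecret manifest
# into each cloned secret's metadata. 'body' stands in for the kopf
# body object; missing metadata/labels default to an empty dict.
def build_secret_metadata(name, ns, body):
    labels = body.get('metadata', {}).get('labels', {})
    return {'name': name, 'namespace': ns, 'labels': labels}
```

The resulting dict could then be passed as `metadata=` when constructing the `client.V1Secret`, exactly at the line marked `# HERE` above.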

dockerconfigjson secret errors when trying to replicate

Using version 0.0.6-arm, as the cluster is running on a Raspberry Pi.

apiVersion: clustersecret.io/v1
kind: ClusterSecret
metadata:
  name: registry-credentials
matchNamespace:
  - .*
data:
  .dockerconfigjson: Base64EncodedValue
type: kubernetes.io/dockerconfigjson
[2022-07-31 22:43:44,519] kopf.objects         [INFO    ] [None/clustersecret] cloning secret in namespace clustersecret
[2022-07-31 22:43:44,531] kopf.objects         [ERROR   ] [None/clustersecret] Can not create a secret, it is base64 encoded? data: {'.dockerconfigjson': 'Base64EncodedValue'
[2022-07-31 22:43:44,532] kopf.objects         [ERROR   ] [None/clustersecret] Kube exception (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': '0aa71bce-2cb7-4965-88a1-c266a476b392', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '705073d9-be2c-4d18-b7bf-a3d03c601464', 'X-Kubernetes-Pf-Priorityleve
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the object provided is unrecognized (must be of type Secret): couldn't get version/kind; json parse error: json: cannot unmarshal object into Go struct field .kind of ty
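The "it is base64 encoded?" log line suggests the operator checks data values before creating the secret, and the placeholder `Base64EncodedValue` in the manifest above would indeed fail such a check. A hedged sketch of what that validation might look like (not the actual handlers.py code):

```python
import base64
import binascii

# Hypothetical sketch: a strict round-trip check rejects values that
# are not valid base64. The Kubernetes API requires Secret 'data'
# values to be base64-encoded, so invalid values fail with a 400.
def is_base64(value: str) -> bool:
    try:
        decoded = base64.b64decode(value, validate=True)
        return base64.b64encode(decoded) == value.encode()
    except (binascii.Error, ValueError):
        return False
```

Under this check, a literal placeholder like `Base64EncodedValue` (18 characters, no padding) is rejected, while a properly encoded `.dockerconfigjson` payload passes; encoding the real registry credentials with `base64 -w0` before pasting them into the manifest should avoid the 400.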

Latest chart not available via index.yaml

The latest chart you published (0.2.1) is not available in your official index.yaml:

https://charts.clustersecret.io/index.yaml

I can see it's in the gh-pages repo, but for some reason it's not updated in charts.clustersecret.io.

Issue:

helm repo add clustersecret https://charts.clustersecret.io/
helm search repo clustersecret

Undesired output:

NAME                            CHART VERSION   APP VERSION     DESCRIPTION           
clustersecret/ClusterSecret     0.1.1           0.0.9           ClusterSecret Operator
clustersecret/cluster-secret    0.1.0           0.8.0           ClusterSecret Operator
