piraeusdatastore / helm-charts

Collection of useful charts for Piraeus and similar projects

License: Apache License 2.0

Languages: Smarty 92.88%, Makefile 7.12%
Topics: csi-driver, helm

helm-charts's People

Contributors

bodgit, crimsonfez, dependabot[bot], dmcdii, druchoo, dsrojo, elfolink, gohmc, janpfischer, joelcolledge, josedev-union, krmichelos, onedr0p, phoenix-bjoern, sea-you, starlightromero, wanzenbug

helm-charts's Issues

snapshot-controller chart: Add possibility to override default Chart name and fullname

Hi,
firstly, thanks for the great Helm chart, which is handy since there is no official one provided by the kubernetes-csi/external-snapshotter project.

What about adding the possibility of overriding the "name" and "fullname", which are taken from the chart name?
One could then set the Values.nameOverride and Values.fullnameOverride variables to override resource names, which are usually derived from the chart name and release name during templating.

Motivation: I cannot set the chart release name in our templating pipeline. Other charts we use offer such a name override. The Kyverno Helm chart, for example:
https://github.com/kyverno/kyverno/blob/main/charts/kyverno/templates/_helpers/_names.tpl

It would probably need something like this in the helper file:

{{- define "snapshot-controller.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- if contains .Chart.Name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end -}}
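
A matching short-name helper would presumably follow the same community convention (a sketch, not taken from this chart; it assumes a Values.nameOverride key):

{{- define "snapshot-controller.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}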

1.3.0 fails because no docker image is present for snapshot-validation-webhook:v5.0.0

When upgrading to helm chart version 1.3.0, the docker image snapshot-validation-webhook:v5.0.0 cannot be found.

docker pull k8s.gcr.io/sig-storage/snapshot-validation-webhook:v5.0.0
Error response from daemon: manifest for k8s.gcr.io/sig-storage/snapshot-validation-webhook:v5.0.0 not found: manifest unknown: Failed to fetch "v5.0.0" from request "/v2/sig-storage/snapshot-validation-webhook/manifests/v5.0.0".

The snapshot-controller image for v5.0.0 is present and pulls correctly.
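
Until a chart release references a published tag, one workaround is to pin the webhook image through the chart values. A sketch (the exact values keys are an assumption about this chart's layout, and the tag must be checked against the registry):

image:
  repository: k8s.gcr.io/sig-storage/snapshot-validation-webhook
  tag: v4.2.1  # assumption: substitute a tag the registry actually serves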

linstor-controller run-migration - failed to create secret Request entity too large: limit is 3145728

Hello!
I wanted to upgrade the linstor-cluster chart to the latest version, and also tried downgrading to 1.0.0 (my current version is 0.0.1).

When upgrading, I get the following error:

kubectl -n data-store logs linstor-controller-67b986477c-rx9zl run-migration

time="2024-04-25T06:02:16Z" level=info msg="running k8s-await-election" version=refs/tags/v0.4.1
time="2024-04-25T06:02:16Z" level=info msg="no status endpoint specified, will not be created"
I0425 06:02:16.314334       1 leaderelection.go:250] attempting to acquire leader lease data-store/linstor-controller...
I0425 06:02:16.333855       1 leaderelection.go:260] successfully acquired lease data-store/linstor-controller
time="2024-04-25T06:02:16Z" level=info msg="long live our new leader: 'linstor-controller-67b986477c-rx9zl'!"
time="2024-04-25T06:02:16Z" level=info msg="starting command '/usr/bin/piraeus-entry.sh' with arguments: '[runMigration]'"
Importing keystore /tmp/tmp.YgtyHwfvrE to /etc/linstor/ssl/keystore.jks...
Entry for alias linstor successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
Certificate was added to keystore
Importing keystore /tmp/tmp.2hLABv2aQJ to /etc/linstor/https/keystore.jks...
Entry for alias linstor successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
Certificate was added to keystore
Loading configuration file "/etc/linstor/linstor.toml"
INFO:    Attempting dynamic load of extension module "com.linbit.linstor.modularcrypto.FipsCryptoModule"
INFO:    Extension module "com.linbit.linstor.modularcrypto.FipsCryptoModule" is not installed
INFO:    Attempting dynamic load of extension module "com.linbit.linstor.modularcrypto.JclCryptoModule"
DEBUG:   Constructing instance of module "com.linbit.linstor.modularcrypto.JclCryptoModule" with default constructor
INFO:    Dynamic load of extension module "com.linbit.linstor.modularcrypto.JclCryptoModule" was successful
INFO:    Cryptography provider: Using default cryptography module
INFO:    Kubernetes-CRD connection URL is "k8s"
06:02:18.584 [main] DEBUG io.fabric8.kubernetes.client.Config -- Trying to configure client from Kubernetes config...
06:02:18.587 [main] DEBUG io.fabric8.kubernetes.client.Config -- Did not find Kubernetes config at: [/root/.kube/config]. Ignoring.
06:02:18.588 [main] DEBUG io.fabric8.kubernetes.client.Config -- Trying to configure client from service account...
06:02:18.588 [main] DEBUG io.fabric8.kubernetes.client.Config -- Found service account host and port: 10.234.0.1:443
06:02:18.588 [main] DEBUG io.fabric8.kubernetes.client.Config -- Found service account ca cert at: [/var/run/secrets/kubernetes.io/serviceaccount/ca.crt}].
06:02:18.588 [main] DEBUG io.fabric8.kubernetes.client.Config -- Found service account token at: [/var/run/secrets/kubernetes.io/serviceaccount/token].
06:02:18.588 [main] DEBUG io.fabric8.kubernetes.client.Config -- Trying to configure client namespace from Kubernetes service account namespace path...
06:02:18.589 [main] DEBUG io.fabric8.kubernetes.client.Config -- Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace].
06:02:18.594 [main] DEBUG io.fabric8.kubernetes.client.utils.HttpClientUtils -- Using httpclient io.fabric8.kubernetes.client.okhttp.OkHttpClientFactory factory
needs migration
TRACE:   Found database version 17
Error from server (NotFound): secrets "linstor-backup-for-linstor-controller-67b986477c-rx9zl" not found
crds.yaml
ebsremotes.internal.linstor.linbit.com.yaml
files.internal.linstor.linbit.com.yaml
keyvaluestore.internal.linstor.linbit.com.yaml
layerbcachevolumes.internal.linstor.linbit.com.yaml
layercachevolumes.internal.linstor.linbit.com.yaml
layerdrbdresourcedefinitions.internal.linstor.linbit.com.yaml
layerdrbdresources.internal.linstor.linbit.com.yaml
layerdrbdvolumedefinitions.internal.linstor.linbit.com.yaml
layerdrbdvolumes.internal.linstor.linbit.com.yaml
layerluksvolumes.internal.linstor.linbit.com.yaml
layeropenflexresourcedefinitions.internal.linstor.linbit.com.yaml
layeropenflexvolumes.internal.linstor.linbit.com.yaml
layerresourceids.internal.linstor.linbit.com.yaml
layerstoragevolumes.internal.linstor.linbit.com.yaml
layerwritecachevolumes.internal.linstor.linbit.com.yaml
linstorremotes.internal.linstor.linbit.com.yaml
linstorversion.internal.linstor.linbit.com.yaml
nodeconnections.internal.linstor.linbit.com.yaml
nodenetinterfaces.internal.linstor.linbit.com.yaml
nodes.internal.linstor.linbit.com.yaml
nodestorpool.internal.linstor.linbit.com.yaml
propscontainers.internal.linstor.linbit.com.yaml
resourceconnections.internal.linstor.linbit.com.yaml
resourcedefinitions.internal.linstor.linbit.com.yaml
resourcegroups.internal.linstor.linbit.com.yaml
resources.internal.linstor.linbit.com.yaml
rollback.internal.linstor.linbit.com.yaml
s3remotes.internal.linstor.linbit.com.yaml
satellitescapacity.internal.linstor.linbit.com.yaml
schedules.internal.linstor.linbit.com.yaml
secaccesstypes.internal.linstor.linbit.com.yaml
secaclmap.internal.linstor.linbit.com.yaml
secconfiguration.internal.linstor.linbit.com.yaml
secdfltroles.internal.linstor.linbit.com.yaml
secidentities.internal.linstor.linbit.com.yaml
secidrolemap.internal.linstor.linbit.com.yaml
secobjectprotection.internal.linstor.linbit.com.yaml
secroles.internal.linstor.linbit.com.yaml
sectyperules.internal.linstor.linbit.com.yaml
sectypes.internal.linstor.linbit.com.yaml
spacehistory.internal.linstor.linbit.com.yaml
storpooldefinitions.internal.linstor.linbit.com.yaml
trackingdate.internal.linstor.linbit.com.yaml
volumeconnections.internal.linstor.linbit.com.yaml
volumedefinitions.internal.linstor.linbit.com.yaml
volumegroups.internal.linstor.linbit.com.yaml
volumes.internal.linstor.linbit.com.yaml
error: failed to create secret Request entity too large: limit is 3145728
time="2024-04-25T06:02:47Z" level=fatal msg="failed to run" err="exit status 1"

snapshot-controller: error while generating certificate with cert manager

When using the webhook's certManagerIssuerRef configuration, the following error occurs while generating the certificate:

Name:         snapshot-validation-webhook
Namespace:    snapshot-controller
Labels:       app.kubernetes.io/instance=snapshot-controller
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=snapshot-validation-webhook
              app.kubernetes.io/version=v6.3.3
              helm.sh/chart=snapshot-controller-2.0.4
Annotations:  meta.helm.sh/release-name: snapshot-controller
              meta.helm.sh/release-namespace: snapshot-controller
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2024-01-08T16:35:43Z
  Generation:          1
  Resource Version:    2103731
  UID:                 953009bf-dad7-47ae-aac0-678ab7191808
Spec:
  Dns Names:
    snapshot-validation-webhook.snapshot-controller.svc
  Issuer Ref:
    Kind:  ClusterIssuer
    Name:  cloudflare
  Private Key:
    Rotation Policy:  Always
  Secret Name:        snapshot-validation-webhook-tls
Status:
  Conditions:
    Last Transition Time:    2024-01-08T16:35:43Z
    Message:                 Issuing certificate as Secret does not exist
    Observed Generation:     1
    Reason:                  DoesNotExist
    Status:                  False
    Type:                    Ready
    Last Transition Time:    2024-01-08T16:35:44Z
    Message:                 The certificate request has failed to complete and will be retried: Failed to wait for order resource "snapshot-validation-webhook-1-112280392" to become ready: order is in "errored" state: Failed to create Order: 400 urn:ietf:params:acme:error:rejectedIdentifier: Error creating new order :: Cannot issue for "snapshot-validation-webhook.snapshot-controller.svc": Domain name does not end with a valid public suffix (TLD)
    Observed Generation:     1
    Reason:                  Failed
    Status:                  False
    Type:                    Issuing
  Failed Issuance Attempts:  1
  Last Failure Time:         2024-01-08T16:35:44Z
Events:
  Type     Reason     Age   From                                       Message
  ----     ------     ----  ----                                       -------
  Normal   Issuing    35s   cert-manager-certificates-trigger          Issuing certificate as Secret does not exist
  Normal   Generated  35s   cert-manager-certificates-key-manager      Stored new private key in temporary Secret resource "snapshot-validation-webhook-m87kf"
  Normal   Requested  35s   cert-manager-certificates-request-manager  Created new CertificateRequest resource "snapshot-validation-webhook-1"
  Warning  Failed     34s   cert-manager-certificates-issuing          The certificate request has failed to complete and will be retried: Failed to wait for order resource "snapshot-validation-webhook-1-112280392" to become ready: order is in "errored" state: Failed to create Order: 400 urn:ietf:params:acme:error:rejectedIdentifier: Error creating new order :: Cannot issue for "snapshot-validation-webhook.snapshot-controller.svc": Domain name does not end with a valid public suffix (TLD)

It looks like cert-manager is unhappy with the .svc suffix ... since ACME can only issue certificates for names under a public TLD, I can't see how this could have worked previously.
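
For reference, internal *.svc names need a non-ACME issuer, such as a self-signed or CA issuer. A minimal sketch of a cert-manager Issuer that the chart's certManagerIssuerRef could point at instead (the name is illustrative):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: snapshot-webhook-selfsigned
  namespace: snapshot-controller
spec:
  selfSigned: {}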

Missing enable-distributed-snapshotting support

External Snapshotter v5.0.0 introduces the ability to create "distributed" snapshots: snapshots of volumes that are bound to a specific node. The feature is enabled by adding the enable-distributed-snapshotting argument to the external snapshotter container, as sketched below.
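
A sketch of what the rendered Deployment would need to contain (the container name and the existing argument are illustrative; only the last flag is the upstream feature toggle):

containers:
  - name: snapshot-controller
    args:
      - --leader-election=true                  # existing arguments unchanged
      - --enable-distributed-snapshotting=true  # new flag in external-snapshotter v5.0.0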

Make CRDs part of helm charts

CRDs outside of the chart make installation via CI pipelines a rather tedious process, as we need to manage the CRD lifecycle separately from the rest of the chart.

Snapshot CRDs are referenced in two charts

I've stumbled over a pretty weird issue: the snapshot-controller and the snapshot-validation-webhook chart both import the "external snapshot CRDs" within their Makefile. When both charts are declared as dependencies in a Chart.yaml and helm template is run, an error is returned.

Steps to reproduce:

  1. Create a Chart.yaml in an empty directory:
apiVersion: v2
name: snapshot-test
description: A Helm chart for Piraeus Operator
type: application
version: 0.1.0
appVersion: 0.1.0
dependencies:
  - name: "snapshot-controller"
    version: "1.9.1"
    repository: https://piraeus.io/helm-charts/
  - name: "snapshot-validation-webhook"
    version: "1.8.2"
    repository: https://piraeus.io/helm-charts/
  2. Run helm dep up to pull the dependencies.
  3. Generate the templates and write them to an output directory:
$ helm template --include-crds --output-dir test .
wrote test/snapshot-test/charts/snapshot-controller/crds/groupsnapshot.storage.k8s.io_volumegroupsnapshotclasses.yaml
wrote test/snapshot-test/charts/snapshot-controller/crds/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
wrote test/snapshot-test/charts/snapshot-controller/crds/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
wrote test/snapshot-test/charts/snapshot-controller/crds/snapshot.storage.k8s.io_volumesnapshots.yaml
Error: open test/snapshot-test/charts/snapshot-validation-webhook/crds/groupsnapshot.storage.k8s.io_volumegroupsnapshotclasses.yaml: no such file or directory

I will also file a bug in the Helm GitHub repo, but from my tests I'm pretty sure this issue is caused by importing the same CRDs in both charts.

The question here is: is this intended, or should the CRDs only exist in the snapshot-controller chart?

snapshot-validation-webhook-test-valid-body is failing

Installed snapshot-validation-webhook-1.4.3 (app version v6.0.1), which is properly deployed:

NAME                       	NAMESPACE           	REVISION	UPDATED                              	STATUS  	CHART                            	APP VERSION
snapshot-validation-webhook	external-snapshotter	1       	2022-06-07 17:54:25.994741 +0200 CEST	deployed	snapshot-validation-webhook-1.4.3	v6.0.1

When running helm test snapshot-validation-webhook, the operation fails:

Error: pod snapshot-validation-webhook-test-valid-body failed

Missing features

Unable to set via chart values (a sketch of what these could look like follows the list):

  • Pod's resources.limits
  • Pod's topologySpreadConstraints
  • PodDisruptionBudget policies
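
A sketch of what such values could look like if the chart exposed them (all key names here are hypothetical, mirroring common chart conventions):

resources:
  limits:
    cpu: 100m
    memory: 128Mi
topologySpreadConstraints:          # passed through to the pod spec
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
podDisruptionBudget:                # rendered into a PodDisruptionBudget resource
  minAvailable: 1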

snapshot-controller: caBundle field changes on every `helm upgrade` or `helm diff`

The caBundle field introduced in 2.0.0 for the snapshot-validation-webhook changes on every helm diff or helm upgrade. This causes unnecessary deploys with continuously reconciling GitOps tools and drift-detection workflows.

Full `helm diff`:
kube-system, snapshot-validation-webhook, ValidatingWebhookConfiguration (admissionregistration.k8s.io) has changed:
  # Source: snapshot-controller/templates/webhook.yaml
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: snapshot-validation-webhook
    labels:
      helm.sh/chart: snapshot-controller-2.0.0
      app.kubernetes.io/name: snapshot-validation-webhook
      app.kubernetes.io/instance: snapshot-controller
      app.kubernetes.io/version: "v6.3.1"
      app.kubernetes.io/managed-by: Helm
  webhooks:
    - name: snapshot-validation-webhook.snapshot.storage.k8s.io
      rules:
        - apiGroups:
          - snapshot.storage.k8s.io
          apiVersions:
          - v1
          - v1beta1
          operations:
          - CREATE
          - UPDATE
          resources:
          - volumesnapshots
          - volumesnapshotclasses
          - volumesnapshotcontents
          scope: "*"
      clientConfig:
        service:
          namespace: kube-system
          name: snapshot-validation-webhook
          path: "/volumesnapshot"
-       caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURiVENDQWxXZ0F3SUJBZ0lRQVVYZTZRcEhYK041Q0hIYTBCdGNBakFOQmdrcWhraUc5dzBCQVFzRkFEQTIKTVRRd01nWURWUVFERXl0emJtRndjMmh2ZEMxMllXeHBaR0YwYVc5dUxYZGxZbWh2YjJzdWEzVmlaUzF6ZVhOMApaVzB1YzNaak1CNFhEVEl6TVRBeU9URTNNRGN3TlZvWERUTXpNVEF5TmpFM01EY3dOVm93TmpFME1ESUdBMVVFCkF4TXJjMjVoY0hOb2IzUXRkbUZzYVdSaGRHbHZiaTEzWldKb2IyOXJMbXQxWW1VdGMzbHpkR1Z0TG5OMll6Q0MKQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFKZkswUncyc3ZhckhqaStMYzJ4VzBiNwpsdHdkSDlBUWlwRndxQXdySUljaGJjM3BoRkhUVnJHTVhGQXlUQWxJdXl1MlQvb0w5QmxtNmt4ZTBNdlpoRCtiCjBYaUx0YVNHY1lvOUJwRn
BCbi9WQWhreDB1QzM3WDNrZzFvUmthS09ZMUZGb0l1MDRPL2FCQk5PV3pVb3Y2ekEKRGYyK1ZYTkV4NnhDY0tmYVlObU5mWXlYeXBlQTJkREM2MXRHN3hCaE9JNTdNUEFMQmRpUklOYUJkcjNqdjhpcgpzMXhBMlY1NURSRElzcXp6N2diNUJzSUwvR1Zwdjh0Z1VWc0hQMGdSNVJWZ0g5ZjFSQzlPVzBKd0REUzRVSFlQClZFam5hRGUzSDJOMWJRT0ZwN0VHYXpjaTUydXJoOFJHdUJ2VmpvSjRWUnR6akZWaE5vQzhxbk5SLzJxbDU1RUMKQXdFQUFhTjNNSFV3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01CQmdncgpCZ0VGQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTURZR0ExVWRFUVF2TUMyQ0szTnVZWEJ6YUc5MExYWmhiR2xrCllYUnBiMjR0ZDJWaWFHOXZheTVyZFdKbExYTjVjM1JsYlM1emRtTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUIKQUdVMlgycX
pKZHVDU0JRUCtuOFpmUGt2ZW9qc3ZQWWV1dEFWdXlnYVMvRGJobzhoN1gzTlNmSkJuRUl1TWFaYwpqcjJ2bFZwTU11U2tScncyKzBXKzhEeHBieUZrUVhTNm1jMUV5aS9lOGZkTUFlV25DZ2hxRDAzYU5CRE5ienBHClROYmliNHBESDQrZi82Q3B4eWVXMkJqODlHb0tLNTIrR1NkRGFSSUJXbTVYQzIrUXdpZ2FLVHNZTTlIRmdqUkoKUXlNMkVrQU5vbXkrdm93Y0RuSG0veFJSbHlXTU5VSVo1cmc1cTZrODNab2UxWjZDVE0zNFJENGhoQklJMHkrRApzN0NGRCtBdXRNSWxSRE4rcGhkZEl5b0dSRk5mQnp4dDlVdmx1OWthRXhqQVQ5a0d6cFZYdFhXeVhobkhURlNWCmZTSXNORXgxSXlPME54ZGpOTktQNlRvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
+       caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURiakNDQWxhZ0F3SUJBZ0lSQUlnbzlDd2huZzZ1NVNpT0FhL1MyMlV3RFFZSktvWklodmNOQVFFTEJRQXcKTmpFME1ESUdBMVVFQXhNcmMyNWhjSE5vYjNRdGRtRnNhV1JoZEdsdmJpMTNaV0pvYjI5ckxtdDFZbVV0YzNsegpkR1Z0TG5OMll6QWVGdzB5TXpFd01qa3hOelUzTURGYUZ3MHpNekV3TWpZeE56VTNNREZhTURZeE5EQXlCZ05WCkJBTVRLM051WVhCemFHOTBMWFpoYkdsa1lYUnBiMjR0ZDJWaWFHOXZheTVyZFdKbExYTjVjM1JsYlM1emRtTXcKZ2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzlBd0dkbFYxUjV2aW05ZWQ5NVgxRgpxMTVnUzE4U1FPN0JUZ1VLVDVTNDVqMi9pVG53aFpHV1Avdm5SaVdJVVYyOXBaTS9GSzBNZXQzeG0rZG1POXhUCndxNEZUYkNnK29KdE9pZE
VMOHBIWVVCYm5HVExER3krajVnQmNBeXBhS0tTRmtRUEZ1eVZweXQ3b1pvb1IvcHYKdEFiNXpHelNwL2tpeXArU2RkWklqUGxVdDJNV3Q1VkxMcDMzdFEraXhMWGhucVlUaUxLZE9Ea0h3dVZUMyt5TgpmbXczU2hSRi80UzZzdHZ0RnVjMHl0cXI5UmxJalhnelh0M3pmN2JlMWRYaElOeUlpVkpYSGVoN2I0S1hVd3FKCjZpd1p6ai9yVk5xd2pKWW1NaXNsK0x0REhOWGVwbkRVZ0dEOWF6K09NWklHWDYrN0lEeVhvQXNNQkFqUjEyUkwKQWdNQkFBR2pkekIxTUE0R0ExVWREd0VCL3dRRUF3SUZvREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSQpLd1lCQlFVSEF3SXdEQVlEVlIwVEFRSC9CQUl3QURBMkJnTlZIUkVFTHpBdGdpdHpibUZ3YzJodmRDMTJZV3hwClpHRjBhVzl1TFhkbFltaHZiMnN1YTNWaVpTMXplWE4wWlcwdWMzWmpNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUIKQVFBZjBTUG
5ET0dVUVVQY3JPcmRwVjNCR0VPakt3QjFPdjRpL0RNNUJOcFRUN2JOL29NcmFtSXczK3JKeERNNApwTkViWFdwTWwrd2VvcVRKM05yNVJ4azVESVBBV0RJbTQzQlpvejIxcW95SlVMQ3RlRTF6aHhocm1rcjRjb2IzCm1hS3ZReHdZK1VuK01QM2dFSDNuT0dEVFNMNFpicThPSHpZd3FSQklMKzFIc3lLSThocUNuYUlEcUdlK0lYbE0KWWloSmJjNEdLcW4yaHFiaGpSblh6WjE2eDhpZjhlcWZycDJoQjlmT0U0SW5yRjJuVlVGbG0xWTUvZGlBTXRpcQpCR1JjeURlSWFIZWpQdUV2VWdZQWJTWlhreUZEUnltREtHbUNwNUt6V1JibVd4bnozWFcyb3VweDdaS1pJNm1rCnh1MnMxeTNROEp2VURKT2lCRktvdytxUgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
      admissionReviewVersions:
        - v1
        - v1beta1
      sideEffects: None
      failurePolicy: Fail
      timeoutSeconds: 2
    - name: snapshot-validation-webhook.groupsnapshot.storage.k8s.io
      rules:
        - apiGroups:
            - groupsnapshot.storage.k8s.io
          apiVersions:
            - v1alpha1
          operations:
            - CREATE
            - UPDATE
          resources:
            - volumegroupsnapshots
            - volumegroupsnapshotcontents
            - volumegroupsnapshotclasses
          scope: "*"
      clientConfig:
        service:
          namespace: kube-system
          name: snapshot-validation-webhook
          path: "/volumegroupsnapshot"
-       caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURiVENDQWxXZ0F3SUJBZ0lRQVVYZTZRcEhYK041Q0hIYTBCdGNBakFOQmdrcWhraUc5dzBCQVFzRkFEQTIKTVRRd01nWURWUVFERXl0emJtRndjMmh2ZEMxMllXeHBaR0YwYVc5dUxYZGxZbWh2YjJzdWEzVmlaUzF6ZVhOMApaVzB1YzNaak1CNFhEVEl6TVRBeU9URTNNRGN3TlZvWERUTXpNVEF5TmpFM01EY3dOVm93TmpFME1ESUdBMVVFCkF4TXJjMjVoY0hOb2IzUXRkbUZzYVdSaGRHbHZiaTEzWldKb2IyOXJMbXQxWW1VdGMzbHpkR1Z0TG5OMll6Q0MKQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFKZkswUncyc3ZhckhqaStMYzJ4VzBiNwpsdHdkSDlBUWlwRndxQXdySUljaGJjM3BoRkhUVnJHTVhGQXlUQWxJdXl1MlQvb0w5QmxtNmt4ZTBNdlpoRCtiCjBYaUx0YVNHY1lvOUJwRn
BCbi9WQWhreDB1QzM3WDNrZzFvUmthS09ZMUZGb0l1MDRPL2FCQk5PV3pVb3Y2ekEKRGYyK1ZYTkV4NnhDY0tmYVlObU5mWXlYeXBlQTJkREM2MXRHN3hCaE9JNTdNUEFMQmRpUklOYUJkcjNqdjhpcgpzMXhBMlY1NURSRElzcXp6N2diNUJzSUwvR1Zwdjh0Z1VWc0hQMGdSNVJWZ0g5ZjFSQzlPVzBKd0REUzRVSFlQClZFam5hRGUzSDJOMWJRT0ZwN0VHYXpjaTUydXJoOFJHdUJ2VmpvSjRWUnR6akZWaE5vQzhxbk5SLzJxbDU1RUMKQXdFQUFhTjNNSFV3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01CQmdncgpCZ0VGQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTURZR0ExVWRFUVF2TUMyQ0szTnVZWEJ6YUc5MExYWmhiR2xrCllYUnBiMjR0ZDJWaWFHOXZheTVyZFdKbExYTjVjM1JsYlM1emRtTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUIKQUdVMlgycX
pKZHVDU0JRUCtuOFpmUGt2ZW9qc3ZQWWV1dEFWdXlnYVMvRGJobzhoN1gzTlNmSkJuRUl1TWFaYwpqcjJ2bFZwTU11U2tScncyKzBXKzhEeHBieUZrUVhTNm1jMUV5aS9lOGZkTUFlV25DZ2hxRDAzYU5CRE5ienBHClROYmliNHBESDQrZi82Q3B4eWVXMkJqODlHb0tLNTIrR1NkRGFSSUJXbTVYQzIrUXdpZ2FLVHNZTTlIRmdqUkoKUXlNMkVrQU5vbXkrdm93Y0RuSG0veFJSbHlXTU5VSVo1cmc1cTZrODNab2UxWjZDVE0zNFJENGhoQklJMHkrRApzN0NGRCtBdXRNSWxSRE4rcGhkZEl5b0dSRk5mQnp4dDlVdmx1OWthRXhqQVQ5a0d6cFZYdFhXeVhobkhURlNWCmZTSXNORXgxSXlPME54ZGpOTktQNlRvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
+       caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURiakNDQWxhZ0F3SUJBZ0lSQUlnbzlDd2huZzZ1NVNpT0FhL1MyMlV3RFFZSktvWklodmNOQVFFTEJRQXcKTmpFME1ESUdBMVVFQXhNcmMyNWhjSE5vYjNRdGRtRnNhV1JoZEdsdmJpMTNaV0pvYjI5ckxtdDFZbVV0YzNsegpkR1Z0TG5OMll6QWVGdzB5TXpFd01qa3hOelUzTURGYUZ3MHpNekV3TWpZeE56VTNNREZhTURZeE5EQXlCZ05WCkJBTVRLM051WVhCemFHOTBMWFpoYkdsa1lYUnBiMjR0ZDJWaWFHOXZheTVyZFdKbExYTjVjM1JsYlM1emRtTXcKZ2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzlBd0dkbFYxUjV2aW05ZWQ5NVgxRgpxMTVnUzE4U1FPN0JUZ1VLVDVTNDVqMi9pVG53aFpHV1Avdm5SaVdJVVYyOXBaTS9GSzBNZXQzeG0rZG1POXhUCndxNEZUYkNnK29KdE9pZE
VMOHBIWVVCYm5HVExER3krajVnQmNBeXBhS0tTRmtRUEZ1eVZweXQ3b1pvb1IvcHYKdEFiNXpHelNwL2tpeXArU2RkWklqUGxVdDJNV3Q1VkxMcDMzdFEraXhMWGhucVlUaUxLZE9Ea0h3dVZUMyt5TgpmbXczU2hSRi80UzZzdHZ0RnVjMHl0cXI5UmxJalhnelh0M3pmN2JlMWRYaElOeUlpVkpYSGVoN2I0S1hVd3FKCjZpd1p6ai9yVk5xd2pKWW1NaXNsK0x0REhOWGVwbkRVZ0dEOWF6K09NWklHWDYrN0lEeVhvQXNNQkFqUjEyUkwKQWdNQkFBR2pkekIxTUE0R0ExVWREd0VCL3dRRUF3SUZvREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSQpLd1lCQlFVSEF3SXdEQVlEVlIwVEFRSC9CQUl3QURBMkJnTlZIUkVFTHpBdGdpdHpibUZ3YzJodmRDMTJZV3hwClpHRjBhVzl1TFhkbFltaHZiMnN1YTNWaVpTMXplWE4wWlcwdWMzWmpNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUIKQVFBZjBTUG
5ET0dVUVVQY3JPcmRwVjNCR0VPakt3QjFPdjRpL0RNNUJOcFRUN2JOL29NcmFtSXczK3JKeERNNApwTkViWFdwTWwrd2VvcVRKM05yNVJ4azVESVBBV0RJbTQzQlpvejIxcW95SlVMQ3RlRTF6aHhocm1rcjRjb2IzCm1hS3ZReHdZK1VuK01QM2dFSDNuT0dEVFNMNFpicThPSHpZd3FSQklMKzFIc3lLSThocUNuYUlEcUdlK0lYbE0KWWloSmJjNEdLcW4yaHFiaGpSblh6WjE2eDhpZjhlcWZycDJoQjlmT0U0SW5yRjJuVlVGbG0xWTUvZGlBTXRpcQpCR1JjeURlSWFIZWpQdUV2VWdZQWJTWlhreUZEUnltREtHbUNwNUt6V1JibVd4bnozWFcyb3VweDdaS1pJNm1rCnh1MnMxeTNROEp2VURKT2lCRktvdytxUgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
      admissionReviewVersions:
        - v1
        - v1beta1
      sideEffects: None
      failurePolicy: Fail
      timeoutSeconds: 2
kube-system, snapshot-validation-webhook-tls, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret

Other charts like ingress-nginx with validating webhooks use a patch Job instead of encoding the CA in the helm template: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx/templates/admission-webhooks. This avoids diffs on subsequent helm runs.
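
Another option that keeps everything in the template is Helm's lookup function: reuse the CA from the already-deployed Secret and only generate a certificate when none exists. A rough sketch, assuming the chart generates a self-signed certificate at template time (the Secret name and data key are assumptions; note that lookup returns nothing under plain helm template, so this only stabilizes renders against a live cluster):

{{- $secret := lookup "v1" "Secret" .Release.Namespace "snapshot-validation-webhook-tls" -}}
{{- if $secret }}
{{- /* Reuse the existing CA so the caBundle stays stable across upgrades */}}
caBundle: {{ index $secret.data "ca.crt" }}
{{- else }}
{{- /* First install: generate a fresh self-signed certificate */}}
{{- $cert := genSelfSignedCert "snapshot-validation-webhook.kube-system.svc" nil nil 3650 }}
caBundle: {{ $cert.Cert | b64enc }}
{{- end }}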

Unable to set webhook pod's hostNetwork for Amazon EKS

When using a custom CNI (such as Weave or Calico) on Amazon EKS, the webhook cannot be reached.

Internal error occurred: failed calling webhook "snapshot-validation-webhook.snapshot.storage.k8s.io": failed to call webhook: Post "https://snapshot-validation-webhook.kube-system.svc:443/volumesnapshot?timeout=2s": Address is not allowed

Setting hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet for the webhook pod addresses this issue (see the sketch below). However, the chart doesn't allow setting those parameters.
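
A sketch of the two fields as they would appear in the webhook Deployment's pod spec (exposing them through chart values is what the chart currently lacks):

spec:
  template:
    spec:
      hostNetwork: true                   # share the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS resolution on the host network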

PR #41 has been submitted for review to update the chart.

Charts will not deploy automatically

This looks like a great solution, but I've been struggling to get anything but the base piraeus-operator to deploy properly.

The first challenge was figuring out that the operator was not in the helm-charts repo. Easily fixed but it did take a few tries to realize it wasn't in the repo and that there wasn't another repo available. I just ended up creating my own helm repo for it.

I tried making the following as Chart dependencies for my application:

  • piraeus-ha-controller
  • linstor-affinity-controller
  • linstor-scheduler

I also tried creating my own Helm repo for the piraeus-operator and setting the above as chart dependencies of the operator. In both cases, the deploy fails with the following error:

Please specify linstor.endpoint, no default URL could be determined

However, the documentation states that linstor.endpoint does not need to be specified if the chart is deployed alongside, and in the same namespace as, the operator, which mine is.

I would set the linstor.endpoint if I knew what to set it to. It's not clear how I can determine the correct value.
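
For reference, the value should point at the LINSTOR controller's REST API, something of this shape (service name and namespace are assumptions for a typical piraeus-operator install; 3370 is the controller's default plain-HTTP REST port):

linstor:
  endpoint: http://linstor-controller.piraeus.svc:3370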

I'd appreciate any insight into a typical approach to getting this deployed using Helm. I'd like to avoid resorting to kubectl, since it is less automated.

Thanks in advance...

Insufficient RBAC permissions after upgrading webhook to 1.8.0

Hey,

After upgrading to the new snapshot-validation-webhook and snapshot-controller Helm chart versions, the snapshot-validation-webhook pod is crash looping with the following error:

W0918 18:46:25.038149       1 reflector.go:535] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:133: failed to list *v1alpha1.VolumeGroupSnapshotClass: volumegroupsnapshotclasses.groupsnapshot.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:snapshot-validation-webhook" cannot list resource "volumegroupsnapshotclasses" in API group "groupsnapshot.storage.k8s.io" at the cluster scope
E0918 18:46:25.039066       1 reflector.go:147] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:133: Failed to watch *v1alpha1.VolumeGroupSnapshotClass: failed to list *v1alpha1.VolumeGroupSnapshotClass: volumegroupsnapshotclasses.groupsnapshot.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:snapshot-validation-webhook" cannot list resource "volumegroupsnapshotclasses" in API group "groupsnapshot.storage.k8s.io" at the cluster scope

It seems like the template is missing the required API permissions in the cluster role.
I've manually changed the cluster role to the following:

rules:
  - verbs:
      - list
      - watch
    apiGroups:
      - snapshot.storage.k8s.io
      - groupsnapshot.storage.k8s.io
    resources:
      - volumesnapshotclasses
      - volumegroupsnapshotclasses

After that, the error is gone but replaced with a new one:

W0918 18:49:17.261333       1 reflector.go:535] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:133: failed to list *v1alpha1.VolumeGroupSnapshotClass: the server could not find the requested resource (get volumegroupsnapshotclasses.groupsnapshot.storage.k8s.io)
E0918 18:49:17.261362       1 reflector.go:147] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:133: Failed to watch *v1alpha1.VolumeGroupSnapshotClass: failed to list *v1alpha1.VolumeGroupSnapshotClass: the server could not find the requested resource (get volumegroupsnapshotclasses.groupsnapshot.storage.k8s.io

After manually applying the missing VolumeGroupSnapshotClass CRD, the pod is not crash looping anymore.
