cert-manager / csi-driver

A Kubernetes CSI plugin to automatically mount signed certificates to Pods using ephemeral volumes

Home Page: https://cert-manager.io/docs/usage/csi-driver/

License: Apache License 2.0

Languages: Go 62.34%, Makefile 32.82%, Shell 4.31%, Mustache 0.53%
Topics: cert-manager, kubernetes, certificate

csi-driver's Introduction


csi-driver

csi-driver is a Container Storage Interface (CSI) driver plugin for Kubernetes, designed to work alongside cert-manager. The goal of this plugin is to seamlessly request and mount certificate key pairs into pods. This is useful for facilitating mTLS, or otherwise securing pod connections with certificates that are guaranteed to be present at run time, whilst retaining all of the features that cert-manager provides. A minimal example follows the list below.

Why a CSI Driver?

  • Ensure private keys never leave the node and are never sent over the network; all private keys are stored locally on the node.
  • A unique key and certificate per application replica, guaranteed to be present at application run time.
  • Reduce resource management overhead by defining the certificate request spec inline in the Kubernetes Pod template.
  • Automatic renewal of certificates based on the expiry of each individual certificate.
  • Keys and certificates are destroyed during application termination.
  • Scope for extending plugin behaviour, with visibility into each replica's certificate request and termination.
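
For illustration, a minimal Pod requesting a certificate through an ephemeral CSI volume looks roughly like the following sketch (the issuer name my-issuer and the DNS name are placeholders; see the documentation linked below for the full set of supported volume attributes):

apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: busybox
      command: [ "sleep", "1000000" ]
      volumeMounts:
        - name: tls
          mountPath: "/tls"
  volumes:
    - name: tls
      csi:
        driver: csi.cert-manager.io
        readOnly: true
        volumeAttributes:
          # placeholder Issuer in the Pod's namespace
          csi.cert-manager.io/issuer-name: my-issuer
          csi.cert-manager.io/dns-names: my-service.my-namespace.svc.cluster.local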

Documentation

Please follow the documentation at cert-manager.io for installing and using csi-driver.

Release Process

The release process is documented in RELEASE.md.

csi-driver's People

Contributors

basert, cert-manager-bot, cert-manager-prow[bot], charlieegan3, chriscur, cornfeedhobo, dependabot[bot], gtaylor, inteon, irbekrm, jahrlin, jakexks, jetstack-bot, joshvanl, maesterz, mattiasgees, micahhausler, munnerz, nzbr, rcanderson23, sdrik, sgtcodfish, sitaramkm, thatsmrtalbot, wallrj


csi-driver's Issues

pkcs12 file is not created when annotation is set

After issuing a certificate from a ClusterIssuer, the ca.crt, tls.crt and tls.key files are present, but the file configured via csi.cert-manager.io/keystore-pkcs12-file: "crt.p12" is not created.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: somens
  labels:
    app: my-pod
spec:
  containers:
    - name: some-component
      image: ubuntu
      volumeMounts:
      - mountPath: "/tls"
        name: tls
      command: [ "sleep", "100000000000" ]
  volumes:
    - name: tls
      csi:
        readOnly: true
        driver: csi.cert-manager.io
        volumeAttributes:
              csi.cert-manager.io/issuer-name: some-issuer
              csi.cert-manager.io/issuer-kind: ClusterIssuer
              csi.cert-manager.io/dns-names: mypod.somens.svc.cluster.local
              csi.cert-manager.io/keystore-pkcs12-password: "testpass"
              csi.cert-manager.io/keystore-pkcs12-enable: "true"
              csi.cert-manager.io/keystore-pkcs12-file: "crt.p12"

The problem is that the file is not present on the volume:

root@my-pod:/tls# ls
ca.crt  tls.crt  tls.key

As you can see, the request completed successfully. The issuer:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: some-issuer
spec:
  ca:
    secretName: my-ca-secret

e2e tests

Create e2e tests with kind for full conformance.

Ideally we should also incorporate the conformance tests available upstream (albeit with limited scope, since we cannot achieve full compliance while only supporting ephemeral inline volumes).

Add priorityClassName to helm chart

On a graceful shutdown, it's possible that the csi-driver would be evicted before cleaning up the CSI volumes. Would it be prudent to add priorityClassName to the Helm chart to cover that? Users could then choose whether they need it and set it to system-node-critical or another class as needed.
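
A hypothetical sketch of what such a chart option could look like (the priorityClassName values key and its wiring into the DaemonSet are assumptions about a future chart change, not part of the current chart):

# values.yaml (hypothetical option)
priorityClassName: system-node-critical

# rendered into the DaemonSet pod template (sketch)
spec:
  template:
    spec:
      priorityClassName: system-node-critical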

Unable to read files if user is not root

When deploying a container with a non-privileged user (either via the image itself or using securityContext in the deployment), you cannot access any of the files in the mounted folder due to its permissions.

Example:

ls / -lash
...
0 drwxr--r--    2 root     root         100 May  4 08:26 tls
...

root:root is the owner, and the mode (744) allows only root to enter the folder. The folder requires 755 permissions, otherwise you cannot access the files in the folder as a different user:

Example:

/ $ cat /tls/ca.pem
cat: can't open '/tls/ca.pem': Permission denied

This can be reproduced as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi
  namespace: cert-verification
  labels:
    app: csi
spec:
  replicas: 2
  selector:
    matchLabels:
      app: csi
  template:
    metadata:
      labels:
        app: csi
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
        - name: busybox
          image: busybox
          volumeMounts:
          - mountPath: "/tls"
            name: tls
          command: [ "sleep", "1000000" ]
      volumes:
        - name: tls
          csi:
            # https://cert-manager.io/docs/usage/csi/
            driver: csi.cert-manager.io
            volumeAttributes:
              csi.cert-manager.io/issuer-name: k8s-pki
              csi.cert-manager.io/issuer-kind: ClusterIssuer
              # per default the following certificates will be created: ca.pem, crt.pem, key.pem
              # the location is defined in the volume mounts. the name of the cert + key can be configured via annotations.
              csi.cert-manager.io/dns-names: service-name.cert-verification.svc.cluster.local,service-name.cert-verification.svc,service-name.cert-verification

I believe the easiest fix would be to mount the destination folder with 755 permissions and keep the files at 744.

node-driver-registrar preStop hook fails

Hey,

we are seeing issues in our kubelet error logs when we shut down a node. The preStop hook for the node-driver-registrar,

command: ["/bin/sh", "-c", "rm -rf /registration/cert-manager-csi-driver /registration/cert-manager-csi-driver-reg.sock"]

fails because the container image does not include a /bin/sh binary: https://github.com/kubernetes-csi/node-driver-registrar/blob/master/Dockerfile

SubPath support is broken or missing

I have successfully installed the CSI driver, but when I make a pod there is a creation error:

Error: stat /data/kubelet/pods/71acd9f8-0469-4b44-8523-7276a646691f/volumes/kubernetes.io~csi/tls/mount: no such file or directory

I can see the directory only has vol_data.json:

{"attachmentID":"csi-d757e34a4db1de6adb6903587d8310b2f726fe24dc0c0d6cd97abad0017b95d0","driverName":"csi.cert-manager.io","nodeName":"k3s","specVolID":"tls","volumeHandle":"csi-510ce9dd2d1838b8d7126fa802b89a4b759abab419ee815edcc1fb98cb0d0c31","volumeLifecycleMode":"Ephemeral"}

I can list CertificateRequests and see that they have been made, approved, and report ready.

I've never seen a mount file referenced before, and Google doesn't turn up much.

Is it possible that using SubPath is causing this?

I've confirmed that removing subPath at least allows the pod to initialize.

Unable to mount and read only file error

Warning FailedMount 11m (x329 over 16h) kubelet Unable to attach or mount volumes: unmounted volumes=[tls], unattached volumes=[tls kube-api-access-2v6n8]: timed out waiting for the condition
Warning FailedMount 88s (x482 over 16h) kubelet MountVolume.SetUp failed for volume "tls" : rpc error: code = Unknown desc = chmod /var/lib/kubelet/pods/0fd27403-622b-457c-b43f-606472572c59/volumes/kubernetes.io~csi/tls/mount: read-only file system

Deleting a pod with a cert-manager-csi volume mounted results in the pod termination hanging.

I end up having to force delete the pod.

I am willing to submit whatever information is needed to fix this.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-16T00:04:31Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"archive", BuildDate:"2020-07-28T17:11:53Z", GoVersion:"go1.14.5", Compiler:"gc", Platform:"linux/amd64"}
cert-manager-csi deployment:
 - latest deploy/cert-manager-csi-driver.yaml from master
 - namespace changed to kube-system
issuer:  vault (running on metal)

Possible bug with watchCert implementation?

Notice that watchCert gets a metaData reference and passes it to a goroutine.
https://github.com/jetstack/cert-manager-csi/blob/4132849858d478fc4e2bb607d8a51248bac34c24/pkg/renew/renew.go#L176

Since the metaData is a reference, will it cause a problem when we call renewFunc?
https://github.com/jetstack/cert-manager-csi/blob/4132849858d478fc4e2bb607d8a51248bac34c24/pkg/renew/renew.go#L206

I have not tested with the CSI driver yet, but I have concept code here demonstrating that we might need to pass by value to the goroutine:
https://play.golang.org/p/X8aaNbnQghY

@munnerz FYI

How to specify pod name in subject alt names

From the documentation, it appears that the DNS names to specify for the subject alt names are manually specified in the volumeAttribute: csi.cert-manager.io/dns-names

Is the Pod name also automatically added?

The use case is a StatefulSet with a headless Service. Pods will be accessed directly, and hence need the unique ordinal name added to the certificate.
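
Other examples further down this page (the ReplicaSet manifest and the dns-names trimming request) interpolate ${POD_NAME} and ${POD_NAMESPACE} into the attribute, which would cover the StatefulSet ordinal case; a sketch with placeholder issuer and service names:

volumes:
  - name: tls
    csi:
      driver: csi.cert-manager.io
      readOnly: true
      volumeAttributes:
        # placeholder issuer
        csi.cert-manager.io/issuer-name: my-issuer
        # ${POD_NAME} expands to e.g. my-statefulset-0, giving each replica its ordinal DNS name
        csi.cert-manager.io/dns-names: ${POD_NAME}.my-headless-service.${POD_NAMESPACE}.svc.cluster.local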

JKS support

Noticed that we have JKS/PKCS12 support in cert-manager but only PKCS12 support in trust-manager. Could the same options be ported over? Thanks!

Release Helm Chart v0.5.1 / v0.6.0

There are useful changes to the Helm chart pending a release. Specifically, the features in #126.

That PR was merged 4 months ago. Please release a new Helm chart version!

Thank you 🙏

certificates for openvpn server

Hello,

I am wondering if I can use that project to handle openvpn certificates.
I understand that I can mount certificates handled by cert-manager into a volume inside a container's pod using documented attributes.

This project (kylemanna/docker-openvpn) uses easy-rsa to handle its certificates, following its documentation, it's easy (one command and some inputs), but still a manual (or initContainer on k8s) step to initialise the PKI, and then comes the renewal issue, I think it's just not scalable to create a cronjob, or worse, do it manually, to renew certificates when I'm using cert-manager to handle my ingress certificates in the same cluster.

Is there a way to delegate to cert-manager/csi-driver openvpn server PKI ?
I can mount, via attribute, a public CA, derived from said CA, public & private certificates, I guess I could mount the private CA from the secret needed by cert-manager to create an Issuer but I don't think it it necessary in openvpn server's case once the public CA is created.

Will those certificates will be renewed by cert-manager without having to restart the pod ?

Is my case ever been an intended purpose for this project or I got it wrong and may be used strictly for, like, a service mesh proxy certificates ?

Cannot `chmod` a read only filesystem

I get a chmod error: read-only file system when using the CSI driver. This error was not there 3 months ago. The image hash where I see the problem is 71845a27f96b. The image that worked fine before was 15fb01aae1da. Both are tagged the same v0.1.0-alpha.1.

I have tried on k8s 1.16.7 and 1.17.7. Cert-Manager 0.13.1, 0.15 and today 0.16. The only constant is the CSI driver so I guess the error is here.

This is the pod that I'm using, pretty simple:

apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app
  namespace: default
  labels:
    app: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/tls"
        name: tls
      command: [ "sleep", "1000000" ]
  volumes:
    - name: tls
      csi:
        driver: csi.cert-manager.io
        volumeAttributes:
              csi.cert-manager.io/issuer-name: ca-issuer
              csi.cert-manager.io/issuer-kind: ClusterIssuer
              csi.cert-manager.io/dns-names: my-service.sandbox.svc.cluster.local

Here's the log from the cert-manager-csi container. I tried to trace the error down to mount.go but I cannot understand what calls chmod. I am not familiar with the Go language :(

I0731 17:16:54.923292       1 server.go:129] server: call: /csi.v1.Node/NodePublishVolume
I0731 17:16:54.923332       1 server.go:130] server: request: {"target_path":"/var/lib/kubelet/pods/c192b6d3-ea53-4956-b624-7c2697b10c9a/volumes/kubernetes.io~csi/tls/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.cert-manager.io/dns-names":"my-service.sandbox.svc.cluster.local","csi.cert-manager.io/issuer-kind":"ClusterIssuer","csi.cert-manager.io/issuer-name":"ca-issuer","csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"my-csi-app","csi.storage.k8s.io/pod.namespace":"default","csi.storage.k8s.io/pod.uid":"c192b6d3-ea53-4956-b624-7c2697b10c9a","csi.storage.k8s.io/serviceAccount.name":"default"},"volume_id":"csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da"}
I0731 17:16:54.925704       1 nodeserver.go:100] node: created volume: /csi-data-dir/csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da
I0731 17:16:54.925736       1 nodeserver.go:102] node: creating key/cert pair with cert-manager: /csi-data-dir/csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da
I0731 17:16:55.454180       1 certmanager.go:80] cert-manager: waiting for CertificateRequest to become ready csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da
I0731 17:16:55.454321       1 certmanager.go:293] cert-manager: polling CertificateRequest csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da/default for ready status
I0731 17:16:55.457314       1 certmanager.go:90] cert-manager: metadata written to file /csi-data-dir/csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da/metadata.json
I0731 17:16:55.457584       1 certmanager.go:105] cert-manager: CA certificate written to file /csi-data-dir/csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da/data/ca.pem
I0731 17:16:55.459810       1 certmanager.go:113] cert-manager: certificate written to file /csi-data-dir/csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da/data/crt.pem
I0731 17:16:55.459960       1 certmanager.go:120] cert-manager: private key written to file: /csi-data-dir/csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da/data/key.pem
E0731 17:16:55.459973       1 renew.go:181] volume already being watched, aborting second watcher: csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da
I0731 17:16:55.460178       1 nodeserver.go:147] node: publish volume request ~ target:/var/lib/kubelet/pods/c192b6d3-ea53-4956-b624-7c2697b10c9a/volumes/kubernetes.io~csi/tls/mount volumeId:csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da attributes:map[csi.cert-manager.io/ca-file:ca.pem csi.cert-manager.io/certificate-file:crt.pem csi.cert-manager.io/dns-names:my-service.sandbox.svc.cluster.local csi.cert-manager.io/duration:2160h0m0s csi.cert-manager.io/is-ca:false csi.cert-manager.io/issuer-group:cert-manager.io csi.cert-manager.io/issuer-kind:ClusterIssuer csi.cert-manager.io/issuer-name:ca-issuer csi.cert-manager.io/privatekey-file:key.pem csi.cert-manager.io/renew-before:720h0m0s csi.storage.k8s.io/ephemeral:true csi.storage.k8s.io/pod.name:my-csi-app csi.storage.k8s.io/pod.namespace:default csi.storage.k8s.io/pod.uid:c192b6d3-ea53-4956-b624-7c2697b10c9a csi.storage.k8s.io/serviceAccount.name:default]
I0731 17:16:55.460205       1 mount.go:84] Mounting cmd (mount) with arguments ([-o bind,ro /csi-data-dir/csi-f2084c47363e5076b4aa1039f57947a57e3520c681faed7f25743b971bba22da/data /var/lib/kubelet/pods/c192b6d3-ea53-4956-b624-7c2697b10c9a/volumes/kubernetes.io~csi/tls/mount])
E0731 17:16:55.474124       1 server.go:133] server: error: chmod /var/lib/kubelet/pods/c192b6d3-ea53-4956-b624-7c2697b10c9a/volumes/kubernetes.io~csi/tls/mount: read-only file system

Volume empty

Environment

Software

  1. Kubernetes: v1.23
  2. cert-manager: v1.10.1 installed using helm chart jetstack/cert-manager from https://charts.jetstack.io with all default values.
  3. csi-driver: v0.5.0 installed using helm chart jetstack/cert-manager-csi-driver from https://charts.jetstack.io with all default values.

Resources

The following resources have been truncated, omitting some metadata and status information.

ClusterIssuer/self-signer

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: self-signer
status:
  conditions:
    - type: Ready
spec:
  selfSigned: {}

Certificate/cert-manager/cluster-ca

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cluster-ca
  namespace: cert-manager
status:
  conditions:
    - type: Ready
  notAfter: '2023-03-28T14:18:47Z'
  notBefore: '2022-12-28T14:18:47Z'
  renewalTime: '2023-02-26T14:18:47Z'
  revision: 1
spec:
  commonName: CA de1.engity.red
  isCA: true
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: self-signer
  privateKey:
    algorithm: ECDSA
    size: 256
  secretName: cluster-ca

ℹ️ Secret cert-manager/cluster-ca exists and has all required fields. ✅

ClusterIssuer/ca

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca
status:
  conditions:
    - type: Ready
spec:
  ca:
    secretName: cluster-ca

Relevant PODs

  1. cert-manager/cert-manager-64d459d7f-5rtc6: Running ✅
  2. cert-manager/cert-manager-cainjector-7d9466748-m79r5: Running ✅
  3. cert-manager/cert-manager-csi-driver-5fgxg and cert-manager-csi-driver-w56v6: Running ✅
  4. cert-manager/cert-manager-webhook-d77bbf4cb-25j2h: Running ✅

Scenario

Steps to reproduce

  1. I've created a Pod with the following config:
    apiVersion: v1
    kind: Pod
    metadata:
      name: a-test-pod
      namespace: sandbox
    spec:
      containers:
        - name: app
          image: busybox
          volumeMounts:
          - mountPath: "/tls"
            name: tls
          command: [ "sleep", "1000000" ]
      volumes:
        - name: tls
          csi:
            driver: csi.cert-manager.io
            readOnly: true
            volumeAttributes:
                  csi.cert-manager.io/issuer-name: ca
                  csi.cert-manager.io/issuer-kind: ClusterIssuer
                  csi.cert-manager.io/common-name: a-test

Observed behavior

  1. Pod sandbox/a-test-pod is created.
  2. Executing ls -la /tls in this Pod shows an empty folder.

Expected behavior

  1. Pod sandbox/a-test-pod is created.
  2. Executing ls -la /tls in this Pod should show:
    1. tls.key
    2. tls.crt
    3. ca.crt

Debug data

cert-manager/cert-manager-csi-driver-w56v6/cert-manager-csi-driver

I1228 15:04:14.354697       1 nodeserver.go:83] driver "msg"="Registered new volume with storage backend" "pod_name"="a-test-pod"
I1228 15:04:14.354780       1 manager.go:302] manager "msg"="Processing issuance" "volume_id"="csi-6404d0e86c3f01668d1899859188730a22e0f179db1f7022a36072b0d9c254c4"
I1228 15:04:14.547266       1 manager.go:340] manager "msg"="Created new CertificateRequest resource" "volume_id"="csi-6404d0e86c3f01668d1899859188730a22e0f179db1f7022a36072b0d9c254c4"
I1228 15:04:15.548181       1 nodeserver.go:100] driver "msg"="Volume registered for management" "pod_name"="a-test-pod"
I1228 15:04:15.548199       1 nodeserver.go:113] driver "msg"="Ensuring data directory for volume is mounted into pod..." "pod_name"="a-test-pod"
I1228 15:04:15.548412       1 nodeserver.go:132] driver "msg"="Bind mounting data directory to the pod's mount namespace" "pod_name"="a-test-pod"
I1228 15:04:15.549875       1 nodeserver.go:138] driver "msg"="Volume successfully provisioned and mounted" "pod_name"="a-test-pod"

cert-manager/cert-manager-64d459d7f-5rtc6/cert-manager-controller

I1228 15:04:14.497104       1 conditions.go:263] Setting lastTransitionTime for CertificateRequest "49fdaf0d-8548-45bc-ab8f-0eef4793b125" condition "Approved" to 2022-12-28 15:04:14.497097003 +0000 UTC m=+2757.852054054
I1228 15:04:14.577304       1 conditions.go:263] Setting lastTransitionTime for CertificateRequest "49fdaf0d-8548-45bc-ab8f-0eef4793b125" condition "Ready" to 2022-12-28 15:04:14.57729735 +0000 UTC m=+2757.932254386

Rebuild the cert-manager-csi driver using csi-lib

We have started csi-lib in order to provide the common parts of building a cert-manager-related CSI driver. This project should use that library for the parts it provides, gluing the specifics of this project into the generic code in the library.

MountVolume.SetUp failed: cannot set blockOwnerDeletion: cannot find RESTMapping for APIVersion core/v1 Kind Pod

I'm attempting to run cert-manager-csi with cert-manager v0.14.3 on OpenShift 4.4.1.

When attempting to deploy the cert-manager-csi/deploy/example/example-app.yaml, I get the following error message in the Pod status

Warning FailedMount 3s (x7 over 36s) kubelet, worker1.cdj-ocp441a.cp.fyre.ibm.com MountVolume.SetUp failed for volume "tls" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Unknown desc = failed to create new certificate: certificaterequests.cert-manager.io "csi-8b7360bf145d2c9b73d6aa33d309c2c4bfdb15e32a6211d06437b83c4dca4e5a" is forbidden: cannot set blockOwnerDeletion in this case because cannot find RESTMapping for APIVersion core/v1 Kind Pod: no matches for kind "Pod" in version "core/v1"

To recreate on OpenShift 4.1.1:

  1. Create the following resources to allow the pod to mount a csi volume:
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: restricted + csi
  name: cert-manager-csi-client-scc
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
users: []
groups: []
volumes:
- configMap
- downwardAPI
- csi
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
allowedCapabilities: null
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: MustRunAs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cert-manager-csi-client-scc
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - cert-manager-csi-client
  resources:
  - securitycontextconstraints
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cert-manager-csi-rolebinding
  namespace: sandbox
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cert-manager-csi-client-scc
subjects:
- kind: Group
  name: system:serviceaccounts:sandbox
  2. Apply the sample files: kubectl apply -f deploy/example/example-app.yaml

Result: The Pod fails to start with the aforementioned error.

Feature Request: Add volumeAttributes to the generated CertificateRequest

I have a custom external issuer that requires the collection of non-standard data as part of the certificate request. I have implemented this in the issuer as annotations on the Certificate CRD, which get passed on to the CertificateRequest and subsequently on to my external issuer.

When using the CSI driver, there is no way to influence the attributes passed on to the CertificateRequest, therefore this issue is asking for this scenario to be supported.

        - name: my-cert-via-csi-driver
          csi:
            readOnly: true
            driver: csi.cert-manager.io
            volumeAttributes:
              csi.cert-manager.io/issuer-name: external-custom-issuer
              csi.cert-manager.io/issuer-kind: ExternalCustomIssuer
              csi.cert-manager.io/issuer-group: external.custom.issuer.com
              csi.cert-manager.io/common-name: not-used
              csi.cert-manager.io/duration: 8760h
              csi.cert-manager.io/renew-before: 8759h
              external.custom.issuer.com/serviceId: 12d780c0-4c9d-4fbc-a54b-ad88f120e4c7
              external.custom.issuer.com/teamName: TestTeam1
              external.custom.issuer.com/serviceName: TestService1
              external.custom.issuer.com/environment: DEV

In the above example, I would expect the non-cert-manager annotations to be forwarded on to the CertificateRequest; instead the annotations list is set to nil (see https://github.com/cert-manager/csi-driver/blob/main/pkg/requestgen/generator.go#L94).

I may be missing something obvious, or a better way of supplying this custom data. If not, I would be happy to contribute a PR if this suggestion is appropriate.

Edit: adding an example of what I'd like to see. I have validated this locally.

diff --git a/pkg/requestgen/generator.go b/pkg/requestgen/generator.go
index 8ae241a..6bc069a 100644
--- a/pkg/requestgen/generator.go
+++ b/pkg/requestgen/generator.go
@@ -72,6 +72,15 @@ func RequestForMetadata(meta metadata.Metadata) (*manager.CertificateRequestBund
        if err != nil {
                return nil, fmt.Errorf("%q: %w", csiapi.IPSANsKey, err)
        }
+       passthroughAnnotations := make(map[string]string)
+
+       for key, val := range attrs {
+               group := strings.Split(key, "/")[0]
+
+               if group != "csi.cert-manager.io" {
+                       passthroughAnnotations[key] = val
+               }
+       }

        return &manager.CertificateRequestBundle{
                Request: &x509.CertificateRequest{
@@ -91,7 +100,7 @@ func RequestForMetadata(meta metadata.Metadata) (*manager.CertificateRequestBund
                        Kind:  attrs[csiapi.IssuerKindKey],
                        Group: attrs[csiapi.IssuerGroupKey],
                },
-               Annotations: nil,
+               Annotations: passthroughAnnotations,
        }, nil
 }

Again, I'm happy to PR this change and welcome feedback. There is probably no reason to omit the csi.cert-manager.io namespaced attributes.

Edit: PR link: #212

Is it too late to align cert-manager annotations?

There is some drift between the main cert-manager controller annotations and the CSI driver volume attributes. Is there any desire to attempt to unify them? I'll be opening some PRs that will relate to this question.

Edit: it would also be nice to shore up the API constants before v1alpha is locked in. I'd be happy to take on writing the v1beta api around the cert-manager code, that way they all match.

rpc error: code = Unknown desc = mkdir /mnt: read-only file system

Hello,

I came across a weird error I desperately need help with. The error message is:

Apr 22 15:41:30 node1 k3s[2369]: E0422 15:41:30.624820    2369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/4c705953-5bab-4634-94a5-4f24a2e37989-tls podName:4c705953-5bab-4634-94a5-4f24a2e37989 nodeName:}" failed. No retries permitted until 2024-04-22 15:42:02.624778249 +0100 BST m=+154788.170284607 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "tls" (UniqueName: "kubernetes.io/csi/4c705953-5bab-4634-94a5-4f24a2e37989-tls") pod "test-pvtsz" (UID: "4c705953-5bab-4634-94a5-4f24a2e37989") : rpc error: code = Unknown desc = mkdir /mnt: read-only file system

Error message with log-level 5:

E0423 08:32:00.502979       1 server.go:109] "msg"="failed processing request" "error"="mkdir /mnt: read-only file system" "logger"="driver" "request"={} "rpc_method"="/csi.v1.Node/NodePublishVolume"

I managed to identify where the error originates from: https://github.com/cert-manager/csi-lib/blob/bff76660c0a7288b185ec888313cd97b228664e2/driver/nodeserver.go#L114

Prior to the above-mentioned error I see the following informational log:

I0423 12:57:27.142902       1 nodeserver.go:113] "msg"="Ensuring data directory for volume is mounted into pod..." "logger"="driver" "pod_name"="test-vkjjq"

Given that I do not see the info log from either line 127 or line 132, and based on the error message, I have to assume that this is where the permission error is coming from: https://github.com/cert-manager/csi-lib/blob/bff76660c0a7288b185ec888313cd97b228664e2/driver/nodeserver.go#L117

Accordingly, I assumed that req.GetTargetPath() for some reason returns /mnt.

I tried to reproduce the permission error on one of the nodes by extracting the code into a standalone executable and running it on the node, but the permission issue did not occur.

I'm not sure why the CSI driver gets a permission error given that it's running as root. I'm also not sure what it wants from the /mnt directory... is it normal that req.GetTargetPath() returns /mnt? Where is req coming from? What sends that request?

Environment

$ kubectl version
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.3+k3s1
  • Cert Manager version: v1.14.4
  • CSI Driver version: v0.8.0

I tried upgrading to v1.15.0-alpha.0 but I experienced the same error.

Issuers and Certificate Manifest

apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: test-cluster-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cluster-ca
  namespace: test
spec:
  isCA: true
  commonName: ca.svc.cluster.local
  secretName: root-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: test-cluster-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-issuer
  namespace: test
spec:
  ca:
    secretName: root-secret

Replicaset/Pod Manifest

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test
  namespace: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      serviceAccountName: test
      containers:
        - name: test
          image: ubuntu:latest
          ports:
            - containerPort: 8000
          volumeMounts:
            - mountPath: "/tls"
              name: tls
      volumes:
        - name: tls
          csi:
            driver: csi.cert-manager.io
            readOnly: true
            volumeAttributes:
              csi.cert-manager.io/issuer-name: test-issuer
              csi.cert-manager.io/dns-names: ${POD_NAME}.${POD_NAMESPACE}.svc.cluster.local
              csi.cert-manager.io/uri-sans: "spiffe://cluster.local/ns/${POD_NAMESPACE}/pod/${POD_NAME}/${POD_UID}"
              csi.cert-manager.io/common-name: "${SERVICE_ACCOUNT_NAME}.${POD_NAMESPACE}"

ability to specify pod IP in volume attributes

It is possible to specify specific IPs in the volume attributes using the csi.cert-manager.io/ip-sans attribute. Would it be possible to use podIP or podIPs from the pod's status field here, to make sure the certificate is issued exactly for the IP that is assigned to the pod requesting the certificate?
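
For reference, this is how IP SANs are specified today; a sketch with a placeholder issuer and address (the requested podIP interpolation is exactly what this issue asks for and is therefore not shown):

volumeAttributes:
  # placeholder issuer
  csi.cert-manager.io/issuer-name: my-issuer
  # current behaviour: IP SANs must be listed explicitly
  csi.cert-manager.io/ip-sans: 10.0.0.1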

Support nodeSelector in helm chart

To ensure the Helm release succeeds when not all types of nodes are supported by csi-driver (e.g. Windows nodes, unusual processor architectures, etc.), it makes sense to support a node selector in addition to the existing tolerations.
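
A hypothetical values sketch (the nodeSelector key name mirrors the existing tolerations option and is an assumption about a future chart change):

# values.yaml (hypothetical)
nodeSelector:
  kubernetes.io/os: linux
tolerations:
  - operator: Exists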

Push new tag for chart fixes

Currently, the v0.5.0 chart's apiVersion fails helm lint, which is preventing it from being included in our clusters. This is the latest available chart version.

$ helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended
[ERROR] Chart.yaml: chart type is not valid in apiVersion 'v1'. It is valid in apiVersion 'v2'
[WARNING] templates/csidriver.yaml: storage.k8s.io/v1beta1 CSIDriver is deprecated in v1.19+, unavailable in v1.22+; use storage.k8s.io/v1 CSIDriver

Error: 1 chart(s) linted, 1 chart(s) failed

I noticed that this was fixed in commit 17c9f74 months ago, but we're still waiting on a tag so we can get the chart.

TLS Files not being renewed

I'm using cert-manager-csi to mount short-lived certs signed by Vault, with the following in my deployment manifest:


  volumes:
    - name: tls
      csi:
        driver: csi.cert-manager.io
        volumeAttributes:
          csi.cert-manager.io/issuer-name: dbc-vault-issuer
          csi.cert-manager.io/duration: 320s
          csi.cert-manager.io/renew-before: 190s
          csi.cert-manager.io/dns-names: redapi.dbclient.com,redapi2.dbclient.com


Packet capture shows the AppRole login used, with the response below:

{
  "request_id": "edfe5ddb-76c6-7b20-9795-5142bd10d725",
  "lease_id": "",
  "renewable": false,     #Assume this is for the AppRole itself
  "lease_duration": 0,
  "data": null,
  "wrap_info": null,
  "warnings": null,
  "auth": {
    "client_token": "s.uQIFnphv9EHAD88gdtCA3Dvm",
    "accessor": "Vs9hLekvYn6hFt0XhNYHM30n",
    "policies": [
      "default",
      "kube-allow-sign"
    ],
    "token_policies": [
      "default",
      "kube-allow-sign"
    ],
    "metadata": {
      "role_name": "kube-role"
    },
    "lease_duration": 300,
    "renewable": true,
    "entity_id": "d8b5ab38-7979-b149-7494-f0c19c5b1e1a",
    "token_type": "service",
    "orphan": true
  }
}

The returned certs:

{
  "request_id": "33094682-002d-6347-f4de-fa4f19b94107",
  "lease_id": "pki_int/sign/dbclients/8jcXTkY4tG28L91d3frYzzu8",
  "renewable": false,
  "lease_duration": 319,
  "data": {
    "ca_chain": [
 {truncated}

The TLS creds are mounted successfully and the cert-manager-csi logs indicate a second request has been made prior to expiry of the certs:

I0129 14:24:23.314302       1 server.go:114] server: request: {"target_path":"/var/lib/kubelet/pods/8b3cb69b-eb44-437c-9e7c-06e9c212c84f/volumes/kubernetes.io~csi/tls/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.cert-manager.io/dns-names":"redapi.dbclient.com,redapi2.dbclient.com","csi.cert-manager.io/duration":"320s","csi.cert-manager.io/issuer-name":"dbc-vault-issuer","csi.cert-manager.io/renew-before":"190s","csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"app-example-deployment-7dd587b8b7-hwwjl","csi.storage.k8s.io/pod.namespace":"default","csi.storage.k8s.io/pod.uid":"8b3cb69b-eb44-437c-9e7c-06e9c212c84f","csi.storage.k8s.io/serviceAccount.name":"web"},"volume_id":"csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1"}
I0129 14:24:23.318975       1 nodeserver.go:84] node: created volume: /csi-data-dir/csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1
I0129 14:24:23.319012       1 nodeserver.go:86] node: creating key/cert pair with cert-manager: /csi-data-dir/csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1
I0129 14:24:23.657499       1 certmanager.go:141] cert-manager: created CertificateRequest csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1
I0129 14:24:23.657530       1 certmanager.go:143] cert-manager: waiting for CertificateRequest to become ready csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1
I0129 14:24:23.657543       1 certmanager.go:267] cert-manager: polling CertificateRequest csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1/default for ready status
I0129 14:24:24.662819       1 certmanager.go:267] cert-manager: polling CertificateRequest csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1/default for ready status
I0129 14:24:24.667999       1 certmanager.go:160] cert-manager: metadata written to file /csi-data-dir/csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1/metadata.json
I0129 14:24:24.668568       1 certmanager.go:181] cert-manager: certificate written to file /csi-data-dir/csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1/data/crt.pem
I0129 14:24:24.668678       1 certmanager.go:188] cert-manager: private key written to file: /csi-data-dir/csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1/data/key.pem
I0129 14:24:24.668694       1 renew.go:172] renewer: starting to watch certificate for renewal: "csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1"
I0129 14:24:24.669069       1 nodeserver.go:131] node: publish volume request ~ target:/var/lib/kubelet/pods/8b3cb69b-eb44-437c-9e7c-06e9c212c84f/volumes/kubernetes.io~csi/tls/mount volumeId:csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1 attributes:map[csi.cert-manager.io/ca-file:ca.pem csi.cert-manager.io/certificate-file:crt.pem csi.cert-manager.io/dns-names:redapi.dbclient.com,redapi2.dbclient.com csi.cert-manager.io/duration:320s csi.cert-manager.io/is-ca:false csi.cert-manager.io/issuer-group:cert-manager.io csi.cert-manager.io/issuer-kind:Issuer csi.cert-manager.io/issuer-name:dbc-vault-issuer csi.cert-manager.io/privatekey-file:key.pem csi.cert-manager.io/renew-before:190s csi.storage.k8s.io/ephemeral:true csi.storage.k8s.io/pod.name:app-example-deployment-7dd587b8b7-hwwjl csi.storage.k8s.io/pod.namespace:default csi.storage.k8s.io/pod.uid:8b3cb69b-eb44-437c-9e7c-06e9c212c84f csi.storage.k8s.io/serviceAccount.name:web]
I0129 14:24:24.669152       1 mount.go:68] Mounting cmd (mount) with arguments ([-o bind,ro /csi-data-dir/csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1/data /var/lib/kubelet/pods/8b3cb69b-eb44-437c-9e7c-06e9c212c84f/volumes/kubernetes.io~csi/tls/mount])
I0129 14:24:24.677405       1 nodeserver.go:143] node: mount successful default:app-example-deployment-7dd587b8b7-hwwjl:csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1
I0129 14:24:24.677438       1 server.go:119] server: response: {}
I0129 14:24:28.168855       1 server.go:113] server: call: /csi.v1.Node/NodeGetCapabilities
I0129 14:24:28.168877       1 server.go:114] server: request: {}
E0129 14:24:28.169495       1 server.go:117] server: error: rpc error: code = Unimplemented desc =
I0129 14:24:36.959593       1 server.go:113] server: call: /csi.v1.Node/NodeUnpublishVolume
I0129 14:24:36.959631       1 server.go:114] server: request: {"target_path":"/var/lib/kubelet/pods/c0be1b32-3af6-4f48-a002-321ee0f503ed/volumes/kubernetes.io~csi/tls/mount","volume_id":"csi-f6efa90d1d8d57da6b3ee148b9db840ab0864dec3ae9d712fbd5ece7c4090761"}
I0129 14:24:36.977444       1 renew.go:207] renewer: killing watcher for "csi-f6efa90d1d8d57da6b3ee148b9db840ab0864dec3ae9d712fbd5ece7c4090761"
I0129 14:24:36.977472       1 mount.go:54] Unmounting /var/lib/kubelet/pods/c0be1b32-3af6-4f48-a002-321ee0f503ed/volumes/kubernetes.io~csi/tls/mount
I0129 14:24:36.985463       1 nodeserver.go:170] node: volume /var/lib/kubelet/pods/c0be1b32-3af6-4f48-a002-321ee0f503ed/volumes/kubernetes.io~csi/tls/mount/csi-f6efa90d1d8d57da6b3ee148b9db840ab0864dec3ae9d712fbd5ece7c4090761 has been unmounted.
I0129 14:24:36.985507       1 nodeserver.go:172] node: deleting volume csi-f6efa90d1d8d57da6b3ee148b9db840ab0864dec3ae9d712fbd5ece7c4090761
I0129 14:24:36.986412       1 server.go:119] server: response: {}
I0129 14:25:09.782289       1 server.go:113] server: call: /csi.v1.Node/NodeGetCapabilities
I0129 14:25:09.782367       1 server.go:114] server: request: {}
E0129 14:25:09.783236       1 server.go:117] server: error: rpc error: code = Unimplemented desc =
I0129 14:25:20.888931       1 server.go:113] server: call: /csi.v1.Node/NodeGetCapabilities
I0129 14:25:20.888960       1 server.go:114] server: request: {}
E0129 14:25:20.890100       1 server.go:117] server: error: rpc error: code = Unimplemented desc =
I0129 14:26:29.866556       1 server.go:113] server: call: /csi.v1.Node/NodeGetCapabilities
I0129 14:26:29.866595       1 server.go:114] server: request: {}
E0129 14:26:29.867498       1 server.go:117] server: error: rpc error: code = Unimplemented desc =
I0129 14:26:33.000903       1 certmanager.go:197] cert-manager: renewing certicate csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1
I0129 14:26:34.445308       1 certmanager.go:141] cert-manager: created CertificateRequest csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1
I0129 14:26:34.445393       1 certmanager.go:143] cert-manager: waiting for CertificateRequest to become ready csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1
I0129 14:26:34.445410       1 certmanager.go:267] cert-manager: polling CertificateRequest csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1/default for ready status
I0129 14:26:34.450772       1 certmanager.go:160] cert-manager: metadata written to file /csi-data-dir/csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1/metadata.json
I0129 14:26:34.451772       1 certmanager.go:181] cert-manager: certificate written to file /csi-data-dir/csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1/data/crt.pem
I0129 14:26:34.452034       1 certmanager.go:188] cert-manager: private key written to file: /csi-data-dir/csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1/data/key.pem
E0129 14:26:34.452064       1 renew.go:158] volume already being watched, aborting second watcher: csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1
I0129 14:26:53.022217       1 server.go:113] server: call: /csi.v1.Node/NodeGetCapabilities

The Wireshark capture shows no attempt by cert-manager to sign a new certificate, nor an attempt to log in again using the AppRole. The files written inside /csi-data-dir/csi-be368c20fe7f8a57a200b808062dc31328db59ad75ff38341fd388c8b14a96f1/data/ also do not change (so a new cert isn't being conjured up out of nowhere either).

Details regarding one of the CSI pods:

[root@BifMaster001 vault-helm]# kud cert-manager-csi-tnnx2 -n cert-manager
Name:         cert-manager-csi-tnnx2
Namespace:    cert-manager
Priority:     0
Node:         bifworker001/10.204.5.88
Start Time:   Thu, 23 Jan 2020 17:41:01 +0000
Labels:       app=cert-manager-csi
              controller-revision-hash=5688994456
              pod-template-generation=5
Annotations:  <none>
Status:       Running
IP:           10.244.1.56
IPs:
  IP:           10.244.1.56
Controlled By:  DaemonSet/cert-manager-csi
Containers:
  node-driver-registrar:
    Container ID:  docker://0ab3aaf5c9abcc61273f481dd4ec023a66b89e390092cc681d2c35c98d4e7b4a
    Image:         quay.io/k8scsi/csi-node-driver-registrar:v1.0.2
    Image ID:      docker-pullable://quay.io/k8scsi/csi-node-driver-registrar@sha256:ffecfbe6ae9f446e5102cbf2c73041d63ccf44bcfd72e2f2a62174a3a185eb69
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=/plugin/csi.sock
      --kubelet-registration-path=/var/lib/kubelet/plugins/cert-manager-csi/csi.sock
    State:          Running
      Started:      Thu, 23 Jan 2020 17:41:01 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /plugin from plugin-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from cert-manager-csi-token-8gpbx (ro)
  cert-manager-csi:
    Container ID:  docker://f4abc35f65634a4331d0e400d4879e6941c995d84b7839203ac10d768e41d150
    Image:         gcr.io/jetstack-josh/cert-manager-csi:v0.1.0-alpha.1
    Image ID:      docker-pullable://gcr.io/jetstack-josh/cert-manager-csi@sha256:ff9027232b275e904e970c6ab92a268183ed9be3b70c56cc6548504484d3bc61
    Port:          <none>
    Host Port:     <none>
    Args:
      --node-id=$(NODE_ID)
      --endpoint=$(CSI_ENDPOINT)
      --data-root=/csi-data-dir
      --v=5
    State:          Running
      Started:      Thu, 23 Jan 2020 17:41:02 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      NODE_ID:        (v1:spec.nodeName)
      CSI_ENDPOINT:  unix://plugin/csi.sock
    Mounts:
      /csi-data-dir from csi-data-dir (rw)
      /plugin from plugin-dir (rw)
      /var/lib/kubelet/pods from pods-mount-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from cert-manager-csi-token-8gpbx (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/cert-manager-csi
    HostPathType:  DirectoryOrCreate
  pods-mount-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  Directory
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry
    HostPathType:  Directory
  csi-data-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/cert-manager-csi
    HostPathType:  DirectoryOrCreate
  cert-manager-csi-token-8gpbx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cert-manager-csi-token-8gpbx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:          <none>

Additionally, as an aside, if we spin up more than one application it appears that occasionally the received certs are swapped (e.g. Pod B has Pod A's issued certs and vice versa). Perhaps I should raise a separate issue/query regarding that.

E2E Test Cleanup

The current e2e test runner hasn't been updated for some time, and has several issues:

  1. Old versions being tested
  2. Limited ability to vary tested Kubernetes / cert-manager versions
  3. In-script implementations of dependency provisioning (better to use make / makefile modules)
  4. The e2e suite is a regular go test with no compile flags, so go test ./... in the root of the repo fails

The minimum we should do is bump the versions being used, but it'd be worth changing this more widely. Binary dependencies (kubectl, etc) can be provisioned in make, and the gold standard would be for the kind cluster to be created in Go code when the test is set up.

Explain differences between using this and cert-manager alone

It should be clear to potential users why they may want to use a CSI driver instead of the existing Kubernetes secrets store implementation.

We should explain the differences and trade-offs from a security standpoint, and what the implications of using CSI instead are from a security/threat model PoV.

csi-driver fails to mount certs volume after running for some time

After multiple days of working correctly, csi-driver stops working on some Kubernetes nodes:

I0124 20:13:27.725185       1 app.go:59] main "msg"="building driver"  
I0124 20:13:27.727245       1 filesystem.go:91] storage "msg"="Mounted new tmpfs"  "path"="csi-data-dir/inmemfs"
I0124 20:13:27.828334       1 app.go:103] main "msg"="running driver"  
E0204 13:48:20.214196       1 server.go:109] driver "msg"="failed processing request" "error"="mkdir csi-data-dir/inmemfs: no such file or directory" "request"={} "rpc_method"="/csi.v1.Node/NodePublishVolume" 
E0204 13:48:20.315307       1 server.go:109] driver "msg"="failed processing request" "error"="mkdir csi-data-dir/inmemfs: no such file or directory" "request"={} "rpc_method"="/csi.v1.Node/NodePublishVolume" 
E0204 13:48:20.720442       1 server.go:109] driver "msg"="failed processing request" "error"="mkdir csi-data-dir/inmemfs: no such file or directory" "request"={} "rpc_method"="/csi.v1.Node/NodePublishVolume" 
E0204 13:48:20.821168       1 server.go:109] driver "msg"="failed processing request" "error"="mkdir csi-data-dir/inmemfs: no such file or directory" "request"={} "rpc_method"="/csi.v1.Node/NodePublishVolume" 
E0204 13:48:21.732976       1 server.go:109] driver "msg"="failed processing request" "error"="mkdir csi-data-dir/inmemfs: no such file or directory" "request"={} "rpc_method"="/csi.v1.Node/NodePublishVolume" 
E0204 13:48:21.835129       1 server.go:109] driver "msg"="failed processing request" "error"="mkdir csi-data-dir/inmemfs: no such file or directory" "request"={} "rpc_method"="/csi.v1.Node/NodePublishVolume" 
[...]

The csi-data-dir volume is mounted on a host directory (/tmp/cert-manager-csi-driver, DirectoryOrCreate). This host is configured to automatically clean up old files in /tmp.

Is it possible to either:

  • periodically touch the directory csi-data-dir and/or its content so that it is not deleted by the /tmp cleanup job
  • when receiving the error mkdir csi-data-dir/inmemfs: no such file or directory, recreate the tmpfs filesystem so that subsequent requests work

Trim dns-names after splitting

Hello,

I want to use YAML multi-line syntax for better readability of the csi.cert-manager.io/dns-names field. For example:

csi:
  readOnly: true
  driver: csi.cert-manager.io
  volumeAttributes:
    csi.cert-manager.io/issuer-name: log-ca-issuer
    csi.cert-manager.io/key-usages: server auth,client auth
    csi.cert-manager.io/key-encoding: PKCS8
    csi.cert-manager.io/common-name: log-opensearch-node
    csi.cert-manager.io/dns-names: >-
      ${POD_NAME},
      opensearch-cluster-master-headless,
      ${POD_NAME}.opensearch-cluster-master-headless,
      ${POD_NAME}.opensearch-cluster-master-headless.${POD_NAMESPACE}.svc.cluster.local,
      opensearch-cluster-master-headless.${POD_NAMESPACE}.svc.cluster.local

This YAML feature joins the data like this:
${POD_NAME}, opensearch-cluster-master-headless, ${POD_NAME}.opensearch-cluster-master-headless, ${POD_NAME}.opensearch-cluster-master-headless.${POD_NAMESPACE}.svc.cluster.local, opensearch-cluster-master-headless.${POD_NAMESPACE}.svc.cluster.local

Is it possible to trim the DNS names after splitting, to avoid the extra spaces produced by YAML?

Investigate and change the default mounted host path for driver

See #73

  1. Investigate and document what happens after changing the host path on an existing driver install
  2. Consider code changes to deal with this case more gracefully; document if and how the driver can be made recoverable.
  3. Change the default, with a clear upgrade path for users (a sketch of one possibility follows).
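
As a sketch of what changing the default could mean in practice, the DaemonSet's csi-data-dir hostPath could default to a path outside /tmp; the exact path below is an assumption, not a decided value:

volumes:
  - name: csi-data-dir
    hostPath:
      # hypothetical non-/tmp default, so node /tmp cleanup jobs cannot remove live volume data
      path: /var/lib/cert-manager-csi-driver
      type: DirectoryOrCreate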

Additional keystore output formats

We want to use this CSI driver with an application that does not directly support PEM certificates, and would like to see support for requesting additional keystore output format(s) - as you can with the cert-manager Certificate resource.

Of the additional keystore formats that cert-manager Certificate supports, I would put pkcs12 as first priority.

Suggested new attributes:
csi.cert-manager.io/keystore-type (default value: pkcs12)
csi.cert-manager.io/keystore-file
csi.cert-manager.io/keystore-password (default value: the default keystore password, changeit)

This implies that an additional keystore should be created if the attribute csi.cert-manager.io/keystore-file is set.

Whether to actually add an attribute for the keystore password is debatable. I would prefer not to, and always set the keystore password to the default value (changeit). Adding a reference to a Secret containing the password, as the cert-manager Certificate does, adds too much complexity/abstraction at this level, in my opinion. And I am not sure whether we can, or want, the CSI driver to read a Secret in the pod namespace.
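
For illustration, the proposed attributes might be used like this; this is the behaviour requested in this issue, not something the driver supported at the time of writing, and the issuer/DNS values are placeholders:

volumeAttributes:
  # placeholder issuer
  csi.cert-manager.io/issuer-name: my-issuer
  csi.cert-manager.io/dns-names: my-service.my-namespace.svc.cluster.local
  # proposed keystore attributes from this issue
  csi.cert-manager.io/keystore-type: pkcs12
  csi.cert-manager.io/keystore-file: keystore.p12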

csidriver object 1.16+ support

Needs these flags:

  attachRequired: false
  podInfoOnMount: true
  volumeLifecycleModes:
  - Ephemeral

volumeLifecycleModes only exists in 1.16+, though, so it can't be set on 1.15 clusters.
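
Assembled into a full manifest, this would look roughly as follows (a sketch; per the helm lint warning quoted earlier on this page, storage.k8s.io/v1beta1 CSIDriver is deprecated in 1.19+ in favour of storage.k8s.io/v1):

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.cert-manager.io
spec:
  attachRequired: false
  podInfoOnMount: true
  volumeLifecycleModes:
    - Ephemeral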

When a pod is recreated, a new CertificateRequest is created -> Let's Encrypt limits are exceeded very fast

Hi, we've encountered the following behavior:

Every time the Pod is recreated (restarted), a new CertificateRequest and Order are created. This happens even if the csi.cert-manager.io/dns-names of the previously killed pod stays the same (i.e. we have already created a certificate for this domain).

This behavior is inconvenient, because a pod may need to be recreated many times during development, which exceeds the certificate authority's rate limits very quickly. For example, Let's Encrypt allows only 50 certificates per week for one domain (https://letsencrypt.org/docs/rate-limits/).

So what can we do in such a case to avoid the certificate being recreated on every pod restart? If nothing can be done currently, I suggest adding a feature that fixes this behavior in a way similar to cert-manager: in cert-manager, certificates are stored in Kubernetes, and if a new CertificateRequest arrives for a domain that already has a certificate in the cluster, no real request is made to the certificate authority and the stored certificate is used instead.

Example of configuring the TLS volume for a pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app.kubernetes.io/component: backend
  annotations:
    argocd.argoproj.io/sync-wave: "10"
spec:
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app.kubernetes.io/component: backend
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: backend
    spec:
      securityContext:
        fsGroup: 1000

      containers:
        - name: nginx
          image: nginx:1.19.10
          imagePullPolicy: IfNotPresent
          command: ["nginx"]
          livenessProbe:
            tcpSocket:
              port: 443
          readinessProbe:
            tcpSocket:
              port: 443
            initialDelaySeconds: 60
            timeoutSeconds: 5
          ports:
            - name: https
              containerPort: 443
              protocol: TCP
          resources:
            limits:
              cpu: 200m
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: tls
              mountPath: "/tls"

      volumes:
      - name: tls
        csi:
          readOnly: true
          driver: csi.cert-manager.io
          volumeAttributes:
            csi.cert-manager.io/issuer-kind: ClusterIssuer
            csi.cert-manager.io/issuer-name: letsencrypt-prod
            csi.cert-manager.io/dns-names: example.com

Receiving timeout error on Pod

I have a Pod which I am trying to add TLS certificates to and am receiving the following error. I have checked the logs of the csi driver daemonset Pod and am not seeing anything apparent. Also, the documentation for the csi-driver does not seem to include troubleshooting steps and so I am at a loss for what to look into next.

"Unable to attach or mount volumes: unmounted volumes=[tls], unattached volumes=[kube-api-access-6hghg tls]: timed out waiting for the condition"

This is what the configuration on the Pod looks like

    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            csi.cert-manager.io
    FSType:
    ReadOnly:          true
    VolumeAttributes:      csi.cert-manager.io/ca-file=tls-ca.pem
                           csi.cert-manager.io/certificate-file=tls.pem
                           csi.cert-manager.io/dns-names=namespace-mutator.namespace-mutator-dev
                           csi.cert-manager.io/duration=720h
                           csi.cert-manager.io/is-ca=false
                           csi.cert-manager.io/issuer-group=awspca.cert-manager.io
                           csi.cert-manager.io/issuer-kind=AWSPCAClusterIssuer
                           csi.cert-manager.io/issuer-name=awspca-csi-driver
                           csi.cert-manager.io/key-encoding=PKCS1
                           csi.cert-manager.io/key-usages=server auth,client auth
                           csi.cert-manager.io/privatekey-file=tls.key
                           csi.cert-manager.io/renew-before=72h
                           csi.cert-manager.io/reuse-private-key=false

Generate certificate without DNS SAN

Generating a certificate without the csi.cert-manager.io/dns-names attribute creates a certificate with an empty DNS SAN instead of no DNS SAN. This can cause problems for cases like etcd where only IP SANs are used.
Is it possible to populate the DNS SAN only if csi.cert-manager.io/dns-names is not empty?

Certificate is re-requested when container restarts

When a container restarts, the certificate is re-requested, which blows through the Let's Encrypt quota. It would be nice to reuse the cert already mounted when the container restarts, i.e. issue once per pod.

I'm not sure if this is possible, but if someone deploys a bad deployment and leaves it, or an app crashes in the wee hours of the morning for example, I have to wait around 12 hours after I stop the repeatedly crashing container before I can request a new certificate.

Support for keyEncoding

Support an additional csi.cert-manager.io/key-encoding attribute that translates to keyEncoding for the Certificate object. I need a key in PKCS#8 format and would love to move to this CSI driver.
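
Other examples on this page (the OpenSearch and AWS PCA configurations) show the attribute in use, so a sketch of the requested usage, with a placeholder issuer, would be:

volumeAttributes:
  # placeholder issuer
  csi.cert-manager.io/issuer-name: my-issuer
  # requested attribute; PKCS8 selects a PKCS#8-encoded private key
  csi.cert-manager.io/key-encoding: PKCS8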

New key being used with old certificate

When a Kubernetes node is restarted, it makes another call to NodePublishVolume with the same volume ID used previously on the pod's first startup (if the pod has not been cleaned up before it starts again). The CSI driver then proceeds to create a new key and reuses the existing certificate request (with the same volume ID), which leaves a new private key on the volume that does not match the certificate.

This happens regardless of whether the certificate and private key have been wiped out (for example, if you are using a temp directory).

I came up with two possible fixes for this:

  1. If you are not using a temp directory, you can re-read the existing key file if you want to reuse the private key; this, coupled with reusing the previous certificate request, would leave the correct pair on the volume.
  2. You can delete the existing certificate request if it exists, and always make a new one when calling NodePublishVolume.

I'm not sure if this is being maintained but I wanted to create this issue to make sure it's documented somewhere.

Feature Request: Please support setting the owner, group and permissions of the TLS volume

There does not seem to be a way to set the owner, group and permissions of the TLS volume.

Non-root containers do not have access to the mount point.

$ ls -ld /tls
dr-xr-x---    3 root     root           140 Apr  8 14:41 /tls

Kubernetes stuff like

securityContext:
  fsGroup: 1000
  fsGroupChangePolicy: "OnRootMismatch"

does not seem to have any effect on these kinds of volumes.
