jetstack / google-cas-issuer

cert-manager issuer for Google CA Service

License: Apache License 2.0

Makefile 57.18% Go 34.36% Shell 7.51% Mustache 0.95%

google-cas-issuer's People

Contributors: albertogeniola, charlieegan3, dependabot[bot], g-soeldner, haydentherapper, inteon, irbekrm, jakexks, james-w, jetstack-bot, joshvanl, klemmari1, maelvls, maesterz, mattbates, mattiasgees, milesarmstrong, sgtcodfish, wallrj

google-cas-issuer's Issues

Certificate revocation from CAS Console

Hey folks! Firstly, thanks for the google-cas-issuer; it really simplifies our integration with Google CAS.

While trialling this, we observed the following behavior when revoking certificates:

  1. If I revoke a certificate that CAS issued to cert-manager from the CAS console, the issuer doesn't know about it. I am guessing there is no event stream or check to ascertain whether the cert is still valid? I just wanted to check whether revocation handling is on the roadmap, or whether there is any high-level plan for it. A workaround for me would be to set the cert TTL very low (which would start hitting the API aggressively), but it would minimize the risk around revocation.

  2. After revocation, when I delete the Certificate object, google-cas-issuer starts spitting out errors like the one below:

google-cas-issuer-d866f5f58-45bdm google-cas-issuer 
{
  "level": "error",
  "ts": 1615083602.4735746,
  "logger": "controller-runtime.manager.controller.certificaterequest",
  "msg": "Reconciler error",
  "reconciler group": "cert-manager.io",
  "reconciler kind": "CertificateRequest",
  "name": "demo-certificate-m27js",
  "namespace": "default",
  "error": "CertificateRequest.cert-manager.io \"demo-certificate-m27js\" not found",
  "stacktrace": "github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:297\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:248\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99"
}

One thing to note: if I just delete the Secret, cert-manager gets a new certificate from CAS. So maybe the way to rotate/revoke a cert is simply to delete the Secret?

  3. When I delete the Secret, cert-manager gets a new cert issued via CAS but leaves the old certificate as-is in the CAS issued-certificates list. I have to revoke it manually.

Overall, I would like to understand the best way to handle revocation gracefully via cert-manager.

Thanks
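In the meantime, revocation has to be done on the CAS side. A hedged sketch using the gcloud CLI (the certificate ID, pool, and location are placeholders; flag names are from memory of the GA CLI, so check `gcloud privateca certificates revoke --help` for your version):

```shell
gcloud privateca certificates revoke \
  --certificate=CERTIFICATE_ID \
  --issuer-pool=POOL_ID \
  --issuer-location=LOCATION \
  --reason=key-compromise
```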

CAS with ingress annotation fails with panic

When trying to deploy Jenkins using the ingress-shim and the CAS issuer with the annotations below, the CertificateRequest gets created properly, but the google-cas-issuer pod crashes as soon as the request is created.

Ingress object

apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      cert-manager.io/issuer: oncp-dev
      cert-manager.io/issuer-group: cas-issuer.jetstack.io
      cert-manager.io/issuer-kind: GoogleCASClusterIssuer
      kubernetes.io/ingress.class: nginx
      meta.helm.sh/release-name: jenkins
      meta.helm.sh/release-namespace: jenkins
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
    creationTimestamp: "2021-01-02T17:25:07Z"
    generation: 1
    labels:
      app.kubernetes.io/component: jenkins-controller
      app.kubernetes.io/instance: jenkins
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: jenkins
      helm.sh/chart: jenkins-3.0.11
    name: jenkins
    namespace: jenkins
    resourceVersion: "169122"
    selfLink: /apis/extensions/v1beta1/namespaces/jenkins/ingresses/jenkins
    uid: ced40d51-0d5a-49fc-8813-24cd95912c8d
  spec:
    rules:
    - host: jenkins-bak-euwe1.mgmt-tst.oncp.dev
      http:
        paths:
        - backend:
            serviceName: jenkins
            servicePort: 8080
    tls:
    - hosts:
      - jenkins-bak-euwe1.mgmt-tst.oncp.dev
      secretName: jenkins-tls
  status:
    loadBalancer:
      ingress:
      - ip: 10.85.128.13
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
  

Certificate object

apiVersion: v1
items:
- apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    creationTimestamp: "2021-01-02T18:08:13Z"
    generation: 1
    labels:
      app.kubernetes.io/component: jenkins-controller
      app.kubernetes.io/instance: jenkins
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: jenkins
      helm.sh/chart: jenkins-3.0.11
    name: jenkins-tls
    namespace: jenkins
    ownerReferences:
    - apiVersion: extensions/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: Ingress
      name: jenkins
      uid: ced40d51-0d5a-49fc-8813-24cd95912c8d
    resourceVersion: "169132"
    selfLink: /apis/cert-manager.io/v1/namespaces/jenkins/certificates/jenkins-tls
    uid: 75606156-d113-4aeb-8687-617a4e18e1ed
  spec:
    dnsNames:
    - jenkins-bak-euwe1.mgmt-tst.oncp.dev
    issuerRef:
      group: cas-issuer.jetstack.io
      kind: GoogleCASClusterIssuer
      name: oncp-dev
    secretName: jenkins-tls
  status:
    conditions:
    - lastTransitionTime: "2021-01-02T18:08:13Z"
      message: Issuing certificate as Secret does not exist
      reason: DoesNotExist
      status: "True"
      type: Issuing
    - lastTransitionTime: "2021-01-02T18:08:13Z"
      message: Issuing certificate as Secret does not exist
      reason: DoesNotExist
      status: "False"
      type: Ready
    nextPrivateKeySecretName: jenkins-tls-nbvpb
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Logs for google-cas-issuer are below

2021-01-02T17:59:23.612Z	INFO	controller	Starting EventSource	{"reconcilerGroup": "cas-issuer.jetstack.io", "reconcilerKind": "GoogleCASClusterIssuer", "controller": "googlecasclusterissuer", "source": "kind source: /, Kind="}
2021-01-02T17:59:23.712Z	INFO	controller	Starting Controller	{"reconcilerGroup": "cas-issuer.jetstack.io", "reconcilerKind": "GoogleCASIssuer", "controller": "googlecasissuer"}
2021-01-02T17:59:23.713Z	INFO	controller	Starting workers	{"reconcilerGroup": "cas-issuer.jetstack.io", "reconcilerKind": "GoogleCASIssuer", "controller": "googlecasissuer", "worker count": 1}
2021-01-02T17:59:23.713Z	INFO	controller	Starting Controller	{"reconcilerGroup": "cert-manager.io", "reconcilerKind": "CertificateRequest", "controller": "certificaterequest"}
2021-01-02T17:59:23.713Z	INFO	controller	Starting workers	{"reconcilerGroup": "cert-manager.io", "reconcilerKind": "CertificateRequest", "controller": "certificaterequest", "worker count": 1}
2021-01-02T17:59:23.713Z	DEBUG	controller	Successfully Reconciled	{"reconcilerGroup": "cert-manager.io", "reconcilerKind": "CertificateRequest", "controller": "certificaterequest", "name": "test-cas-qlndx", "namespace": "default"}
2021-01-02T17:59:23.713Z	INFO	controller	Starting Controller	{"reconcilerGroup": "cas-issuer.jetstack.io", "reconcilerKind": "GoogleCASClusterIssuer", "controller": "googlecasclusterissuer"}
2021-01-02T17:59:23.713Z	INFO	controller	Starting workers	{"reconcilerGroup": "cas-issuer.jetstack.io", "reconcilerKind": "GoogleCASClusterIssuer", "controller": "googlecasclusterissuer", "worker count": 1}
2021-01-02T17:59:23.713Z	DEBUG	controller	Successfully Reconciled	{"reconcilerGroup": "cert-manager.io", "reconcilerKind": "CertificateRequest", "controller": "certificaterequest", "name": "test-stage-le-np6jh", "namespace": "default"}
2021-01-02T17:59:23.716Z	DEBUG	controller	Successfully Reconciled	{"reconcilerGroup": "cas-issuer.jetstack.io", "reconcilerKind": "GoogleCASClusterIssuer", "controller": "googlecasclusterissuer", "name": "oncp-dev", "namespace": ""}
E0102 18:08:13.603404       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 250 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x179c7e0, 0x2686fa0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa6
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x89
panic(0x179c7e0, 0x2686fa0)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/jetstack/google-cas-issuer/pkg/controller/certificaterequest.(*CertificateRequestReconciler).Reconcile(0xc000588c90, 0xc0000511f9, 0x7, 0xc0002e1bc0, 0x11, 0xc00055a010, 0xc000198990, 0xc000198908, 0xc000198900)
	/workspace/pkg/controller/certificaterequest/certificaterequest_controller.go:144 +0xa17
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0001d4b40, 0x180b4c0, 0xc000159240, 0x0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235 +0x2a9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0001d4b40, 0x203000)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209 +0xb0
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0001d4b40)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000614480)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000614480, 0x1b9b760, 0xc000654120, 0x1, 0xc0001de240)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000614480, 0x3b9aca00, 0x0, 0x1a7d001, 0xc0001de240)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc000614480, 0x3b9aca00, 0xc0001de240)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:170 +0x3fa
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1627f57]

goroutine 250 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x10c
panic(0x179c7e0, 0x2686fa0)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/jetstack/google-cas-issuer/pkg/controller/certificaterequest.(*CertificateRequestReconciler).Reconcile(0xc000588c90, 0xc0000511f9, 0x7, 0xc0002e1bc0, 0x11, 0xc00055a010, 0xc000198990, 0xc000198908, 0xc000198900)
	/workspace/pkg/controller/certificaterequest/certificaterequest_controller.go:144 +0xa17
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0001d4b40, 0x180b4c0, 0xc000159240, 0x0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235 +0x2a9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0001d4b40, 0x203000)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209 +0xb0
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0001d4b40)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000614480)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000614480, 0x1b9b760, 0xc000654120, 0x1, 0xc0001de240)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000614480, 0x3b9aca00, 0x0, 0x1a7d001, 0xc0001de240)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc000614480, 0x3b9aca00, 0xc0001de240)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:170 +0x3fa

GoogleCASClusterIssuer annotations with ingress manifest not working

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/issuer: googlecasclusterissuer-sample # issuer name
    cert-manager.io/issuer-kind: GoogleCASIssuer # reference to the issuer we deployed in the cluster
    cert-manager.io/issuer-group: cas-issuer.jetstack.io
spec:
  rules:
  - host: redacted
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins
            port:
              number: 8080
  tls:
  - hosts:
    - redacted
    secretName: jenkins-tls

These annotations do not work with the cluster issuer. They work fine in the namespaced-issuer scenario.
cert-manager.io/cluster-issuer: google-cas-issuer
cert-manager.io/issuer-kind: GoogleCASClusterIssuer
cert-manager.io/issuer-group: cas-issuer.jetstack.io

produced this error:
Could not determine issuer for ingress due to bad annotations: both "cert-manager.io/cluster-issuer" and "cert-manager.io/issuer-group" may not be set, both "cert-manager.io/cluster-issuer" and "cert-manager.io/issuer-kind" may not be set

We are using Workload Identity for IAM.
Using v0.6.2
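For external issuer types such as this one, cert-manager's ingress-shim expects the issuer name under cert-manager.io/issuer (the cert-manager.io/cluster-issuer shorthand is reserved for cert-manager's own ClusterIssuer, and conflicts with the kind/group annotations). A sketch of annotations that should avoid the conflicting-annotation error (the issuer name is a placeholder):

```yaml
annotations:
  cert-manager.io/issuer: google-cas-issuer            # name of the GoogleCASClusterIssuer resource
  cert-manager.io/issuer-kind: GoogleCASClusterIssuer
  cert-manager.io/issuer-group: cas-issuer.jetstack.io
```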

GoogleCASIssuer - Intermediate CA for my app

Hey folks!

Some applications deployed in Kubernetes unfortunately require a CA certificate, i.e. an intermediate CA, as opposed to an end-entity TLS cert.

Here is a sample Certificate for my app.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-pki-certificate
  namespace: my-app
spec:
  commonName: my-app-pki-intermediate-ca
  isCA: true
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    group: cas-issuer.jetstack.io
    kind: GoogleCASClusterIssuer
    name: my-googlecasissuer
  duration: "2160h" # 90d
  renewBefore: "360h" # 15d
  secretName: my-app-pki-certificate
  subject:
    countries:
    - CH
    organizationalUnits:
    - MyApp Intermediate Certification Authority
    organizations:
    - SoKube SA
  usages:
  - cert sign
  - crl sign

The generated CertificateRequest resource still contains the field isCA: true, but the actual CSR is missing the expected X509 request extensions:

kubectl -n my-app get CertificateRequest my-app-pki-certificate-5x4gb -o yaml | yq e '.spec.request' | base64 -d | openssl req -noout -text
        Requested Extensions:
            X509v3 Key Usage: 
                Certificate Sign, CRL Sign

The same approach using the Google Cloud CAS service directly would have generated something like the following, which would be the expected behavior:

        Requested Extensions:
            X509v3 Key Usage: critical
                Certificate Sign, CRL Sign
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:1

With such a CSR, the CertificateRequest obviously remains in a non-ready status, with the following error:

casClient.CreateCertificate failed: rpc error: code = InvalidArgument desc = untrustedSignAndLint failed: generic::invalid_argument: lint failed for certificate: invalid certificate: [RFC 5280: 4.2.1.3 & 4.2.1.9]: if the keyCertSign bit is asserted, then the cA bit in the basic constraints extension MUST also be asserted

On the Google CAS side, the CA is properly configured and is able to generate working Intermediate CAs.

Here it comes down to a simple question: does the current google-cas-issuer implementation support the creation of intermediate CAs?

Thanks for your answer
Cheers!

Note: I've been iterating a lot on the usages, adding the server auth extended usage or removing all of them, but I don't think that is the real problem here.

Weird stack trace for every error

At first, I thought the following logs were due to some panic, but then I realized that it is due to Zap adding a stack trace for every error it handles.

For example:

% k -n cert-manager logs -l app=cert-manager-google-cas-issuer --tail=-1
2021-02-02T18:00:55.915Z        INFO    controller.GoogleCASIssuer      reconciled issuer       {"GoogleCASIssuer": "default/googlecasissuer-sample", "kind": "&TypeMeta{Kind:GoogleCASIssuer,APIVersion:cas-issuer.jetstack.io/v1alpha1,}"}
2021-02-02T18:00:55.930Z        INFO    controller.GoogleCASIssuer      reconciled issuer       {"GoogleCASIssuer": "default/googlecasissuer-sample", "kind": "&TypeMeta{Kind:GoogleCASIssuer,APIVersion:cas-issuer.jetstack.io/v1alpha1,}"}
2021-02-02T18:01:19.317Z        ERROR   controller-runtime.manager.controller.certificaterequest        Reconciler error        {"reconciler group": "cert-manager.io", "reconciler kind": "CertificateRequest", "name": "demo-certificate-6bkkb", "namespace": "default", "error": "CertificateRequest.cert-manager.io \"demo-certificate-6bkkb\" not found"}
github.com/go-logr/zapr.(*zapLogger).Error
        /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:297
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:248
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185
k8s.io/apimachinery/pkg/util/wait.UntilWithContext
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99

Would it make sense to turn the stack trace off by default? Maybe we could do the same as klog: prefix all the logs with the filename:linenumber?

To be honest this stack trace scared the cr*p out of me 😅

cc @jakexks
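If the issuer binary wires up controller-runtime's standard zap flag set (an assumption; check the binary's --help output), the stack-trace threshold can be raised so traces are only printed on panics. A sketch of the Deployment container args:

```yaml
args:
- --zap-stacktrace-level=panic   # suppress stack traces for ordinary errors
```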

Allow to use a custom Service Account

I think a nice addition to the chart would be allowing users to skip the default ServiceAccount creation and to specify the name of another one created separately.

My use case is as follows: I am injecting the Google project ID from Argo CD into my Helm charts, allowing me to reference it inside my chart's templates without having to explicitly set it in values.yaml. The issue is that, even if I create a separate ServiceAccount with the correct annotations, the chart will still create a default ServiceAccount and reference it in the Deployment, RoleBinding and ClusterRoleBinding, and from the chart I don't have a way to reference my custom ServiceAccount.

This leaves two options if the Helm chart doesn't support a custom, separate ServiceAccount:

  • Create my custom ServiceAccount with the same name as the default one, which makes Argo CD raise a warning about duplicated resources (and I'm not sure how reconciliation would work in the long run)
  • Directly apply the manifest with my modifications in place, but this would be tiresome to maintain, since I would have to update the manifest whenever a new version of the Helm chart is released.

If this sounds like something more people would want I'd be more than happy to work on a PR
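The usual Helm convention for this looks like the following values.yaml sketch (a proposed schema for this chart, not its current one; the name is a placeholder):

```yaml
serviceAccount:
  # When create is false, the chart would skip creating its default
  # ServiceAccount and reference the named one in the Deployment,
  # RoleBinding and ClusterRoleBinding instead.
  create: false
  name: my-existing-sa
```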

Set revisionHistoryLimit to 1 to reduce load on the issuer

We've been running google-cas-issuer for just under a year to generate certificates that secure Istio workloads, and recently found that it started to reach high memory usage (in the hundreds of MBs) and would get OOMKilled often.

Just before the crash, the deployment logs contain a large amount of log records about existing CertificateRequest already being approved:

INFO	controller.CertificateRequest	CertificateRequest is Ready, ignoring.	{"certificaterequest": ...

Removing the memory limit and clearing up old requests seemed to have helped, but I'd love to see a more permanent solution, as I fear this will recur in a year (or sooner). Our CRs were for certs such as istio-system/istio-csr and cert-manager/demo-certificate (the latter is likely someone testing out cert-manager who forgot to clean it up).


Noob warning: below are my ramblings trying to understand whether I could contribute to the solution; they are probably grossly incorrect.

I understand a solution could be to use CertificateSpec.revisionHistoryLimit, but I couldn't find how the istio-csr Certificate is being created by google-cas-issuer (is it even?), and would love to get some guidance 🙏 .. e.g. would it be around this area?

Certificate: &casapi.Certificate{
	CertificateConfig: &casapi.Certificate_PemCsr{
		PemCsr: string(csr),
	},
	Lifetime: &duration.Duration{
		Seconds: expiry.Milliseconds() / 1000,
		Nanos:   0,
	},
},

Or am I looking in the wrong place? Does this Certificate spec get created in Istio, or is it done somewhere else entirely? I feel like I'm missing some basic understanding of what's going on in my system; is there any documentation I can look at?

Side note: I did notice that the Istiod cert, for example, only kept one CR around, as logged by cert-manager:

cert-manager/controller/certificates-revision-manager "msg"="garbage collecting old certificate request revsion" "key"="istio-system/istiod" "related_resource_kind"="CertificateRequest" "related_resource_name"="istiod-kjt9x" "related_resource_namespace"="istio-system" "resource_kind"="Certificate" "resource_name"="istiod" "resource_namespace"="istio-system" "resource_version"="v1" "revision"=9080 
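For Certificates you create yourself, revisionHistoryLimit is a standard cert-manager Certificate field; a minimal sketch (all names here are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: demo-certificate
  namespace: default
spec:
  revisionHistoryLimit: 1   # garbage-collect all but the most recent CertificateRequest
  secretName: demo-certificate-tls
  dnsNames:
  - demo.example.com
  issuerRef:
    group: cas-issuer.jetstack.io
    kind: GoogleCASClusterIssuer
    name: my-issuer
```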

reconciler error during certificate renewal

I am using version 0.6.2 along with cert-manager 1.11.0.

During a certificate renewal attempt, the cas-issuer plugin logs the error message below. Any idea what the reason could be, or what should be adjusted to fix the issue?

"msg"="Reconciler error" "error"="Operation cannot be fulfilled on certificaterequests.cert-manager.io \"tls-keys-75glc\": the object has been modified; please apply your changes to the latest version and try again" "certificateRequest"={"name":"tls-keys-75glc","namespace":"istio-egress"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="tls-keys-75glc" "namespace"="istio-egress" "reconcileID"="7a29c165-01e9-438a-98c4-fb2eeb2a3f70"

I could see that after a few seconds it picks up the same certificate request again (seemingly once CR approval is done) and the certificate is successfully issued:
controller/CertificateRequest "msg"="Initialising Ready condition" "certificaterequest"={"Namespace":"istio-egress","Name":"tls-keys-75glc"}

Should surface certificateAuthorityId more clearly in docs

This field is especially useful if following the advice in the first paragraph:

It is recommended to create subordinate CAs for signing leaf certificates. See the official documentation.

It requires a little bit of digging in config/crd to find the option; I believe it should be documented more clearly. Perhaps the docs should point to complete API/CRD docs with annotations/explanations.

I may have missed another place where this is documented; if so, it should be linked more clearly in the README.
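For reference, a sketch of where the field sits on the issuer spec (field names recalled from the CRDs under config/crd, so verify against your installed version; all values are placeholders):

```yaml
apiVersion: cas-issuer.jetstack.io/v1beta1
kind: GoogleCASIssuer
metadata:
  name: my-cas-issuer
  namespace: my-namespace
spec:
  project: my-gcp-project
  location: europe-west1
  caPoolId: my-pool
  # Pins issuance to a specific CA in the pool instead of letting
  # CAS pick one:
  certificateAuthorityId: my-subordinate-ca
```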

Certificate chain is not split correctly

Description:

We are using Google CAS as a delegated certificate authority, and our complete certificate chain is:
[Root CA cert - RCA] -> [Intermediate CA cert - ICA] -> [GoogleCAS CA cert - CASCA] -> [Leaf certificate - CERT]
In the certificate Secrets, the certificate chain is not split properly:

  • tls.crt contains CASCA and CERT in a single encoded string using a mix of "\r\n" and "\n" as newline separators.
    • CERT: this part of the string uses "\r\n" as the newline separator and contains the leaf certificate.
    • CASCA: this part of the string uses "\n" as the newline separator and contains the Google CAS certificate authority's public certificate.
  • ca.crt contains RCA + ICA in a single encoded string with "\r\n" as the newline separator (except at the end of the string, where only "\n" is used).

What's expected:

  • From what we understood from the cert-manager FAQ, the Secret should contain:

    • tls.crt with the full certificate chain within (except the CA cert)
    • ca.crt with only the root certificate
  • All encoded strings should use the same newline separator (this might be an external issue without impact on google-cas-issuer, but...??)

What's happening:

As the web server using the leaf certificate publishes tls.crt as its certificate chain, our TLS handshake times out, because the client does not trust the ICA, only the RCA.
If we modify the client config and add the ICA to its truststore, the TLS handshake ends successfully and the TLS connection is established.

Versions affected:

google-cas-issuer: 0.8.0
cert-manager: 1.14.4

How to reproduce:

Create the CA chain described above and you should reproduce the issue.

Add more status conditions to surface misconfiguration to the users

It would be better to have more status conditions on the (Cluster)Issuer, so that you could tell whether an Issuer is ready to issue certificates and, if not, why: for example, insufficient permissions or incorrect credentials.

The issuer spec/status should contain enough information for the CertificateRequest controller to create a CAS client on the fly, rather than configuring a long-lived client at the reconcile step and storing it in a sync.Map as it is now.

TLS certificates cannot be ingested by istio-ingressgateway due to missing newline characters

Description:
While testing the issuer's integration with Istio ingress gateway secrets, we found that the certificates cannot be ingested by the istio-proxy container in the ingress gateway pod due to a missing newline after each certificate.

What's expected:
Newly-generated certificate secrets should be successfully ingested by istio-proxy upon creation or update (certificate rotation)

What's happening:
Certificates are being rejected by istio-proxy with the following error:
[Envoy (Epoch 0)] [2021-06-10 18:24:34.984][21][warning][config] [external/envoy/source/common/config/grpc_mux_subscription_impl.cc:82] gRPC config for type.googleapis.com/envoy.api.v2.auth.Secret rejected: Failed to load certificate chain from <inline>

Versions affected:
google-cas-issuer: 0.3.0
cert-manager: 1.1.0

How to reproduce:
Generate a new certificate with google-cas-issuer in the istio-system namespace. The ingress-sds secret discovery container detects the secret and passes it to istio-proxy, where it is rejected with the error above.

In testing the issue, I've found that we are running into the issue described here: istio/istio#22530.

In order to validate, I was able to modify the generated secret and add a newline after each certificate in the chain for ca.crt, tls.crt, and tls.key. After adding these newline characters, the istio-proxy container successfully ingested the secret.
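The manual fix above can be sketched as a stdlib-only Go helper (ensureTrailingNewline is hypothetical, not the issuer's code); Envoy rejects inline chains whose final PEM line is not newline-terminated:

```go
package main

import "fmt"

// ensureTrailingNewline appends "\n" when the PEM data does not already
// end with one, which is the manual edit that made istio-proxy accept
// the secret. Empty input is returned unchanged.
func ensureTrailingNewline(data []byte) []byte {
	if len(data) == 0 || data[len(data)-1] == '\n' {
		return data
	}
	return append(data, '\n')
}

func main() {
	pemData := []byte("-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----")
	fixed := ensureTrailingNewline(pemData)
	fmt.Printf("%q\n", fixed[len(fixed)-1]) // prints '\n'
}
```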

Full E2E testing

Add E2E tests that install cert-manager, configure the issuer and issue certificates and configure them to run on PRs.

v1beta1 API support?

I might be totally off with this, but it looks like certificate pools are no longer going to be part of the certificate management service Google is providing? Looking here, there is no mention of them, and they are no longer present at all in the gcloud beta privateca command.

$ gcloud beta privateca pools list
ERROR: (gcloud.beta.privateca) Invalid choice: 'pools'.
Maybe you meant:
  gcloud beta privateca certificates list
  gcloud beta privateca certificates create
  gcloud beta privateca certificates describe
  gcloud beta privateca certificates export
  gcloud beta privateca certificates revoke
  gcloud beta privateca certificates update
  gcloud beta privateca roots list
  gcloud beta privateca locations list
  gcloud beta privateca reusable-configs list
  gcloud beta privateca subordinates list

To search the help text of gcloud commands, run:
  gcloud help -- SEARCH_TERMS

It looks like the API in general is also changing in v1beta1: https://cloud.google.com/certificate-authority-service/docs/reference/rest/v1beta1/projects.locations.certificateAuthorities

Would you accept a pull request to try to bring the google-cas-issuer up to date with this newer API? Or do you feel that the API is subject to such change before GA that it is not worth making such a large (breaking) change at this point?

Using CAS certificate templates

I'm configuring the plugin to use Workload Identity, granting the GSA IAM permissions to use a certificate template I created specifically for GKE TLS certificates, which I'd like cert-manager to use, rather than granting certificateRequester against the CA pool.

Are there any existing documented examples, or reasons why this wouldn't work? I'm currently getting the following error:

2022-01-27T05:30:06.179Z	ERROR	controller-runtime.manager.controller.certificaterequest	Reconciler error	{"reconciler group": "cert-manager.io", "reconciler kind": "CertificateRequest", "name": "nginx-certificate-hb4mg", "namespace": "<REDACTED>", "error": "casClient.CreateCertificate failed: rpc error: code = PermissionDenied desc = Permission 'privateca.certificates.create' denied on 'projects/<REDACTED>/locations/<REDACTED>/caPools/<REDACTED>'"}

Any assistance here would be greatly appreciated!

High memory usage in issuer pod - OOM Error

I have recently been experiencing a problem with the google-cas-issuer pod's memory usage growing extremely high, until it reaches the limit, gets OOMKilled, and enters a CrashLoopBackOff. There are two main issues I have been seeing: the first is the memory climbing, and the second is the pod looking for CertificateRequests that no longer exist. I am adding both here as I believe they are related to the pod's memory usage increasing.

I have seen other people having a similar issue of very high pod memory usage in issue #65. However, when I tried their proposed solution of setting revisionHistoryLimit to 1, errors for CSRs that have been removed are then produced infinitely in the logs, as seen in issue #28 (which was meant to have been fixed in v0.2.0). I am unsure whether this is driving up the pod memory utilisation. The code that generates the errors for the CSRs that no longer exist can be found here, I believe:

func (r *CertificateRequestReconciler) Reconcile(ctx context.Context, req ctrl.Request) (result ctrl.Result, err error) {
	log := r.Log.WithValues("certificaterequest", req.NamespacedName)
	// Fetch the CertificateRequest resource being reconciled.
	// Just ignore the request if the certificate request has been deleted.
	var certificateRequest cmapi.CertificateRequest
	if err := r.Get(ctx, req.NamespacedName, &certificateRequest); err != nil {
		if err := client.IgnoreNotFound(err); err != nil {
			log.Info("Certificate Request not found, ignoring", "cr", req.NamespacedName)
		}
		return ctrl.Result{}, err
	}

I have been testing with certificates that expire every hour and renew 30 minutes before expiry to observe the pod's behaviour. Each time the certs are signed, the memory utilisation spikes (which is not a problem), but after the operation completes the memory settles at a higher level than before, and the pod keeps looking for the previous CertificateRequest objects. This suggests to me that some sort of caching of CertificateRequest objects is happening. Could this be driving up the memory?

This is a snippet of the error logs:

E0123 16:19:26.756349       1 controller.go:326]  "msg"="Reconciler error" "error"="CertificateRequest.cert-manager.io \"certificate-tcm7h-3\" not found" "certificateRequest"={"name":"certificate-tcm7h-3","namespace":"cas-test"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="certificate-tcm7h-3" "namespace"="cas-test" "reconcileID"="fcd9761d-de68-4f1f-8de2-2a73a861beee"
E0123 16:19:26.765695       1 controller.go:326]  "msg"="Reconciler error" "error"="CertificateRequest.cert-manager.io \"certificate-jngw9-3\" not found" "certificateRequest"={"name":"certificate-jngw9-3","namespace":"cas-test"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="certificate-jngw9-3" "namespace"="cas-test" "reconcileID"="ccb2e6e7-3fd6-4a21-b649-dc1017ae9fea"
E0123 16:19:26.765790       1 controller.go:326]  "msg"="Reconciler error" "error"="CertificateRequest.cert-manager.io \"certificate-8j5jw-3\" not found" "certificateRequest"={"name":"certificate-8j5jw-3","namespace":"cas-test"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="certificate-8j5jw-3" "namespace"="cas-test" "reconcileID"="18ac50d4-e3c9-427d-a372-8c858c53abbd"
E0123 16:19:26.765835       1 controller.go:326]  "msg"="Reconciler error" "error"="CertificateRequest.cert-manager.io \"certificate-84254-3\" not found" "certificateRequest"={"name":"certificate-84254-3","namespace":"cas-test"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="certificate-84254-3" "namespace"="cas-test" "reconcileID"="24fc9d66-e934-47fe-a583-51be95d64801"
E0123 16:19:26.765869       1 controller.go:326]  "msg"="Reconciler error" "error"="CertificateRequest.cert-manager.io \"certificate-l9rg9-3\" not found" "certificateRequest"={"name":"certificate-l9rg9-3","namespace":"cas-test"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="certificate-l9rg9-3" "namespace"="cas-test" "reconcileID"="1b553b69-2355-408c-bacf-9238a85b3164"
E0123 16:24:35.839887       1 controller.go:326]  "msg"="Reconciler error" "error"="CertificateRequest.cert-manager.io \"certificate-z4tjc-3\" not found" "certificateRequest"={"name":"certificate-z4tjc-3","namespace":"cas-test"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="certificate-z4tjc-3" "namespace"="cas-test" "reconcileID"="65c55be3-f8e5-45c9-a3da-77c7baa3e23d"
E0123 16:24:37.801408       1 controller.go:326]  "msg"="Reconciler error" "error"="CertificateRequest.cert-manager.io \"certificate-hdrlg-3\" not found" "certificateRequest"={"name":"certificate-hdrlg-3","namespace":"cas-test"}

Proposal: Helm chart for installing the cas issuer

It might be more convenient for people to have this project also distributed as a helm chart.
Since it is a standard add-on that you would likely only tweak a few parameters for it might fit better in the add-on installation flow if helm charts are being used.

GKE updating K8S API to v1.22

Hi all,

Google will force an update of K8S API to 1.22. This means that a number of v1beta resources won't be available any longer.
Have a look here.

After a very quick research on this repo, I've seen there are various resources that must be upgraded in order to keep this workload functional:

Is there any plan to upgrade the current version of the component so that it'll support the v1.22 K8S API Level?

Wrong tag name in manifest file

What happened

I applied the v0.7.1 manifest successfully.

However, the Pod deployed from that manifest then reported the following error:

message: 'rpc error: code = NotFound desc = failed to pull and unpack image
  "quay.io/jetstack/cert-manager-google-cas-issuer:v0.7.1": failed to resolve
   reference "quay.io/jetstack/cert-manager-google-cas-issuer:v0.7.1": quay.io/jetstack/cert-manager-google-cas-issuer:v0.7.1:
    not found'
reason: ErrImagePull

When I looked for the cause of the error, I found that the manifest specifies the image as quay.io/jetstack/cert-manager-google-cas-issuer:v0.7.1.
On the other hand, the tag in the image repository is quay.io/jetstack/cert-manager-google-cas-issuer:0.7.1; there is no quay.io/jetstack/cert-manager-google-cas-issuer:v0.7.1.

What to expect

The manifest should be corrected to use the image that actually exists:

use quay.io/jetstack/cert-manager-google-cas-issuer:0.7.1 instead of quay.io/jetstack/cert-manager-google-cas-issuer:v0.7.1.
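As a stopgap until the manifest is fixed, the Deployment can be pointed at the tag that actually exists on quay.io (the deployment name and namespace here are assumptions based on the release manifest):

```shell
kubectl -n cert-manager set image deployment/cert-manager-google-cas-issuer \
  '*=quay.io/jetstack/cert-manager-google-cas-issuer:0.7.1'
```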

README is slightly incomplete

There seems to be a step missing from the README. Enabling workload identity on an existing cluster (here) misses a step mentioned in the Google docs here.

tldr; you need to also enable workload-metadata on the nodepools, only updating the cluster is insufficient.

It'd be great to add this step to the docs since it had me stumped for a couple of days.
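The missing step, per the Google workload identity docs, is roughly the following (node pool, cluster and zone names are placeholders):

```shell
# Existing node pools must also be switched to the GKE metadata server;
# enabling workload identity on the cluster alone is insufficient.
gcloud container node-pools update my-node-pool \
  --cluster my-cluster \
  --zone europe-west1-b \
  --workload-metadata=GKE_METADATA
```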

No v0.8.0 Helm chart

It seems the v0.8.0 container image was published, but not the accompanying Helm chart version. This is important because RBAC has changed and is missing from v0.7.1 of the Helm chart.

❯ helm repo add jetstack https://charts.jetstack.io --force-update
"jetstack" has been added to your repositories

❯ helm search repo cert-manager-google-cas-issuer
NAME                                   	CHART VERSION	APP VERSION	DESCRIPTION                                
jetstack/cert-manager-google-cas-issuer	v0.7.1       	v0.7.1     	A Helm chart for jetstack/google-cas-issuer

Client cert generates correctly in CAS issuer, cannot find the client cert in secret

I have created a couple of client certs for mutual auth, signed by the CAS issuer. Here is one of them:

Spec:
  Duration:  2160h0m0s
  Issuer Ref:
    Group:       cas-issuer.jetstack.io
    Kind:        GoogleCASIssuer
    Name:        googlecasissuer-thanos-mtls
  Renew Before:  360h0m0s
  Secret Name:   thanos-query-mtls-client-cert
  Subject:
    Organizations:
      jetstack
  Uris:
    spiffe://cluster.local/ns/monitoring/sa/thanos-querier
  Usages:
    server auth
    client auth
Status:
  Conditions:
    Last Transition Time:  2021-08-17T20:29:04Z
    Message:               Certificate is up to date and has not expired
    Observed Generation:   2
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2021-11-15T20:29:02Z
  Not Before:              2021-08-17T20:29:03Z
  Renewal Time:            2021-10-31T20:29:02Z
  Revision:                1

As you can see, it is generated fine; I have set up a subordinate CA under a root for signing these certs.

However, it is not clear to me how to access the client cert in the secret. The secret it generates is a regular TLS secret in which I can find:

  • ca.crt (containing the root CA)
  • tls.crt (containing the subordinate CA cert, without the client cert at the leaf)
  • tls.key (what I assume is the key for the client cert)

Here is the contents of certs above:
ca.crt (this corresponds to the root CA cert, which seems fine to me)

-----BEGIN CERTIFICATE-----
MIICFjCCAZ2gAwIBAgIUAJDFjFunC3BKOQTojnznXyiyeFEwCgYIKoZIzj0EAwMw
OTEZMBcGA1UEChMQSmV0c3RhY2stSG91c3NlbTEcMBoGA1UEAxMTdGhhbm9zLW10
bHMtcm9vdC1DQTAeFw0yMTA4MTcxNzU5MTZaFw0yMjA4MTcyMzQ4MDFaMDkxGTAX
BgNVBAoTEEpldHN0YWNrLUhvdXNzZW0xHDAaBgNVBAMTE3RoYW5vcy1tdGxzLXJv
b3QtQ0EwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQg6Ok+coK/sHaMR33cSi59jevI
KDW5SQGS2a2UzXXTrkH/lmV9IWfnxP29y8GY4jr1lKcKLxIe2HRHE5uTYLkcMMM7
Y3zeDFCpHAAAyRnfHZjljYhfzPUAqG22leWp29OjZjBkMA4GA1UdDwEB/wQEAwIB
BjASBgNVHRMBAf8ECDAGAQH/AgECMB0GA1UdDgQWBBQgZ1iJmxi4LPmef42H2QpT
JD3RlTAfBgNVHSMEGDAWgBQgZ1iJmxi4LPmef42H2QpTJD3RlTAKBggqhkjOPQQD
AwNnADBkAjArRpJTBNUrZ6Cj2MooJ//080EyIWQKRyCq8hyLbaNub2LOgp3buKjy
OOACXD/jzlYCMF6c2o2m+x4uY/sQ/sMVDdxLvRth0iGnAzIFOUk3iZ0JC8WkRFsy
P8U1LyvWbXwyGA==
-----END CERTIFICATE-----

tls.crt (this essentially contains the rootCA and the subordinateCA concatenated, but no client cert)

-----BEGIN CERTIFICATE-----
MIIC7jCCAnSgAwIBAgITKMYr+NQXkfnmO1Q8rnBtTwqm2DAKBggqhkjOPQQDAzA7
MRkwFwYDVQQKExBKZXRzdGFjay1Ib3Vzc2VtMR4wHAYDVQQDExV0aGFub3MtbXRs
cy1zaWduZXItQ0EwHhcNMjEwODE3MjAyOTAzWhcNMjExMTE1MjAyOTAyWjATMREw
DwYDVQQKEwhqZXRzdGFjazCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AMc0fuJPLPA173KT4h3qr9xrvVLeJQ7Pu+zHqVKw0KX0BN3KY3twXsYmoshW12oL
dSDgPZcpfxTCSMCsvGnUtv/xmHH2KnriNyBWYjgdKtDN+nikVqRm6wkesFQMLLSE
6v2KcvXJmlOm3PAxWseJx0rCAlY0Xl0rjaZ3wJOswFHNEBr0omMqx59YAehk7C61
IPzajsRQXKJ8OgBgWoF7DSp96bWMIyUaT9eSoSfT/abZ0mkHBonpnkYmlvZ0FETn
mpi29aMl8XqC8xu+2SkppuQpx2UF44Fqyw6xc9el8+OKyP5dNOK/lzn5RhiDmP0m
C4bL4qlRSGwWiR+KAjnAn5kCAwEAAaOBszCBsDAdBgNVHSUEFjAUBggrBgEFBQcD
AQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU6lRbb1NHJyZ65UY+
tanbsLTuuJkwHwYDVR0jBBgwFoAUfjuXHfETyIi6R7/XPNnpMrl+yDAwQQYDVR0R
BDowOIY2c3BpZmZlOi8vY2x1c3Rlci5sb2NhbC9ucy9tb25pdG9yaW5nL3NhL3Ro
YW5vcy1xdWVyaWVyMAoGCCqGSM49BAMDA2gAMGUCMEhsagFfSG4TXhH0HT+jJZoP
xFhuD3PIRUG2VxlJ8O/53ZjNsoXNLqEyv1V0pGVhUgIxAMCqaC4pfX8yc/EsocW3
kf0xKkbYWK4PIsUPAYKZ++IJVtd/9G3UXrtWGUGHDUQ0/w==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIICNzCCAb2gAwIBAgIUAOu2M3fnl/2yoWwfoiYuGV7p6UQwCgYIKoZIzj0EAwMw
OTEZMBcGA1UEChMQSmV0c3RhY2stSG91c3NlbTEcMBoGA1UEAxMTdGhhbm9zLW10
bHMtcm9vdC1DQTAeFw0yMTA4MTcxODAwMzRaFw0yMjA4MTcxODAwMzNaMDsxGTAX
BgNVBAoTEEpldHN0YWNrLUhvdXNzZW0xHjAcBgNVBAMTFXRoYW5vcy1tdGxzLXNp
Z25lci1DQTB2MBAGByqGSM49AgEGBSuBBAAiA2IABIvLdQU1LzHrJOR6ZqSBYQBS
XFmCReUVFbgg9rFU2BANo9eBAlaDCe4OdRG+z7Vw6RJ4CYfMmawK1GOxhKbRVmg8
SFaSmOIib1RkjSbO/41ByxeawFfxG/ra3V6MVucSJqOBgzCBgDAOBgNVHQ8BAf8E
BAMCAQYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMA8GA1UdEwEB/wQF
MAMBAf8wHQYDVR0OBBYEFH47lx3xE8iIuke/1zzZ6TK5fsgwMB8GA1UdIwQYMBaA
FCBnWImbGLgs+Z5/jYfZClMkPdGVMAoGCCqGSM49BAMDA2gAMGUCMQDgiy+mizAl
uhhs4Fsjz13rGSH5pYWcqYWEPNx1PAgawAgalxTY62E/6XGX38GUPTgCMCxcR9yA
CmYKZq3E9J8dbY9Y8zqZ4qZ1DtoFF/MEV2Lh5ZprefB63sCePUwlkBIw3A==
-----END CERTIFICATE-----

And the tls.key is an RSA key, so no client cert in sight 😕

Google CAS shows that this client certificate has been issued and is valid (happy to supply screenshots to confirm).
It has the right spiffe san:

 X509v3 Subject Alternative Name:
                URI:spiffe://cluster.local/ns/monitoring/sa/thanos-sidecar
-----BEGIN CERTIFICATE-----
MIIC7jCCAnWgAwIBAgIUALaVFZltusqK+Whgx+prXYJj4hkwCgYIKoZIzj0EAwMw
OzEZMBcGA1UEChMQSmV0c3RhY2stSG91c3NlbTEeMBwGA1UEAxMVdGhhbm9zLW10
bHMtc2lnbmVyLUNBMB4XDTIxMDgxNzIwMzUyMVoXDTIxMTExNTIwMzUyMFowEzER
MA8GA1UEChMIamV0c3RhY2swggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDeQhPya/l13msrXDXnSYoXyAAAseCsf87+VcZ9DMKN1Gyfhmoyl4iXFbd6Erdy
kRPVyCUIeURsO8nQMPvIkEBmoUQYijFc9mVjkdJfm3DhHn4XAYK2PoE0b/9yHeSj
qWqnoL7P36uZJ6Tm4AX4eN3zEDCYYJyF8qj8QXZlknIBNUcbs7vVTDVsx7aPc69l
sHTPZAhazC/3GRbumC3vaSyUeKQGlJSyVvp/Ncedg8yzbD5p5Mtff0kI4S0bU1ug
ChXyRbXqDSJ5S9B73lM2tsH3ptCGUUxUe1fQvnjwc9La4b6Eo9fh5Zhs7a1uEd55
+XehHL2ImEedpU4L5QD98hnRAgMBAAGjgbMwgbAwHQYDVR0lBBYwFAYIKwYBBQUH
AwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFH264GVzRo3hU4AD
d8H8ItiTqPjEMB8GA1UdIwQYMBaAFH47lx3xE8iIuke/1zzZ6TK5fsgwMEEGA1Ud
EQQ6MDiGNnNwaWZmZTovL2NsdXN0ZXIubG9jYWwvbnMvbW9uaXRvcmluZy9zYS90
aGFub3Mtc2lkZWNhcjAKBggqhkjOPQQDAwNnADBkAjBViIvwHdVJFGYXwi01PiV0
OH/5RE7yuhCdhDj5XNGuw9xOGLTFrVfQ8PjrkAHpHgYCMHeyHD6zQx9isGm3ZVzn
kw3xeSCXXhSfL1xhRz4Q3pbkku9PvG5/ijE9h6eu30PoHg==
-----END CERTIFICATE-----

Am I missing some option in my Certificate to indicate that this is a client cert and have the secret written properly, or is this not supported yet by the CAS issuer?

Thanks in advance!
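One way to check exactly what landed in the secret is to print the subject of every certificate bundled in tls.crt; if a leaf client cert were present it should appear first. A diagnostic sketch (namespace and secret name taken from the report above):

```shell
kubectl -n monitoring get secret thanos-query-mtls-client-cert \
  -o jsonpath='{.data.tls\.crt}' | base64 -d \
  | openssl crl2pkcs7 -nocrl -certfile /dev/stdin \
  | openssl pkcs7 -print_certs -noout
```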

Helm Chart not installing CRDs

When using the helm chart, CRDs aren't installed. I'm assuming you need to flatten the templates dir or include the CRDs somewhere.

Authentication fails without JSON credentials in the CAS issuer

The documentation mentions that if workload identity is enabled, the credentials in the CAS issuer are optional. However, when I tried that option it failed with a permission issue (presumably an authentication failure, as I am not passing a JSON key):

casClient.CreateCertificate failed: rpc error: code = PermissionDenied desc = Permission 'privateca.certificates.create' denied on 'projects/656342222/locations/europe-west1/caPools/PKIpool

I ran the commands below; I am not sure whether I am missing a step from the documentation.

gcloud privateca pools add-iam-policy-binding PKIpool --role=roles/privateca.certificateRequester --member="serviceAccount:sa-google-cas-issuer@$(gcloud config get-value project | tr ':' '/').iam.gserviceaccount.com" --location=europe-west1


export PROJECT=$(gcloud config get-value project | tr ':' '/')

gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:$PROJECT.svc.id.goog[cert-manager/ksa-google-cas-issuer]" \
  sa-google-cas-issuer@${PROJECT:?PROJECT is not set}.iam.gserviceaccount.com

kubectl annotate serviceaccount \
  --namespace cert-manager \
  ksa-google-cas-issuer \
  iam.gke.io/gcp-service-account=sa-google-cas-issuer@${PROJECT:?PROJECT is not set}.iam.gserviceaccount.com \
  --overwrite=true
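A quick way to verify the workload identity binding itself, adapted from Google's docs (the pod name and image are arbitrary), is to run a throwaway pod under the same KSA and check which identity the metadata server hands out — the GSA should be listed:

```shell
kubectl run wi-test -it --rm --restart=Never -n cert-manager \
  --image=google/cloud-sdk:slim \
  --overrides='{"spec":{"serviceAccountName":"ksa-google-cas-issuer"}}' \
  -- gcloud auth list
```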

Wrap CRDs in helm chart with installCRDs conditional

I'm having some issues installing this chart in a fresh Kubernetes cluster with GitOps automation, because the chart tries to install CRDs and it's non-optional.

It has become fairly common in helm charts with custom resources to wrap the CRD resource(s) in a conditional to be created only if installCRDs: true. That way I could pass in a value to set up the charts after I've created the CRDs through other means.

I'm happy to raise a PR to fix.
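For reference, the usual pattern looks like this (a sketch; `installCRDs` is the conventional values key, not something this chart defines yet, and the CRD name is inferred from the API group):

```yaml
{{- if .Values.installCRDs }}
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: googlecasissuers.cas-issuer.jetstack.io
# (rest of the CRD body unchanged)
{{- end }}
```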

invalid memory address or nil pointer dereference

I am running into the following issue when trying to issue a certificate

E0201 13:26:07.800596       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 263 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x187ee60, 0x27ed950)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x89
panic(0x187ee60, 0x27ed950)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/jetstack/google-cas-issuer/pkg/controller/certificaterequest.(*CertificateRequestReconciler).Reconcile(0xc0002bf6e0, 0x1cdd760, 0xc00060af60, 0xc000722780, 0xa, 0xc0002b1700, 0x16, 0xc00060af00, 0x0, 0x0, ...)
	/workspace/pkg/controller/certificaterequest/certificaterequest_controller.go:158 +0xb58
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0003405a0, 0x1cdd6a0, 0xc0000aa480, 0x18f0860, 0xc00000d8e0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:293 +0x317
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0003405a0, 0x1cdd6a0, 0xc0000aa480, 0xc00052fd00)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:248 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1(0x1cdd6a0, 0xc0000aa480)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211 +0x4a
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00052ff50)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000755f50, 0x1ca0a00, 0xc00060ab70, 0xc0000aa401, 0xc000101c80)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00052ff50, 0x3b9aca00, 0x0, 0x3b9aca01, 0xc000101c80)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x1cdd6a0, 0xc0000aa480, 0xc000610100, 0x3b9aca00, 0x0, 0x10000c000051c01)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(0x1cdd6a0, 0xc0000aa480, 0xc000610100, 0x3b9aca00)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99 +0x57
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:208 +0x4de
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x16febf8]

goroutine 263 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x10c
panic(0x187ee60, 0x27ed950)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/jetstack/google-cas-issuer/pkg/controller/certificaterequest.(*CertificateRequestReconciler).Reconcile(0xc0002bf6e0, 0x1cdd760, 0xc00060af60, 0xc000722780, 0xa, 0xc0002b1700, 0x16, 0xc00060af00, 0x0, 0x0, ...)
	/workspace/pkg/controller/certificaterequest/certificaterequest_controller.go:158 +0xb58
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0003405a0, 0x1cdd6a0, 0xc0000aa480, 0x18f0860, 0xc00000d8e0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:293 +0x317
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0003405a0, 0x1cdd6a0, 0xc0000aa480, 0xc00052fd00)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:248 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1(0x1cdd6a0, 0xc0000aa480)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211 +0x4a
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00052ff50)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000755f50, 0x1ca0a00, 0xc00060ab70, 0xc0000aa401, 0xc000101c80)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00052ff50, 0x3b9aca00, 0x0, 0x3b9aca01, 0xc000101c80)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x1cdd6a0, 0xc0000aa480, 0xc000610100, 0x3b9aca00, 0x0, 0x10000c000051c01)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(0x1cdd6a0, 0xc0000aa480, 0xc000610100, 0x3b9aca00)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99 +0x57
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:208 +0x4de

I deployed the latest image and had the following code:

apiVersion: cas-issuer.jetstack.io/v1alpha1
kind: GoogleCASIssuer
metadata:
  name: googlecasissuer
  namespace: google-cas
spec:
  project: <google-project>
  location: europe-west1
  certificateAuthorityID: <certificate-name>
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: demo-certificate
  namespace: google-cas
spec:
  secretName: demo-tls
  dnsNames:
    - google-cas.example.net
  commonName: google-cas.example.net
  issuerRef:
    group: cas-issuer.jetstack.io
    kind: GoogleCASIssuer
    name: googlecasissuer

Leader election is using hostname instead of a stable name

I scaled the Google CAS issuer up to 3 replicas, and each of them created a ConfigMap for leader election. This is with a base install of the Google CAS issuer.

Pods:

google-cas-issuer-575c8c84cb-7zzj7         1/1     Running   0          8m27s
google-cas-issuer-575c8c84cb-fvhpl         1/1     Running   0          8m27s
google-cas-issuer-575c8c84cb-zqclb         1/1     Running   0          3h23m

CM:

kubectl get cm                                                                               
NAME                                 DATA   AGE
google-cas-issuer-575c8c84cb-7zzj7   0      4s
google-cas-issuer-575c8c84cb-fvhpl   0      4s
google-cas-issuer-575c8c84cb-zqclb   0      3h14m

ClusterIssuer not responding to ingress annotations

Hi guys,

We have successfully installed the Google CAS Issuer in our cluster, following the instructions in the readme.
We have already verified the installation by manually deploying a Certificate Manifest in our GKE.
A CertificateRequest was then created and the certificate is also visible in the GCP.

Our cluster issuer:

apiVersion: cas-issuer.jetstack.io/v1beta1
kind: GoogleCASClusterIssuer
metadata:
  name: google-cas-issuer
spec:
  project: XXXXXX
  location: europe-west4
  caPoolId: XXXXX
  credentials:
    name: "googlesa"
    key: "credentials.json"

Deployment:

resource "kubernetes_deployment" "deployment_google_cas_issuer" {
  metadata {
    name      = "google-cas-issuer"
    namespace = kubernetes_namespace.certmanager.metadata.0.name
    labels = {
      app = "google-cas-issuer"
    }
  }

  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "google-cas-issuer"
      }
    }

    template {
      metadata {
        labels = {
          app = "google-cas-issuer"
        }
      }

      spec {
        service_account_name             = kubernetes_service_account.ksa_google_cas_issuer.metadata[0].name
        termination_grace_period_seconds = 10
        container {
          image   = "quay.io/jetstack/cert-manager-google-cas-issuer:latest"
          name    = "google-cas-issuer"
          args    = ["--enable-leader-election", "--zap-devel=true"]
          command = ["/google-cas-issuer"]

          resources {
            limits = {
              cpu    = "100m"
              memory = "100Mi"
            }
            requests = {
              cpu    = "100m"
              memory = "20Mi"
            }
          }
        }
      }
    }
  }
}

This issuer works without problems when manually deploying a Certificate Resource, no matter in which namespace.

In the next step, we wanted to use the Issuer in our Ingress manifests.
To do this, we added the following annotations:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: iti-c-kirby-playground
  namespace: iti-c-kirby-playground
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 64m
    cert-manager.io/cluster-issuer: google-cas-issuer
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - DOMAIN
      secretName: domain-tls
  rules:
  - host: DOMAIN
    http:
      paths:
        - pathType: Prefix
          path: "/"
          backend:
            service:
              name: iti-c-kirby-playground
              port:
                number: 80     

When we deploy this Ingress configuration, then we get the following error message within our CertificateRequest:

IssuerNotFound ...

After some research, we found issue #43 and added the two suggested annotations to our Ingress:

    cert-manager.io/issuer-kind: GoogleCASClusterIssuer
    cert-manager.io/issuer-group: cas-issuer.jetstack.io
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: iti-c-kirby-playground
  namespace: iti-c-kirby-playground
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 64m
    cert-manager.io/cluster-issuer: google-cas-issuer
    cert-manager.io/issuer-kind: GoogleCASClusterIssuer
    cert-manager.io/issuer-group: cas-issuer.jetstack.io
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - DOMAIN
      secretName: domain-tls
  rules:
  - host: DOMAIN
    http:
      paths:
        - pathType: Prefix
          path: "/"
          backend:
            service:
              name: iti-c-kirby-playground
              port:
                number: 80     

If we deploy this Ingress resource, nothing happens. cert-manager does not create a Certificate resource, and in the logs this part is completely skipped with no entries.
As if this ingress resource does not exist or has no annotations at all.

What are we doing wrong or what configuration are we missing?

Thanks for your help.

Greetings,
Daniel
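For anyone else hitting this: ingress-shim pairs the cluster-issuer annotation with a fixed default kind, so for external issuers it is the plain issuer annotation that accepts an explicit kind and group. A hedged sketch of the annotations (verify against the ingress-shim docs for your cert-manager version):

```yaml
annotations:
  cert-manager.io/issuer: google-cas-issuer
  cert-manager.io/issuer-kind: GoogleCASClusterIssuer
  cert-manager.io/issuer-group: cas-issuer.jetstack.io
```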

Issue with ClusterRoleBinding and RoleBinding

When using i.e. https://github.com/jetstack/google-cas-issuer/releases/download/v0.6.2/google-cas-issuer-v0.6.2.yaml

Besides the image tag being set to v0.6.2 (similar to #88), the ClusterRoleBinding/cert-manager-google-cas-issuer and RoleBinding/cert-manager-google-cas-issuer refer to service accounts in the default namespace rather than cert-manager:

subjects:
  - kind: ServiceAccount
    name: cert-manager-google-cas-issuer
    namespace: default

admission webhook denied GoogleCASIssuer must be one of Issuer or ClusterIssuer

Error:
cert-manager/ingress-shim "msg"="re-queuing item due to error processing" "error"="admission webhook \"webhook.cert-manager.io\" denied the request: spec.issuerRef.kind: Invalid value: \"GoogleCASIssuer\": must be one of Issuer or ClusterIssuer" "key"="retool/retool-tp"

Logs on the cert-manager controller show the above error.

kubectl get pods -n cert-manager 
NAME                                              READY   STATUS    RESTARTS   AGE
cert-manager-google-cas-issuer-86667d76f6-fnvkw   1/1     Running   0          55m
cert-manager-tp-5d64fc9cdb-h9xdp                  1/1     Running   0          55m
cert-manager-tp-cainjector-5658d56947-2s2bc       1/1     Running   0          55m
cert-manager-tp-webhook-997d66d67-rlsfj           1/1     Running   0          55m

Problem: GoogleCASIssuer should be able to be approved
Expected behaviour: request approved
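The webhook rejects kinds other than Issuer/ClusterIssuer only when issuerRef.group is left at the default cert-manager.io, so the ingress annotations need to carry the external group as well. A sketch of what the ingress-shim annotations would look like (issuer name is a placeholder):

```yaml
annotations:
  cert-manager.io/issuer: my-cas-issuer
  cert-manager.io/issuer-kind: GoogleCASIssuer
  cert-manager.io/issuer-group: cas-issuer.jetstack.io
```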

casClient.CreateCertificate failed: context deadline exceeded

I am getting the below error from my Google CAS issuer, and I am not sure where it broke.
The CAS issuer status is fine and the permissions on the CAS side are correct; as far as I can tell, nothing is blocked from the GKE cluster on the CAS side.

I can see that the CSR was created successfully and that google-cas-issuer tried to create the certificate by connecting to CAS, but I am not sure what is blocking it there. On the network side, nothing is blocked in this project as far as I know. Please find the logs below for reference:

"reconciler group": "cert-manager.io",
      "stacktrace": "sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:214",
      "error": "casClient.CreateCertificate failed: context deadline exceeded",
      "namespace": "cert-test",
      "logger": "controller-runtime.manager.controller.certificaterequest"

Error with missing secret

A missing credentials secret triggers an ERROR, together with a stack trace.

2020-11-11T18:44:56.842Z        ERROR   controller      Reconciler error        {"reconcilerGroup": "cas-issuer.jetstack.io", "reconcilerKind": "GoogleCASClusterIssuer", "controller": "googlecasclusterissuer", "name": "bates-cas", "namespace": "", "error": "Secret \"googlesa\" not found"}
github.com/go-logr/zapr.(*zapLogger).Error
        /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:237
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90

Improve the installation process

Currently, the documented installation process requires cloning the git repo and running several commands.

We should have a one-liner that someone can run to get a working install with the CRDs, RBAC and deployment.

The docs should be updated to that, and the existing instructions can be kept for developer docs if we want.

We don't have to create any issuers, that can still be a manual step.
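Something like the per-release manifests already published would make this a one-liner (version pinned here purely as an example):

```shell
kubectl apply -f https://github.com/jetstack/google-cas-issuer/releases/download/v0.5.3/google-cas-issuer-v0.5.3.yaml
```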

google-cas-issuer pod gets OOM killed with default resource limits

I installed google-cas-issuer using the manifest available in releases:
https://github.com/jetstack/google-cas-issuer/releases/download/v0.5.3/google-cas-issuer-v0.5.3.yaml

It deployed successfully and the google-cas-issuer deployment was healthy. However, when I created a Certificate, the pod crashed with status OOMKilled.

It looks like the default memory limit specified in the manifest is 30Mi. Increasing this to 90Mi fixed the issue on my cluster.
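Until the default is raised, the limit can be patched in place (the deployment name and namespace are assumptions based on the release manifest; 90Mi is simply the value that worked here):

```shell
kubectl -n cert-manager patch deployment google-cas-issuer --type json -p \
  '[{"op":"replace","path":"/spec/template/spec/containers/0/resources/limits/memory","value":"90Mi"}]'
```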

Certificate renewal does not work due to an auth issue with the privateca API endpoint

msg"="Reconciler error" "error"="failed to sign certificate request: casClient.CreateCertificate failed: rpc error: code = Unauthenticated desc = transport: per-RPC creds failed due to error: compute: Received 500 Internal Server Error\n" "certificateRequest"={"name":"ksqldb-int-cert-1-grc97","namespace":"kafka"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="ksqldb-int-cert-1-grc97" "namespace"="kafka" "reconcileID"="ebd0dbfd-5b13-497a-9b18-0c4cc8b0fa1c"

I am facing this issue with version 0.6.2 and cert-manager 1.11.0 on GKE.

Workaround: after restarting the cert-manager-google-cas-issuer pod, certificate enrollment works again. But the problem comes back after a couple of days and cert renewals are affected again. It seems like the internal auth token refresh is not working. Has anyone faced this issue recently?

can't get CAS issuer to work, no matter which way I go; a number of issues

I'm writing this issue after spending my work day trying to get this to work. I'm running a GKE 1.22 cluster with the combo of external-dns, cert-manager, Traefik, Cilium and kube-prom-stack, and now want to expose Grafana using this feature (it's currently exposed, but via HTTP).

Configurations:

  1. my pools and roots setup:
resource "google_privateca_ca_pool" "certs-pool" {
  name = "${var.cluster_name}-certs-pool"
  location = var.region
  tier = "DEVOPS"
  publishing_options {
    publish_ca_cert = true
    publish_crl = false
  }
  labels = {
    environment = var.environment
  }
  depends_on = [
    google_project_service.certificate-authority-service
  ]
}

resource "google_privateca_ca_pool_iam_binding" "binding" {
  ca_pool = google_privateca_ca_pool.certs-pool.id
  role = "roles/privateca.certificateRequester"
  location = var.region
  members = [
    "serviceAccount:${google_service_account.sa-google-cas-issuer.email}",
  ]
}

resource "google_privateca_certificate_authority" "certs-roots" {
  pool = google_privateca_ca_pool.certs-pool.name
  certificate_authority_id = "${var.cluster_name}-certificate-authority"
  location = var.region
  deletion_protection = false

  config  {
    subject_config  {
      subject {
        organization = "org name"
        common_name  = "${var.cluster_name}-certificate-authority"
      }
    }
    x509_config {
      ca_options {
        is_ca = true
        max_issuer_path_length = 2
      }
      key_usage {
        base_key_usage {
          cert_sign = true
          crl_sign = true
        }
        extended_key_usage {
          server_auth = false
        }
      }
    }
  }
  key_spec {
    algorithm = "EC_P384_SHA384"
  }

  depends_on = [
    google_project_service.certificate-authority-service
  ]
}
  2. installed the issuer via the kubectl command (helm install method didn't work)
resource "helm_release" "cm-gci" {
  count = 0 # it is 0 because it doesn't work so disabled
  name             = "cert-manager-google-cas-issuer"
  namespace        = kubernetes_namespace.cm-ns.metadata.0.name
  chart            = "cert-manager-google-cas-issuer"
  repository       = "https://charts.jetstack.io"
  version          = "v0.6.0" # tried also without the v

  depends_on = [
    helm_release.cm,
    google_privateca_ca_pool.certs-pool,
    google_privateca_certificate_authority.certs-roots
  ]
}

I downloaded the YAML from the releases section and changed the resources limit from 20Mi to 90Mi:

k apply -f google-cas-issuer-v0.5.3.yaml -n cert-manager
  3. my issuer manifest:
resource "kubectl_manifest" "cas-cluster-issuer" {
  yaml_body = <<YAML
apiVersion: cas-issuer.jetstack.io/v1beta1
kind: GoogleCASClusterIssuer
metadata:
  name: ${local.cluster_issuer_name}
  namespace: ${kubernetes_namespace.cm-ns.metadata.0.name} 
spec:
  project: ${var.project_id}
  location: ${var.region}
  caPoolId: ${google_privateca_ca_pool.certs-pool.name}
  credentials:
    name: google-cas-sa
    key: gci-credentials.json
YAML

  depends_on = [
    kubectl_manifest.google-cas-manifest
  ]
}

clusterissuer deployed successfully

dev-gke-googlecasclusterissuer, READY: True, REASON: CASClientOK, MESSAGE: Successfully constructed CAS client

  4. my Certificate setups:
  • method no. 1 didn't work, with this error from the issuer: secret "grafana-tls-xlaks" wasn't found
resource "kubectl_manifest" "grafana-certificate" {
    yaml_body = <<YAML
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grafana-tls
  namespace: ${kubernetes_namespace.monitoring-ns.metadata.0.name}
spec:
  commonName: grafana-tls-cn
  # Duration of the certificate
  duration: "2160h"
  # Renew 8 hours before the certificate expiration
  renewBefore: "360h"
  secretName: grafana-tls
  privateKey:
    algorithm: EDCSA
    size: 256
  subject:
    organizations:
    - org name
  issuerRef:
    group: cas-issuer.jetstack.io
    kind: GoogleCASClusterIssuer
    name: ${local.cluster_issuer_name}
YAML

  depends_on = [
    helm_release.monitoring-stack,
    kubectl_manifest.cas-cluster-issuer,
  ]
}
  • method no. 2 didn't work because the secret wasn't created and cert-manager didn't recognize the issuer:
    ingress:  
      enabled: true
      annotations:
        cert-manager.io/cluster-issuer: ${local.cluster_issuer_name}
        cert-manager.io/issuer-kind: GoogleCASClusterIssuer
        cert-manager.io/issuer-group: cas-issuer.jetstack.io
        cert-manager.io/duration: "2160h"
        cert-manager.io/renew-before: "360h"
        acme.cert-manager.io/http01-ingress-class: traefik
        external-dns.alpha.kubernetes.io/hostname: ${local.full_dns} 
        traefik.ingress.kubernetes.io/router.entrypoints: web
        kubernetes.io/ingress.class: traefik
      tls:
        - secretName: grafana-tls
          hosts:
            - ${local.full_dns}

What am I missing here? Can someone who managed to get it to work share their configuration?
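One concrete thing that stands out in method no. 1: the private key algorithm is spelled `EDCSA`, whereas cert-manager's `privateKey.algorithm` field expects `RSA`, `ECDSA` or `Ed25519`, so the Certificate would be rejected and no secret created — which would explain the "secret not found" error. A corrected fragment (names unchanged from the manifest above):

```yaml
  privateKey:
    algorithm: ECDSA   # was "EDCSA" in the manifest above
    size: 256
```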

Certificate Requests not approved when using customized configuration with GKE and workload identity.

I'm trying to deploy google-cas-issuer to a workload identity GKE cluster. When I follow the steps in the README, it works perfectly. Certs are issued and renewed like I would expect. However, when I try to customize the installation to fit with the non-default setup, I receive no certificates, as they are not being approved by the cert-manager controller. I split up the files from the installation YAML blob into templates so that it could be customized and installed via Helm.

The cert-manager controller is using workload identity, also. We create a GCP service account and give it access to our Cloud DNS for letsencrypt DNS challenges. In the case of this customized install, the kubernetes service account is tk-accurate-sloth-cm. Then, for the google-cas-issuer setup, we are creating another GCP service account and giving permissions to request certificates, and associating it with the tk-accurate-sloth-cas kubernetes service account.

While troubleshooting, I tried adding the CLI switch to disable the approval requirement, and certificates were then able to be created. So it really seems to be an RBAC issue.

The cert-manager controller shows the following log message:

Setting lastTransitionTime for Certificate "tk-accurate-sloth-mtls" condition "Issuing" to 2021-07-27 16:13:01.992887246 +0000 UTC m=+2556.823535918
Setting lastTransitionTime for Certificate "tk-accurate-sloth-mtls" condition "Ready" to 2021-07-27 16:13:01.998242329 +0000 UTC m=+2556.828890997
cert-manager/controller/CertificateReadiness "msg"="re-queuing item due to error processing" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"tk-accurate-sloth-mtls\": the object has been modified; please apply your changes to the latest version and try again" "key"="default/tk-accurate-sloth-mtls" 
Setting lastTransitionTime for Certificate "tk-accurate-sloth-mtls" condition "Ready" to 2021-07-27 16:13:02.303237948 +0000 UTC m=+2557.133886616
cert-manager/controller/CertificateReadiness "msg"="re-queuing item due to error processing" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"tk-accurate-sloth-mtls\": the object has been modified; please apply your changes to the latest version and try again" "key"="default/tk-accurate-sloth-mtls" 

The google-cas-issuer deployment shows the following log message:

INFO controller.CertificateRequest Checking whether CR has been approved {"certificaterequest": "default/tk-accurate-sloth-mtls-2qmlg", "cr": {"namespace": "default", "name": "tk-accurate-sloth-mtls-2qmlg"}} 
INFO controller.CertificateRequest certificate request is not approved yet {"certificaterequest": "default/tk-accurate-sloth-mtls-2qmlg", "cr": {"namespace": "default", "name": "tk-accurate-sloth-mtls-2qmlg"}} 
DEBUG controller-runtime.manager.events Warning {"object": {"kind":"CertificateRequest","namespace":"default","name":"tk-accurate-sloth-mtls-2qmlg","uid":"9129e9a3-a893-4c7a-bf46-7bcaca52c836","apiVersion":"cert-manager.io/v1","resourceVersion":"28598"}, "reason": "CRNotApproved", "message": "certificate request is not approved yet"}

I have attempted to diff the RBAC settings and they appear to be the same, aside from the customized service account name changes. Here's a dump of everything from the non-working customized cluster. Please let me know if there are any additional details I can provide.

apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  annotations:
    iam.gke.io/gcp-service-account: [email protected]
  creationTimestamp: "2021-07-27T15:32:04Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:automountServiceAccountToken: {}
      f:metadata:
        f:annotations:
          .: {}
          f:iam.gke.io/gcp-service-account: {}
    manager: HashiCorp
    operation: Update
    time: "2021-07-27T15:32:04Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:secrets:
        .: {}
        k:{"name":"tk-accurate-sloth-cas-token-xkkmr"}:
          .: {}
          f:name: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-07-27T15:32:04Z"
  name: tk-accurate-sloth-cas
  namespace: cert-manager
  resourceVersion: "5319"
  selfLink: /api/v1/namespaces/cert-manager/serviceaccounts/tk-accurate-sloth-cas
  uid: 1ccd9e55-2107-4d7c-9d84-d17493df3da2
secrets:
- name: tk-accurate-sloth-cas-token-xkkmr
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    meta.helm.sh/release-name: google-cas-issuer
    meta.helm.sh/release-namespace: cert-manager
  creationTimestamp: "2021-07-27T15:32:12Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
      f:rules: {}
    manager: Go-http-client
    operation: Update
    time: "2021-07-27T15:32:12Z"
  name: cert-manager-controller-approve:casissuer
  resourceVersion: "5402"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/cert-manager-controller-approve%3Acasissuer
  uid: 92c4785d-98c8-41dd-9976-4e97c3e9f76f
rules:
- apiGroups:
  - cert-manager.io
  resourceNames:
  - googlecasclusterissuers.cas-issuer.jetstack.io/*
  - googlecasissuers.cas-issuer.jetstack.io/*
  resources:
  - signers
  verbs:
  - approve
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    meta.helm.sh/release-name: google-cas-issuer
    meta.helm.sh/release-namespace: cert-manager
  creationTimestamp: "2021-07-27T15:32:13Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
      f:roleRef:
        f:apiGroup: {}
        f:kind: {}
        f:name: {}
      f:subjects: {}
    manager: Go-http-client
    operation: Update
    time: "2021-07-27T15:32:13Z"
  name: cert-manager-controller-approve:casissuer
  resourceVersion: "5404"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cert-manager-controller-approve%3Acasissuer
  uid: fbbf4c96-dc7d-431e-a68e-9627bdae81d0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cert-manager-controller-approve:casissuer
subjects:
- kind: ServiceAccount
  name: tk-accurate-sloth-cm
  namespace: cert-manager
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    meta.helm.sh/release-name: google-cas-issuer
    meta.helm.sh/release-namespace: cert-manager
  creationTimestamp: "2021-07-27T15:32:12Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
      f:rules: {}
    manager: Go-http-client
    operation: Update
    time: "2021-07-27T15:32:12Z"
  name: google-cas-issuer-role
  resourceVersion: "5401"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/google-cas-issuer-role
  uid: fb4ae0cf-a5be-4235-a246-2a46a2b0f638
rules:
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - cas-issuer.jetstack.io
  resources:
  - googlecasclusterissuers
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - cas-issuer.jetstack.io
  resources:
  - googlecasclusterissuers/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - cas-issuer.jetstack.io
  resources:
  - googlecasissuers
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - cas-issuer.jetstack.io
  resources:
  - googlecasissuers/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - cert-manager.io
  resources:
  - certificaterequests
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups:
  - cert-manager.io
  resources:
  - certificaterequests/status
  verbs:
  - get
  - patch
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    meta.helm.sh/release-name: google-cas-issuer
    meta.helm.sh/release-namespace: cert-manager
  creationTimestamp: "2021-07-27T15:32:13Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
      f:roleRef:
        f:apiGroup: {}
        f:kind: {}
        f:name: {}
      f:subjects: {}
    manager: Go-http-client
    operation: Update
    time: "2021-07-27T15:32:13Z"
  name: google-cas-issuer-rolebinding
  resourceVersion: "5403"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/google-cas-issuer-rolebinding
  uid: af8b5443-b4bd-40ce-a467-734abebea7f7
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: google-cas-issuer-role
subjects:
- kind: ServiceAccount
  name: tk-accurate-sloth-cas
  namespace: cert-manager
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    meta.helm.sh/release-name: google-cas-issuer
    meta.helm.sh/release-namespace: cert-manager
  creationTimestamp: "2021-07-27T15:32:13Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
      f:rules: {}
    manager: Go-http-client
    operation: Update
    time: "2021-07-27T15:32:13Z"
  name: leader-election-role
  namespace: cert-manager
  resourceVersion: "5405"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/cert-manager/roles/leader-election-role
  uid: 4a2c8d34-186b-40bc-b04a-ddc734867f08
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - configmaps/status
  verbs:
  - get
  - update
  - patch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
  - get
  - list
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    meta.helm.sh/release-name: google-cas-issuer
    meta.helm.sh/release-namespace: cert-manager
  creationTimestamp: "2021-07-27T15:32:13Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
      f:roleRef:
        f:apiGroup: {}
        f:kind: {}
        f:name: {}
      f:subjects: {}
    manager: Go-http-client
    operation: Update
    time: "2021-07-27T15:32:13Z"
  name: leader-election-rolebinding
  namespace: cert-manager
  resourceVersion: "5407"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/cert-manager/rolebindings/leader-election-rolebinding
  uid: 11b00b94-9a71-4022-83d7-96c1ec19b4dc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-election-role
subjects:
- kind: ServiceAccount
  name: tk-accurate-sloth-cas
  namespace: cert-manager
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: google-cas-issuer
    meta.helm.sh/release-namespace: cert-manager
  creationTimestamp: "2021-07-27T15:32:13Z"
  generation: 1
  labels:
    app: cert-manager-google-cas-issuer
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app: {}
          f:app.kubernetes.io/managed-by: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector:
          f:matchLabels:
            .: {}
            f:app: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"google-cas-issuer"}:
                .: {}
                f:args: {}
                f:command: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources:
                  .: {}
                  f:limits:
                    .: {}
                    f:cpu: {}
                    f:memory: {}
                  f:requests:
                    .: {}
                    f:cpu: {}
                    f:memory: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:serviceAccount: {}
            f:serviceAccountName: {}
            f:terminationGracePeriodSeconds: {}
    manager: Go-http-client
    operation: Update
    time: "2021-07-27T15:32:13Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-07-27T15:32:18Z"
  name: google-cas-issuer
  namespace: cert-manager
  resourceVersion: "5464"
  selfLink: /apis/apps/v1/namespaces/cert-manager/deployments/google-cas-issuer
  uid: 9e4bf84f-0bb5-4709-b60a-2e1426e6213d
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: cert-manager-google-cas-issuer
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: cert-manager-google-cas-issuer
    spec:
      containers:
      - args:
        - --enable-leader-election
        - --zap-devel=true
        command:
        - /google-cas-issuer
        image: quay.io/jetstack/cert-manager-google-cas-issuer:0.5.2
        imagePullPolicy: IfNotPresent
        name: google-cas-issuer
        resources:
          limits:
            cpu: 100m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: tk-accurate-sloth-cas
      serviceAccountName: tk-accurate-sloth-cas
      terminationGracePeriodSeconds: 10
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-07-27T15:32:18Z"
    lastUpdateTime: "2021-07-27T15:32:18Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-07-27T15:32:13Z"
    lastUpdateTime: "2021-07-27T15:32:18Z"
    message: ReplicaSet "google-cas-issuer-98f47c5f9" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
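While debugging an approval deadlock like this, one way to isolate the problem to approval RBAC is to approve the stuck CertificateRequest by hand with cmctl (the cert-manager CLI); a sketch using the request name and namespace from the logs above, assuming cmctl is installed:

```shell
# Manually approve the pending CertificateRequest. If the certificate
# is then issued, the rest of the pipeline works and the fault is in
# the approver's RBAC or service account identity.
cmctl approve -n default tk-accurate-sloth-mtls-2qmlg \
  --reason "Manual" --message "Approved manually while debugging RBAC"
```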

IssuerNotFound with ClusterIssuer and nginx on GKE

I am having an issue configuring google-cas-issuer in GKE.

Here is the script I used to configure cert-manager and google-cas-issuer:

kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.4.0 \
  --set installCRDs=true

kubectl apply -f https://github.com/jetstack/google-cas-issuer/releases/download/v0.5.2/google-cas-issuer-v0.5.2.yaml

(I have also configured workload identity)

And here is the YAML configuration for Kubernetes:

apiVersion: cas-issuer.jetstack.io/v1beta1
kind: GoogleCASClusterIssuer
metadata:
  name: cert-app-issuercluster
spec:
  project: (my project id here)
  location: europe-west1
  caPoolId: www-my-app
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cert-my-app
spec:
  secretName: my-app-tls
  duration: 24h
  renewBefore: 8h
  commonName: my.app
  dnsNames:
    - my.app
  issuerRef:
    group: cas-issuer.jetstack.io
    kind: GoogleCASClusterIssuer
    name: cert-app-issuercluster
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-all
  annotations:
    cert-manager.io/cluster-issuer: cert-app-issuercluster
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: my.app
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service: 
            name: angular-app
            port: 
              number: 80
  tls:
  - hosts:
    - my.app
    secretName: my-app-tls

certificate request description:

Normal IssuerNotFound 19s (x5 over 19s) cert-manager Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "cert-app-issuercluster" not found

kubectl get googlecasclusterissuers output:

NAME READY REASON MESSAGE
cert-app-issuercluster True CASClientOK Successfully constructed CAS client

Is there an issue with my configuration? (I have replaced host names with my.app)
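For reference: the `cert-manager.io/cluster-issuer` annotation makes cert-manager's ingress-shim look for a cert-manager `ClusterIssuer`, which would explain the IssuerNotFound event even though the GoogleCASClusterIssuer itself is ready. External issuers need the issuer kind and group annotations as well; a sketch reusing the names from the manifests above:

```yaml
metadata:
  annotations:
    # Reference an external (non-cert-manager.io) cluster-scoped issuer:
    cert-manager.io/issuer: cert-app-issuercluster
    cert-manager.io/issuer-kind: GoogleCASClusterIssuer
    cert-manager.io/issuer-group: cas-issuer.jetstack.io
    kubernetes.io/ingress.class: "nginx"
```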

Cannot find helm chart in jetstack charts repo

I am trying to install the chart, following the instructions in the readme, but I cannot find the chart where it says it should be. As you can see, even after updating the repo (which --force-update should have done anyway), I cannot find a single issuer chart in the repo. Am I missing something here?

$ helm repo add jetstack https://charts.jetstack.io --force-update
"jetstack" has been added to your repositories

$ helm repo up
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "jetstack" chart repository
...
Update Complete. ⎈Happy Helming!⎈

$ helm search repo issuer --devel
No results found
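If the chart has since been published (it is named `cert-manager-google-cas-issuer` in the install attempts earlier on this page), searching by the full chart name may turn it up; a sketch:

```shell
# Refresh the jetstack repo index, then search for the chart by name,
# including pre-release versions.
helm repo update jetstack
helm search repo jetstack/cert-manager-google-cas-issuer --devel
```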
