
Integrate external secret management systems with Kubernetes

License: MIT License

kubernetes secrets-management aws aws-secrets-manager vault hashicorp kubernetes-external-secrets secrets-manager

kubernetes-external-secrets's Introduction


External Secrets


External Secrets Operator is a Kubernetes operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, IBM Cloud Secrets Manager, Akeyless, CyberArk Conjur, Pulumi ESC and many more. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret.
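
For illustration, a minimal ExternalSecret sketch (the store name, target Secret name, and remote key below are placeholders, not taken from any real setup):

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: example
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: my-secret-store        # placeholder SecretStore pointing at, e.g., AWS Secrets Manager
    kind: SecretStore
  target:
    name: example-secret         # Kubernetes Secret the operator creates and keeps in sync
  data:
    - secretKey: password        # key inside the generated Kubernetes Secret
      remoteRef:
        key: prod/example/credentials   # secret in the external backend
        property: password              # field within that secret's value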

Multiple people and organizations are joining efforts to create a single External Secrets solution based on existing projects. If you are curious about the origins of this project, check out this issue and this PR.

Documentation

External Secrets Operator guides and reference documentation are available at external-secrets.io. Also see our stability and support policy.

Contributing

We welcome and encourage contributions to this project! Please read the Developer and Contribution process guides. Also make sure to check the Code of Conduct and adhere to its guidelines.

Sponsoring

Please consider sponsoring this project. There are many ways you can help: engineering time, providing infrastructure, donating money, etc. We are open to cooperation; feel free to approach us and we can discuss what this could look like. We can keep your contribution anonymized if required (depending on the type of contribution), and anonymous donations are possible through Opencollective.

Bi-weekly Development Meeting

We host our development meeting every odd Wednesday on Jitsi. We run the meeting at alternating times, 8:00 PM and 1:00 PM Berlin time; we announce the time in our Kubernetes Slack channel. Meeting notes are recorded on hackmd.

Anyone is welcome to join. Feel free to ask questions, request feedback, raise awareness for an issue, or just say hi. ;)

Security

Please report vulnerabilities by email to [email protected]. Also see our SECURITY.md file for details.

Software Bill of Materials

We attach an SBOM and a provenance file to each GitHub release; they are also attached to the container images.

Adopters

Please create a PR and add your company or project to our ADOPTERS.md file if you are using our project!

Roadmap

You can find the roadmap in our documentation: https://external-secrets.io/main/contributing/roadmap/

Kicked off by

Sponsored by

License


kubernetes-external-secrets's People

Contributors

aabouzaid, arruzk, bchrobot, davidcorbin, davidholsgrove, dependabot[bot], ericabramov, flydiverny, greenkeeper[bot], jacopodaeli, jdamata, justinas-b, jxpearce-godaddy, keweilu, klu6-godaddy, lukaszbudnik, mailtokun, megakid, moolen, muenchhausen, nbendafi-yseop, nick-triller, pluies, renovate[bot], rimitchell, sbose78, silasbw, snyk-bot, vladlosev, yagrxu


kubernetes-external-secrets's Issues

Useless error message: cannot load secret marked for deletion

You don't say WHICH secret! :)

 {"level":50,"time":1562830012963,"pid":17,"hostname":"dev-external-secrets-kubernetes-external-secrets-794fd9688z2x2n","type":"Error","stack":"InvalidRequest │
│ Exception: You can’t perform this operation on the secret because it was marked for deletion.\n    at Request.extractError (/app/node_modules/aws-sdk/lib/pro │
│ tocol/json.js:51:27)\n    at Request.callListeners (/app/node_modules/aws-sdk/lib/sequential_executor.js:106:20)\n    at Request.emit (/app/node_modules/aws- │
│ sdk/lib/sequential_executor.js:78:10)\n    at Request.emit (/app/node_modules/aws-sdk/lib/request.js:683:14)\n    at Request.transition (/app/node_modules/aw │
│ s-sdk/lib/request.js:22:10)\n    at AcceptorStateMachine.runTo (/app/node_modules/aws-sdk/lib/state_machine.js:14:12)\n    at /app/node_modules/aws-sdk/lib/s │
│ tate_machine.js:26:10\n    at Request.<anonymous> (/app/node_modules/aws-sdk/lib/request.js:38:9)\n    at Request.<anonymous> (/app/node_modules/aws-sdk/lib/ │
│ request.js:685:12)\n    at Request.callListeners (/app/node_modules/aws-sdk/lib/sequential_executor.js:116:18)","message":"You can’t perform this operation o │
│ n the secret because it was marked for deletion.","code":"InvalidRequestException","time":"2019-07-11T07:26:52.963Z","requestId":"cf010d22-7f47-435f-9acb-19e │
│ 0044a384c","statusCode":400,"retryable":false,"retryDelay":31.423678272763624,"msg":"You can’t perform this operation on the secret because it was marked for │
│  deletion.","v":1}                                                                                                                                            │
│ {"level":30,"time":15

Cross account secret retrieval

Hi. I'm trying to use kubernetes-external-secrets to access some secrets from another AWS account. The trust and secret retrieval work as expected using the AWS CLI, but it fails with kubernetes-external-secrets with the error:

{"level":50,"time":1559737055540,"pid":16,"hostname":"kubernetes-external-secrets-68cd796f7b-fjp67","msg":"failure while polling the secrets {\"message\":\"Invalid name. Must be a valid name containing alphanumeric characters, or any of the following: -/_+=.@!\",\"code\":\"ValidationException\",\"time\":\"2019-06-05T12:17:35.540Z\",\"requestId\":\"884b173f-xxxx-xxxx-xxxx-f916def6b609\",\"statusCode\":400,\"retryable\":false,\"retryDelay\":84.08545324727217}","v":1}

My deployment file looks like below:

---
apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: test-secret
secretDescriptor:
  backendType: secretsManager
  data:
    - key: "arn:aws:secretsmanager:eu-west-2:111111111110:secret"
      name: "secret100"

The aws cli pull looks like below:

aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:eu-west-2:111111111110:secret:jenkins100 --region eu-west-2


{
    "Name": "secret100",
    "VersionId": "3397d6e1-xxxx-xxxx-xxxx-293cd1ea6f02",
    "SecretString": "mySecurePassword",
    "VersionStages": [
        "AWSCURRENT"
    ],
    "CreatedDate": 1559259096.015,
    "ARN": "arn:aws:secretsmanager:eu-west-2:111111111110:secret:secret100-jNpM3x"
}

Can I please have some help with this? Am I doing something wrong?

Thanks

Avoid putting secrets to ETCD

The current implementation creates a Kubernetes Secret (stored in etcd) containing the base64-encoded secret value, which may potentially leak later.

Allow putting secrets in a volume instead, e.g. add a mutating webhook that injects an initContainer, which fetches the secrets and writes them to a volume shared with the container.

Pros:

  • no secrets in etcd
  • an IAM role can be granularly assigned to each initContainer (based on a namespace annotation, for example) when using https://github.com/uswitch/kiam , so there is no need to give the controller access to all secrets

AccessKey and SecretKey location

Does anybody know where to put the access_key and secret_access_key? I am getting the error below:

{"level":50,"time":1561451711429,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"failure while polling the secrets","v":1}
{"level":50,"time":1561451711429,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","type":"Error","stack":"CredentialsError: Missing credentials in config\n at IncomingMessage.<anonymous> (/app/node_modules/aws-sdk/lib/util.js:865:34)\n at IncomingMessage.emit (events.js:194:15)\n at IncomingMessage.EventEmitter.emit (domain.js:441:20)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)","message":"Missing credentials in config","retryable":false,"time":"2019-06-25T08:35:11.429Z","code":"CredentialsError","originalError":{"message":"Could not load credentials from any providers","retryable":false,"time":"2019-06-25T08:35:11.429Z","code":"CredentialsError"},"msg":"Missing credentials in config","v":1}
{"level":30,"time":1561451721412,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"stopping and removing poller aws-s3-test-inside-k8s_242636","v":1}
{"level":30,"time":1561451721412,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"stopping poller","v":1}
{"level":30,"time":1561451721412,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"spinning up poller {\"id\":\"aws-s3-test-inside-k8s_242636\",\"namespace\":\"default\",\"secretDescriptors\":[{\"backendType\":\"secretsManager\",\"data\":[{\"key\":\"aws-s3-test-inside-k8s\",\"name\":\"aws-s3-test-inside-k8s\"}],\"name\":\"aws-s3-test-inside-k8s\"}],\"ownerReference\":{\"apiVersion\":\"kubernetes-client.io/v1\",\"controller\":true,\"kind\":\"ExternalSecret\",\"name\":\"aws-s3-test-inside-k8s\",\"uid\":\"cef0de83-9702-11e9-a5e5-42010a8000aa\"}}","v":1}
{"level":30,"time":1561451721412,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"starting poller","v":1}
{"level":30,"time":1561451731421,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"running poll","v":1}
{"level":30,"

Support for IAM Roles for self managed cluster with Kiam

Hey Team,

I am working with IAM roles to access AWS services. I am building a self-managed cluster on AWS with the help of the kops tool. All my app deployments use IAM roles to access AWS services via the Kiam service.

I have added the AWS role to the deployment configuration as follows.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: kubernetes-external-secrets
  name: kubernetes-external-secrets
  namespace: kubernetes-external-secrets
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kubernetes-external-secrets
  template:
    metadata:
      labels:
        name: kubernetes-external-secrets
        service: kubernetes-external-secrets
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::12312312312:role/Development-Secrets-Manager-Role
    spec:
      serviceAccountName: kubernetes-external-secrets-service-account
      containers:
        - image: "godaddy/kubernetes-external-secrets:1.2.2"
          imagePullPolicy: Always
          name: kubernetes-external-secrets
          env:
            - name: AWS_REGION
              value: ap-southeast-1

It doesn't work; it throws errors about being unable to find credentials. However, it works if I provide AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables in the deployment.

Could you please help or advise if I am doing something wrong?

SSM parameters not working

I am trying out the recently added SSM support (thank you @tmxak); however, the examples in the README are not working for me. I installed via Helm, overriding the image tag to 1.3.1 since the chart still installs 1.2.0 by default.

Using the commands and files from the README as-is, I see the following in the logs:

{"level":30,"time":1564662741812,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"loading kube specs","v":1} {"level":30,"time":1564662742016,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"successfully loaded kube specs","v":1} {"level":30,"time":1564662742016,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"updating CRD","v":1} {"level":30,"time":1564662742016,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"Upserting custom resource externalsecrets.kubernetes-client.io","v":1} {"level":30,"time":1564662742088,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"successfully updated CRD","v":1} {"level":30,"time":1564662742088,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"starting app","v":1} Thu, 01 Aug 2019 12:32:22 GMT kubernetes-client deprecated .getStream see https://github.com/godaddy/kubernetes-client/blob/master/merging-with-kubernetes.md at lib/external-secret.js:40:10 {"level":30,"time":1564662742091,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"successfully started app","v":1} {"level":30,"time":1564662842812,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"spinning up poller {\"id\":\"a4e32d63-b458-11e9-97b5-0ed027fcb918\",\"namespace\":\"tmp-jst\",\"secretDescriptor\":{\"backendType\":\"systemManager\",\"data\":[{\"key\":\"/hello-service/password\",\"name\":\"password\"}],\"name\":\"hello-service\"},\"ownerReference\":{\"apiVersion\":\"kubernetes-client.io/v1\",\"controller\":true,\"kind\":\"ExternalSecret\",\"name\":\"hello-service\",\"uid\":\"a4e32d63-b458-11e9-97b5-0ed027fcb918\"}}","v":1} {"level":30,"time":1564662842815,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"starting poller","v":1} {"level":30,"time":1564662852820,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"running poll","v":1} {"level":30,"time":1564662852821,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"fetching secret property password","v":1} {"level":50,"time":1564662853292,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"failure while polling the secrets","v":1} {"level":50,"time":1564662853292,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","type":"Error","stack":"ParameterNotFound: null\n at Request.extractError (/app/node_modules/aws-sdk/lib/protocol/json.js:51:27)\n at Request.callListeners (/app/node_modules/aws-sdk/lib/sequential_executor.js:106:20)\n at Request.emit (/app/node_modules/aws-sdk/lib/sequential_executor.js:78:10)\n at Request.emit (/app/node_modules/aws-sdk/lib/request.js:683:14)\n at Request.transition (/app/node_modules/aws-sdk/lib/request.js:22:10)\n at AcceptorStateMachine.runTo (/app/node_modules/aws-sdk/lib/state_machine.js:14:12)\n at /app/node_modules/aws-sdk/lib/state_machine.js:26:10\n at Request.<anonymous> (/app/node_modules/aws-sdk/lib/request.js:38:9)\n at Request.<anonymous> (/app/node_modules/aws-sdk/lib/request.js:685:12)\n at Request.callListeners (/app/node_modules/aws-sdk/lib/sequential_executor.js:116:18)","message":null,"code":"ParameterNotFound","time":"2019-08-01T12:34:13.289Z","requestId":"6a949ea3-ef3d-4bca-89fe-aa47c309c4b8","statusCode":400,"retryable":false,"retryDelay":58.61510348117784,"msg":null,"v":1}

The parameter exists

aws ssm get-parameter --name "/hello-service/password"

{ "Parameter": { "Name": "/hello-service/password", "Type": "String", "Value": "1234", "Version": 2, "LastModifiedDate": 1564664235.796, "ARN": "arn:aws:ssm:us-east-1:XXXX:parameter/hello-service/password" } }

I confirmed the IAM role is working by revoking its access to ssm:GetParameter, which results in the following in the external-secrets logs:
"msg":"User: arn:aws:sts::XXXX:assumed-role/k8s-parameter_store_readonly/kiam-kiam is not authorized to perform: ssm:GetParameter on resource: arn:aws:ssm:us-west-2:XXXX:parameter/hello-service/password"

What is EVENTS_INTERVAL_MILLISECONDS?

The configuration documentation in the README and the Helm chart refers to an EVENTS_INTERVAL_MILLISECONDS setting, but nothing in the code seems to use it?

Option to "restart" Pods when ExternalSecrets are updated

It is useful to reload the process running inside a container when an ExternalSecret is updated (e.g., username/password rotation). An API to facilitate this feature has been discussed at length in other places (e.g., kubernetes/kubernetes#24957). In the meantime, there are some hackish approaches that might work well enough:

  • Add an option to ExternalSecrets to declare that Pods should be deleted after an ExternalSecret update (not safe for all types of Pod usage)
  • Add an option to exec a CLI tool in all containers that depend on the ExternalSecret/Secret (exposing exec to the controller seems like a step backwards for security).
  • ...

The first step in making progress on this issue is probably to think through the options above (and maybe others) and augment this issue with a proposal for how to proceed.

Changelog for 1.3.1

It seems like the changelog could use some more details for the changes in 1.3.1 from #107 (which should probably have been a 1.4.0 as well).
Since the PR was squashed, only the refactor made it into the changelog, even though it included a few other things worth listing.

Will this work on Google Container Engine (GKE) Kubernetes?

On EKS I can just use roles, but on GKE I can't do that.

Can I use the default AWS environment variables to set the access key ID and secret access key, so I can create an IAM user my GKE clusters can use to get access to Secrets Manager?
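
For illustration, a hedged sketch of injecting static IAM user credentials into the controller Deployment on GKE (the Secret name below is a placeholder and would need to contain AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY keys):

# Sketch only: "aws-credentials" is a hypothetical Secret created from the IAM user's keys.
spec:
  containers:
    - name: kubernetes-external-secrets
      image: godaddy/kubernetes-external-secrets:1.2.2
      env:
        - name: AWS_REGION
          value: us-east-1       # example region
      envFrom:
        - secretRef:
            name: aws-credentials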

Support to have IAM user Assume a Role when retrieving secret data

External Secrets could better serve multi-account setups if it allowed the configured IAM user to retrieve secrets not only directly, but also by assuming an IAM role.

The following example goes so far as to allow you to specify a role for each value retrieval. All retrievals within a single secret will probably come from the same AWS account, but this suggestion allows for maximum flexibility in case you need a secret composed of values stored in multiple accounts. The username/password example would never really be split like that, but I kept the basic example. Even just having a roleArn field that is used for the entire ExternalSecret would be fine. The IAM user would just have to have permission to assume these roles and you're good.

apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: hello-service
secretDescriptor:
  backendType: secretsManager
  data:
    - key: hello-service/credentials
      name: password
      property: password
      roleArn: arn:aws:iam::123456789012:role/somerole
    - key: hello-service/credentials
      name: username
      property: username
      roleArn: arn:aws:iam::210987654321:role/somerole

The default behavior, should roleArn not be present, should be to just use the IAM creds directly without assuming a role.

OR

apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: hello-service
secretDescriptor:
  backendType: secretsManager
  roleArn: arn:aws:iam::123456789012:role/somerole
  data:
    - key: hello-service/credentials
      name: password
      property: password
    - key: hello-service/credentials
      name: username
      property: username

Feature request: ability to generate resulting secrets in a file format

In the example provided in your readme, the resulting secret generated by an external secret looks like so:

apiVersion: v1
kind: Secret
metadata:
  name: hello-service
type: Opaque
data:
  password: MTIzNA==

However, if you try to mount this secret object into a pod, what will happen is that the secret will get mounted as a directory, containing a file called password. If there were 2 keys that you pulled from ASM - e.g., username and password - then the directory would contain 2 files, username, and password.

This isn't really ideal from a usability standpoint. It would be preferable to generate a single file, say config.txt, that contains the 2 keys and values inside of it. E.g.:

username=foo
password=bar

This can be accomplished with Kubernetes Secrets by using the "stringData" functionality as described here. E.g.:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |-
    apiUrl: "https://my.api.com/api/v1"
    username: {{username}}
    password: {{password}}

This would make for a better use case, as it would then be possible to just mount a single file (config.yaml in this case) into a container.

For bonus points, it would additionally be nice to be able to specify the format of the resulting file - e.g., txt, json, etc.

Inconsistent error handling for JSON secrets

When using JSON-formatted secrets and selecting properties in them, the error handling seems inconsistent.

https://github.com/godaddy/kubernetes-external-secrets/blob/d04cf1de5e3793522f3d46a9b9be9f25413e5f15/lib/backends/kv-backend.js#L34-L41

If there's a parsing error it's just swallowed with a warning log, while if the property doesn't exist it goes boom. Shouldn't these be dealt with in the same way? Not being able to parse the secret seems worse to me than not finding the correct property, though they are probably equally big issues 😄

Unable to Fetch SSM Parameter

Hi, I'm attempting to use External Secrets to fetch a parameter (SecureString) stored in Systems Manager, but I am met with this error message:

{"level":50,"time":1563397164220,"pid":16,"hostname":"external-secrets-766d65c4d9-frzb4","type":"Error","stack":"TypeError [ERR_INVALID_ARG_TYPE]: The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type undefined\n at Function.from (buffer.js:207:11)\n at externalData.forEach (/app/lib/backends/kv-backend.js:72:43)\n at Array.forEach ()\n at SystemManagerBackend.getSecretManifestData (/app/lib/backends/kv-backend.js:71:18)\n at process._tickCallback (internal/process/next_tick.js:68:7)","msg":"The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type undefined","v":1}

My secrets YAML is defined as

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: test-token
  namespace: default
secretDescriptor:
  backendType: systemManager
  data:
    - key: /path/to/test
      name: test

Fetching secrets from SecretsManager works as expected, but when attempting to fetch Parameters from SSM I am met with the error above.

Admission webhook server

Create an admission webhook server for kubernetes-external-secrets. This is part of the effort to avoid putting Secrets in ETCD.

With an admission webhook server we can use mutating admission webhooks to interpose on Pod creation and inject external secret data by adding an init container to Pods.

This issue tracks work to get a basic mutating admission webhook server running (a rough registration sketch follows the list below):

  • interposes on requests to create new Pods
  • identifies Pods with a .metadata.annotations['externalsecrets.kubernetes-client.io'] annotation (value can be anything, see Volume front end API discussion).
  • adds an init container that echoes 'hello world'
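
As a rough sketch (the service, path, and names here are hypothetical, not an existing component), the webhook might be registered with a configuration like this:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: externalsecrets-pod-injector
webhooks:
  - name: pods.externalsecrets.kubernetes-client.io
    clientConfig:
      service:
        name: kubernetes-external-secrets-webhook   # hypothetical webhook Service
        namespace: kubernetes-external-secrets
        path: /mutate
      caBundle: <base64-encoded CA certificate>
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    failurePolicy: Ignore   # don't block Pod creation if the webhook is down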

Init Container / Volume frontend

Implement a merge-patch in the admission webhook server to inject an Init Container that implements the "memory" API. The Init Container example might provide some useful implementation details.

  • End-to-end implementation of "memory" frontend
  • Update documentation for using the memory frontend

After completing this work, kubernetes-external-secrets should have full support for a memory front end (see #46).

Failure while polling the secrets

I'm trying to get kubernetes-external-secrets to work, but it's currently failing and the error logs don't seem to show any useful clues as to why it's happening:

2019-05-16T00:22:18.787878009Z {"level":30,"time":1557966138787,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"running poll","v":1}
2019-05-16T00:22:18.78823308Z {"level":30,"time":1557966138788,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"fetching secret property github_access_token","v":1}
2019-05-16T00:22:18.882898059Z {"level":50,"time":1557966138879,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"failure while polling the secrets {}","v":1}
2019-05-16T00:22:28.791813549Z {"level":30,"time":1557966148791,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"running poll","v":1}
2019-05-16T00:22:28.791997103Z {"level":30,"time":1557966148791,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"fetching secret property github_access_token","v":1}
2019-05-16T00:22:28.797970739Z {"level":30,"time":1557966148797,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"stopping and removing poller github-access-token_9063402","v":1}
2019-05-16T00:22:28.798018978Z {"level":30,"time":1557966148797,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"stopping poller","v":1}

Any ideas?

I ssh'd into the pod, created a nodejs test script that fetches the secret, and it does work.

Support for Annotations

Thanks for the great piece of software. I'm looking to use external-secrets alongside kubernetes-replicator and in order to automatically duplicate my external secrets to different namespaces, they need to contain an annotation of:

    replicator.v1.mittwald.de/replication-allowed: "true"
    replicator.v1.mittwald.de/replication-allowed-namespaces: .*

However, when I create my external secret with these annotations, they are not carried over into the kubernetes secret object, preventing them from being replicated.

Please consider passing all/some annotations from the ExternalSecret object to the Secret object.

Steps to reproduce

Create an ExternalSecret with the annotations:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  annotations:
    replicator.v1.mittwald.de/replication-allowed: "true"
    replicator.v1.mittwald.de/replication-allowed-namespaces: .*
  name: datadog
  namespace: default
secretDescriptor:
  backendType: secretsManager
  data:
  - key: datadog
    name: datadog-api-key
    property: datadog-api-key
  - key: datadog
    name: datadog-auth-token
    property: datadog-auth-token

See that the resulting Secret does not contain annotations:

apiVersion: v1
data:
  datadog-api-key: xxx
  datadog-auth-token: xxx
kind: Secret
metadata:
  creationTimestamp: "2019-07-28T16:26:16Z"
  name: datadog
  namespace: default
  ownerReferences:
  - apiVersion: kubernetes-client.io/v1
    controller: true
    kind: ExternalSecret
    name: datadog
    uid: 86859120-affc-11e9-a909-025f2055801a
  resourceVersion: "649857"
  selfLink: /api/v1/namespaces/default/secrets/datadog
  uid: 6c3f8ecf-b154-11e9-a909-025f2055801a
type: Opaque

help: force secrets upserting even though POLLER_INTERVAL_MILLISECONDS is set to a higher interval

Hi guys, we have noticed that if we set POLLER_INTERVAL_MILLISECONDS=86400000 (24 hours), external secrets are not pulled from AWS Secrets Manager and upserted into Secrets immediately after deployment.

Use case: our secrets are DB connection strings and passwords which are not expected to change very often, so we wanted the poller interval to be once a day. We haven't found a way to force the externalsecrets deployment to poll and upsert secrets immediately after a helm upgrade; it waits 24 hours even for the first poll. Please let us know if you need more details. Thanks.

Log message is incomplete

"msg":"fetching secret property KissmetricsToken"

does not give me enough information to debug any issues. It would be helpful to see the full key name in AWS Secrets Manager.

Volume front end API

kubernetes-external-secrets currently has a single "front end" that writes backend data to Secret objects (and Pods consume the Secret objects in the usual ways). The goal of this issue is to define the API for a new frontend that writes data to a Volume. This is part of the effort to avoid putting Secrets in etcd.

The Volume frontend API must allow engineers to declare how kubernetes external secrets writes external secret data to a Volume. One approach is to use annotations on the Pod to identify which volumes in .spec.volumes represent an ExternalSecret:

kind: Pod
metadata:
  annotations:
    externalsecrets.kubernetes-client.io/volume/db-secrets: 'true'
    externalsecrets.kubernetes-client.io/volume/client-secrets: 'true'
spec:
  containers:
  - name: test
    image: busybox
    volumeMounts:
      - name: db-secrets
        mountPath: /db-secrets
      - name: client-secrets
        mountPath: /client-secrets
      - name: other-stuff
        mountPath: /stuff
  volumes:
  - name: db-secrets
    emptyDir:
      medium: "Memory"
  - name: client-secrets
    emptyDir:
      medium: "Memory"
  - name: other-stuff
    configMap:
      name: stuff-config

We should discuss the pros and cons of an approach like this and discuss other potential approaches.

.kube/config missing after kubectl applying external-secrets.yml

After kubectl applying the example deployment file external-secrets.yml, it keeps complaining about not being able to find the kubeconfig file (see log below from pod).

I've seen that in commit #70 this file (that I'm still using) was removed. Is this way of deploying not supported anymore? Is there something else I could check?
The proposed Helm chart is basically just a templated version of this deleted file, so I guess I will get the same type of error.

npm info it worked if it ends with ok
npm info using [email protected]
npm info using [email protected]
npm info lifecycle [email protected]~prestart: [email protected]
npm info lifecycle [email protected]~start: [email protected]
 > [email protected] start /app
> ./bin/daemon.js
 fs.js:115
    throw err;
    ^
 Error: ENOENT: no such file or directory, open '/home/node/.kube/config'
    at Object.openSync (fs.js:439:3)
    at Object.readFileSync (fs.js:344:35)
    at cfgPaths.map.cfgPath (/app/node_modules/kubernetes-client/lib/config.js:221:37)
    at Array.map (<anonymous>)
    at loadKubeconfig (/app/node_modules/kubernetes-client/lib/config.js:220:28)
    at Object.fromKubeconfig (/app/node_modules/kubernetes-client/lib/config.js:62:33)
    at Object.<anonymous> (/app/config/index.js:17:34)
    at Module._compile (internal/modules/cjs/loader.js:689:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
    at Module.load (internal/modules/cjs/loader.js:599:32)
npm info lifecycle [email protected]~start: Failed to exec start script
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: `./bin/daemon.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm timing npm Completed in 523ms
 npm ERR! A complete log of this run can be found in:
npm ERR!     /home/node/.npm/_logs/2019-05-27T07_56_42_702Z-debug.log

edit: for completeness' sake, I converted the deleted file to Terraform code using the kubernetes provider (see below), but I still get the same error.

resource "kubernetes_cluster_role_binding" "ext-secret-cluster-role-binding" {
  metadata {
    name = "kubernetes-external-secrets-cluster-role-binding"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind = "ClusterRole"
    name = "kubernetes-external-secrets-cluster-role"
  }
  subject {
    kind = "ServiceAccount"
    name = "kubernetes-external-secrets-service-account"
    namespace = "kubernetes-external-secrets"
  }
}

resource "kubernetes_cluster_role" "ext-secret-cluster-role" {
  metadata {
    name = "kubernetes-external-secrets-cluster-role"
  }

  rule {
    api_groups = [""]
    resources = ["secrets"]
    verbs = ["create", "update"]
  }
  rule {
    api_groups = ["apiextensions.k8s.io"]
    resources = ["customresourcedefinitions"]
    verbs = ["create"]
  }
  rule {
    api_groups = ["apiextensions.k8s.io"]
    resources = ["customresourcedefinitions"]
    resource_names = ["externalsecrets.kubernetes-client.io"]
    verbs = ["get", "update"]
  }
  rule {
    api_groups = ["kubernetes-client.io"]
    resources = ["externalsecrets"]
    verbs = ["get", "watch", "list"]
  }
}

resource "kubernetes_namespace" "ext-secret-namespace" {
  metadata {
    name = "kubernetes-external-secrets"
  }
}

resource "kubernetes_service_account" "ext-secret-service-account" {
  metadata {
    name = "kubernetes-external-secrets-service-account"
    namespace = "kubernetes-external-secrets"
  }
}

resource "kubernetes_deployment" "ext-secret-deployment" {
  "metadata" {
    labels {
      name = "kubernetes-external-secrets"
    }
    name = "kubernetes-external-secrets"
    namespace = "kubernetes-external-secrets"
  }
  "spec" {
    replicas = "1"
    selector {
      match_labels {
        name = "kubernetes-external-secrets"
      }
    }
    "template" {
      "metadata" {
        labels {
          name = "kubernetes-external-secrets"
          service = "kubernetes-external-secrets"
        }
      }
      "spec" {
        service_account_name = "kubernetes-external-secrets-service-account"
        container {
          name = "kubernetes-external-secrets"
          image_pull_policy = "Always"
          image = "godaddy/kubernetes-external-secrets:1.2.0"
        }
      }
    }
  }
}

No way to install via kubectl?

I was directed here from this blog: https://godaddy.github.io/2019/04/16/kubernetes-external-secrets/

I was looking into using this tool but noticed that recently deploying via kubectl was removed and helm is now the official way to install. Is there some type of workaround or plans to re-introduce deploying with kubectl? Having to install helm just to install this tool is a real bummer.

PR where helm was setup as the official way to install: #68
When external-secrets.yml was removed: #70
Someone else asked a similar question here: #72

AWS Region should be required

The config defaults to us-west-2 for the AWS region, which seems arbitrary and caused some pain while troubleshooting. Instead, the AWS region should be set explicitly in the deployment. The documentation should be updated to specify where to modify the value if needed.

Parameterize the `type` of the Secret manifest

The fstab/cifs FlexVolume plugin (among others) requires secrets to be created with type: fstab/cifs. However, the upserted secrets are all created with type: Opaque. This results in errors such as:

MountVolume.SetUp failed for volume "foo-volume-mount" : Couldn't get secret foo-project/volume-mount-secret. err: Cannot get secret of type fstab/cifs

It would be helpful to parameterize the type used in the final manifest, and allow that to flow through to the poller as needed.
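
One possible shape for this, purely as an illustration of the request (the type field below is the proposal, not an existing option):

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: volume-mount-secret
secretDescriptor:
  type: fstab/cifs                         # proposed: copied into the generated Secret's type field
  backendType: secretsManager
  data:
    - key: foo-project/cifs-credentials    # placeholder backend secret
      name: username
      property: username
    - key: foo-project/cifs-credentials
      name: password
      property: password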

no_proxy environment variable is not getting honored

Hi - I am mounting environment variables as


spec:
      serviceAccountName: kubernetes-external-secrets-service-account
      containers:
        - image: "godaddy/kubernetes-external-secrets:1.2.0"
          imagePullPolicy: Always
          name: kubernetes-external-secrets
          envFrom:
            - configMapRef:
                name: proxy-environment-variables

My proxy-environment-variable config map is as follows


apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kubernetes-external-secrets
data:
  HTTPS_PROXY: my_redacted_proxy
  HTTP_PROXY: my_redacted_proxy
  http_proxy: my_redacted_proxy
  https_proxy: my_redacted_proxy
  NO_PROXY: my_redacted_no_proxy urls
  no_proxy: my_redacted_no_proxy urls

One of the no_proxy CIDRs is 10.100.0.0/16.

When I deploy, the pod exits with the following message:

{
    "log": "Error: Failed to get /openapi/v2 and /swagger.json: tunneling socket could not be established, statusCode=503
",
    "stream": "stderr",
    "docker": {
        "container_id": "052ffccfdf5a55a7cc22f58e5f9f01cc5e028804667962810cfc50c6573f9acc"
    },
    "kubernetes": {
        "container_name": "kubernetes-external-secrets",
        "namespace_name": "kubernetes-external-secrets",
        "pod_name": "kubernetes-external-secrets-775d45f74d-bjr55",
        "container_image": "godaddy/kubernetes-external-secrets:1.2.0",
        "container_image_id": "docker-pullable://godaddy/kubernetes-external-secrets@sha256:6439feeef5602ed0bd3fd635f0e403c1274b3198c9a59526c84a3437152ae245",
        "pod_id": "addbacb1-71b7-11e9-b8cb-02ef4f9bf35c",
        "labels": {
            "name": "kubernetes-external-secrets",
            "pod-template-hash": "775d45f74d",
            "service": "kubernetes-external-secrets"
        },
        "host": "ip-172-25-130-225.us-east-2.compute.internal",
        "master_url": "https://10.100.0.1:443/api",
        "namespace_id": "ad7f4034-71b7-11e9-b8cb-02ef4f9bf35c"
    }
}

I tailed the proxy logs and found that these calls are going over the proxy:

1557340191.194 59985 172.25.130.225 TAG_NONE/503 0 CONNECT 10.100.0.1:443 - HIER_NONE/- -
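
A hedged workaround, assuming the Node.js proxy handling does not understand CIDR ranges in NO_PROXY: list the API service address from the log above (10.100.0.1) explicitly alongside the CIDR, e.g.:

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kubernetes-external-secrets
data:
  HTTPS_PROXY: my_redacted_proxy
  HTTP_PROXY: my_redacted_proxy
  https_proxy: my_redacted_proxy
  http_proxy: my_redacted_proxy
  # Add the API server's ClusterIP explicitly in addition to the CIDR.
  NO_PROXY: "10.100.0.1,10.100.0.0/16,my_redacted_no_proxy_urls"
  no_proxy: "10.100.0.1,10.100.0.0/16,my_redacted_no_proxy_urls"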

Allow option to restart if do not have permission to access secrets

If the kiam service is not running, then the external secrets will start without permission to access secrets.

At present, external-secrets will log a message.

If kiam then starts, the external-secrets pods will retain the old, incorrect permissions.

To enable the system to self-correct, please add an option that makes the external-secrets service restart if it does not have permission to access secrets.

Clearer Documentation on property and name

The README shows the following section:

- key: hello-service/credentials
  name: username
  property: username

Please make clear what name is and what property is. Maybe it would help to change username to username_aws and username_kubernetes in the example. Thanks in advance.
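
For reference, a commented sketch of how the three fields relate for the secretsManager backend (this assumes hello-service/credentials is a JSON secret containing a username field):

secretDescriptor:
  backendType: secretsManager
  data:
    - key: hello-service/credentials   # id of the secret in AWS Secrets Manager
      property: username               # field inside that secret's JSON value
      name: username                   # key under data in the generated Kubernetes Secret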

Trigger from CloudWatch Events -> SNS topic or Lambda rather than polling

Polling every external secret seems like a lot of unnecessary transactions.

SSM will tell you when a secret is updated via a CloudWatch Event. That event can either:

  1. Pass the event to an SNS topic, which kubernetes-external-secrets is subscribed to
  2. Pass the event to a Lambda function that could make a webhook call to a kubernetes-external-secrets endpoint.

kubernetes-external-secrets would still poll on start-up, and maybe once every hour or two, just in case.
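
For illustration, a hedged CloudFormation sketch of such an event rule (the SNS topic is a placeholder, and the controller-side subscription does not exist today):

Resources:
  ParameterStoreChangeRule:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - aws.ssm
        detail-type:
          - Parameter Store Change
      Targets:
        - Arn: !Ref SecretsUpdateTopic   # placeholder SNS topic for kubernetes-external-secrets to subscribe to
          Id: notify-external-secrets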

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on Greenkeeper branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please click the 'fix repo' button on account.greenkeeper.io.

Create a Helm Chart

To make installation of the kubernetes-external-secrets controller and related resources a bit easier, a helm chart can be added to this repository in the first instance (and hopefully promoted to https://github.com/helm/charts/tree/master/incubator for public consumption).

This Helm Chart can also allow easy configuration of the environment variables for the container, like AWS_REGION and the events / poller intervals (addresses #51).

PodAnnotations can also be used to configure kube2iam roles to grant access to AWS Secrets Manager, for example.

Option to run controller outside cluster

@kelseyhightower suggests a way to improve deployments across multiple clusters (see the attached quote) by running the controller in a control plane outside the cluster. This has resource consumption benefits and potential security benefits (e.g., the Kubernetes cluster doesn't have a role to access Secrets Manager, but a locked-down external control plane does).

You have a few options on how to leverage External Secrets in this context. The obvious approach is to deploy the "control plane" across every cluster. There are pros and cons. While you have the ability to do a canary roll out of the External Secrets control plane across multiple clusters, you also take up compute resource across every cluster, and must store the credentials required for the External Secrets control plane to sync secrets from an external store.

The other option would be to enable the External Secrets control plane to live outside of any individual Kubernetes cluster and work as a "global control plane". In this configuration the External Secrets control plane can push secrets across multiple clusters. Users can still define ExternalSecrets objects in each cluster, but you now have the ability to support "global" ExternalSecrets objects which are replicated to every cluster, or a limited set of clusters and namespaces based on configuration.

You can still host the External Secrets control plane using Kubernetes, but in a "admin" cluster, which is separate from the clusters where normal applications are deployed.

Feature Request: Secret versioning and auto-update

One issue with the current approach is that it is impossible to know which version of a secret is currently deployed to a cluster, or whether the current secret is the one stored in the backing store.

I propose adding an opaque versionID field to the ExternalSecrets CRD (like ContainerSolutions controller) AND (most importantly) a watcher that updates Git (i.e. GitOps) when the backend detects a new version of the secret.

In this way, the secret is kept up to date AND one can tell which version of the secret is stored in any cluster that is watching that GitHub repo.

Restrict controller network access

One potential security improvement is to restrict ingress and egress from the controller using a NetworkPolicy object. The controller needs access only to the secret backends it has been configured with and the local Kubernetes API server.

A first version of adding support for this would be to include a NetworkPolicy manifest in this repo designed for AWS Secrets Manager.
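
A rough sketch of what such a manifest could look like (the pod label is a placeholder; the secret backend and API server are reached over HTTPS):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kubernetes-external-secrets
  namespace: kubernetes-external-secrets
spec:
  podSelector:
    matchLabels:
      name: kubernetes-external-secrets   # placeholder label for the controller pods
  policyTypes:
    - Ingress
    - Egress
  ingress: []                              # the controller accepts no inbound traffic
  egress:
    - ports:                               # DNS lookups for the backend endpoints
        - protocol: UDP
          port: 53
    - ports:                               # Kubernetes API server and AWS Secrets Manager (HTTPS)
        - protocol: TCP
          port: 443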

feature request: create secret with every entry in secret manager key

Would it be possible to add an extra section under External Secrets that will just import all key/values under the stored AWS key and add them to a Kubernetes secret?

I would like to avoid having to explicitly list out every key and property every time I update or add a new key/value to an AWS secret.
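
A hypothetical shape for this, purely illustrative (the dataFrom field below is the suggestion, not something the controller supports at the time of this issue):

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
secretDescriptor:
  backendType: secretsManager
  dataFrom:                          # hypothetical: copy every key/value of the backend secret
    - hello-service/credentials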
