hashicorp / vault-helm

Helm chart to install Vault and other associated components.

License: Mozilla Public License 2.0


vault-helm's Introduction

Vault Helm Chart

โš ๏ธ Please note: We take Vault's security and our users' trust very seriously. If you believe you have found a security issue in Vault Helm, please responsibly disclose by contacting us at [email protected].

This repository contains the official HashiCorp Helm chart for installing and configuring Vault on Kubernetes. This chart supports multiple use cases of Vault on Kubernetes depending on the values provided.

For full documentation on this Helm chart along with all the ways you can use Vault with Kubernetes, please see the Vault and Kubernetes documentation.

Prerequisites

To use the charts here, Helm must be configured for your Kubernetes cluster. Setting up Kubernetes and Helm is outside the scope of this README. Please refer to the Kubernetes and Helm documentation.

The versions required are:

  • Helm 3.6+
  • Kubernetes 1.22+ - This is the earliest version of Kubernetes tested. It is possible that this chart works with earlier versions but it is untested.

Usage

To install the latest version of this chart, add the Hashicorp helm repository and run helm install:

$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories

$ helm install vault hashicorp/vault

Please see the many options supported in the values.yaml file. These are also fully documented directly on the Vault website along with more detailed installation instructions.
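
Chart defaults can be overridden with a custom values file passed to helm install. As a minimal, illustrative sketch (the keys mirror the values.yaml quoted further down this page; adjust them to the chart version you install), an override enabling HA mode with three replicas might look like:

# override-values.yml (illustrative)
server:
  ha:
    enabled: true
    replicas: 3

ui:
  enabled: true
  serviceType: "ClusterIP"

It would then be passed with helm install vault hashicorp/vault -f override-values.yml.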

vault-helm's People

Contributors

alvin-huang, anhdat, arielevs, benashz, catsby, corest, dependabot[bot], eyenx, fischerman, georgekaz, gw0, hashicorp-copywrite[bot], imthaghost, jasonodonnell, kschoche, lawliet89, malnick, mehmetsalgar, mitchellh, mogaal, pcman312, rasta-rocket, sarahethompson, sharkannon, sosheskaz, stupidscience, swenson, thyton, tomhjp, tvoran


vault-helm's Issues

Error loading configuration from /tmp/storageconfig.hcl

All of a sudden I started getting the following error in the logs:

Error loading configuration from /tmp/storageconfig.hcl: At 10:33: illegal char

That's the only line in the logs, and all 3 replicas are stuck in CrashLoopBackOff.

Use native kubernetes ways of configuration

I'm currently using the helm chart in the incubator (https://github.com/helm/charts/tree/master/incubator/vault), and trying to migrate over to this official chart.

Unfortunately, a lot of what is done in this Helm chart is much harder than in the one from the incubator. Pod annotations, for example, require you to use a string instead of letting Helm parse a map, which would allow the same configuration one would write for Kubernetes itself. The same goes for volumes and mounts.

Passing parameter values to config

Is there an easy way to pass values to the config section in values.yaml? Specifically, we plan on using the awskms seal as well as a provisioned EFS claim as backend storage. These values are generated dynamically, as we have multiple Kubernetes clusters.

We have everything working correctly, but the values are hardcoded. Also, we use the Terraform helm_release provider.
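
One possible approach, sketched under the assumption that the static config stays in values.yaml and only the dynamic pieces are injected: Vault's awskms seal can pick up its key ID from the VAULT_AWSKMS_SEAL_KEY_ID environment variable, so the generated KMS key ID could be fed through the chart's extraSecretEnvironmentVars (or a helm_release set block) instead of being templated into the HCL. The secret name and key below are hypothetical:

server:
  extraSecretEnvironmentVars:
    - envName: VAULT_AWSKMS_SEAL_KEY_ID
      secretName: vault-kms        # hypothetical Secret created per cluster
      secretKey: kms-key-id

  ha:
    enabled: true
    config: |
      # seal settings are taken from the environment in this sketch
      seal "awskms" {}

      # listener and storage stanzas as in the other examples on this page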

Clearer README?

I'm getting this error when not submitting any override values:

core: seal configuration missing, not initialized

Installing vault using helm not creating pod

I am trying to install Vault using Helm v3, and it's throwing the error below.

Cloned the repo:
https://github.com/hashicorp/vault-helm.git

Switched to v0.1.2 with "git checkout v0.1.2",
then updated the version in the Chart.yaml file ("version: 0.1.2").

Git is then at the version below.
root@arun-desktop-e470:~/github/vault-helm# git log .
commit e3c771a (HEAD -> master, origin/master, origin/HEAD)
Author: Jason O'Donnell [email protected]
Date: Tue Oct 29 11:19:37 2019 -0400
changelog++
commit 04303ba
Author: Luke Barton [email protected]
Date: Mon Oct 28 15:56:29 2019 +0000
Fix bad GCP environment variable example (#101)
(truncated)

Executed the command below to install Vault after cd-ing into the vault-helm repo.
helm install --generate-name -n dev-poc-namespace .

But the pod is not created. I checked with "kubectl get pods -n dev-poc-namespace" and no pods are running; the chart history displays as incomplete (see below).

vault-helm# helm history chart-1573001538
REVISION	UPDATED                 	STATUS  	CHART      	APP VERSION	DESCRIPTION     
1       	Tue Nov  5 21:52:21 2019	deployed	vault-0.1.2	           	Install complete

The helm get chart is copied to URL

root@desktop-e470:~/github/vault-helm# helm version
version.BuildInfo{Version:"v3.0.0-rc.2", GitCommit:"82ea5aa774661cc6557cb57293571d06f94aff0c", GitTreeState:"clean", GoVersion:"go1.13.3"}
root@desktop-e470:~/github/vault-helm# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T06:59:37Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

Where can I find more information about this failure, and how can I fix it?

Auto-Unseal with Azure Key Vault doesn't work

This is the same behaviour as hashicorp/vault#6959, just from the Helm chart. I have pre-deployed the Consul Helm chart and am using that as the backend config.

With this values.yml:

global:
  enabled: true
  image: "vault:1.2.3"

server:
  resource: |
    requests:
      memory: "512Mi"
    limits:
      memory: "1Gi"
  enabled: true

  service:
    enabled: true

  ha:
    enabled: true
    replicas: 3

    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "consul" {
        path = "vault"
        address = "ap-discovery-consul-server-0.ap-discovery-consul-server.infrastructure.svc.cluster.local:8500"
      }

      seal "azurekeyvault" {
        client_id      = "<client-id>"
        client_secret  = "<client-secret>"
        tenant_id      = "<tenant-id>"
        vault_name     = "<keyvault-name>"
        key_name       = "<unseal-key>"
      }

With v0.1.2 of the Helm chart:

$ helm install --atomic --name ap-secrets -f <...>\Vault.yaml --namespace infrastructure .

# in other terminal
% kubectl logs -f ap-secrets-vault-0 --namespace infrastructure -c vault
==> Vault server configuration:

       Azure Environment: AzurePublicCloud
          Azure Key Name: <unseal-key>
        Azure Vault Name: <keyvault-name>
               Seal Type: azurekeyvault
             Api Address: http://10.200.33.50:8200
                     Cgo: disabled
         Cluster Address: https://10.200.33.50:8201
              Listener 1: tcp (addr: "[::]:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: true
                 Storage: consul (HA available)
                 Version: Vault v1.2.3

2019-09-14T06:01:20.848Z [WARN]  storage.consul: appending trailing forward slash to path
==> Vault server started! Log data will stream in below:

2019-09-14T06:01:22.177Z [INFO]  core: stored unseal keys supported, attempting fetch
2019-09-14T06:01:22.192Z [WARN]  failed to unseal core: error="stored unseal keys are supported, but none were found"
2019-09-14T06:01:23.691Z [INFO]  core: autoseal: seal configuration missing, but cannot check old path as core is sealed: seal_type=recovery
2019-09-14T06:01:26.685Z [INFO]  core: autoseal: seal configuration missing, but cannot check old path as core is sealed: seal_type=recovery
2019-09-14T06:01:27.192Z [INFO]  core: stored unseal keys supported, attempting fetch
2019-09-14T06:01:27.198Z [WARN]  failed to unseal core: error="stored unseal keys are supported, but none were found"
2019-09-14T06:01:29.711Z [INFO]  core: autoseal: seal configuration missing, but cannot check old path as core is sealed: seal_type=recovery
...

Attempting to unseal it:

% kubectl exec -ti pods/ap-secrets-vault-0 --namespace infrastructure sh
Defaulting container name to vault.
Use 'kubectl describe pod/ap-secrets-vault-0 -n infrastructure' to see all of the containers in this pod.
/ # vault status
Key                      Value
---                      -----
Recovery Seal Type       azurekeyvault
Initialized              false
Sealed                   true
Total Recovery Shares    0
Threshold                0
Unseal Progress          0/0
Unseal Nonce             n/a
Version                  n/a
HA Enabled               true
/ # vault operator init
Error initializing: Error making API request.

URL: PUT http://127.0.0.1:8200/v1/sys/init
Code: 400. Errors:

* Vault is already initialized

I did double-check my unseal key policies in Azure Key Vault, and it has all permissions minus delete, so I know it's not a permissions issue.

HA setup with consul and auto-seal using Google Cloud KMS

I'm following the guide and referring to the video below:
https://www.youtube.com/watch?v=_r368h-mxxs

I was hoping someone could point me at a working config and set of instructions to set up Vault inside my pre-existing GKE cluster using the Vault Helm chart with Google Cloud KMS for auto-unseal. The documentation for Vault is great, aside from the Helm chart HA install.

I've seen various issues across the internet for getting this working e.g:
#39

But the documentation is lacking that explains:

  1. What service accounts need to be created
  2. What needs to go in the seal "gcpckms" { ... } settings
  3. What extraEnvironmentVars need to be added
  4. Whether workload-identity should be enabled
  5. What secrets need to be installed before the vault helm chart

I've tried every combination and I keep hitting the following error message in my pods:

Error parsing Seal configuration: failed to encrypt with GCP CKMS - ensure the key exists and the service account has at least roles/cloudkms.cryptoKeyEncrypterDecrypter permission: rpc error: code = PermissionDenied desc = Permission 'cloudkms.cryptoKeyVersions.useToEncrypt' denied on resource 'projects/adamtestcert/locations/global/keyRings/vault-adam/cryptoKeys/vault-adam' (or it may not exist).

I'm not sure how I can debug this issue further, I've confirmed my new service account has the correct permissions to access my cryptokey.

Any help would be greatly appreciated.
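
For reference, a values.yaml fragment assembled from the gcpckms examples quoted elsewhere on this page (the project, key ring, key, and secret names are placeholders; this is a sketch, not a verified working configuration):

server:
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: my-project
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json

  extraVolumes:
    - type: secret
      name: kms-creds        # a Secret containing the service account's credentials.json

  ha:
    enabled: true
    replicas: 3
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }

      seal "gcpckms" {
        credentials = "/vault/userconfig/kms-creds/credentials.json"
        project     = "my-project"
        region      = "global"
        key_ring    = "vault-helm-unseal-kr"
        crypto_key  = "vault-helm-unseal-key"
      }

Per the error message above, whichever service account backs those credentials needs at least roles/cloudkms.cryptoKeyEncrypterDecrypter on the key.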

How to configure transit seal automatically

Hi.

It's confusing to me how to set up a seal "transit" stanza when we deploy Vault with the Helm chart.

Any guides on how to do that?

I tried to put in this config:

config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }

      seal "transit" {
        address            = "http://HOST_IP:8200"
        token              = "s.Qf1s5zigZ4OX6akYjQXJC1jY"
        disable_renewal    = "false"

        key_name           = "transit_key_name"
        mount_path         = "transit/"
        namespace          = "ns1/"
        tls_skip_verify    = "false"
      }

But I can't connect to Vault. Any guides to point me in the right direction?

Thx.

Add StatefulSet update strategy

The updateStrategy is currently set to OnDelete; allow the values to define the strategy as the alternative, RollingUpdate.
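
For context, the request amounts to making the StatefulSet's updateStrategy configurable. A hedged sketch of the rendered spec (field names follow the Kubernetes StatefulSet API; how the chart would expose the toggle is left open):

# server-statefulset.yaml (sketch)
spec:
  updateStrategy:
    type: RollingUpdate   # or OnDelete, the current default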

Scaling Pods results in zombie instances in Consul

Scaling down the Vault replicas results in zombie instances in Consul. I'm not a Kubernetes expert, but it seems like a preStop lifecycle hook that runs a consul leave command could do the trick.

Consul looks like this for me now (screenshot omitted):

Is there a way to remove these instances manually, or do they go away after a while?
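
A rough sketch of the suggested hook, as a pod spec fragment that would have to be templated into the StatefulSet; note the vault image is not guaranteed to ship a consul binary, so in practice the command might instead need to call the local Consul agent's API:

containers:
  - name: vault
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "consul leave || true"]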

How can I input a multi-line annotation with vault?

  # Extra annotations to attach to the ui service
  # This should be a multi-line string mapping directly to the a map of
  # the annotations to apply to the ui service
  annotations: {}

This is what my values look like:

ui:
  enabled: true
  serviceType: LoadBalancer
  externalPort: 8200
  annotations: |
    external-dns.alpha.kubernetes.io/hostname: dns_name
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg_group


Getting an error: YAML parse error on vault/templates/ui-service.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go struct field .annotations of type map[string]string

Just to note: I do the exact same thing with the consul-helm chart and it works fine.
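
The parse error says the target field has type map[string]string, which suggests that in this chart version ui.annotations is rendered as a map rather than parsed from a string. A plain YAML map (no | block scalar) should therefore unmarshal cleanly; a sketch:

ui:
  enabled: true
  serviceType: LoadBalancer
  externalPort: 8200
  annotations:
    external-dns.alpha.kubernetes.io/hostname: dns_name
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg_group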

Setting up consul secret engine?

I'm trying to set up the Consul secrets engine, but it seems to be impossible with the current Helm charts.

I also use the HashiCorp Consul Helm chart, which creates a Consul agent on each node of my Kubernetes cluster as a DaemonSet (described here), which is fine and sounds logical. The Consul docs also show how I would be able to access the Consul agent (described here), by making the node IP available as an environment variable.

The issue is that I need an address for the Consul agent when configuring the secrets engine in Vault. The value cannot be an environment variable, and the same address must be reachable from each Vault pod.

vault write consul/config/access \
    address=127.0.0.1:8500 \
    token=xxxx-xxxx-xxxx-xxxx-xxxx

The only solution I can think of is adding a Consul agent to the pod and then using the localhost IP to reach that agent. Is there a better method, or is that the only one?
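
If the per-pod agent route is taken, the chart's extraContainers value (shown in the default values.yaml quoted further down this page) could carry it. A very rough sketch, with the image tag and join address as placeholders, and with the exact shape extraContainers expects to be checked against the templates of the chart version in use:

server:
  extraContainers:
    - name: consul-agent
      image: "consul:1.6.2"                              # placeholder tag
      args:
        - "agent"
        - "-retry-join=consul-server.default.svc"        # placeholder join address
        - "-bind=0.0.0.0"

With an agent in the pod, the secrets engine could then be configured against 127.0.0.1:8500, as proposed above.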

Error parsing Seal configuration: failed to encrypt with GCP CKMS

Hi everyone,

I've enabled GCP CKMS unseal in my values.yaml. The key exists, and the service account passed is Owner, so it has full permissions. I must be missing something, since the pods end up in CrashLoopBackOff a few seconds after installing the chart. For the sake of completeness, here is my values.yaml. I also tried adding a "credentials" key in the seal section, with no effect.

The error I see in kubectl logs podname is:

Error parsing Seal configuration: failed to encrypt with GCP CKMS - ensure the key exists and the service account has at least roles/cloudkms.cryptoKeyEncrypterDecrypter permission: rpc error: code = InvalidArgument desc = Resource name [projects/XXXX-platform,/locations/europe-west1,/keyRings/mykeyring/cryptoKeys/mykey] does not match any known resource name pattern.

Any idea?

global:
  enabled: true
  image: "vault:1.2.1"

server:
  resources:
    resources:
      requests:
        memory: 256Mi
        cpu: 250m
      limits:
        memory: 256Mi
        cpu: 250m

  authDelegator:
    enabled: true

  extraEnvironmentVars:
    GOOGLE_REGION: europe-west1,
    GOOGLE_PROJECT: XXXX-platform,
    GOOGLE_CREDENTIALS: /vault/userconfig/vault-test-key/vault-test-key.json

  extraVolumes:
    - type: secret
      name: vault-test-key
      load: true

  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ template "vault.name" . }}
              release: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname

  tolerations: {}
  nodeSelector: {}
  annotations: {}

  dataStorage:
    enabled: false
    size: 10Gi
    storageClass: null
    accessMode: ReadWriteOnce

  auditStorage:
    enabled: false
    size: 10Gi
    storageClass: null
    accessMode: ReadWriteOnce


  dev:
    enabled: false
  
  service:
    enabled: true
    clusterIP: ""

  standalone:
    enabled: "false"
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "file" {
        path = "/vault/data"
      }

  ha:
    enabled: true
    replicas: 3
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 0
        address = "[::]:8200"
        cluster_address = "[::]:8201"
        tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
        tls_key_file  = "/vault/userconfig/vault-server-tls/vault.key"
        tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
      }
      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }

      seal "gcpckms" {
        credentials = "/vault/userconfig/vault-test-key/vault-test-key.json"
        project     = "XXXX-platform"
        region      = "europe-west1"
        key_ring    = "mykeyring"
        crypto_key  = "mykey"
      }

    disruptionBudget:
      enabled: true
      maxUnavailable: null

ui:
  enabled: false
  serviceType: "ClusterIP"

Latest commit to helm template server-statefulsets.yaml breaks installation on docker for desktop installs

We use the Helm templates for Vault to do local k8s testing of Vault using docker-for-desktop. Commit b41d36c introduced a change to server-statefulsets.yaml that causes installs to fail on a docker-for-desktop k8s cluster. Specifically, we get the following error:

error: error validating "STDIN": error validating data: ValidationError(StatefulSet.spec.template.spec.securityContext): unknown field "readOnlyRootFilesystem" in io.k8s.api.core.v1.PodSecurityContext

This is using Docker Desktop: v2.1.0.4 running kubernetes v1.14.7

Removing the line introduced here: b41d36c?diff=split#diff-60ca1594dfcfe4f0d22db67a9583d9feR44 fixes the issue.

To reproduce:

  1. Install docker for desktop + Kubernetes (plus kubectl tools)
  2. Install Helm
  3. Get latest vault-helm templates
  4. Run helm template vault-helm --name vault | kubectl apply -f - on the docker-desktop cluster

Add ingress support on Helm Chart Configuration

We need ingress support in the Helm installation.

Ex: Helm configuration

| Parameter | Description | Default |
| --- | --- | --- |
| ingress.enabled | enable ingress | false |
| ingress.web.host | hostname for the webserver ui | "" |
| ingress.web.path | path of the webserver ui (read values.yaml) | "/ui" |
| ingress.web.annotations | annotations for the web ui ingress | {} |

Chart Is Not Compliant with the K8s Standard Hostpath Provisioner

dataStorage:
    enabled: true

Causes:

/ # vault operator init
Error initializing: Error making API request.

URL: PUT http://127.0.0.1:8200/v1/sys/init
Code: 400. Errors:

* failed to initialize barrier: failed to persist keyring: mkdir /vault/data/core: permission denied

The chart expects a non-default hostPath provisioner to be used - for example, AWS EBS.

Running operator init on first run

Running into an issue with the pods not being alive to run init on the DynamoDB backend. The settings look correct, but I'm getting errors about being sealed.

How am I able to run init if the DB is not set up yet?

2019-08-20T17:35:31.970Z [INFO] core: stored unseal keys supported, attempting fetch
2019-08-20T17:35:32.047Z [WARN] failed to unseal core: error="stored unseal keys are supported, but none were found"
2019-08-20T17:35:36.425Z [INFO] core: autoseal: seal configuration missing, but cannot check old path as core is sealed: seal_type=recovery
2019-08-20T17:35:37.048Z [INFO] core: stored unseal keys supported, attempting fetch

Remove image from global variables

Because of the way global variables are implemented in this chart and the consul chart, it is currently impossible to configure the consul chart as a dependency/subchart of the vault chart: global.image with the value vault:1.2.2 from the vault chart will also be rendered in the consul templates, since it too uses global.image to configure the image version.

It is not recommended to use global variables in Helm charts this way, and I would propose switching to the normal way this is implemented in helm/charts, e.g.:

image:
  repository: vault
  tag: 1.2.2

The issue is also tracked in consul-helm: hashicorp/consul-helm#238

Consistency issues with s3 storage backend

I have seen some very inconsistent behavior with the s3 storage backend running Vault in HA (3 replicas).

The config is generated inside Terraform like so and applied via helm_release:

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "s3" {
        access_key = "${var.aws_access_key_id}"
        secret_key = "${var.aws_secret_access_key}"
        bucket = "${aws_s3_bucket.vault_bucket.id}"
        region = "${var.aws_default_region}"
      }
      seal "awskms" {
        kms_key_id = "${aws_kms_key.vault-kms-key.key_id}"
      }

Everything in Kubernetes provisions, Vault initializes correctly, and it seems to be running fine. The auto-unseal via the awskms key is also functioning OK. However, after creating some test secrets through the UI and enabling different auth methods, the UI will randomly not display any of the new secrets, show errors like "Unable to find secret at /path/path", and even fail to display the enabled auth methods. The same can be said for any other resource created beyond the default ones.

Re-logging-in and refreshing seems to randomly make the new resources appear. My question, then: is this a potential bug with HA and s3, or is s3 not really a recommended backend for Vault HA storage?

Also, none of these issues happen running in standalone with a PVC.

Running multiple Vault pods on Kubernetes

@jasonodonnell With the standalone option, the Helm chart installs a single pod and a single PVC.
Is there a way we can have multiple Vault pods with file storage, so that when the active Vault becomes unavailable the secondary pod takes over? I understand storage is still a single volume, but with multiple Vault pods we can at least achieve HA from the application side.

Stateful Image Pull Policy

Would it be possible to have the image pull policy not be the default, and instead be configurable via the Helm chart?

This gives more control over how, when, and why you might want a new image pulled from the repo.

Vault pod not coming up

After following all the steps in the README and creating a storage class and a persistent volume, I am getting this error:

Events:
Type    Reason                Age         From                         Message
Normal  WaitForFirstConsumer  (x2 over )  persistentvolume-controller  waiting for first consumer to be created before binding

prometheus is not enabled%

After deploying HA Vault in a k8s cluster, I started trying to scrape Vault's Prometheus metrics following the regular guide,
but I get this error when running:
curl -X GET "http://localhost:8236/v1/sys/metrics?format="prometheus"" -H "X-Vault-Token: <root_token>"

prometheus is not enabled%

You can reproduce this error by following these steps:

# Available parameters and their default values for the Vault chart.

global:
  # enabled is the master enabled switch. Setting this to true or false
  # will enable or disable all the components within this chart by default.
  enabled: true

  # Image is the name (and tag) of the Vault Docker image.
  image: "vault:1.3.0"
  # Overrides the default Image Pull Policy
  imagePullPolicy: IfNotPresent
  # Image pull secret to use for registry authentication.
  imagePullSecrets: []
  # imagePullSecrets:
  #   - name: image-pull-secret
  # TLS for end-to-end encrypted transport
  tlsDisable: true

server:
  # Resource requests, limits, etc. for the server cluster placement. This
  # should map directly to the value of the resources field for a PodSpec.
  # By default no direct resource request is made.

  resources:
  # resources:
  #   requests:
  #     memory: 256Mi
  #     cpu: 250m
  #   limits:
  #     memory: 256Mi
  #     cpu: 250m

  # Ingress allows ingress services to be created to allow external access
  # from Kubernetes to access Vault pods.
  ingress:
    enabled: false
    labels:
      {}
      # traffic: external
    annotations:
      {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    hosts:
      - host: chart-example.local
        paths: []

    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - chart-example.local

  # authDelegator enables a cluster role binding to be attached to the service
  # account.  This cluster role binding can be used to setup Kubernetes auth
  # method.  https://www.vaultproject.io/docs/auth/kubernetes.html
  authDelegator:
    enabled: false

  # extraContainers is a list of sidecar containers. Specified as a raw YAML string.
  extraContainers: null

  # extraEnvironmentVars is a list of extra enviroment variables to set with the stateful set. These could be
  # used to include variables required for auto-unseal.
  extraEnvironmentVars:
    {}
    # GOOGLE_REGION: global
    # GOOGLE_PROJECT: myproject
    # GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/myproject/myproject-creds.json

  # extraSecretEnvironmentVars is a list of extra enviroment variables to set with the stateful set.
  # These variables take value from existing Secret objects.
  extraSecretEnvironmentVars:
    []
    # - envName: AWS_SECRET_ACCESS_KEY
    #   secretName: vault
    #   secretKey: AWS_SECRET_ACCESS_KEY

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`. The value below is
  # an array of objects, examples are shown below.
  extraVolumes:
    []
    # - type: secret (or "configMap")
    #   name: my-secret
    #   path: null # default is `/vault/userconfig`

  # Affinity Settings
  # Commenting out or setting as empty the affinity variable, will allow
  # deployment to single node services such as Minikube
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "vault.name" . }}
              app.kubernetes.io/instance: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname

  # Toleration Settings for server pods
  # This should be a multi-line string matching the Toleration array
  # in a PodSpec.
  tolerations: {}

  # nodeSelector labels for server pod assignment, formatted as a muli-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  # nodeSelector: |
  #   beta.kubernetes.io/arch: amd64
  nodeSelector: {}

  # Extra labels to attach to the server pods
  # This should be a multi-line string mapping directly to the a map of
  # the labels to apply to the server pods
  extraLabels: {}

  # Extra annotations to attach to the server pods
  # This should be a multi-line string mapping directly to the a map of
  # the annotations to apply to the server pods
  annotations: {}

  # Enables a headless service to be used by the Vault Statefulset
  service:
    enabled: true
    # clusterIP controls whether a Cluster IP address is attached to the
    # Vault service within Kubernetes.  By default the Vault service will
    # be given a Cluster IP address, set to None to disable.  When disabled
    # Kubernetes will create a "headless" service.  Headless services can be
    # used to communicate with pods directly through DNS instead of a round robin
    # load balancer.
    # clusterIP: None

    # Port on which Vault server is listening
    port: 8200
    # Target port to which the service should be mapped to
    targetPort: 8200
    # Extra annotations for the service definition
    annotations: {}

  # This configures the Vault Statefulset to create a PVC for data
  # storage when using the file backend.
  # See https://www.vaultproject.io/docs/configuration/storage/index.html to know more
  dataStorage:
    enabled: false
    # Size of the PVC created
    size: 10Gi
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: null
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce

  # This configures the Vault Statefulset to create a PVC for audit
  # logs.  Once Vault is deployed, initialized and unseal, Vault must
  # be configured to use this for audit logs.  This will be mounted to
  # /vault/audit
  # See https://www.vaultproject.io/docs/audit/index.html to know more
  auditStorage:
    enabled: false
    # Size of the PVC created
    size: 10Gi
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: null
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce

  # Run Vault in "dev" mode. This requires no further setup, no state management,
  # and no initialization. This is useful for experimenting with Vault without
  # needing to unseal, store keys, et. al. All data is lost on restart - do not
  # use dev mode for anything other than experimenting.
  # See https://www.vaultproject.io/docs/concepts/dev-server.html to know more
  dev:
    enabled: false

  # Run Vault in "standalone" mode. This is the default mode that will deploy if
  # no arguments are given to helm. This requires a PVC for data storage to use
  # the "file" backend.  This mode is not highly available and should not be scaled
  # past a single replica.
  standalone:
    enabled: "-"

    # config is a raw string of default configuration when using a Stateful
    # deployment. Default is to use a PersistentVolumeClaim mounted at /vault/data
    # and store data there. This is only used when using a Replica count of 1, and
    # using a stateful set. This should be HCL.
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "file" {
        path = "/vault/data"
      }

      # Example configuration for using auto-unseal, using Google Cloud KMS. The
      # GKMS keys must already exist, and the cluster must have a service account
      # that is authorized to access GCP KMS.
      #seal "gcpckms" {
      #   project     = "vault-helm-dev"
      #   region      = "global"
      #   key_ring    = "vault-helm-unseal-kr"
      #   crypto_key  = "vault-helm-unseal-key"
      #}

  # Run Vault in "HA" mode. There are no storage requirements unless audit log
  # persistence is required.  In HA mode Vault will configure itself to use Consul
  # for its storage backend.  The default configuration provided will work the Consul
  # Helm project by default.  It is possible to manually configure Vault to use a
  # different HA backend.
  ha:
    enabled: true
    replicas: 3

    # config is a raw string of default configuration when using a Stateful
    # deployment. Default is to use a Consul for its HA storage backend.
    # This should be HCL.
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "mysql" {
        address = "mysql-gzfcm.mysql-gzfcm:3306"
        username = "admin"
        password = "admin"
        ha_enabled = "true"
        lock_table = "vault_lockrr"
      }
      telemetry {
        prometheus_retention_time = "24h"
        disable_hostname = true
      }
      # Example configuration for using auto-unseal, using Google Cloud KMS. The
      # GKMS keys must already exist, and the cluster must have a service account
      # that is authorized to access GCP KMS.
      #seal "gcpckms" {
      #   project     = "vault-helm-dev-246514"
      #   region      = "global"
      #   key_ring    = "vault-helm-unseal-kr"
      #   crypto_key  = "vault-helm-unseal-key"
      #}

    # A disruption budget limits the number of pods of a replicated application
    # that are down simultaneously from voluntary disruptions
    disruptionBudget:
      enabled: true

      # maxUnavailable will default to (n/2)-1 where n is the number of
      # replicas. If you'd like a custom value, you can specify an override here.
      maxUnavailable: null

  # Definition of the serviceaccount used to run Vault.
  serviceaccount:
    annotations: {}

# Vault UI
ui:
  # True if you want to create a Service entry for the Vault UI.
  #
  # serviceType can be used to control the type of service created. For
  # example, setting this to "LoadBalancer" will create an external load
  # balancer (for supported K8S installations) to access the UI.
  enabled: true
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 8200

  # loadBalancerSourceRanges:
  #   - 10.0.0.0/16
  #   - 1.78.23.3/32

  # loadBalancerIP:

  # Extra annotations to attach to the ui service
  # This should be a multi-line string mapping directly to the a map of
  # the annotations to apply to the ui service
  annotations: {}

helm install ./
Then port-forward the svc to localhost:8236 and unseal Vault in the web UI.

Then the metrics curl returns: prometheus is not enabled%

auto-unseal gcpckms doesn't work

When I run the chart with the parameters:

global:
  enabled: true

server:
  extraEnvironmentVars:
    GOOGLE_REGION: global,
    GOOGLE_PROJECT: project-dev,
    GOOGLE_CREDENTIALS: /vault/userconfig/project-dev/project-dev.json

  extraVolumes:
    - type: secret
      name: project-dev
      load: false

  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ template "vault.name" . }}
              release: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname

  service:
    enabled: true

  ha:
    enabled: true
    replicas: 3

    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }

      seal "gcpckms" {
        project     = "project-dev"
        region      = "global"
        key_ring    = "vault-init-test"
        crypto_key  = "vault-unseal-key-test"
      }

I get the error:

➜ infrastructure git:(DEV) kubectl logs vault-0
Error parsing Seal configuration: failed to encrypt with GCP CKMS - ensure the key exists and the service account has at least roles/cloudkms.cryptoKeyEncrypterDecrypter permission: rpc error: code = InvalidArgument desc = Resource name [projects/project-dev,/locations/global,/keyRings/vault-init-test/cryptoKeys/vault-unseal-key-test] does not match any known resource name pattern.

If I run the example at https://learn.hashicorp.com/vault/day-one/autounseal-gcp-kms, auto-unseal works.

The key_ring, crypto_key, and service account are the same.
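
The malformed resource name in the error (projects/project-dev,/locations/global,...) suggests the trailing commas on the GOOGLE_REGION and GOOGLE_PROJECT values above are being carried into the seal configuration; the "Fix bad GCP environment variable example (#101)" commit quoted earlier on this page may point at the same problem in the chart's own example. The same block without the commas:

server:
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: project-dev
    GOOGLE_CREDENTIALS: /vault/userconfig/project-dev/project-dev.json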

The current affinity rules don't match pod labels

Hi,

The current podAntiAffinity labels in values.yaml don't match the pod labels. The result is that pods can be scheduled to the same node. I can easily add a PR for this if you would like.

values.yaml

affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: {{ template "vault.name" . }}
            release: "{{ .Release.Name }}"
            component: server
        topologyKey: kubernetes.io/hostname

server-statefulset.yaml

selector:
  matchLabels:
    helm.sh/chart: {{ template "vault.chart" . }}
    app.kubernetes.io/name: {{ template "vault.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    component: server
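
A corrected affinity block, mirroring the app.kubernetes.io/* labels used by the statefulset selector above (the same form appears in the newer default values.yaml quoted earlier on this page):

affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: {{ template "vault.name" . }}
            app.kubernetes.io/instance: "{{ .Release.Name }}"
            component: server
        topologyKey: kubernetes.io/hostname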

Remove the need to run as privileged

It looks like Vault needs to run as a privileged pod because of the additional mlock capability.
A few considerations:

  1. It's always a difficult conversation to have a pod running as privileged, so this static setting will slow down adoption.
  2. If mlock is what is needed, then you don't need a fully privileged pod; you could create a PodSecurityPolicy (https://kubernetes.io/docs/concepts/policy/pod-security-policy/) that allows only the mlock capability as an additional capability.
  3. My opinion is that mlock is not even needed. In fact, it is a best practice to run Kubernetes with swap disabled, and most Kubernetes distributions have adopted this approach. So, if the only function of mlock is to prevent memory from being swapped to disk, it's not needed in most cases, and there should be a chart flag to select an installation option in which mlock is disabled (see the sketch after this list).
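
A hedged sketch of that third option: disable_mlock is a documented Vault configuration flag, so an installation that accepts the swap-is-disabled argument could simply set it in the server config (the surrounding values layout is just the one used elsewhere on this page):

server:
  ha:
    enabled: true
    config: |
      disable_mlock = true   # documented Vault config option; relies on swap being disabled
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

Whether the chart would also need to drop the corresponding capability from the container's securityContext is a separate question for the templates.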

Statefulset prevents upgrade of release

According to the docs the recommended way to upgrade Vault is to run a helm upgrade.

However, when upgrading from one version to another we get

The StatefulSet "vault" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

The template contains {{ include "vault.chart" . }} as a label value, which depends on the version of the chart. I believe this is the field that the upgrade tries to patch.

How to reproduce

wget https://github.com/hashicorp/vault-helm/archive/v0.2.1.tar.gz
tar xvzf v0.2.1.tar.gz
helm install vault vault-helm-0.2.1/
nano vault-helm-0.2.1/ # change version field to 0.2.2
helm upgrade --install vault vault-helm-0.2.1/
helm version
version.BuildInfo{Version:"v3.0.0-rc.3", GitCommit:"2ed206799b451830c68bff30af2a52879b8b937a", GitTreeState:"clean", GoVersion:"go1.13.4"}

Error: UPGRADE FAILED: PodDisruptionBudget.policy "vault" is invalid: spec.maxUnavailable: Invalid value: -1: must be greater than or equal to 0

Description

We're trying to handle Helm upgrades, and the PodDisruptionBudget fails. If I adjust server-disruptionbudget.yaml to wrap it with {{- if .Release.IsInstall -}}, all is well. Without this, it fails every time.

Is there a preferred solution here?

Details

Command

helm upgrade \
    --namespace vault \
    --install \
    --recreate-pods \
    vault \
    ./vault/chart \
    -f ./vault/values.yaml

Log

UPGRADE FAILED
Error: PodDisruptionBudget.policy "vault" is invalid: spec.maxUnavailable: Invalid value: -1: must be greater than or equal to 0
Error: UPGRADE FAILED: PodDisruptionBudget.policy "vault" is invalid: spec.maxUnavailable: Invalid value: -1: must be greater than or equal to 0
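
Given the (n/2)-1 default for maxUnavailable described in the values below, one interim workaround may be to pin an explicit, non-negative override so the rendered PodDisruptionBudget never receives -1 (sketch):

server:
  ha:
    disruptionBudget:
      enabled: true
      maxUnavailable: 1   # any value >= 0 avoids the rejected -1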

Values

# Available parameters and their default values for the Vault chart.

global:
  # enabled is the master enabled switch. Setting this to true or false
  # will enable or disable all the components within this chart by default.
  enabled: true

  # Image is the name (and tag) of the Vault Docker image.
  image: "vault:1.2.2"

  # TLS for end-to-end encrypted transport
  tlsDisable: true

server:
  # Resource requests, limits, etc. for the server cluster placement. This
  # should map directly to the value of the resources field for a PodSpec.
  # By default no direct resource request is made.
  resources:
  # resources:
  #   requests:
  #     memory: 256Mi
  #     cpu: 250m
  #   limits:
  #     memory: 256Mi
  #     cpu: 250m

  # authDelegator enables a cluster role binding to be attached to the service
  # account.  This cluster role binding can be used to setup Kubernetes auth
  # method.  https://www.vaultproject.io/docs/auth/kubernetes.html
  authDelegator:
    enabled: false

  # extraEnvironmentVars is a list of extra enviroment variables to set with the stateful set. These could be
  # used to include variables required for auto-unseal.
  extraEnvironmentVars: {}
    # GOOGLE_REGION: global,
    # GOOGLE_PROJECT: myproject,
    # GOOGLE_CREDENTIALS: /vault/userconfig/myproject/myproject-creds.json

  # extraSecretEnvironmentVars is a list of extra enviroment variables to set with the stateful set.
  # These variables take value from existing Secret objects.
  extraSecretEnvironmentVars: []
    # - envName: AWS_SECRET_ACCESS_KEY
    #   secretName: vault
    #   secretKey: AWS_SECRET_ACCESS_KEY

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`. The value below is
  # an array of objects, examples are shown below.
  extraVolumes: []
    # - type: secret (or "configMap")
    #   name: my-secret
    #   load: false # if true, will add to `-config` to load by Vault
    #   path: null # default is `/vault/userconfig`

  # Affinity Settings
  # Commenting out or setting as empty the affinity variable, will allow
  # deployment to single node services such as Minikube
  # affinity: |
  #   podAntiAffinity:
  #     requiredDuringSchedulingIgnoredDuringExecution:
  #       - labelSelector:
  #           matchLabels:
  #             app: {{ template "vault.name" . }}
  #             release: "{{ .Release.Name }}"
  #             component: server
  #         topologyKey: kubernetes.io/hostname

  # Toleration Settings for server pods
  # This should be a multi-line string matching the Toleration array
  # in a PodSpec.
  tolerations: {}

  # nodeSelector labels for server pod assignment, formatted as a muli-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  # nodeSelector: |
  #   beta.kubernetes.io/arch: amd64
  nodeSelector: {}

  # Extra annotations to attach to the server pods
  # This should be a multi-line string mapping directly to the a map of
  # the annotations to apply to the server pods
  annotations: {}

  # Enables a headless service to be used by the Vault Statefulset
  service:
    enabled: true
    # clusterIP controls whether a Cluster IP address is attached to the
    # Vault service within Kubernetes.  By default the Vault service will
    # be given a Cluster IP address, set to None to disable.  When disabled
    # Kubernetes will create a "headless" service.  Headless services can be
    # used to communicate with pods directly through DNS instead of a round robin
    # load balancer.
    # clusterIP: None

    # Port on which Vault server is listening
    port: 8200
    # Target port to which the service should be mapped to
    targetPort: 8200

  # This configures the Vault Statefulset to create a PVC for data
  # storage when using the file backend.
  # See https://www.vaultproject.io/docs/audit/index.html to know more
  dataStorage:
    enabled: true
    # Size of the PVC created
    size: 10Gi
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: null
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce

  # This configures the Vault Statefulset to create a PVC for audit
  # logs.  Once Vault is deployed, initialized and unseal, Vault must
  # be configured to use this for audit logs.  This will be mounted to
  # /vault/audit
  # See https://www.vaultproject.io/docs/audit/index.html to know more
  auditStorage:
    enabled: false
    # Size of the PVC created
    size: 10Gi
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: null
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce

  # Run Vault in "dev" mode. This requires no further setup, no state management,
  # and no initialization. This is useful for experimenting with Vault without
  # needing to unseal, store keys, et. al. All data is lost on restart - do not
  # use dev mode for anything other than experimenting.
  # See https://www.vaultproject.io/docs/concepts/dev-server.html to know more
  dev:
    enabled: false

  # Run Vault in "standalone" mode. This is the default mode that will deploy if
  # no arguments are given to helm. This requires a PVC for data storage to use
  # the "file" backend.  This mode is not highly available and should not be scaled
  # past a single replica.
  standalone:
    enabled: "false"

    # config is a raw string of default configuration when using a Stateful
    # deployment. Default is to use a PersistentVolumeClaim mounted at /vault/data
    # and store data there. This is only used when using a Replica count of 1, and
    # using a stateful set. This should be HCL.
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "consul" {
        path = "vault"
        address = "consul:8500"
      }

      # Example configuration for using auto-unseal, using Google Cloud KMS. The
      # GKMS keys must already exist, and the cluster must have a service account
      # that is authorized to access GCP KMS.
      #seal "gcpckms" {
      #   project     = "vault-helm-dev"
      #   region      = "global"
      #   key_ring    = "vault-helm-unseal-kr"
      #   crypto_key  = "vault-helm-unseal-key"
      #}

  # Run Vault in "HA" mode. There are no storage requirements unless audit log
  # persistence is required.  In HA mode Vault will configure itself to use Consul
  # for its storage backend.  The default configuration provided will work the Consul
  # Helm project by default.  It is possible to manually configure Vault to use a
  # different HA backend.
  ha:
    enabled: true
    replicas: 3

    # config is a raw string of default configuration when using a Stateful
    # deployment. Default is to use a Consul for its HA storage backend.
    # This should be HCL.
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "consul" {
        path = "vault"
        address = "consul:8500"
      }

      # Example configuration for using auto-unseal, using Google Cloud KMS. The
      # GKMS keys must already exist, and the cluster must have a service account
      # that is authorized to access GCP KMS.
      #seal "gcpckms" {
      #   project     = "vault-helm-dev-246514"
      #   region      = "global"
      #   key_ring    = "vault-helm-unseal-kr"
      #   crypto_key  = "vault-helm-unseal-key"
      #}

    # A disruption budget limits the number of pods of a replicated application
    # that are down simultaneously from voluntary disruptions
    disruptionBudget:
      enabled: true

      # maxUnavailable will default to (n/2)-1 where n is the number of
      # replicas. If you'd like a custom value, you can specify an override here.
      maxUnavailable: null

  # Definition of the serviceaccount used to run Vault.
  serviceaccount:
    annotations: {}

# Vault UI
ui:
  # True if you want to create a Service entry for the Vault UI.
  #
  # serviceType can be used to control the type of service created. For
  # example, setting this to "LoadBalancer" will create an external load
  # balancer (for supported K8S installations) to access the UI.
  enabled: false
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 8200
  # loadBalancerIP:

  # Extra annotations to attach to the ui service
  # This should be a multi-line string mapping directly to the a map of
  # the annotations to apply to the ui service
  annotations: {}

Supported K8s version

Hi,
According to the README, the chart was verified on versions 1.9, 1.10, and 1.11.
What about newer versions?
Are they supported?

Comparison to incubator Vault chart

Hi,

I was looking at upgrading our Vault chart to the latest incubator chart version, and came across this repository. Is there a writeup of the differences between the two anywhere?

From what I can tell at a glance, the incubator chart uses a Deployment and also includes a sidecar Consul client, but this chart uses a StatefulSet and doesn't appear to include the Consul client. Some documentation would be excellent, perhaps around the considerations for running this chart vs others, or Vault in this way?

I read through the README linked docs here, and while excellent, they don't appear to answer my question.

Thanks!

Audit log file rotation and shipping to Centralize logging

As I understood from the audit docs page (https://www.vaultproject.io/docs/audit/index.html), the most reliable audit type is file, and there is a respective option in values.yaml to enable it, which creates a PVC and attaches it to the StatefulSet.

I'm wondering what would be the best option to rotate this log file, and how to ship it to centralized logging?

The community chart has an option to add sidecar containers. We can build images with logrotate/filebeat/fluentd/etc. which can do this.

Is that the way you see it?
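
One possible shape for that sidecar approach, heavily hedged: the image, the command, and the audit volume name are all assumptions to verify against the chart's templates, and whether this chart version exposes extraContainers at all should be checked first.

server:
  extraContainers:
    - name: audit-log-shipper
      image: "busybox:1.31"          # placeholder; a filebeat/fluentd image in practice
      command: ["sh", "-c", "tail -F /vault/audit/vault_audit.log"]
      volumeMounts:
        - name: audit                # assumed to match the chart's audit PVC volume name
          mountPath: /vault/audit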

auto-unseal returns denied for resource even with proper service account permissions

Running helm install --name=vault --set='server.ha.enabled=true' .
Results in
Error parsing Seal configuration: failed to encrypt with GCP CKMS - ensure the key exists and the service account has at least roles/cloudkms.cryptoKeyEncrypterDecrypter permission: rpc error: code = PermissionDenied desc = Permission 'cloudkms.cryptoKeyVersions.useToEncrypt' denied for resource 'projects/my-project/locations/us-east1/keyRings/vault-helm-unseal-kr-1/cryptoKeys/vault-key'.

Confirmed the service account is being used (via KMS API monitoring) and that it has the proper permissions (encrypt/decrypt/get; I also tried granting the service account full project owner permissions).

consul-helm as backend
vault-helm-0.12

Snippets from values.yaml (I tried both ways below, with GOOGLE_APPLICATION_CREDENTIALS and without); all other values.yaml settings are default.

extraEnvironmentVars:
    #GOOGLE_REGION: us-east1
    #GOOGLE_PROJECT: my-project
    #GOOGLE_CREDENTIALS: /vault/userconfig/my-project/my-project-1234.json
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/redacted/my-project-1234.json

  extraVolumes:
    - type: secret
      name: my-project
      load: false # if true, will add to `-config` to load by Vault
    #   path: null # default is `/vault/userconfig`


      seal "gcpckms" {
         credentials = "/vault/userconfig/my-project/my-project-1234.json"
         project     = "my-project"
         region      = "us-east1"
         key_ring    = "vault-helm-unseal-kr-1"
         crypto_key  = "vault-key"
      }
  • Standard GKE cluster with Workload Identity enabled (see the note below)
  • Node pool set up to use the service account
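
If the intent is to rely on Workload Identity rather than a mounted key file, the Kubernetes service account has to be bound to a Google service account and annotated accordingly. A sketch using the chart's serviceaccount annotations value, with the Google service account name as a placeholder:

server:
  serviceaccount:
    annotations:
      iam.gke.io/gcp-service-account: vault-unseal@my-project.iam.gserviceaccount.com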

gcpckms seal stanza adding incorrect resource names

0.1.2

Example stanza from values.yaml:
seal "gcpckms" {
project = "my-project"
region = "us-east-1"
key_ring = "my-kr"
crypto_key = "vault-key"
}

Expect resource name:
projects/my-project/locations/us-east1/keyRings/my-kr/cryptoKeys/vault-key

Actual resource name:
projects/my-project,/locations/us-east1,/keyRings/my-kr/cryptoKeys/vault-key

Above commas causing this error when installing vault:
Error parsing Seal configuration: failed to encrypt with GCP CKMS - ensure the key exists and the service account has at least roles/cloudkms.cryptoKeyEncrypterDecrypter permission: rpc error: code = InvalidArgument desc = Resource name [projects/my-project,/locations/us-east1,/keyRings/my-kr/cryptoKeys/vault-key] does not match any known resource name pattern.

Adding values to README?

  1. I suppose we will need a values table in the README, like every other stable Helm chart out there.

  2. I think the Testing section should be moved to the Wiki.

Could not chown /vault/config (may not have appropriate permissions)

Running into the following issue: could not chown /vault/config.

chown: /vault/config/extraconfig-from-values.hcl: Read-only file system
chown: /vault/config/..data: Read-only file system
chown: /vault/config/..2019_10_02_19_14_10.869393029/extraconfig-from-values.hcl: Read-only file system
chown: /vault/config/..2019_10_02_19_14_10.869393029: Read-only file system
chown: /vault/config/..2019_10_02_19_14_10.869393029: Read-only file system
chown: /vault/config: Read-only file system
chown: /vault/config: Read-only file system
Could not chown /vault/config (may not have appropriate permissions)
==> Vault server configuration:

      GCP KMS Crypto Key: xxx-xxx-xxx-xxx
        GCP KMS Key Ring: xxx-xxx-xxx
         GCP KMS Project: xxx-xxx-xxx
          GCP KMS Region: us
               Seal Type: gcpckms
             Api Address: http://10.88.14.191:8200
                     Cgo: disabled
         Cluster Address: https://10.88.14.191:8201
              Listener 1: tcp (addr: "[::]:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: true
                 Storage: gcs (HA disabled)
                 Version: Vault v1.0.3
             Version Sha: 85909e3373aa743c34a6a0ab59131f61fd9e8e43

2019-10-02T19:14:15.412Z [WARN]  core: entering seal migration mode; Vault will not automatically unseal even if using an autoseal
2019-10-02T19:14:15.413Z [WARN]  failed to unseal core: error="cannot auto-unseal during seal migration"

with values.yaml:

global:
  enabled: true
  image: "vault:1.2.2"

server:
  extraEnvironmentVars:
    GOOGLE_REGION: us
    GOOGLE_PROJECT: xxx-xxx-xxx
    GOOGLE_CREDENTIALS: /vault/userconfig/xxx-xxx-xxx/credentials.json
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/xxx-xxx-xxx/credentials.json

  resources:
    requests:
      memory: 600Mi
      cpu: 100m
    limits:
      memory: 600Mi
      cpu: 250m

  extraVolumes:
    - type: "secret"
      name: "xxx-xxx-xxx"
      load: false

  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ template "vault.name" . }}
              release: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname

  annotations: |
    prometheus.io/scrape: "true"
    prometheus.io/path: "/v1/sys/metrics"
    prometheus.io/port: "8200"

  service:
    enabled: true

  ha:
    enabled: true
    replicas: 3

    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "gcs" {
        bucket        = "xxx-xxx-xxx"
      }

      seal "gcpckms" {
         project     = "xxx-xxx-xxx"
         region      = "us"
         key_ring    = "xxx-xxx-xxx"
         crypto_key  = "xxx-xxx-crypto-key"
      }

ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 80
  annotations:
    cloud.google.com/load-balancer-type: "Internal"

The vault status is as follows:

Key                      Value
---                      -----
Recovery Seal Type       shamir
Initialized              true
Sealed                   true
Total Recovery Shares    11
Threshold                2
Unseal Progress          0/2
Unseal Nonce             n/a
Version                  1.0.3
HA Enabled               false
command terminated with exit code 2

The pods were created, and looking at the log file see the following

2019-10-04T14:08:23.440Z [ERROR] core: failed to create audit entry: path=file/ error="sanity check failed; unable to open "/var/log/vault/audit.log" for writing: mkdir /var/log/vault: permission denied"

I issued the command vault audit enable file file_path=/vault/audit/vault_audit.log, but it didn't work. I issued the command vault operator unseal -migrate, and it didn't work either after entering the recovery key(s).

If I manually create the folder (mkdir /var/log/vault) and change its ownership to vault (chown vault:vault /var/log/vault), then the Vault cluster is unsealed automatically.

Am I missing something?
