kubernetes-retired / external-storage

[EOL] External storage plugins, provisioners, and helper libraries

License: Apache License 2.0

Makefile 2.39% Go 88.19% Shell 3.63% Python 3.27% Dockerfile 1.29% Starlark 0.97% HTML 0.26%

external-storage's Introduction

Although many of these recipes still work, this repo is now deprecated; work has moved to https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner. Come join us there!

External Storage


External Provisioners

This repository houses community-maintained external provisioners plus a helper library for building them. Each provisioner is contained in its own directory, so for information on how to use one, enter its directory and read its documentation. The library is contained in the lib directory.

What is an 'external provisioner'?

An external provisioner is a dynamic PV provisioner whose code lives out-of-tree/external to Kubernetes. Unlike in-tree dynamic provisioners that run as part of the Kubernetes controller manager, external ones can be deployed & updated independently.

External provisioners work just like in-tree dynamic PV provisioners. A StorageClass object can name an external provisioner instance as its provisioner, just as it can name an in-tree one. The instance will then watch for PersistentVolumeClaims that ask for the StorageClass and automatically create PersistentVolumes for them. For more information on how dynamic provisioning works, see the docs or this blog post.
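
For orientation, here is a minimal, hedged sketch of what such a provisioner looks like when built on this repo's helper library: implement the library's Provisioner interface (Provision/Delete) and hand it to a ProvisionController. Import paths (client-go still vendored pkg/api/v1 in this era) and the exact NewProvisionController argument list vary between library releases, so treat the wiring below as illustrative rather than definitive.

// Minimal sketch of an external provisioner built on this repo's helper library.
package main

import (
	"errors"
	"os"
	"path"

	"github.com/kubernetes-incubator/external-storage/lib/controller"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/pkg/api/v1"
	"k8s.io/client-go/rest"
)

type hostPathProvisioner struct {
	root string // directory under which PV-backing dirs are created
}

// Provision creates the backing storage asset and returns a PV describing it.
func (p *hostPathProvisioner) Provision(options controller.VolumeOptions) (*v1.PersistentVolume, error) {
	dir := path.Join(p.root, options.PVName)
	if err := os.MkdirAll(dir, 0777); err != nil {
		return nil, err
	}
	return &v1.PersistentVolume{
		ObjectMeta: v1.ObjectMeta{Name: options.PVName},
		Spec: v1.PersistentVolumeSpec{
			PersistentVolumeReclaimPolicy: options.PersistentVolumeReclaimPolicy,
			AccessModes:                   options.PVC.Spec.AccessModes,
			Capacity: v1.ResourceList{
				v1.ResourceName(v1.ResourceStorage): options.PVC.Spec.Resources.Requests[v1.ResourceName(v1.ResourceStorage)],
			},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: dir},
			},
		},
	}, nil
}

// Delete removes the storage asset backing the given PV.
func (p *hostPathProvisioner) Delete(pv *v1.PersistentVolume) error {
	if pv.Spec.HostPath == nil {
		return errors.New("not a hostPath volume")
	}
	return os.RemoveAll(pv.Spec.HostPath.Path)
}

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	serverVersion, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	// "example.com/hostpath" must match the provisioner field of the StorageClass.
	// Older library versions take extra arguments (resync period, retry thresholds,
	// leader-election timings); check the release you vendor.
	pc := controller.NewProvisionController(clientset, "example.com/hostpath",
		&hostPathProvisioner{root: "/tmp/hostpath-provisioner"}, serverVersion.GitVersion)
	pc.Run(wait.NeverStop)
}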

How to use the library

lib is deprecated. The library has moved to kubernetes-sigs/sig-storage-lib-external-provisioner.

Roadmap

February

  • Finalize repo structure, release process, etc.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

  • Slack: #sig-storage

Kubernetes Incubator

This is a Kubernetes Incubator project. The project was established 2016-11-15 (as nfs-provisioner). The incubator team for the project is:

  • Sponsor: Clayton (@smarterclayton)
  • Champion: Jan (@jsafrane) & Brad (@childsb)
  • SIG: sig-storage

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

external-storage's People

Contributors

aglitke childsb cofyc dhirajh disrani-px humblec hzxuzhonghu ianchakeres j-griffith johngmyers jsafrane k8s-ci-robot klausenbusk kvaps lichuqiang mcronce msau42 nak3 pospispa raffaelespazzoli robbilie rootfs sathieu satyamz sbezverk thirdeyenick tsmetana wongma7 xingzhou yuanying


external-storage's Issues

CephFS: volume deletion results in "server could not find the requested resource"

Versions:

oc v3.6.0-alpha.1+46942ad
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://apps.purdueusb.com:8443
openshift v3.6.0-alpha.1+46942ad
kubernetes v1.5.2+43a9be4

etcd Version: 3.1.3

Using three master/node hosts and one node-only host.

$ oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM            REASON    AGE
pvc-5f6b7c74-2ea0-11e7-b8b2-00259069bf70   1Gi        RWO           Delete          Released   default/test-1             39m
pvc-79c578e4-2e92-11e7-b8b2-00259069bf70   1Gi        RWO           Delete          Released   ceph/fs-test-2             44m

The provisioner is running under a service account that has privileged access to the cluster, so it's not a permissions issue.

I0501 19:33:10.076787       1 controller.go:1052] scheduleOperation[delete-pvc-5f6b7c74-2ea0-11e7-b8b2-00259069bf70[6122c478-2ea0-11e7-b8b2-00259069bf70]]
I0501 19:33:10.076820       1 controller.go:1052] scheduleOperation[delete-pvc-79c578e4-2e92-11e7-b8b2-00259069bf70[b76cfc00-2e9f-11e7-b8b2-00259069bf70]]
I0501 19:33:10.076864       1 controller.go:1001] deleteVolumeOperation [pvc-79c578e4-2e92-11e7-b8b2-00259069bf70] started
I0501 19:33:10.076904       1 controller.go:1001] deleteVolumeOperation [pvc-5f6b7c74-2ea0-11e7-b8b2-00259069bf70] started
E0501 19:33:10.163146       1 controller.go:1023] Deletion of volume "pvc-5f6b7c74-2ea0-11e7-b8b2-00259069bf70" failed: the server could not find the requested resource
E0501 19:33:10.163218       1 goroutinemap.go:164] Operation for "delete-pvc-5f6b7c74-2ea0-11e7-b8b2-00259069bf70[6122c478-2ea0-11e7-b8b2-00259069bf70]" failed. No retries permitted 
until 2017-05-01 19:33:10.663193288 +0000 UTC (durationBeforeRetry 500ms). Error: the server could not find the requested resource
E0501 19:33:10.171214       1 controller.go:1023] Deletion of volume "pvc-79c578e4-2e92-11e7-b8b2-00259069bf70" failed: the server could not find the requested resource
E0501 19:33:10.171259       1 goroutinemap.go:164] Operation for "delete-pvc-79c578e4-2e92-11e7-b8b2-00259069bf70[b76cfc00-2e9f-11e7-b8b2-00259069bf70]" failed. No retries permitted 
until 2017-05-01 19:33:10.671245341 +0000 UTC (durationBeforeRetry 500ms). Error: the server could not find the requested resource

The cephfs-provisioner is built against 5f6f444 of this repo.

Use the "failedRetryThreshold" parameter in delete PVC flow

In the external-storage incubator code, I see that failedRetryThreshold is used only in the provision flow.
Currently, if a PVC delete fails, it keeps retrying forever. If failedRetryThreshold applied to delete as well, we could control the retries.
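
As an illustration of the request (this is not in the library today), a per-volume counter that the delete path could consult, mirroring the provision-side threshold; all names below are hypothetical:

// Illustration only: a per-volume failure counter the delete path could
// consult so it gives up after failedRetryThreshold attempts.
type deleteRetryTracker struct {
	threshold int
	failures  map[string]int
}

func newDeleteRetryTracker(threshold int) *deleteRetryTracker {
	return &deleteRetryTracker{threshold: threshold, failures: map[string]int{}}
}

// recordFailure returns false once a PV has failed deletion `threshold`
// times, signalling the controller to stop scheduling delete operations for
// it (and, presumably, to emit an event so the admin can intervene).
func (t *deleteRetryTracker) recordFailure(pvName string) bool {
	t.failures[pvName]++
	if t.failures[pvName] >= t.threshold {
		delete(t.failures, pvName)
		return false
	}
	return true
}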

Make aws-secrets optional?

Thanks for providing the efs external provisioner!
I wonder if providing aws-secrets could be made optional (since they are only needed for checking, and if you are confident that your setup is OK then they are not really needed).

`seccomp` when running NFS-provisioner with the k8s Deployment

The documentation for running the NFS-provisioner using Docker includes this line:

If you are using Docker 1.10 or newer, it also needs a more permissive seccomp profile: unconfined or deploy/docker/nfs-provisioner-seccomp.json

How does this affect you when running with Kubernetes and Docker 1.10 or newer? Do you need to add the more permissive seccomp profile as well?
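
For reference, a hedged sketch of how that Docker-level seccomp relaxation mapped onto Kubernetes at the time, using the alpha seccomp annotations on the pod template (the field-based seccompProfile came much later). The Deployment shown is illustrative, not the manifest shipped in deploy/kubernetes.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-provisioner
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nfs-provisioner
      annotations:
        # unconfined, like the Docker instructions, or point at the shipped
        # profile copied under the kubelet's seccomp root
        # (default /var/lib/kubelet/seccomp):
        seccomp.security.alpha.kubernetes.io/pod: unconfined
        # container.seccomp.security.alpha.kubernetes.io/nfs-provisioner: localhost/nfs-provisioner-seccomp.json
    spec:
      containers:
      - name: nfs-provisioner
        image: quay.io/kubernetes_incubator/nfs-provisioner:v1.0.3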

Auto deploy images to quay via travis

We should automatically build and push images for every provisioner when changes are made. For this we may need to rely on git tags. I am thinking of using a convention like

<image-name>-<version>
efs-provisioner-v1.0.0
nfs-provisioner-v1.0.8

Only nfs-provisioner has a version number at the moment. We should give everything else an initial v1.0.0 now that they've had time to settle. This may clutter up the Releases page a bit.

The library will continue to get the simple v3.0.0-style tags for itself; otherwise, they could be moved to lib-v3.0.0.

Possible to change the identity of the provisioner?

I am trying to write a dynamic provisioner which provisions on a shared storage pool (think Gluster/Ceph, but proprietary)

I have the provisioner working on a single node, but I'm not trying to use the leader-election library to make it HA. I want it to run on all of the nodes that provide the storage.

I can get this working easily, however there's a caveat that I can't figure out.

I'm trying to set the provisioner identity to something repeatable. I've modelled this code on the cephfs provisioner.

type myProvisioner struct {
	identity types.UID
}
func NewMyProvisioner() controller.Provisioner {
	return &myProvisioner{
		identity: uuid.NewUUID(),
	}
}

What I'd like to do is set that to something more obvious:

  1. If the host reboots for some reason, and then comes back, the provisioner will have a new UUID when it starts up. This leaves dangling volumes. If I can make the provisioner ID repeatable, it can reclaim those volumes when (if) it returns.
  2. Alternatively, setting it to the cluster/storage pool name, so that when the master dies, another host can manage the volumes the other provisioner created.

I tried to do this like so:

type myProvisioner struct {
	identity string
	cluster string
}
func NewMyProvisioner() controller.Provisioner {
	hostName, err := os.Hostname()
	// we must have a hostname set to identify the pv
	if err != nil {
		panic(err)
	}
	clusterName := os.Getenv("STORAGE_NAME")
	return &myProvisioner{
		// identity of the provisioner is the hostname
		identity: hostName,
		cluster:  clusterName,
	}
}

and then in the persistent volume object, I set it like so:

pv := &v1.PersistentVolume{
		ObjectMeta: v1.ObjectMeta{
			Name: options.PVName,
			Annotations: map[string]string{
				"ploopProvisionerIdentity": p.identity,
				"ploopClusterName":         p.cluster,
			},
		},
		Spec: v1.PersistentVolumeSpec{
			PersistentVolumeReclaimPolicy: options.PersistentVolumeReclaimPolicy,
			AccessModes:                   options.PVC.Spec.AccessModes,
			Capacity: v1.ResourceList{
				v1.ResourceName(v1.ResourceStorage): options.PVC.Spec.Resources.Requests[v1.ResourceName(v1.ResourceStorage)],
			},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				FlexVolume: &v1.FlexVolumeSource{
					Driver:  "jaxxstorm/ploop", // 
					Options: flexOptions,
				},
			},
		},
	}

This didn't work at all and it still generated the exact same identity (a uuid) for the provisioner.

Any ideas? Is this even the right place to ask?
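
For the stated goal, note that the helper library only calls your Delete(); ownership checks against custom identity annotations are done inside the provisioner itself, as e.g. the nfs provisioner does. A minimal sketch, assuming the struct and annotation keys from the snippet above (IgnoredError comes from the lib's controller package; check the version you vendor):

// Sketch: a Delete() that honors the stable identity written into the PV's
// annotations, so a restarted provisioner (same hostname / same cluster
// name) can still reclaim volumes it created earlier. Annotation keys are
// the hypothetical ones from the snippet above.
func (p *myProvisioner) Delete(pv *v1.PersistentVolume) error {
	identity, ok := pv.Annotations["ploopProvisionerIdentity"]
	if !ok {
		return errors.New("identity annotation not found on PV")
	}
	// Accept PVs created by this host, or fall back to the cluster/storage
	// pool name so another node can clean up after a dead one.
	if identity != p.identity && pv.Annotations["ploopClusterName"] != p.cluster {
		return &controller.IgnoredError{Reason: "identity annotation on PV does not match ours"}
	}
	// ... tear down the backing storage here ...
	return nil
}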

Could not start glusterblock-provisioner container

Following the guide from https://github.com/kubernetes-incubator/external-storage/tree/master/gluster/block, I got this error when starting the container:

# docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host  glusterblock-provisioner /usr/local/bin/glusterblock-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=glusterblock-provisioner-1
F0523 07:45:28.088277       1 glusterblock-provisioner.go:568] Failed to create config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined

Version the library

I am about to update the library to use the stable version of client-go 2.0.0 that was released 3 weeks ago. The library should follow semver so that people can easily know when changes have been made to it (versus changes to other files in this repository like documentation or provisioner-specific changes)

If anybody has any opinions on how to version the library, please post them here.

At the moment I am thinking of devoting this entire repo's tags to versioning the library. i.e. https://github.com/kubernetes-incubator/external-storage/releases will be devoted to the library and I will tag the repo with v1.0.0 once I update to client-go v2.0.0. This means individual provisioners will have to figure out how to version and release themselves on their own (e.g., point to Docker Hub/Quay and use Docker tags)

Troubles with new PVC.spec.storageClassName attribute

I'm testing Kubernetes 1.6 with quay.io/kubernetes_incubator/nfs-provisioner:v1.0.3 and the provisioner does not work when I ask for a StorageClass in my PVC using PVC.spec.storageClassName.

OK, it's a new attribute and the provisioner does not know about it. On the other hand, I would expect my PVC to get provisioned when I fill in both the beta annotation and storageClassName on the PVC. Instead, the provisioner pod logs this:

I0317 12:51:25.840945       1 controller.go:841] scheduleOperation[lock-provision-e2e-tests-volume-provisioning-h2s0q/pvc-403m7[fa73f5db-0b0f-11e7-865f-42010af00002]]
E0317 12:51:29.408723       1 leaderelection.go:274] Failed to update lock: PersistentVolumeClaim "pvc-403m7" is invalid: spec: Forbidden: field is immutable after creation
E0317 12:51:33.052046       1 leaderelection.go:274] Failed to update lock: PersistentVolumeClaim "pvc-403m7" is invalid: spec: Forbidden: field is immutable after creation
E0317 12:51:37.407035       1 leaderelection.go:274] Failed to update lock: PersistentVolumeClaim "pvc-403m7" is invalid: spec: Forbidden: field is immutable after creation

It seems the provisioner reads the PVC (with storageClassName attribute) and tries to write it back without the attribute and it gets rejected.

I am not sure about the client-go release schedule; an update would probably fix it (and it could use v1.GetPersistentVolumeClaimClass to get the class from both the beta annotation and the attribute).
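
A hedged sketch of that fallback, roughly what upstream's v1.GetPersistentVolumeClaimClass does (the precedence order depends on the release you vendor): prefer one source and fall back to the other, rather than looking only at the beta annotation.

// Sketch: resolve the requested class from either the beta annotation or the
// new spec field.
func getClaimClass(claim *v1.PersistentVolumeClaim) string {
	if class, found := claim.Annotations["volume.beta.kubernetes.io/storage-class"]; found {
		return class
	}
	if claim.Spec.StorageClassName != nil {
		return *claim.Spec.StorageClassName
	}
	return ""
}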

My PVC:

- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      volume.beta.kubernetes.io/storage-class: myclass-external
      volume.beta.kubernetes.io/storage-provisioner: example.com/nfs
    creationTimestamp: 2017-03-17T12:48:11Z
    generateName: pvc-
    name: pvc-403m7
    namespace: e2e-tests-volume-provisioning-h2s0q
    resourceVersion: "11148"
    selfLink: /api/v1/namespaces/e2e-tests-volume-provisioning-h2s0q/persistentvolumeclaims/pvc-403m7
    uid: fa73f5db-0b0f-11e7-865f-42010af00002
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1500Mi
    storageClassName: myclass-external
  status:
    phase: Pending

[nfs-provisioner] Unable to share single files

These are the volumeMounts of a ReplicationController that mounts the nfs-provisioner volume:

       volumeMounts:
          - name: nfs
            mountPath: /etc/nginx/nginx.conf
            subPath: robot/nginx/nginx.conf
          - name: nfs
            mountPath: /etc/nginx/fullchain.pem
            subPath: robot/nginx/fullchain.pem
          - name: nfs
            mountPath: /etc/nginx/privkey.pem
            subPath: robot/nginx/privkey.pem
          - name: nfs
            mountPath: /etc/nginx/.htpasswd
            subPath: robot/nginx/.htpasswd
          - name: nfs
            mountPath: /usr/share/nginx/html
            subPath: robot/www

The first three will never mount and will fail (they are used to mount single files); only the fourth one works because it mounts an entire directory.

While I can recreate the pods to work with only directories, it is really handy to be able to mount single files.

PVC deletion is not deleting associated PV

I am using NFS provisioner for dynamic provisioning. When I delete PVC, the PV status changes to "Failed" and hence the PV deletion does not happen. I see logs as below from controller.

volume "pvc-d4e8daaf-0592-11e7-9057-4223abc40265" no longer needs deletion, skipping

Adding some more info from my debugging of the incubator code at https://github.com/kubernetes-incubator/external-storage/blob/master/lib/controller/controller.go#L423

I see the PV status changing to "Failed" upon PVC deletion. Hence the above condition fails, which prevents PV deletion.

I have opened a bug in the Kubernetes community for the PV status issue. But should the incubator allow deletion of the PV in the "Failed" state as well?

kubernetes/kubernetes#43138

Golang 1.7 or higher needed

It could be worth mentioning in the documentation that Golang 1.7 or higher is needed.
On my Fedora 24, installing the 1.6.4 version from the default repo, I got the following error trying to make the hostpath-provisioner:

CGO_ENABLED=0 go build -a -ldflags '-extldflags "-static"' -o hostpath-provisioner .
vendor/k8s.io/client-go/rest/request.go:21:2: cannot find package "context" in any of:
/home/ppatiern/go/src/hostpath-provisioner/vendor/context (vendor tree)
/usr/lib/golang/src/context (from $GOROOT)
/home/ppatiern/go/src/context (from $GOPATH)
Makefile:23: recipe for target 'hostpath-provisioner' failed
make: *** [hostpath-provisioner] Error 1

I see that the "context" package was added in the 1.7 release.

Packages not available.

Not all of the referenced packages are available.

  • package: k8s.io/client-go
    version: 7615377
    subpackages:
    • kubernetes
    • kubernetes/typed/core/v1
    • pkg/api
    • pkg/api/v1
    • pkg/apis/storage/v1
    • pkg/apis/storage/v1beta1
    • rest
    • tools/cache
    • tools/clientcmd
    • tools/record

For example, the following were not found at commit 7615377:

  • pkg/api/resource
  • pkg/api/testapi
  • pkg/api
  • pkg/api/v1

Library consumers do not have access to some upstream volume util functions

This has come up a couple of times now: some volume util functions that people expect to have available when writing volume plugins/provisioners are not automatically copied to client-go, and so are not readily available to us: https://github.com/kubernetes/kubernetes/blob/38837b018bf2a9215653054c2ec59636c0ba9440/pkg/volume/util.go

We have copied RoundUpSize to a util package. We can keep copying them on an as-needed basis, or look into refactoring things upstream so that the functions we need are automatically copied to client-go.
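
For reference, the upstream RoundUpSize we copied is essentially a one-liner:

// RoundUpSize rounds volumeSizeBytes up to a whole number of allocation
// units, e.g. RoundUpSize(1500*1024*1024, 1024*1024*1024) == 2 (a 1500Mi
// request needs two 1Gi allocation units).
func RoundUpSize(volumeSizeBytes int64, allocationUnitBytes int64) int64 {
	return (volumeSizeBytes + allocationUnitBytes - 1) / allocationUnitBytes
}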

Permission/Owner errors when provisioning application with nfs-client provider

I've made an nfs-client provisioner with a StorageClass named default. My volume tested successfully with test-claim/test-pod, but when I tried to deploy gitlab-ce from Helm, I got permission/ownership problems. Here are the logs:

[root@kube-master-1 ~]# kubectl describe storageclass
Name:           default
IsDefaultClass: Yes
Annotations:    storageclass.kubernetes.io/is-default-class=true
Provisioner:    fuseim.pri/ifs
Parameters:     <none>
Events:         <none>

All my PVs are bound to default StorageClass:

[root@kube-master-1 ~]# kubectl get pv
NAME                                                                     CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                           STORAGECLASS          REASON    AGE
default-gitlab-gitlab-ce-data-pvc-e11fbd86-3655-11e7-97a1-fa163e5e86fb   10Gi       RWO           Delete          Bound     default/gitlab-gitlab-ce-data   default                        25m
default-gitlab-gitlab-ce-etc-pvc-e1236992-3655-11e7-97a1-fa163e5e86fb    1Gi        RWO           Delete          Bound     default/gitlab-gitlab-ce-etc    default                        25m
default-gitlab-postgresql-pvc-e1216623-3655-11e7-97a1-fa163e5e86fb       10Gi       RWO           Delete          Bound     default/gitlab-postgresql       default                        25m
default-gitlab-redis-pvc-e1207a34-3655-11e7-97a1-fa163e5e86fb            10Gi       RWO           Delete          Bound     default/gitlab-redis            default                        25m
kube-system-grafana-pv-claim-pvc-2d14d769-35c5-11e7-97a1-fa163e5e86fb    1Gi        RWX           Delete          Bound     kube-system/grafana-pv-claim    managed-nfs-storage             17h
kube-system-influxdb-pv-claim-pvc-2d22b3e9-35c5-11e7-97a1-fa163e5e86fb   5Gi        RWX           Delete          Bound     kube-system/influxdb-pv-claim   managed-nfs-storage             17h

And the PVC are bound to PV:

[root@kube-master-1 ~]# kubectl get pvc
NAME                    STATUS    VOLUME                                                                   CAPACITY   ACCESSMODES   STORAGECLASS   AGE
gitlab-gitlab-ce-data   Bound     default-gitlab-gitlab-ce-data-pvc-e11fbd86-3655-11e7-97a1-fa163e5e86fb   10Gi       RWO           default        25m
gitlab-gitlab-ce-etc    Bound     default-gitlab-gitlab-ce-etc-pvc-e1236992-3655-11e7-97a1-fa163e5e86fb    1Gi        RWO           default        25m
gitlab-postgresql       Bound     default-gitlab-postgresql-pvc-e1216623-3655-11e7-97a1-fa163e5e86fb       10Gi       RWO           default        25m
gitlab-redis            Bound     default-gitlab-redis-pvc-e1207a34-3655-11e7-97a1-fa163e5e86fb            10Gi       RWO           default        25m

The directory is mounted and got some files when the container started:

[root@kube-master-1 ~]# ls -l /mnt/188/default-gitlab-redis-pvc-e1207a34-3655-11e7-97a1-fa163e5e86fb/
total 8
drwxr-sr-x+ 2 96 96 23 May 11  2017 conf
drwxr-xr-x+ 2 96 96  6 May 11  2017 data

[root@kube-master-1 ~]# ls -l /mnt/188/default-gitlab-postgresql-pvc-e1216623-3655-11e7-97a1-fa163e5e86fb/
total 4
drwxr-sr-x+ 2 96 96 6 May 11  2017 postgresql-db

But the pod got errors when changing permissions or ownership:

Pod: gitlab-redis-676770141-tr5kp


Welcome to the Bitnami redis container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redis
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redis/issues
Send us your feedback at [email protected]

nami    INFO  Initializing redis
redis   INFO  [retry] Trying to configure permissions... 19 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 18 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 17 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 16 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 15 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 14 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 13 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 12 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 11 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 10 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 9 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 8 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 7 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 6 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 5 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 4 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 3 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 2 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 1 remaining attempts
redis   INFO  [retry] Trying to configure permissions... 0 remaining attempts
Error executing 'postInstallation': EPERM: operation not permitted, chown '/opt/bitnami/redis/conf/redis.conf'


Pod: gitlab-postgresql-701477374-zh52g

chown: changing ownership of '/var/lib/postgresql/data/pgdata': Operation not permitted

CephFS: PV/PVC are granted all access modes.

While I was deploying the cephfs provisioner following https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/cephfs, I found that my created PVC and the provisioned PV were granted all access modes rather than the requested ReadWriteMany.

Version

Client Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.0-alpha.0.1902+7543bac56342fd", GitCommit:"7543bac56342fddaaadcfe555ad500a19dabf611", GitTreeState:"clean", BuildDate:"2017-03-31T05:32:58Z", GoVersion:"go1.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.0-alpha.0.1902+7543bac56342fd", GitCommit:"7543bac56342fddaaadcfe555ad500a19dabf611", GitTreeState:"clean", BuildDate:"2017-03-31T05:32:58Z", GoVersion:"go1.8", Compiler:"gc", Platform:"linux/amd64"}

Create a PVC for the CephFS provisioner:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: c
  annotations:
    volume.beta.kubernetes.io/storage-class: "cephfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Found that PVC and PV with all access modes:

# kubectl get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
c         Bound     pvc-c3a4282c-19ae-11e7-b3c3-0050569f3e08   1Gi        RWO,ROX,RWX   cephfs         4h

# kubectl get pvc c -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"8bd708c4-19ae-11e7-a0f8-0050569f3e08","leaseDurationSeconds":30,"acquireTime":"2017-04-05T03:20:05Z","renewTime":"2017-04-05T03:27:24Z","leaderTransitions":0}'
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-class: cephfs
    volume.beta.kubernetes.io/storage-provisioner: ceph/cephfs
  creationTimestamp: 2017-04-05T03:20:05Z
  name: c
  namespace: default
  resourceVersion: "961"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/c
  uid: c3a4282c-19ae-11e7-b3c3-0050569f3e08
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: pvc-c3a4282c-19ae-11e7-b3c3-0050569f3e08
status:
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  - ReadWriteMany
  capacity:
    storage: 1Gi
  phase: Bound


# kubectl get pv pvc-c3a4282c-19ae-11e7-b3c3-0050569f3e08 -o yaml                                                                 
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    cephFSProvisionerIdentity: 8bd6f4f1-19ae-11e7-a0f8-0050569f3e08
    cephShare: kubernetes-dynamic-pvc-7d40c65c-19af-11e7-a0f8-0050569f3e08
    pv.kubernetes.io/provisioned-by: ceph/cephfs
  creationTimestamp: 2017-04-05T03:27:21Z
  name: pvc-c3a4282c-19ae-11e7-b3c3-0050569f3e08
  resourceVersion: "954"
  selfLink: /api/v1/persistentvolumes/pvc-c3a4282c-19ae-11e7-b3c3-0050569f3e08
  uid: c79df824-19af-11e7-b3c3-0050569f3e08
spec:
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  - ReadWriteMany
  capacity:
    storage: 1Gi
  cephfs:
    monitors:
    - 10.66.146.225:6789
    path: /volumes/kubernetes/kubernetes-dynamic-pvc-7d40c65c-19af-11e7-a0f8-0050569f3e08
    secretRef:
      name: ceph-kubernetes-dynamic-user-7d40c6f0-19af-11e7-a0f8-0050569f3e08-secret
    user: kubernetes-dynamic-user-7d40c6f0-19af-11e7-a0f8-0050569f3e08
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: c
    namespace: default
    resourceVersion: "782"
    uid: c3a4282c-19ae-11e7-b3c3-0050569f3e08
  persistentVolumeReclaimPolicy: Delete
  storageClassName: cephfs
status:
  phase: Bound

CephFS,Gluster Block: statefulness

@rootfs @humblec we have to implement a way for the CephFS & Gluster Block provisioners to maintain state (across restarts): https://github.com/kubernetes-incubator/external-storage/blob/master/ceph/cephfs/cephfs-provisioner.go#L61 <- this old method won't survive restarts since the identity is generated at runtime. First, you have to decide whether your provisioner needs to maintain state at all: whose PVs should it be allowed to Delete? If it restarts, can it still Delete the PVs it Provisioned before it restarted, or will those PVs be left dangling?

To see what I am talking about, please see https://github.com/kubernetes-incubator/external-storage/tree/master/docs#running-multiple-provisioners-and-giving-provisioners-identities. In not so many words, I am basically talking about the same thing as the "unique name" in the CSI proposal https://docs.google.com/document/d/1JMNVNP-ZHz8cGlnqckOnpJmHF-DNY7IYP-Di7iuVhQI/edit#heading=h.6r2wd16r05z7 which allows a plugin to associate a given "storage space" with itself in case of failure. In the same way, our external provisioners have to be able to associate themselves with given "storage spaces."

In our case, Kubernetes won't decide a "unique name" for us, so we have to decide if we need one, and then if so generate & maintain it ourselves.
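
One hedged sketch of the "generate & maintain it ourselves" part: derive the identity from something stable (an env var injected via the downward API, a node name, or the storage pool name) instead of a UUID minted at startup. PROVISIONER_IDENTITY is an assumed name, not something the library defines.

// Sketch: a stable identity means a restarted provisioner presents the same
// identity and can still Delete the PVs it provisioned.
func provisionerIdentity() string {
	if id := os.Getenv("PROVISIONER_IDENTITY"); id != "" {
		return id
	}
	// Fall back to the hostname; still stable across container restarts on
	// the same node (or for a StatefulSet pod).
	host, err := os.Hostname()
	if err != nil {
		panic("no stable identity available: " + err.Error())
	}
	return host
}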

local-volume image is not accessible for public

The current image for local-volume is hosted at gcr.io/msau-k8s-dev/local-volume-provisioner:latest, which is not publicly available. Can you please push the local-volume-provisioner to quay so everyone can use it?

rbd support

Currently, I have to build a custom version of the kube-controller-manager to add the correct version of the Ceph RBD tools to support dynamic RBD provisioning.

Ideally the code would be moved out to function like the external cephfs provisioner, so that a prebaked kube-rbd-jewel container can be added too.

NFS provisioner cannot handle "alpha" annotation for "default" storage class

This affects all Kubernetes Helm Charts that default to using the storage class:

volume.alpha.kubernetes.io/storage-class: default

The logged error is:

Claim "default/wordypress-wordpress-wordpress": StorageClass "" not found

Changing the storage class key name to the "beta" version fixes things!?

volume.beta.kubernetes.io/storage-class: default

It would be great if the alpha version worked so that many, many charts need not be changed and backward compatibility is not lost. I'm using K8s 1.5.2 on a cluster built with kubeadm.

Relates to this ticket I created
helm/charts#807

Unable to specify mountOptions in the parameters of storageclass definition

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: nfs
provisioner: storage-nfs.aws.event-cloud.net/nfs
parameters:
  gid: none
  rootSquash: "false"
  mountOptions:
  server: 172.16.185.20
  path: /data

This YAML file does not work. I am not sure how to specify the mount options so the NFS drive can be mounted when volumes are requested by this storage class. The documentation does not provide any examples for this.

Any help will be much appreciated.

Thanks
-Praveen
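
As an aside (a sketch, not a confirmed fix for this particular provisioner): on Kubernetes 1.8+, mount options are a top-level StorageClass list field rather than a parameter; whether a given external provisioner copies them onto the PVs it creates depends on the provisioner and library version.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
provisioner: storage-nfs.aws.event-cloud.net/nfs
parameters:
  gid: none
  rootSquash: "false"
mountOptions:   # top-level list field, Kubernetes 1.8+
- vers=4.1
- noatime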

[ProvisionController] Deprovision upon PV deletion

Currently, the only path that triggers deprovisioning (i.e., Provisioner.Delete()) is when a dynamically provisioned PV is marked as released (and has ReclaimPolicy: Delete).
For cases where a PV is directly deleted by a user (say, accidentally by an admin), the Provisioner.Delete() call will never be invoked for that PV, which may lead to dangling storage assets at the physical storage provider.

Here's a rough proposal of how to address this:

  1. Extend the ProvisionController to intercept PV deletions (of its particular storage class and provisioner name).
  2. Upon PV deletion, if the PV is bound and its referenced PVC exists (not deleted), invoke Provisioner.Delete(pv), as sketched below.
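
A rough sketch of those two steps (this is not in the library today; all names are illustrative, client-go call names vary slightly by version, and the informer, clientset, and the lib's controller.Provisioner are assumed to be wired up elsewhere):

// watchPVDeletions reacts to PV deletions and deprovisions when a
// provisioned, still-claimed PV disappears.
func watchPVDeletions(client kubernetes.Interface, pvInformer cache.SharedInformer,
	p controller.Provisioner, provisionerName string) {
	pvInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		DeleteFunc: func(obj interface{}) {
			pv, ok := obj.(*v1.PersistentVolume)
			if !ok {
				return
			}
			// Step 1: only react to PVs this provisioner created.
			if pv.Annotations["pv.kubernetes.io/provisioned-by"] != provisionerName {
				return
			}
			if pv.Spec.ClaimRef == nil {
				return
			}
			// Step 2: if the referenced PVC still exists, the PV was deleted
			// out from under it, so clean up the backing storage asset.
			_, err := client.CoreV1().PersistentVolumeClaims(pv.Spec.ClaimRef.Namespace).
				Get(pv.Spec.ClaimRef.Name, metav1.GetOptions{})
			if err != nil {
				return
			}
			if err := p.Delete(pv); err != nil {
				glog.Errorf("failed to deprovision %s: %v", pv.Name, err)
			}
		},
	})
}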

[nfs-client-provisioner] PV stuck in pending state

Hi,

Currently I'm trying to get the nfs-client-provisioner running on a Raspberry Pi Kubernetes cluster. I could already successfully deploy the provisioner in an ARM-compatible Docker container (see https://gitlab.com/rpikube/nfs-provisioner-arm32v7 for details). While claiming a PV with the nfs-client-provisioner I encountered 2 issues.

  1. The name of the PV has to be ${pvName} instead of ${namespace}-${pvcName}-${pvName}, because otherwise I get stuck in an endless loop where it tries to provision the PV again and again (it looks for a PV named ${pvName} instead of ${namespace}-${pvcName}-${pvName} in controller.go). I could solve this issue by changing the pvName from
    pvName := strings.Join([]string{pvcNamespace, pvcName, options.PVName}, "-") to pvName := options.PVName

  2. The second problem is that the PV and PVC get stuck in the Pending state without any errors. I enabled verbose logging as well, but there seems to be nothing wrong (see nfs-client-provisioner.txt). The directory gets successfully created on the NFS server with permissions 777, owned by nobody:nogroup.

I am also attaching deployment.txt, which contains everything to set up the nfs-client provisioner, including RBAC authorization and a PSP.

Any help would be appreciated.
Thanks

Unable to mount to EFS file system

Hello.
Currently I am trying to create persistent volumes on OpenShift Container Platform 3.4 using the efs-provisioner pod.
The thing is, the very first time I was successfully able to deploy the pod, create the service account and clusterrole, and add policies to the service account. After running the oc patch command, the pod spun up immediately.

When trying to replicate all the steps with a different EFS and a new project on OCP, I get this error when the pod tries to spin up after the oc patch command:

Failed mount Unable to mount volumes for pod "efs-provisioner-2637432370-rc4mm_kristapstesting(f26eac51-4154-11e7-839c-026eb0b9aab0)": timeout expired waiting for volumes to attach/mount for pod "efs-provisioner-2637432370-rc4mm"/"kristapstesting". list of unattached/unmounted volumes=[pv-volume efs-provisioner-token-ioyyf] 3 times in the last 5 minutes

Failed sync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "efs-provisioner-2637432370-rc4mm"/"kristapstesting". list of unattached/unmounted volumes=[pv-volume efs-provisioner-token-ioyyf] 3 times in the last

I have tried several times with no luck. I am sure I am using the right directory from EFS in deployment.yaml, as well as the EFS DNS name. SSHing into the application node and manually mounting EFS, it mounts just fine.
I have run out of ideas at this point.
Can you please help?

Regards,
Kristaps

nfs-client-provisioner creates folders with 755, not 777

I'm having an issue where all the folders are created with 755, not 777.

When I go into the pod (quay.io/external_storage/nfs-client-provisioner:v1), I can see the created folders under /persistentvolumes, and I can chmod 777 those folders, so it shouldn't be a permission problem.

Any ideas what the problem is? Thanks in advance.

Travis takes too long

We should make up "suites" or something to build & test each provisioner individually, in parallel. AFAIK there's no not-ugly way to do this; we'll have to use bash env variables and if statements.

Travis is free so let's take advantage of it.

defaultFailedProvisionThreshold looks to be really low.

This threshold value is currently set to 5 in the controller. There may be scenarios where the user creates the required artifacts only after firing the PVC, e.g. creating a secret or recreating a storageclass. As no more attempts are made for the claim after this failure, it makes for a bad experience for a user/developer. Can I increase this value to allow more attempts or more time waiting?

CephFS: provisioner hangs

The Python script in the CephFS provisioner hangs when run against my system. I have verified that the environment variables are:

export CEPH_CLUSTER_NAME=ceph
export CEPH_MON=10.130.0.21:6789
export CEPH_AUTH_ID=admin
export CEPH_AUTH_KEY=key

I checked, and it's freezing on this line: https://github.com/ceph/ceph/blob/master/src/pybind/ceph_volume_client.py#L471

Using the ceph cli inside the provisioner container:

$ ceph --cluster ceph -m 10.130.0.21:6789 --id admin status                                                                                                
    cluster 764b7332-d2cf-4373-aa7e-157471bf70bb                                                                                                                        
     health HEALTH_WARN                                                                                                                                                 
            too many PGs per OSD (720 > max 300)                                                                                                                        
     monmap e7: 3 mons at {ceph-mon-3124656-31zfj=10.129.0.33:6789/0,ceph-mon-3124656-vt7gh=10.130.0.21:6789/0,ceph-mon-3124656-vz5mw=10.131.0.30:6789/0}               
            election epoch 32, quorum 0,1,2 ceph-mon-3124656-31zfj,ceph-mon-3124656-vt7gh,ceph-mon-3124656-vz5mw                                                        
        mgr no daemons active                                                                                                                                           
     osdmap e42: 4 osds: 4 up, 4 in                                                                                                                                     
            flags sortbitwise,require_jewel_osds,require_kraken_osds                                                                                                    
      pgmap v38764: 960 pgs, 8 pools, 1329 MB data, 582 objects                                                                                                         
            186 GB used, 2511 GB / 2698 GB avail                                                                                                                        
                 960 active+clean

Any ideas? cc @rootfs

Need to add event list permission to ClusterRole

When I tested my volume provisioner based on this library, I found some errors in the logs.

I0608 12:34:49.601544       1 controller.go:854] cannot start watcher for PVC default/test-nfs: User "system:serviceaccount:kube-system:persistent-volume-provisioner" cannot list events in the namespace "default". (get events)
E0608 12:34:49.601580       1 controller.go:667] Error watching for provisioning success, can't provision for claim "default/test-nfs": User "system:serviceaccount:kube-system:persistent-volume-provisioner" cannot list events in the namespace "default". (get events)

And then I found the List invocation here.

I think we need to add event list permission to the ClusterRoles for all provisioners based on
this library.

I can help to submit a pull request if needed.
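
A hedged sketch of the extra RBAC rule (names are illustrative; merge it into whatever ClusterRole your provisioner's deployment already binds). Per the log above, the claim watcher needs list (and typically watch) on events, in addition to the create/update/patch the provisioners already use for emitting events.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: persistent-volume-provisioner
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["list", "watch", "create", "update", "patch"]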

Dynamic provisioner for Hostpath

Hi,
I am interested in a hostpath provisioner that can work with a StatefulSet with replicas > 1. I had a conversation in the Slack api-machinery room, and they pointed me to this project. Here is my conversation: appscode/k8s-addons#14 (comment)

I am new to writing my own PV provisioner. Do you think what I am asking for is possible? How do I take the demo hostpath provisioner in this repo and make it a proper one?

Thanks.

EFS Provisioner - Non-root procs get permission-denied

HEAD of this repo, external EFS provisioner
Kubernetes 1.6.4

So far, we have gotten stuck trying to run a variety of things with EFS-provisioned volumes. It has worked for every container that runs its processes as root inside the container; those have no problem accessing dynamically provisioned mount points in the EFS volume.

However, we're running grafana and elasticsearch, both of which (responsibly) de-escalate permissions to non-root using gosu grafana /run.sh or gosu elasticsearch /run.sh. They get permission-denied errors when attempting to write to the PVC. I have exec'd into the containers and confirmed that I can touch files in the PVC as root, and get errors when I gosu <non-root> touch file.txt.

In the case of grafana, I was able to modify the entrypoint to remove gosu grafana, which worked and grafana is happy. However, elasticsearch refuses to run as root, which puts me in an awkward position.

I have tried using the GID annotation in the PVC: pv.beta.kubernetes.io/gid: <gid of non-root group>, however the documentation says that it applies only to the user of the entrypoint process of the first container:

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#access-control

In each case, the entrypoint scripts are run as root, but de-escalate at the end of the script. I'm not sure if this would specifically solve my problem even if that wasn't the case, though.

In both cases above, the entrypoint script is changing the permissions of the PVC mount before attempting to run, and I can see those uid/gid changes when I browse the EFS volume in another pod. However, even after such modifications the non-root user with the same GID cannot write to it.

This may not be specific to EFS, as it applies to NFS and NFS PV mounts in general.

[nfs-provisioner] Too many open files

Hi there,

We recently encountered a "Too many open files" error with the nfs-provisioner. It seems the ganesha NFS server uses some sort of file descriptor cache, which can be a real issue when you have NFS volumes with a lot of files.

There is currently no easy/direct way to specify the ulimit for the nfs-provisioner and/or control the ganesha FD cache.

E0407 08:23:57.125652       1 controller.go:578] Failed to provision volume for claim "default/redis-session-datadir" with StorageClass "nfs-fast": error creating export for volume: error exporting export block 
EXPORT
{
Export_Id = 4;
Path = /export/pvc-40d9e46a-1b6b-11e7-9c28-0050568e35fd;
Pseudo = /export/pvc-40d9e46a-1b6b-11e7-9c28-0050568e35fd;
Access_Type = RW;
Squash = no_root_squash;
SecType = sys;
Filesystem_id = 4.4;
FSAL {
Name = VFS;
}
}
: error calling org.ganesha.nfsd.exportmgr.AddExport: Error while parsing /export/vfs.conf because of (token scan) errors. Details:
Config File (<unknown file>:0): new file (/export/vfs.conf) open error (Too many open files), ignored

Is this a known issue?

CephFS: PVs aren't auto-deleted.

A PV provisioned by the CephFS provisioner is not automatically deleted when its PVC is deleted.

# kubectl get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
c1        Bound     pvc-0ea9f6de-1a8c-11e7-b3c3-0050569f3e08   1Gi        RWO           cephfs         8m

# kubectl delete pvc c1
persistentvolumeclaim "c1" deleted

# kubectl get pvc
No resources found.

# kubectl get pv pvc-0ea9f6de-1a8c-11e7-b3c3-0050569f3e08
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM        STORAGECLASS   REASON    AGE
pvc-0ea9f6de-1a8c-11e7-b3c3-0050569f3e08   1Gi        RWO           Delete          Bound     default/c1   cephfs                   9m

dynamic provisioner for externally managed NFS

The current NFS provisioner either creates an nfs-ganesha server or expects the kernel NFS server to be running on the node.

I would like to propose an out-of-tree provisioner which can dynamically provision volumes on an externally managed NFS server. The provisioner would create directories on the NFS mount point corresponding to the PVCs being created. Each directory would be mapped to a PV object, which in turn is bound to the PVC, a similar flow to the existing provisioner. It can be seen as a simple mkdir-provisioner.

Use case: NFS, GlusterFS, etc. are managed by separate teams, and the k8s user gets only the necessary authorization and mount points for consuming such filesystems; there is no direct access to the server. K8s should have ready-made mkdir-provisioners which can just provision volumes over such filesystems.

Example Flow:
--> NFS share provided to the k8s user: nfsserver:/data

--> nfsserver:/data mounted at /exports (inside the provisioner pod).

--> With the creation of a new PVC named pvc-0, the provisioner:
- creates the directory /exports/pvc-0
- creates a PV object mounted to nfsserver:/data/pvc-0
- the PVC gets bound to the PV.

Downsides specific to NFS:
--> Quota: quota management would be hard, as each PV on NFS is just a directory and not an NFS share created by the NFS server.
--> Security: a pod consuming the PVC would have root access to the NFS mount. A privileged pod can mess with the directories associated with the rest of the PVs (by just mounting the NFS share again).

Code should follow the specification mentioned here: https://github.com/kubernetes-incubator/external-storage/tree/master/docs/demo/hostpath-provisioner ,
https://github.com/jsafrane/kubernetes/blob/95197536fc5be791e474b952aa939ac7a2299d7e/docs/proposals/volume-provisioning.md

I understand this provisioner would be just a subset of the existing nfs-provisioner, but this particular use case is important if the user is fine with the quota and security issues mentioned above for now.

Does it sound good enough to have a separate directory in here which basically consists of such provisioners, say for NFS (for now)?
I would love to create a PR if this sounds good enough!
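
A hedged sketch of what the proposed mkdir-style Provision() could look like, assuming the external share nfsServer:nfsPath is already mounted at mountPath inside the provisioner pod (all field names are illustrative, using the same vendored API types as the snippets earlier in this page):

// Sketch of the mkdir-provisioner's Provision().
func (p *nfsDirProvisioner) Provision(options controller.VolumeOptions) (*v1.PersistentVolume, error) {
	// mkdir /exports/<pv-name> on the mounted share
	if err := os.MkdirAll(path.Join(p.mountPath, options.PVName), 0777); err != nil {
		return nil, err
	}
	return &v1.PersistentVolume{
		ObjectMeta: v1.ObjectMeta{Name: options.PVName},
		Spec: v1.PersistentVolumeSpec{
			PersistentVolumeReclaimPolicy: options.PersistentVolumeReclaimPolicy,
			AccessModes:                   options.PVC.Spec.AccessModes,
			Capacity: v1.ResourceList{
				v1.ResourceName(v1.ResourceStorage): options.PVC.Spec.Resources.Requests[v1.ResourceName(v1.ResourceStorage)],
			},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				// The PV points straight at the external server, e.g. nfsserver:/data/pvc-0.
				NFS: &v1.NFSVolumeSource{
					Server: p.nfsServer,
					Path:   path.Join(p.nfsPath, options.PVName),
				},
			},
		},
	}, nil
}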

cc @jsafrane

Thanks

glide fails to build dependencies

I'm getting the following error if I run glide up -v on this repo:

[rspazzol@rspazzol external-storage]$ glide up -v
[INFO]	Downloading dependencies. Please wait...
[INFO]	--> Fetching updates for github.com/dgrijalva/jwt-go.
[INFO]	--> Fetching updates for github.com/aws/aws-sdk-go.
[INFO]	--> Fetching updates for github.com/docker/docker.
[INFO]	--> Fetching updates for k8s.io/apimachinery.
[INFO]	--> Fetching updates for github.com/heketi/heketi.
[INFO]	--> Fetching updates for k8s.io/kubernetes.
[INFO]	--> Fetching updates for github.com/guelfey/go.dbus.
[INFO]	--> Fetching updates for github.com/lpabon/godbc.
[INFO]	--> Fetching updates for k8s.io/client-go.
[INFO]	--> Fetching updates for github.com/mitchellh/mapstructure.
[INFO]	--> Fetching updates for github.com/golang/glog.
[INFO]	--> Setting version for github.com/mitchellh/mapstructure to db1efb5.
[INFO]	--> Setting version for github.com/docker/docker to v1.13.1.
[INFO]	--> Setting version for github.com/lpabon/godbc to 9577782.
[INFO]	--> Setting version for github.com/heketi/heketi to 7bb2c5b.
[INFO]	--> Setting version for github.com/aws/aws-sdk-go to v1.7.3.
[INFO]	--> Setting version for k8s.io/client-go to 450baa5d60f8d6a251c7682cb6f86e939b750b2d.
[INFO]	--> Setting version for k8s.io/apimachinery to 2de00c78cb6d6127fb51b9531c1b3def1cbcac8c.
[INFO]	--> Setting version for k8s.io/kubernetes to v1.6.0.
[INFO]	Resolving imports
[INFO]	--> Fetching updates for github.com/go-ini/ini.
[INFO]	--> Fetching updates for golang.org/x/sys.
[INFO]	Found Godeps.json file in /home/rspazzol/.glide/cache/src/https-k8s.io-apimachinery
[INFO]	--> Parsing Godeps metadata...
[INFO]	--> Fetching updates for github.com/go-openapi/spec.
[INFO]	--> Setting version for github.com/go-openapi/spec to 6aced65f8501fe1217321abf0749d354824ba2ff.
[INFO]	--> Fetching updates for github.com/gogo/protobuf.
[INFO]	--> Setting version for github.com/gogo/protobuf to c0656edd0d9eab7c66d1eb0c568f9039345796f7.
[INFO]	--> Fetching updates for github.com/google/gofuzz.
[INFO]	--> Setting version for github.com/google/gofuzz to 44d81051d367757e1c7c6a5a86423ece9afcf63c.
[INFO]	--> Fetching updates for github.com/pborman/uuid.
[INFO]	--> Setting version for github.com/pborman/uuid to ca53cad383cad2479bbba7f7a1a05797ec1386e4.
[INFO]	--> Fetching updates for github.com/spf13/pflag.
[INFO]	--> Setting version for github.com/spf13/pflag to 9ff6c6923cfffbcd502984b8e0c80539a94968b7.
[INFO]	--> Fetching updates for gopkg.in/inf.v0.
[INFO]	--> Setting version for gopkg.in/inf.v0 to 3887ee99ecf07df5b447e9b00d9c0b2adaa9f3e4.
[INFO]	Found Godeps.json file in /home/rspazzol/.glide/cache/src/https-k8s.io-client-go
[INFO]	--> Parsing Godeps metadata...
[INFO]	--> Fetching updates for github.com/ugorji/go.
[INFO]	--> Setting version for github.com/ugorji/go to ded73eae5db7e7a0ef6f55aace87a2873c5d2b74.
[INFO]	--> Fetching updates for github.com/howeyc/gopass.
[INFO]	--> Setting version for github.com/howeyc/gopass to 3ca23474a7c7203e0a0a070fd33508f6efdb9b3d.
[INFO]	--> Fetching updates for github.com/imdario/mergo.
[INFO]	--> Setting version for github.com/imdario/mergo to 6633656539c1639d9d78127b7d47c622b5d7b6dc.
[INFO]	--> Fetching updates for github.com/golang/groupcache.
[INFO]	--> Setting version for github.com/golang/groupcache to 02826c3e79038b59d737d3b1c0a1d937f71a4433.
[INFO]	Found Godeps.json file in /home/rspazzol/.glide/cache/src/https-github.com-heketi-heketi
[INFO]	--> Parsing Godeps metadata...
[INFO]	--> Setting version for github.com/dgrijalva/jwt-go to 01aeca54ebda6e0fbfafd0a524d234159c05ec20.
[INFO]	--> Fetching updates for github.com/inconshreveable/mousetrap.
[INFO]	--> Setting version for github.com/inconshreveable/mousetrap to 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75.
[INFO]	--> Fetching updates for github.com/fsnotify/fsnotify.
[INFO]	--> Fetching updates for github.com/hashicorp/hcl.
[INFO]	--> Fetching updates for github.com/magiconair/properties.
[INFO]	--> Fetching updates for github.com/pelletier/go-toml.
[INFO]	--> Fetching updates for github.com/spf13/afero.
[INFO]	--> Fetching updates for github.com/spf13/cast.
[INFO]	--> Fetching updates for github.com/spf13/jwalterweatherman.
[INFO]	--> Fetching updates for gopkg.in/yaml.v2.
[INFO]	--> Setting version for gopkg.in/yaml.v2 to 53feefa2559fb8dfa8d81baad31be332c97d6c77.
[INFO]	Found Godeps.json file in /home/rspazzol/.glide/cache/src/https-k8s.io-kubernetes
[INFO]	--> Parsing Godeps metadata...
[WARN]	Conflict: k8s.io/kubernetes rev is currently v1.6.0, but github.com/heketi/heketi wants 0776eab45fe28f02bbeac0f05ae1a203051a21eb
[INFO]	k8s.io/kubernetes reference v1.6.0:
[INFO] - author: Anthony Yeh <[email protected]>
[INFO] - commit date: Tue, 28 Mar 2017 09:23:06 -0700
[INFO] - subject (first line): Kubernetes version v1.6.0
[INFO]	k8s.io/kubernetes reference 0776eab45fe28f02bbeac0f05ae1a203051a21eb (v1.5.0-beta.2):
[INFO] - author: saadali <[email protected]>
[INFO] - commit date: Thu, 24 Nov 2016 14:29:04 -0800
[INFO] - subject (first line): Kubernetes version v1.5.0-beta.2
[INFO]	Keeping k8s.io/kubernetes v1.6.0
[WARN]	Conflict: github.com/aws/aws-sdk-go rev is currently v1.7.3, but k8s.io/kubernetes wants 63ce630574a5ec05ecd8e8de5cea16332a5a684d
[INFO]	github.com/aws/aws-sdk-go reference v1.7.3:
[INFO] - author: xibz <[email protected]>
[INFO] - commit date: Mon, 27 Feb 2017 18:59:22 -0800
[INFO] - subject (first line): Update CHANGELOG.md
[INFO]	github.com/aws/aws-sdk-go reference 63ce630574a5ec05ecd8e8de5cea16332a5a684d (v1.6.10):
[INFO] - author: Baron Von Ben Powell <[email protected]>
[INFO] - commit date: Wed, 04 Jan 2017 18:10:36 +0000
[INFO] - subject (first line): Release v1.6.10
[INFO]	Keeping github.com/aws/aws-sdk-go v1.7.3
[INFO]	--> Fetching updates for github.com/jmespath/go-jmespath.
[INFO]	--> Setting version for github.com/jmespath/go-jmespath to 3433f3ea46d9f8019119e7dd41274e112a2359a9.
[INFO]	--> Fetching updates for github.com/go-openapi/jsonpointer.
[INFO]	--> Setting version for github.com/go-openapi/jsonpointer to 46af16f9f7b149af66e5d1bd010e3574dc06de98.
[INFO]	--> Fetching updates for github.com/go-openapi/jsonreference.
[INFO]	--> Setting version for github.com/go-openapi/jsonreference to 13c6e3589ad90f49bd3e3bbe2c2cb3d7a4142272.
[INFO]	--> Fetching updates for github.com/go-openapi/swag.
[INFO]	--> Setting version for github.com/go-openapi/swag to 1d0bd113de87027671077d3c71eb3ac5d7dbba72.
[INFO]	--> Fetching updates for github.com/emicklei/go-restful.
[INFO]	--> Setting version for github.com/emicklei/go-restful to ff4f55a206334ef123e4f79bbf348980da81ca46.
[INFO]	--> Fetching updates for golang.org/x/net.
[INFO]	--> Setting version for golang.org/x/net to f2499483f923065a842d38eb4c7f1927e6fc6e6d.
[INFO]	--> Fetching updates for github.com/emicklei/go-restful-swagger12.
[INFO]	--> Setting version for github.com/emicklei/go-restful-swagger12 to dcef7f55730566d41eae5db10e7d6981829720f6.
[INFO]	--> Fetching updates for github.com/go-openapi/loads.
[INFO]	--> Setting version for github.com/go-openapi/loads to 18441dfa706d924a39a030ee2c3b1d8d81917b38.
[INFO]	--> Fetching updates for github.com/juju/ratelimit.
[INFO]	--> Setting version for github.com/juju/ratelimit to 77ed1c8a01217656d2080ad51981f6e99adaa177.
[INFO]	--> Fetching updates for github.com/docker/distribution.
[INFO]	--> Setting version for github.com/docker/distribution to cd27f179f2c10c5d300e6d09025b538c475b0d51.
[ERROR]	Error scanning k8s.io/client-go/pkg/api/v1/helper: open /home/rspazzol/.glide/cache/src/https-k8s.io-client-go/pkg/api/v1/helper: no such file or directory
[ERROR]	This error means the referenced package was not found.
[ERROR]	Missing file or directory errors usually occur when multiple packages
[ERROR]	share a common dependency and the first reference encountered by the scanner
[ERROR]	sets the version to one that does not contain a subpackage needed required
[ERROR]	by another package that uses the shared dependency. Try setting a
[ERROR]	version in your glide.yaml that works for all packages that share this
[ERROR]	dependency.
[INFO]	--> Fetching updates for golang.org/x/crypto.
[INFO]	--> Setting version for golang.org/x/crypto to d172538b2cfce0c13cee31e647d0367aa8cd2486.
[INFO]	--> Fetching updates for github.com/hashicorp/golang-lru.
[INFO]	--> Setting version for github.com/hashicorp/golang-lru to a0d98a5f288019575c6d1f4bb1573fef2d1fcdc4.
[INFO]	--> Fetching updates for github.com/davecgh/go-spew.
[INFO]	--> Setting version for github.com/davecgh/go-spew to 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d.
[WARN]	Conflict: github.com/lpabon/godbc rev is currently 9577782, but github.com/heketi/heketi wants 9577782540c1398b710ddae1b86268ba03a19b0c
[INFO]	github.com/lpabon/godbc reference 9577782:
[INFO] - author: Luis Pabon <[email protected]>
[INFO] - commit date: Fri, 13 Jun 2014 12:58:03 -0400
[INFO] - subject (first line): Changed copyrights to Google Chromium style
[INFO]	github.com/lpabon/godbc reference 9577782540c1398b710ddae1b86268ba03a19b0c:
[INFO] - author: Luis Pabon <[email protected]>
[INFO] - commit date: Fri, 13 Jun 2014 12:58:03 -0400
[INFO] - subject (first line): Changed copyrights to Google Chromium style
[INFO]	Keeping github.com/lpabon/godbc 9577782
[INFO]	--> Setting version for github.com/hashicorp/hcl to d8c773c4cba11b11539e3d45f93daeaa5dcf1fa1.
[INFO]	--> Fetching updates for github.com/pelletier/go-buffruneio.
[INFO]	--> Setting version for github.com/pelletier/go-buffruneio to df1e16fde7fc330a0ca68167c23bf7ed6ac31d6d.
[INFO]	--> Fetching updates for github.com/pkg/sftp.
[INFO]	--> Setting version for github.com/pkg/sftp to 4d0e916071f68db74f8a73926335f809396d6b42.
[INFO]	--> Setting version for github.com/spf13/afero to b28a7effac979219c2a2ed6205a4d70e4b1bcd02.
[INFO]	--> Fetching updates for golang.org/x/text.
[INFO]	--> Setting version for golang.org/x/text to 2910a502d2bf9e43193af9d68ca516529614eed3.
[INFO]	--> Fetching updates for github.com/PuerkitoBio/purell.
[INFO]	--> Setting version for github.com/PuerkitoBio/purell to 8a290539e2e8629dbc4e6bad948158f790ec31f4.
[INFO]	--> Fetching updates for github.com/mailru/easyjson.
[INFO]	--> Setting version for github.com/mailru/easyjson to d5b7844b561a7bc640052f1b935f7b800330d7e0.
[INFO]	--> Fetching updates for github.com/go-openapi/analysis.
[INFO]	--> Setting version for github.com/go-openapi/analysis to b44dc874b601d9e4e2f6e19140e794ba24bead3b.
[INFO]	Found Godeps.json file in /home/rspazzol/.glide/cache/src/https-github.com-docker-distribution
[INFO]	--> Parsing Godeps metadata...
[INFO]	--> Fetching updates for github.com/ghodss/yaml.
[INFO]	--> Setting version for github.com/ghodss/yaml to 73d445a93680fa1a78ae23a5839bad48f32ba1ee.
[INFO]	--> Fetching updates for github.com/kr/fs.
[INFO]	--> Setting version for github.com/kr/fs to 2788f0dbd16903de03cb8186e5c7d97b69ad387b.
[INFO]	--> Fetching updates for github.com/pkg/errors.
[INFO]	--> Setting version for github.com/pkg/errors to a22138067af1c4942683050411a841ade67fe1eb.
[INFO]	--> Fetching updates for github.com/PuerkitoBio/urlesc.
[INFO]	--> Setting version for github.com/PuerkitoBio/urlesc to 5bd2802263f21d8788851d5305584c82a5c75d7e.
[ERROR]	Failed to retrieve a list of dependencies: Error resolving imports

It looks like there is a conflict over the required version of k8s client-go.

[nfs-provisioner] ganesha.nfsd status is not monitored

nfs-provisioner does not watch the ganesha.nfsd status, and if ganesha.nfsd fails it becomes a never-reaped zombie.

The issue is not about the bug that caused ganesha.nfsd to fail, but here are some details of the failure that I observed.

Zombie ganesha.nfsd process in nfs-provisioner container:

[root@nfs-provisioner-0 /]# ps auxf
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       300  0.0  0.0  12664  3952 ?        Ss   19:09   0:00 bash
root       416  0.0  0.0  41836  3348 ?        R+   19:36   0:00  \_ ps auxf
root         1  0.0  0.3 225632 23852 ?        Ssl  Apr20   0:37 /nfs-provisione
rpc         15  0.0  0.0  59848  3224 ?        Ss   Apr20   0:00 /usr/sbin/rpcbi
rpcuser     17  0.0  0.1  50856  9212 ?        Ss   Apr20   0:00 /usr/sbin/rpc.s
dbus        20  0.0  0.0  50272  2256 ?        Ss   Apr20   0:33 dbus-daemon --s
root        22  0.2  0.0      0     0 ?        Zs   Apr20  31:34 [ganesha.nfsd] 

Ganesha logs:

[root@nfs-provisioner-0 /]# cat /var/log/ganesha.log 
20/04/2017 01:39:22 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-21[main] main :MAIN :EVENT :nfs-ganesha Starting: Ganesha Version /nfs-ganesha-2.4.0.3/src, built at Mar 22 2017 22:52:45 on 
20/04/2017 01:39:22 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
20/04/2017 01:39:22 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
20/04/2017 01:39:22 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
20/04/2017 01:39:22 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] claim_posix_filesystems :FSAL :CRIT :Could not stat directory for path /nonexistent
20/04/2017 01:39:22 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] vfs_create_export :FSAL :CRIT :resolve_posix_filesystem(/nonexistent) returned No such file or directory (2)
20/04/2017 01:39:22 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] fsal_cfg_commit :CONFIG :CRIT :Could not create export for (/nonexistent) to (/nonexistent)
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] export_commit_common :CONFIG :CRIT :Export id 0 can only export "/" not (/nonexistent)
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] export_commit_common :EXPORT :CRIT :Clients = (0x7f57cd9fc0e8,0x7f57cd9fc0e8) next = (0x7f57cd9fc0e8, 0x7f57cd9fc0e8)
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] export_commit_common :EXPORT :CRIT :Clients = (0x7f57cd9fc4e8,0x7f57cd9fc4e8) next = (0x7f57cd9fc4e8, 0x7f57cd9fc4e8)
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] export_commit_common :EXPORT :CRIT :Clients = (0x7f57cd9fc6e8,0x7f57cd9fc6e8) next = (0x7f57cd9fc6e8, 0x7f57cd9fc6e8)
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] export_commit_common :EXPORT :CRIT :Clients = (0x7f57cd9fc8e8,0x7f57cd9fc8e8) next = (0x7f57cd9fc8e8, 0x7f57cd9fc8e8)
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] export_commit_common :EXPORT :CRIT :Clients = (0x7f57cd9fcae8,0x7f57cd9fcae8) next = (0x7f57cd9fcae8, 0x7f57cd9fcae8)
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] config_errs_to_log :CONFIG :CRIT :Config File (/export/vfs.conf:28): 1 validation errors in block FSAL
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] config_errs_to_log :CONFIG :CRIT :Config File (/export/vfs.conf:28): Errors processing block (FSAL)
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] config_errs_to_log :CONFIG :CRIT :Config File (/export/vfs.conf:12): 1 validation errors in block EXPORT
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] config_errs_to_log :CONFIG :CRIT :Config File (/export/vfs.conf:12): Errors processing block (EXPORT)
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs4_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[reaper] nfs_in_grace :STATE :EVENT :NFS Server Now IN GRACE
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
20/04/2017 01:39:23 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
20/04/2017 01:40:53 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[reaper] nfs_in_grace :STATE :EVENT :NFS Server Now NOT IN GRACE
24/04/2017 08:53:54 : epoch 58f8114a : nfs-provisioner-0 : nfs-ganesha-22[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health status is unhealthy.  Not sending heartbeat

Segfault of ganesha.nfsd:

[root@nfs-provisioner-0 /]# dmesg | grep ganesha
[1712139.760198] ganesha.nfsd[37985]: segfault at 40 ip 00000000004e1bd6 sp 00007f5748efff50 error 4 in ganesha.nfsd[400000+145000]

Container logs after the failure, showing that provisioning of new volumes fails:

I0427 18:45:55.273718       1 controller.go:867] scheduleOperation[lock-provision-piwik/piwik-data[be9e66b7-2b79-11e7-8770-000d3a227fe2]]
I0427 18:45:55.796760       1 leaderelection.go:158] attempting to acquire leader lease...
I0427 18:45:55.959603       1 controller.go:867] scheduleOperation[lock-provision-piwik/piwik-data[be9e66b7-2b79-11e7-8770-000d3a227fe2]]
I0427 18:45:56.700829       1 leaderelection.go:180] successfully acquired lease to provision for pvc piwik/piwik-data
I0427 18:45:56.700990       1 controller.go:867] scheduleOperation[provision-piwik/piwik-data[be9e66b7-2b79-11e7-8770-000d3a227fe2]]
I0427 18:45:57.527326       1 provision.go:374] using service SERVICE_NAME=nfs-provisioner cluster IP 10.0.128.98 as NFS server IP
E0427 18:45:57.545842       1 controller.go:596] Failed to provision volume for claim "piwik/piwik-data" with StorageClass "nfs": error creating export for volume: error exporting export block 
EXPORT
{
	Export_Id = 6;
	Path = /export/pvc-be9e66b7-2b79-11e7-8770-000d3a227fe2;
	Pseudo = /export/pvc-be9e66b7-2b79-11e7-8770-000d3a227fe2;
	Access_Type = RW;
	Squash = no_root_squash;
	SecType = sys;
	Filesystem_id = 6.6;
	FSAL {
		Name = VFS;
	}
}
: error calling org.ganesha.nfsd.exportmgr.AddExport: The name org.ganesha.nfsd was not provided by any .service files
E0427 18:45:57.545888       1 goroutinemap.go:153] operation for "provision-piwik/piwik-data[be9e66b7-2b79-11e7-8770-000d3a227fe2]" failed with: error creating export for volume: error exporting export block 
EXPORT
{
	Export_Id = 6;
	Path = /export/pvc-be9e66b7-2b79-11e7-8770-000d3a227fe2;
	Pseudo = /export/pvc-be9e66b7-2b79-11e7-8770-000d3a227fe2;
	Access_Type = RW;
	Squash = no_root_squash;
	SecType = sys;
	Filesystem_id = 6.6;
	FSAL {
		Name = VFS;
	}
}
: error calling org.ganesha.nfsd.exportmgr.AddExport: The name org.ganesha.nfsd was not provided by any .service files
I0427 18:45:59.535031       1 leaderelection.go:200] stopped trying to renew lease to provision for pvc piwik/piwik-data, task failed
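The AddExport error above is consistent with ganesha.nfsd having died: once the process exits, the org.ganesha.nfsd name is no longer owned on the D-Bus system bus, so every export call from the provisioner fails. If the pod is still running, this can be checked from inside the container (assuming dbus-send is installed in the image) by listing the registered bus names:

dbus-send --system --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames | grep ganesha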

CephFS: Need to update readme

According to https://github.com/kubernetes-incubator/external-storage/blob/master/ceph/cephfs/README.md, the command to start the CephFS provisioner is:

docker run -ti -v /root/.kube:/kube --privileged --net=host cephfs-provisioner /usr/local/bin/cephfs-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=cephfs-provisioner-1

When the docker run command is executed, the following error is shown:

docker run -ti -v /root/.kube:/kube --privileged --net=host  cephfs-provisioner /usr/local/bin/cephfs-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=cephfs-provisioner-1
F0406 05:55:45.198882       1 cephfs-provisioner.go:285] Failed to create config: invalid configuration: [unable to read client-cert /var/run/kubernetes/client-admin.crt for myself due to open /var/run/kubernetes/client-admin.crt: no such file or directory, unable to read client-key /var/run/kubernetes/client-admin.key for myself due to open /var/run/kubernetes/client-admin.key: no such file or directory, unable to read certificate-authority /var/run/kubernetes/server-ca.crt for local due to open /var/run/kubernetes/server-ca.crt: no such file or directory]

The command needs -v /var/run/kubernetes:/var/run/kubernetes added to it so that the client certificates and CA are available inside the container, for example:

docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host cephfs-provisioner /usr/local/bin/cephfs-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=cephfs-provisioner-1

[nfs-provisioner] Enable XFS quota with statefulset

Hi there,

How can I enable XFS quotas when deploying nfs-provisioner via the StatefulSet deployment?

I'm currently facing the following error:

Error creating xfs quotaer! xfs path /export was not mounted with pquota nor prjquota

Here is the statefulset.yml I'm using:

kind: Service
apiVersion: v1
metadata:
  name: nfs-provisioner
  namespace: kube-system
  labels:
    app: nfs-provisioner
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
  selector:
    app: nfs-provisioner
---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
  name: nfs-provisioner
  namespace: kube-system
spec:
  serviceName: "nfs-provisioner"
  replicas: 1
  template:
    metadata:
      labels:
        app: nfs-provisioner
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
        # Comment the following annotation if Dashboard must not be deployed on master
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      terminationGracePeriodSeconds: 0
      containers:
        - name: nfs-provisioner
          image: quay.io/kubernetes_incubator/nfs-provisioner:v1.0.4
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
          args:
            - "-provisioner=kube-ntn.local/nfs"
            - "-enable-xfs-quota=true"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: nfs-provisioner
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: export-volume
              mountPath: /export
      volumes:
        - name: export-volume
          hostPath:
            path: /mnt/vol01
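For reference, the error above is raised because the provisioner requires the XFS filesystem backing /export to be mounted with project quotas enabled. A minimal sketch, assuming the hostPath /mnt/vol01 is backed by a dedicated XFS device (the device name below is an assumption), would be to remount it on the node with the prjquota option before starting the StatefulSet:

# on the node that hosts /mnt/vol01
umount /mnt/vol01                        # only if it is already mounted
mount -o prjquota /dev/sdb1 /mnt/vol01
# or persist it in /etc/fstab:
# /dev/sdb1  /mnt/vol01  xfs  defaults,prjquota  0 0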

Errors while provisioning with NFS leader

We are using the NFS provisioner for dynamic provisioning. While a claim is being provisioned, are we allowed to make changes to the cluster, for example adding a worker or deleting a node? If we do that, the leader fails with the error below.

We are seeing the following errors in the storage pod:

E0328 15:59:13.926741       1 leaderelection.go:245] error initially creating leader election record: create not allowed, PVC should already exist!
E0328 15:59:15.927195       1 leaderelection.go:245] error initially creating leader election record: create not allowed, PVC should already exist!

Status logs:

Mar 28 15:58:47.913: INFO: Waiting up to 15m0s for PersistentVolumeClaim pvc-gq10r to have phase Bound
Mar 28 15:58:47.915: INFO: PersistentVolumeClaim pvc
Mar 28 15:58:57.917: INFO: PersistentVolumeClaim pvc-gq10r found but phase is Pending instead of Bound.
Mar 28 15:59:07.920: INFO: PersistentVolumeClaim pvc-gq10r found but phase is Pending instead of Bound.
Mar 28 15:59:17.921: INFO: Get persistent volume claim pvc-gq10r in failed, ignoring for 10s: persistentvolumeclaims "pvc-gq10r" not found
Mar 28 15:59:27.923: INFO: Get persistent volume claim pvc-gq10r in failed, ignoring for 10s: persistentvolumeclaims "pvc-gq10r" not found
Mar 28 15:59:37.925: INFO: Get persistent volume claim pvc-gq10r in failed, ignoring for 10s: persistentvolumeclaims "pvc-gq10r" not found
