
ctrox / csi-s3


A Container Storage Interface for S3

License: Apache License 2.0

Makefile 3.01% Go 93.15% Shell 0.43% Dockerfile 3.42%
csi golang k8s kubernetes s3

csi-s3's Issues

Non-working deployment on Minikube

Hi @ctrox,

I'm trying to test this CSI plugin and your deployment YAMLs on Minikube (k8s 1.15.0), but I always get the following:

$ kubectl -n kube-system logs csi-attacher-s3-0
I1004 11:10:13.302939       1 main.go:88] Version: v1.1.0-0-g70a1411
I1004 11:10:13.307788       1 connection.go:151] Connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:10:23.308922       1 connection.go:170] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:10:33.308869       1 connection.go:170] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:10:43.308912       1 connection.go:170] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:10:53.308846       1 connection.go:170] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:11:03.308792       1 connection.go:170] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:11:13.308615       1 connection.go:170] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:11:23.308777       1 connection.go:170] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:11:33.308714       1 connection.go:170] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:11:43.308814       1 connection.go:170] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock

Output from provisioner:

$ kubectl -n kube-system logs csi-provisioner-s3-0 csi-provisioner
W1004 11:07:34.804247       1 deprecatedflags.go:53] Warning: option provisioner="ch.ctrox.csi.s3-driver" is deprecated and has no effect
I1004 11:07:34.804548       1 feature_gate.go:226] feature gates: &{map[]}
I1004 11:07:34.804649       1 csi-provisioner.go:95] Version: v1.1.0-0-gcecb5a96
I1004 11:07:34.804754       1 csi-provisioner.go:109] Building kube configs for running in cluster...
I1004 11:07:34.817565       1 connection.go:151] Connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:07:44.818266       1 connection.go:170] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
I1004 11:07:45.744751       1 connection.go:261] Probing CSI driver for readiness
I1004 11:07:45.748581       1 csi-provisioner.go:149] Detected CSI driver ch.ctrox.csi.s3-driver
I1004 11:07:45.752362       1 controller.go:621] Using saving PVs to API server in background
I1004 11:07:45.753080       1 controller.go:769] Starting provisioner controller ch.ctrox.csi.s3-driver_csi-provisioner-s3-0_31908329-e697-11e9-8a51-0242ac110006!
I1004 11:07:45.754170       1 reflector.go:123] Starting reflector *v1.PersistentVolumeClaim (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:800
I1004 11:07:45.754256       1 reflector.go:161] Listing and watching *v1.PersistentVolumeClaim from sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:800
I1004 11:07:45.754781       1 reflector.go:123] Starting reflector *v1.PersistentVolume (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:803
I1004 11:07:45.754993       1 reflector.go:161] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:803
I1004 11:07:45.755606       1 reflector.go:123] Starting reflector *v1.StorageClass (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:806
I1004 11:07:45.753151       1 volume_store.go:90] Starting save volume queue
I1004 11:07:45.755876       1 reflector.go:161] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:806
I1004 11:07:45.853942       1 shared_informer.go:123] caches populated
I1004 11:07:45.854557       1 controller.go:818] Started provisioner controller ch.ctrox.csi.s3-driver_csi-provisioner-s3-0_31908329-e697-11e9-8a51-0242ac110006!
I1004 11:14:56.769668       1 reflector.go:370] sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:803: Watch close - *v1.PersistentVolume total 0 items received
I1004 11:16:04.766510       1 reflector.go:370] sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:806: Watch close - *v1.StorageClass total 0 items received
$ kubectl -n kube-system logs csi-provisioner-s3-0 csi-s3
I1004 11:07:45.381501       1 s3-driver.go:80] Driver: ch.ctrox.csi.s3-driver 
I1004 11:07:45.381603       1 s3-driver.go:81] Version: v1.1.1 
I1004 11:07:45.381613       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I1004 11:07:45.381618       1 driver.go:93] Enabling volume access mode: SINGLE_NODE_WRITER
I1004 11:07:45.381818       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock", Net:"unix"}
I1004 11:07:45.746190       1 utils.go:97] GRPC call: /csi.v1.Identity/Probe
I1004 11:07:45.747955       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I1004 11:07:45.749097       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I1004 11:07:45.750470       1 utils.go:97] GRPC call: /csi.v1.Controller/ControllerGetCapabilities

I checked inside Minikube, but the socket is not found at that path — unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock

$ minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ sudo -s
$ ls -l /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
ls: cannot access '/var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock': No such file or directory
$ ls -l /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/        
total 0
$ 

I tried changing /var/lib/kubelet/plugins to /usr/libexec/kubernetes/kubelet-plugins/volume/exec in all of the YAML files, but I still have the same problem:

$ kubectl -n kube-system logs csi-attacher-s3-0
I1004 11:41:22.252433       1 main.go:88] Version: v1.1.0-0-g70a1411
I1004 11:41:22.254330       1 connection.go:151] Connecting to unix:///usr/libexec/kubernetes/kubelet-plugins/volume/exec/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:41:32.255227       1 connection.go:170] Still connecting to unix:///usr/libexec/kubernetes/kubelet-plugins/volume/exec/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:41:42.255328       1 connection.go:170] Still connecting to unix:///usr/libexec/kubernetes/kubelet-plugins/volume/exec/ch.ctrox.csi.s3-driver/csi.sock
W1004 11:41:52.255351       1 connection.go:170] Still connecting to unix:///usr/libexec/kubernetes/kubelet-plugins/volume/exec/ch.ctrox.csi.s3-driver/csi.sock

Do you have any recommendations?
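For context, the CSI socket in question is created by the csi-s3 node pods, and the deployment wires it up through hostPath volumes that must sit under the kubelet's real root directory. If the kubelet runs with a non-default --root-dir (which some Minikube setups do), the socket never appears at the expected path. An illustrative sketch of the relevant volumes (names and paths here follow the defaults, not necessarily this exact deployment):

```yaml
# Illustrative excerpt of a CSI node DaemonSet: these hostPath
# directories must sit under the kubelet's actual --root-dir
# (default /var/lib/kubelet), or csi.sock gets created elsewhere.
volumes:
  - name: plugin-dir
    hostPath:
      path: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver
      type: DirectoryOrCreate
  - name: registration-dir
    hostPath:
      path: /var/lib/kubelet/plugins_registry
      type: DirectoryOrCreate
```

If no pod from the csi-s3 DaemonSet or provisioner StatefulSet is actually running, nothing creates the socket at all, so checking those pods first is a reasonable starting point.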

Testing csi-s3 with examples/pvc.yaml fails to create the PVC

We are not able to get the "examples/pvc.yaml" test to succeed. We cloned the master branch of csi-s3 and are using kubectl 1.17.9 on an AWS EKS cluster. We followed all the steps documented at "https://github.com/ctrox/csi-s3" up to the example. Below is shell output showing the failure. We did first add appropriate AWS credentials to "examples/secret.yaml". Is there something else implied which needs to be done besides the exact steps in the documentation?

$ kubectl create -f provisioner.yaml ; kubectl create -f attacher.yaml ; kubectl create -f csi-s3.yaml ; kubectl create -f examples/storageclass.yaml
serviceaccount/csi-provisioner-sa created
clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
service/csi-provisioner-s3 created
statefulset.apps/csi-provisioner-s3 created
serviceaccount/csi-attacher-sa created
clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
service/csi-attacher-s3 created
statefulset.apps/csi-attacher-s3 created
serviceaccount/csi-s3 created
clusterrole.rbac.authorization.k8s.io/csi-s3 created
clusterrolebinding.rbac.authorization.k8s.io/csi-s3 created
daemonset.apps/csi-s3 created
storageclass.storage.k8s.io/csi-s3 created

$ kubectl create -f examples/pvc.yaml
persistentvolumeclaim/csi-s3-pvc created

$ kubectl get pvc csi-s3-pvc
NAME         STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-s3-pvc   Pending                                      csi-s3         7s

$ kubectl get events
LAST SEEN   TYPE      REASON                 OBJECT                             MESSAGE
8s          Normal    Provisioning           persistentvolumeclaim/csi-s3-pvc   External provisioner is provisioning volume for claim "default/csi-s3-pvc"
12s         Normal    ExternalProvisioning   persistentvolumeclaim/csi-s3-pvc   waiting for a volume to be created, either by external provisioner "ch.ctrox.csi.s3-driver" or manually created by system administrator
8s          Warning   ProvisioningFailed     persistentvolumeclaim/csi-s3-pvc   failed to provision volume with StorageClass "csi-s3": error getting secret csi-s3-secret in namespace kube-system: secrets "csi-s3-secret" not found
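The last event shows the provisioner looking up csi-s3-secret in the kube-system namespace, so the secret must exist there with that exact name. The shape of examples/secret.yaml is roughly the following (key names shown are as I recall them from the repo — verify against the actual file):

```yaml
# Sketch of examples/secret.yaml; must match the name and namespace
# referenced by the storage class parameters.
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  namespace: kube-system
stringData:
  accessKeyID: <YOUR_ACCESS_KEY_ID>
  secretAccessKey: <YOUR_SECRET_ACCESS_KEY>
  endpoint: https://s3.amazonaws.com
  region: ""
```

Applying the secret into a different namespace, or forgetting `kubectl create -f examples/secret.yaml` entirely, produces exactly this "secrets not found" event.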

Unable to attach or mount volumes: unmounted volumes... timed out waiting for the condition

Hello everyone,
I think I have a problem with ctrox/csi-s3:
some of the containers from the deployment do not start.
What can I do to troubleshoot this?

  1. Error from the Pod (during creation)
    Pod status - Pending
    Unable to attach or mount volumes: unmounted volumes=[var-srs], unattached volumes=[logs srslog certs-fastqp-billing var-srs kube-api-access-jdnrv]: timed out waiting for the condition

  2. Status of the PVs

~ % kubectl get pv                   
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
pvc-43bebc4e-9402-484a-9ab8-ffef6d5ab541   1Gi        RWX            Delete           Bound    s3-srs                       csi-s3                  42h
pvc-9a12de8f-d108-41fa-85f1-f69786c1117e   1Gi        RWX            Delete           Bound    s3-docserver                 csi-s3                  23h
pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f   1Gi        RWX            Delete           Bound    var-srs                      csi-s3                  91d

  3. Status of the PVCs
~ % kubectl get pvc 
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
s3-docserver   Bound    pvc-9a12de8f-d108-41fa-85f1-f69786c1117e   1Gi        RWX            csi-s3         47h
s3-srs         Bound    pvc-43bebc4e-9402-484a-9ab8-ffef6d5ab541   1Gi        RWX            csi-s3         21d
var-srs        Bound    pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f   1Gi        RWX            csi-s3         91d
  4. Logs from the provisioner and the node plugin:
oleginishev@Olegs-MacBook-Air ~ % kubectl logs --tail 200 -l app=csi-provisioner-s3 -c csi-s3 --namespace csi-s3 
I0329 07:14:09.802379       1 driver.go:73] Driver: ch.ctrox.csi.s3-driver 
I0329 07:14:09.802515       1 driver.go:74] Version: v1.2.0-rc.1 
I0329 07:14:09.802526       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0329 07:14:09.802533       1 driver.go:93] Enabling volume access mode: SINGLE_NODE_WRITER
I0329 07:14:09.802897       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock", Net:"unix"}
I0329 07:14:10.196555       1 utils.go:97] GRPC call: /csi.v1.Identity/Probe
I0329 07:14:10.197502       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I0329 07:14:10.197941       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0329 07:14:10.198398       1 utils.go:97] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
kubectl logs --tail 1000 -l app=csi-s3 -c csi-s3 --namespace csi-s3  | grep 1efd572fe44

W0330 06:05:39.830163       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/2f31fe79-949f-4353-a573-8aab5d2a8564/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 06:05:39.830195       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.
W0330 06:05:48.818253       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/a7df3c76-b643-4bba-99e6-6f0c5dd1968d/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 06:05:48.818274       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.
W0330 06:05:58.829634       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/a8d95422-3d4b-4216-bb00-09465bc53b10/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 06:05:58.829676       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.
W0330 05:31:24.709756       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/dfb5ea78-1246-479a-be10-66031bc629b4/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 05:31:24.709776       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.
W0330 05:31:41.778525       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/9e44613f-a4fe-4222-a26a-30dd2f1518bb/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 05:31:41.778543       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.
I0330 06:05:12.467267       1 nodeserver.go:79] target /var/lib/kubelet/pods/18573abf-cd0a-4928-a791-10fa35fb8959/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount
volumeId pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f
I0330 06:05:12.475914       1 mounter.go:64] Mounting fuse with command: s3fs and args: [pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f:/csi-fs /var/lib/kubelet/pods/18573abf-cd0a-4928-a791-10fa35fb8959/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount -o use_path_request_style -o url=http://s3-devops.int.---.ru/ -o endpoint= -o allow_other -o mp_umask=000]
I0330 06:05:12.490621       1 nodeserver.go:99] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f successfuly mounted to /var/lib/kubelet/pods/18573abf-cd0a-4928-a791-10fa35fb8959/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount
I0330 06:05:15.628376       1 nodeserver.go:79] target /var/lib/kubelet/pods/39c8ab02-b48f-44ce-8f6f-58b8a8dc85c0/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount
volumeId pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f
I0330 06:05:15.637795       1 mounter.go:64] Mounting fuse with command: s3fs and args: [pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f:/csi-fs /var/lib/kubelet/pods/39c8ab02-b48f-44ce-8f6f-58b8a8dc85c0/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount -o use_path_request_style -o url=http://s3-devops.int.---.ru/ -o endpoint= -o allow_other -o mp_umask=000]
I0330 06:05:15.653314       1 nodeserver.go:99] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f successfuly mounted to /var/lib/kubelet/pods/39c8ab02-b48f-44ce-8f6f-58b8a8dc85c0/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount
W0330 06:05:48.831073       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/ce10718e-1198-4dde-93f9-06b09a98ab35/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 06:05:48.831091       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.

Mounts go down if a CSI restart happens

Hello, when the CSI container mounts buckets, it launches mount daemons (for example when executing the s3fs mount command).
But if the container dies, the daemons inside it die as well, and the mounts fail.
Can you tell me what can be done about this?
Thanks for any help

Timed out waiting for external-attacher of ch.ctrox.csi.s3-driver CSI driver to attach volume

  • Configuration files are the same as the examples
  • The AWS S3 bucket was created after the configuration files were applied

Only the test pod's mountPath was changed to /var/www/html.

pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: default
spec:
  containers:
   - name: csi-s3-test-nginx
     image: nginx
     volumeMounts:
       - mountPath: /var/www/html
         name: webroot
  volumes:
   - name: webroot
     persistentVolumeClaim:
       claimName: csi-s3-pvc
       readOnly: false

I get an error timed out waiting for external-attacher of ch.ctrox.csi.s3-driver CSI driver to attach volume while creating pods:

$ kubectl get events | tail
6m12s       Normal    Pulled                  pod/csi-s3-95nz6                   Successfully pulled image "ctrox/csi-s3:v1.2.0-rc.2" in 59.670915168s
6m12s       Normal    Created                 pod/csi-s3-95nz6                   Created container csi-s3
6m12s       Normal    Started                 pod/csi-s3-95nz6                   Started container csi-s3
6m9s        Normal    ExternalProvisioning    persistentvolumeclaim/csi-s3-pvc   waiting for a volume to be created, either by external provisioner "ch.ctrox.csi.s3-driver" or manually created by system administrator
6m7s        Normal    Provisioning            persistentvolumeclaim/csi-s3-pvc   External provisioner is provisioning volume for claim "prod/csi-s3-pvc"
6m5s        Normal    ProvisioningSucceeded   persistentvolumeclaim/csi-s3-pvc   Successfully provisioned volume pvc-53f12ea9-9398-49dd-b16c-0454b145b746
2m35s       Normal    Scheduled               pod/csi-s3-test-nginx              Successfully assigned prod/csi-s3-test-nginx to minikube
35s         Warning   FailedAttachVolume      pod/csi-s3-test-nginx              AttachVolume.Attach failed for volume "pvc-53f12ea9-9398-49dd-b16c-0454b145b746" : timed out waiting for external-attacher of ch.ctrox.csi.s3-driver CSI driver to attach volume pvc-53f12ea9-9398-49dd-b16c-0454b145b746
32s         Warning   FailedMount             pod/csi-s3-test-nginx              Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-m66ll]: timed out waiting for the condition
7m22s       Normal    SuccessfulCreate        daemonset/csi-s3                   Created pod: csi-s3-95nz6

Is this a network issue, or some kind of misconfiguration? Thanks.


environment:

Docker version 20.10.18, build b40c2f6

minikube v1.26.1 on Ubuntu 20.04

Client Version: v1.25.1
Kustomize Version: v4.5.7
Server Version: v1.24.3

No matches for kind "StatefulSet" in version "apps/v1beta1"

Hi, I'm trying to test this project but get the following error:

kubectl apply -f https://raw.githubusercontent.com/ctrox/csi-s3/master/deploy/kubernetes/attacher.yaml
serviceaccount/csi-attacher-sa unchanged
clusterrole.rbac.authorization.k8s.io/external-attacher-runner unchanged
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role unchanged
service/csi-attacher-s3 unchanged
error: unable to recognize "https://raw.githubusercontent.com/ctrox/csi-s3/master/deploy/kubernetes/attacher.yaml": no matches for kind "StatefulSet" in version "apps/v1beta1"

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-07-07T14:04:52Z", GoVersion:"go1.13.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
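The apps/v1beta1 API group was removed in Kubernetes 1.16, so on a 1.18 cluster the manifest needs the apps/v1 version of StatefulSet, which also makes spec.selector mandatory. A minimal sketch of the change (labels here are illustrative, match whatever the manifest's pod template uses):

```yaml
apiVersion: apps/v1        # was apps/v1beta1
kind: StatefulSet
metadata:
  name: csi-attacher-s3
  namespace: kube-system
spec:
  serviceName: csi-attacher-s3
  replicas: 1
  selector:                # required in apps/v1
    matchLabels:
      app: csi-attacher-s3
  template:
    metadata:
      labels:
        app: csi-attacher-s3
    # container spec unchanged
```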

Mount an existing bucket and access the data already in it

One of my use cases for csi-s3 is mounting an existing bucket and accessing the data in it. Does anyone share this use case?

Issue #14 proposes mounting an existing bucket, and PR #42 provided an implementation to meet this request. The volume created by the solution in PR #42 mounts the root of the existing bucket, which means it can access data that existed before the volume was created.

@ctrox, your commit "Use volume ID as a prefix if the bucket is fixed in the storage class" creates a new path for the volume even if the bucket already exists, so the volume cannot access data that was already in the bucket.

@ctrox, is my use case a good fit for csi-s3? What is your suggestion?
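For reference, a pre-existing bucket is named in the storage class parameters; with the prefix behaviour described above, each provisioned volume then lives under its own prefix inside that bucket rather than at its root. A sketch of such a storage class (bucket name is illustrative):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3-existing-bucket
provisioner: ch.ctrox.csi.s3-driver
parameters:
  mounter: s3fs
  # Fixed, pre-existing bucket; each volume becomes a prefix inside it,
  # so data already at the bucket root is not visible to the volume.
  bucket: my-existing-bucket
```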

Error creating PVC when using Terraform

Trying to get this working using Terraform, DigitalOcean managed Kubernetes, and DigitalOcean Spaces. I converted the YAML files to Terraform format, and it seems to create all the objects, but when I try to create the PVC it stays in the Pending state:

❯ kubectl describe pvc -n tst
Name:          csi-s3-pvc
Namespace:     tst
StorageClass:  csi-s3
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: ch.ctrox.csi.s3-driver
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type     Reason                Age                   From                                                                      Message
  ----     ------                ----                  ----                                                                      -------
  Warning  ProvisioningFailed    17m (x2 over 21m)     ch.ctrox.csi.s3-driver_prometheus-0_bfba7a19-960e-4e24-a189-a2461abcd24e  failed to provision volume with StorageClass "csi-s3": failed to strip CSI Parameters of prefixed keys: found unknown parameter key "csi.storage.k8s.io/provisioner-secret-namespace:" with reserved namespace csi.storage.k8s.io/
  Normal   Provisioning          2m36s (x13 over 26m)  ch.ctrox.csi.s3-driver_prometheus-0_bfba7a19-960e-4e24-a189-a2461abcd24e  External provisioner is provisioning volume for claim "tst/csi-s3-pvc"
  Warning  ProvisioningFailed    2m36s (x11 over 26m)  ch.ctrox.csi.s3-driver_prometheus-0_bfba7a19-960e-4e24-a189-a2461abcd24e  failed to provision volume with StorageClass "csi-s3": failed to strip CSI Parameters of prefixed keys: found unknown parameter key "csi.storage.k8s.io/provisioner-secret-name:" with reserved namespace csi.storage.k8s.io/
  Normal   ExternalProvisioning  58s (x102 over 26m)   persistentvolume-controller                                               waiting for a volume to be created, either by external provisioner "ch.ctrox.csi.s3-driver" or manually created by system administrator

I don't know what this means. Any idea how I can track this issue down?

The logs for both the provisioner and the s3 driver come back empty:

~ 
❯ kubectl logs -l app=csi-provisioner-s3 -c csi-s3

~ 
❯ kubectl logs -l app=csi-s3 -c csi-s3

~ 
❯ 
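The error message is actually quite telling: the "unknown parameter key" values end in a colon (e.g. `csi.storage.k8s.io/provisioner-secret-namespace:`), which suggests the YAML-to-Terraform conversion left the colon attached to the key names in the parameters map. A hypothetical sanity check over such a converted map:

```python
def find_bad_keys(parameters):
    """Return parameter keys that carry a stray trailing colon,
    e.g. left over from a hand-converted YAML mapping."""
    return [key for key in parameters if key.endswith(":")]


# Example: one key was converted with the colon still attached.
params = {
    "csi.storage.k8s.io/provisioner-secret-name:": "csi-s3-secret",
    "csi.storage.k8s.io/provisioner-secret-namespace": "kube-system",
}
print(find_bad_keys(params))
# ['csi.storage.k8s.io/provisioner-secret-name:']
```

Stripping the trailing colons from the storage class parameter keys in the Terraform config should make the reserved csi.storage.k8s.io/ keys parse correctly again.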

Buckets are not cleaned up unless they are empty

The following behaviour is seen on Google Cloud using the S3 compatibility for Google Cloud Storage.

When a PVC is deleted, the behaviour w.r.t. the underlying PV should be defined entirely by the reclaim policy of the PV. By default, this is Delete - I would expect this to mean that the underlying bucket will be deleted when the PV is deleted. However, this fails with the following message:

controller.go:1138] Deletion of volume "pvc-ebee2c31-b501-11e8-a1ab-42010a9a022c" failed: rpc error: code = Unknown desc = The bucket you tried to delete is not empty.

With a reclaim policy of Delete, I would expect this to succeed, even if the bucket is non-empty. If data-retention is important, the PVs should use a reclaim policy of Retain.
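For anyone who wants the retention behaviour mentioned above, it can be set cluster-side via the storage class, which stamps the policy onto every PV it provisions:

```yaml
# Sketch: a storage class whose PVs keep the bucket (and its data)
# when the PVC is deleted, instead of attempting bucket deletion.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-s3-retain
provisioner: ch.ctrox.csi.s3-driver
reclaimPolicy: Retain
```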

Unable to Mount volume at Pod

Hey folks,

Maybe someone can give me a hint here. For testing purposes I use MinIO as the S3 provider; creating and attaching a PVC works fine, but I'm unable to mount the volume in a given Pod:

Normal   Scheduled               12s                default-scheduler        Successfully assigned kube-system/csi-s3-test-nginx to worker04
  Normal   SuccessfulAttachVolume  12s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-50edc794-e00b-4be8-8ccf-35b9b545bd4a"
  Warning  FailedMount             1s (x4 over 4s)    kubelet                  MountVolume.MountDevice failed for volume "pvc-50edc794-e00b-4be8-8ccf-35b9b545bd4a" : rpc error: code = Unknown desc = Get "http://filelake.kube-system.svc.cluster.local:7777/pvc-50edc794-e00b-4be8-8ccf-35b9b545bd4a/?location=": dial tcp: lookup filelake.kube-system.svc.cluster.local on 1.1.1.1:53: no such host

I'm aware that the error says the host is not resolvable, but the odd thing is that I can reach "filelake.kube-system.svc.cluster.local" from every Pod in my cluster, and DNS resolution seems to work as expected.

The PersistentVolumeClaim itself also looks fine to me:


Name:          csi-s3-pvc
Namespace:     kube-system
StorageClass:  csi-s3
Status:        Bound
Volume:        pvc-50edc794-e00b-4be8-8ccf-35b9b545bd4a
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: ch.ctrox.csi.s3-driver
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       csi-s3-test-nginx
Events:
  Type    Reason                 Age   From                                                                              Message
  ----    ------                 ----  ----                                                                              -------
  Normal  ExternalProvisioning   8m2s  persistentvolume-controller                                                       waiting for a volume to be created, either by external provisioner "ch.ctrox.csi.s3-driver" or manually created by system administrator
  Normal  Provisioning           8m2s  ch.ctrox.csi.s3-driver_csi-provisioner-s3-0_c3a1a4d4-44f7-4673-be0e-436df8551b6d  External provisioner is provisioning volume for claim "kube-system/csi-s3-pvc"
  Normal  ProvisioningSucceeded  8m    ch.ctrox.csi.s3-driver_csi-provisioner-s3-0_c3a1a4d4-44f7-4673-be0e-436df8551b6d  Successfully provisioned volume pvc-50edc794-e00b-4be8-8ccf-35b9b545bd4a

What could be the cause of this issue? All the logs seem fine, and a bucket does get provisioned in MinIO. Everything works except the actual mount on the Pod side.

Thanks in advance :D

k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource

$ kubectl logs pod/csi-attacher-s3-0 -n kube-system
I0425 17:50:09.496327       1 reflector.go:188] Listing and watching *v1beta1.VolumeAttachment from k8s.io/client-go/informers/factory.go:135
E0425 17:50:09.502791       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource
I0425 17:50:10.502928       1 reflector.go:188] Listing and watching *v1beta1.VolumeAttachment from k8s.io/client-go/informers/factory.go:135
E0425 17:50:10.505112       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource
I0425 17:50:11.505270       1 reflector.go:188] Listing and watching *v1beta1.VolumeAttachment from k8s.io/client-go/informers/factory.go:135
E0425 17:50:11.513937       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource

This means that the test pod won't start.

$ kubectl describe pod/csi-s3-test-nginx
Name:         csi-s3-test-nginx
Namespace:    default
Priority:     0
Node:         atlas/192.168.0.254
Start Time:   Mon, 25 Apr 2022 13:41:06 -0400
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  csi-s3-test-nginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/lib/www/html from webroot (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mt78j (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  webroot:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  csi-s3-pvc
    ReadOnly:   false
  kube-api-access-mt78j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason              Age                 From                     Message
  ----     ------              ----                ----                     -------
  Warning  FailedMount         80s (x4 over 8m6s)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-mt78j]: timed out waiting for the condition
  Warning  FailedAttachVolume  2s (x5 over 8m9s)   attachdetach-controller  AttachVolume.Attach failed for volume "pvc-0050921d-b7f2-4158-aab9-118231645848" : Attach timeout for volume pvc-0050921d-b7f2-4158-aab9-118231645848

The provisioner has worked and the bucket shows on my dashboard.
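The storage.k8s.io/v1beta1 VolumeAttachment API was removed in Kubernetes 1.22, so an external-attacher sidecar built against v1beta1 can no longer list attachments on newer clusters, and attach requests time out exactly as shown above. The usual fix is bumping the csi-attacher sidecar to a release that uses storage.k8s.io/v1; the image and tag below are illustrative, not taken from this repo's manifests:

```yaml
# Sketch of the attacher container in the StatefulSet; older sidecar
# tags talk to the removed v1beta1 VolumeAttachment API, newer ones
# (v3.x) use storage.k8s.io/v1.
containers:
  - name: csi-attacher
    image: registry.k8s.io/sig-storage/csi-attacher:v3.4.0
```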

Bucket existence shouldn't be checked when using a specific existing bucket

When using a specific existing bucket, I would prefer to use an S3 account without bucket-level permissions, but the driver still checks for the bucket:

if nameOverride, ok := params[mounter.BucketKey]; ok {
    bucketName = nameOverride
    prefix = volumeID
    volumeID = path.Join(bucketName, prefix)
}

...

exists, err := client.BucketExists(bucketName)

Kubernetes 1.20.2-do.0 compatibility

I'm trying to get this installed on DigitalOcean's Kubernetes, but am running into an error when it tries to create the space in DO Spaces.

These are the errors that I'm seeing in the csi-s3 pod:

I0403 04:08:23.461558       1 reflector.go:370] sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:803: Watch close - *v1.PersistentVolume total 0 items received
I0403 04:09:05.456786       1 reflector.go:235] sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:800: forcing resync
I0403 04:09:05.456908       1 controller.go:979] Final error received, removing PVC eea86604-bc5c-440a-ad8b-7c5f144c667d from claims in progress
I0403 04:09:05.456916       1 controller.go:902] Provisioning succeeded, removing PVC eea86604-bc5c-440a-ad8b-7c5f144c667d from claims in progress
I0403 04:09:05.456957       1 controller.go:979] Final error received, removing PVC 9379489b-e891-4ef7-ab07-a8f82a685a6c from claims in progress
I0403 04:09:05.456964       1 controller.go:902] Provisioning succeeded, removing PVC 9379489b-e891-4ef7-ab07-a8f82a685a6c from claims in progress
I0403 04:09:05.456981       1 controller.go:979] Final error received, removing PVC 01f9456c-c354-415c-ad4f-c5670287ba5b from claims in progress
I0403 04:09:05.456984       1 controller.go:902] Provisioning succeeded, removing PVC 01f9456c-c354-415c-ad4f-c5670287ba5b from claims in progress
I0403 04:09:05.457050       1 controller.go:1196] provision "default/do-spaces-pvc" class "do-spaces": started
I0403 04:09:05.457142       1 controller.go:979] Final error received, removing PVC 2d996a23-e7f1-4ef9-8b53-7ad0fdc5c45f from claims in progress
I0403 04:09:05.457289       1 controller.go:902] Provisioning succeeded, removing PVC 2d996a23-e7f1-4ef9-8b53-7ad0fdc5c45f from claims in progress
I0403 04:09:05.458235       1 reflector.go:235] sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:803: forcing resync
E0403 04:09:05.461874       1 controller.go:1213] provision "default/do-spaces-pvc" class "do-spaces": unexpected error getting claim reference: selfLink was empty, can't make reference
I0403 04:09:05.461935       1 controller.go:979] Final error received, removing PVC 4f62dc7d-077a-49a8-9e7e-9100eaea6086 from claims in progress
I0403 04:09:05.462122       1 controller.go:902] Provisioning succeeded, removing PVC 4f62dc7d-077a-49a8-9e7e-9100eaea6086 from claims in progress
I0403 04:09:06.545825       1 reflector.go:370] sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:800: Watch close - *v1.PersistentVolumeClaim total 0 items received
I0403 04:10:50.459814       1 reflector.go:370] sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:806: Watch close - *v1.StorageClass total 0 items received
I0403 04:14:09.463584       1 reflector.go:370] sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:803: Watch close - *v1.PersistentVolume total 0 items received
I0403 04:14:35.547533       1 reflector.go:370] sigs.k8s.io/sig-storage-lib-external-provisioner/controller/controller.go:800: Watch close - *v1.PersistentVolumeClaim total 0 items received

I have done some research and found that 1.20 drops support for selfLink, which causes the unexpected error getting claim reference: selfLink was empty, can't make reference error. Something similar has occurred with csi-snapshotter as well. kubernetes/kubernetes#94660
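A workaround that has worked for other provisioners hit by the selfLink removal (a sketch, not verified against this deployment — the image tag and registry are assumptions, and on managed clusters like DO you cannot change the apiserver flags, so updating the sidecar is the only option) is to bump the external csi-provisioner sidecar in provisioner.yaml to a release that no longer relies on selfLink:

```yaml
# Sketch: patch the csi-provisioner sidecar container to a selfLink-free
# release. The exact tag is an assumption; check the sig-storage release
# notes for one compatible with your Kubernetes version.
containers:
  - name: csi-provisioner
    image: k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
    args:
      - --csi-address=$(ADDRESS)
      - --v=4
    env:
      - name: ADDRESS
        value: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
```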

Feature: Import CA/x509 certificates

We have an internal CA to deploy certificates to our internal services. We would like to import the CA certificate into the CSI-S3 service.

Discussion item: should we extend the CSI-S3 plugin to add a feature to import custom certificates or should we use a sidecar or init container?

I'm not a big fan of creating our own images only because we want to use our own CA.
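One option that avoids both a custom image and a sidecar (a sketch; the ConfigMap name, key, and mount path are assumptions — Debian-based images typically scan /etc/ssl/certs) is to project the CA bundle into the csi-s3 container with a ConfigMap volume:

```yaml
# Sketch: mount an internal CA certificate into the csi-s3 container
# without rebuilding the image. Create the ConfigMap first, e.g.:
#   kubectl -n kube-system create configmap internal-ca --from-file=ca.crt
volumes:
  - name: internal-ca
    configMap:
      name: internal-ca
containers:
  - name: csi-s3
    volumeMounts:
      - name: internal-ca
        # Place the cert where the image's TLS stack looks for trusted CAs.
        mountPath: /etc/ssl/certs/internal-ca.crt
        subPath: ca.crt
```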

Problem deploying and testing the CSI plug-in with nginx

I deployed the CSI plug-in in k8s and created the PV and PVC manually (in-tree). The CSI plug-in service (written in Go) deploys successfully, and the attacher and provisioner sidecars (which front the CSI plug-in's mount service) start normally. I then deployed an nginx service to test whether the CSI deployment is correct, but the nginx pod stays in ContainerCreating.
Viewing the log shows this error:

Warning FailedMount 4m (x213 over 8h) kubelet, node-1b2b45 Unable to mount volumes for pod
"deployment-beegfs-6748764898-5bhsn_default(318110f3-d13a-11ea-8669-0cda411d067f)": timeout expired waiting
for volumes to attach or mount for pod "default"/"deployment-beegfs-6748764898-5bhsn". list of unmounted
volumes=[beegfs-pvc]. list of unattached volumes=[beegfs-pvc default-token-nj9fw]

Checking the deployed CSI plug-in's log, the pod never calls the CSI plug-in service.
Thank you, I look forward to your answer.

Annotate persistent volume claim to influence bucket name

First of all great work on this CSI plugin.

I tested it in managed kubernetes cluster at digital ocean and it works pretty well. However I would like to influence how the buckets are named, now it gives me buckets called pvc- followed by some UUID. Having some control over the bucket naming would make it more clear what bucket is used for what.

I tried to make a persistent volume and claim manually by referencing a volumeHandle (and created the bucket beforehand) and volumeName on the claim. Unfortunately this prevented the container from starting, it keeps waiting in the ContainerCreating status indefinitely. The same configuration with only a persistent volume claim (without the volumehandle) works fine, it will create it's own PersistentVolume and mounts it in the container, and the container starts correctly.

It doesn't seem to work for me to make my own PV and use it with the driver, or I'm doing something wrong. But even more convenient would be if I could place an annotation on the PVC that gives the provisioner a hint as to which bucket name to create/use.

E.g.:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  annotations:
    ch.ctrox.csi.s3-driver/bucketname: "mybucket"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-s3

The PV/PVC config I tried but didn't get to work looked like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  storageClassName: csi-s3
  accessModes:
  - ReadWriteMany
  capacity: 
    storage: 1Gi
  csi:
    driver: ch.ctrox.csi.s3-driver
    volumeHandle: mybucketname
    # I added this (volumeAttributes.mounter: s3fs) because I saw this in the generated PV
    volumeAttributes:
      mounter: s3fs
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-s3
  volumeName: my-pv

When I don't create the PV and removing the volumeName from the PVC it works fine, but then the bucketname is pvc-[uuid].

Maybe I'm missing something, or lack some configuration on the PV, but just having an annotation on the PVC would be even more convenient.
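As a possible alternative to a PVC annotation: the driver appears to accept a `bucket` parameter on the StorageClass (it shows up in other reports in this tracker, and the controller code checks `params[mounter.BucketKey]`), which pins all volumes of that class to one pre-named bucket, with each volume stored under a prefix inside it. A sketch, assuming the secret names from the example deployment:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3-mybucket
provisioner: ch.ctrox.csi.s3-driver
parameters:
  # Use a fixed, pre-named bucket instead of pvc-<uuid>; each volume
  # becomes a prefix inside this bucket.
  bucket: mybucket
  mounter: s3fs
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
```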

Can not write to mounted pvc

Hello,

I just went through the installation description. The S3 is a MinIO instance without TLS that runs outside the Kubernetes cluster. It's a demo installation. I can run a minio/mc docker container inside Kubernetes, connect to MinIO, and create and delete buckets. So far so good, networking works.

I'm running the example. So I use the secret.yaml, the pvc.yaml and the pod.yaml

So publishing user/password here is not critical
The secret.yaml looks like

apiVersion: v1
kind: Secret
metadata:
  # Namespace depends on the configuration in the storageclass.yaml
  namespace: kube-system
  name: csi-s3-secret
stringData:
  accessKeyID: minioadmin
  secretAccessKey: minioadmin
  # For AWS set it to "https://s3.<region>.amazonaws.com"
  endpoint: 'http://192.168.16.131:9000/'
  # If not on S3, set it to ""
  region: ''

The pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-s3

The pvc is created fine. Calling kubectl describe pvc csi-s3-pvc does not show any errors and its bound, access mode rwo.

Now starting the nginx

apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: default
spec:
  containers:
   - name: csi-s3-test-nginx
     image: nginx
     volumeMounts:
       - mountPath: /var/lib/www/html
         name: webroot
  volumes:
   - name: webroot
     persistentVolumeClaim:
       claimName: csi-s3-pvc
       readOnly: false

The nginx starts fine, logs and describe shows no errors

Next opening a shell with

kubectl exec -it csi-s3-test-nginx -- bash
mount | grep fuse

fuse shows a mounted pvc on /var/lib/www/html. The MinIO UI shows a freshly created bucket in read/write mode. So everything is fine until now.
But comparing with the README, I get a different answer.

root@csi-s3-test-nginx:/# mount | grep fuse
:s3:pvc-ca79e362-0394-4fc5-9909-ba8e0ddc436a/csi-fs on /var/lib/www/html type fuse.rclone (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

According to the README the mount output should start with

s3fs on /var/lib/www/html type log ....

Is there something that went wrong, because I got a different answer?

When executing

root@csi-s3-test-nginx:/#  ls /var/lib/www/html

I get an error

ls: reading directory '/var/lib/www/html': Input/output error

So what is going on there? I should be able to read and write on that mounted pvc, but it does not do anything.

Is there something I have overlooked? Or is this behaviour expected?

Any help welcome
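The mount output above shows fuse.rclone, i.e. the volume was provisioned with the rclone mounter, while the README's expected output was captured with s3fs. One thing worth trying (a sketch; whether rclone against a plain-HTTP MinIO endpoint is actually the source of the I/O error is an assumption) is to pin the mounter explicitly in the StorageClass parameters:

```yaml
# Sketch: force the s3fs mounter so the PV matches the README's setup,
# instead of whatever default resolved to rclone in this deployment.
parameters:
  mounter: s3fs
```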

DigitalOcean example?

Hi,
Does anyone have a DigitalOcean example?
I can't seem to get it to work with it.

Thanks,
Jamie

x509: certificate signed by unknown authority but certificate added to ca-certificate

Hello,

After creating a PVC I have an SSL issue on the csi-provisioner pod. I am contacting an HTTPS endpoint (S3-compatible storage, not AWS), so I modified the pod with a postStart task to add the CA.

After exec'ing into the container I installed curl and tried the URL; it resolves fine, and the certificates are present in ca-certificates:

curl -v https://backup.s3.xxx.xxx

  • Uses proxy env variable no_proxy == '10.96.0.1'
  • Trying 192.168.2.1...
  • TCP_NODELAY set
  • Connected to 192.168.2.1 (192.168.2.1) port 3128 (#0)
  • allocate connect buffer!
  • Establish HTTP proxy tunnel to backup.s3.xxx.xxx:443

CONNECT backup.s3.xxx.xxx:443 HTTP/1.1
Host: backup.s3.xxx.xxx:443
User-Agent: curl/7.64.1
Proxy-Connection: Keep-Alive

< HTTP/1.1 200 Connection established
<

  • Proxy replied 200 to CONNECT request
  • CONNECT phase completed!
  • ALPN, offering h2
  • ALPN, offering http/1.1
  • successfully set certificate verify locations:
  • CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: none
  • TLSv1.3 (OUT), TLS handshake, Client hello (1):
  • CONNECT phase completed!
  • CONNECT phase completed!
  • TLSv1.3 (IN), TLS handshake, Server hello (2):
  • TLSv1.2 (IN), TLS handshake, Certificate (11):
  • TLSv1.2 (IN), TLS handshake, Server key exchange (12):
  • TLSv1.2 (IN), TLS handshake, Server finished (14):
  • TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
  • TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
  • TLSv1.2 (OUT), TLS handshake, Finished (20):
  • TLSv1.2 (IN), TLS handshake, Finished (20):
  • SSL connection using TLSv1.2 / DHE-RSA-AES256-GCM-SHA384
  • ALPN, server did not agree to a protocol
  • Server certificate:
  • subject: C=FR; O=xxx; OU=Private Group PKI; OU=xxx; CN=backup.s3.xxx.xxx
  • start date: May 15 13:59:53 2018 GMT
  • expire date: May 14 14:00:53 2022 GMT
  • subjectAltName: host "backup.s3.xxx.xxx" matched cert's "backup.s3.xxx.xxx"
  • issuer: C=FR; O=xxx; OU=Private Group PKI; OU=xxx; CN=xxx
  • SSL certificate verify ok.

GET / HTTP/1.1
Host: backup.s3.xxx.xxx
User-Agent: curl/7.64.1
Accept: /

< HTTP/1.1 403 Forbidden
< server: S3 Server
< x-amz-id-2: 150479598f4097ae9038
< x-amz-request-id: 150479598f4097ae9038
< Content-Type: application/xml
< Content-Length: 174
< Date: Tue, 14 May 2019 15:45:12 GMT
< Connection: keep-alive
<

  • Connection #0 to host 192.168.2.1 left intact
AccessDeniedAccess Denied150479598f4097ae9038* Closing connection 0

But I still have the SSL issue:

I0514 15:37:23.228685 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"csi-s3-pvc", UID:"ea071c6f-765d-11e9-bf08-024fe762fd88", APIVersion:"v1", ResourceVersion:"754640", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "csi-s3": rpc error: code = Unknown desc = failed to check if bucket pvc-ea071c6f-765d-11e9-bf08-024fe762fd88 exists: Get https://backup.s3.xxx.xxx/pvc-ea071c6f-765d-11e9-bf08-024fe762fd88/?location=: x509: certificate signed by unknown authority

Do you have an idea ?

Thanks,

Maxime

Pod created failed: rpc error: code = Unknown desc = The specified key does not exist.

csi components description:

$ kubectl  -n kube-system get pods
csi-attacher-s3-0                          1/1     Running   2          32m
csi-provisioner-s3-0                       2/2     Running   0          32m
csi-s3-mf6pj                               2/2     Running   0          10m
csi-s3-mnczm                               2/2     Running   0          11m
csi-s3-vfmw5                               2/2     Running   0          11m

$ kubectl -n kube-system logs -f csi-s3-mnczm csi-s3 
I0726 03:40:46.556126       1 s3-driver.go:80] Driver: ch.ctrox.csi.s3-driver 
I0726 03:40:46.556378       1 s3-driver.go:81] Version: v1.1.1 
I0726 03:40:46.556392       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0726 03:40:46.556401       1 driver.go:93] Enabling volume access mode: SINGLE_NODE_WRITER
I0726 03:40:46.557503       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0726 03:40:46.591459       1 utils.go:97] GRPC call: /csi.v1.Identity/Probe
I0726 03:40:46.591495       1 utils.go:98] GRPC request: {}
I0726 03:40:46.592577       1 utils.go:103] GRPC response: {}
I0726 03:40:46.594101       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I0726 03:40:46.594113       1 utils.go:98] GRPC request: {}
I0726 03:40:46.594610       1 identityserver-default.go:32] Using default GetPluginInfo
I0726 03:40:46.594616       1 utils.go:103] GRPC response: {"name":"ch.ctrox.csi.s3-driver","vendor_version":"v1.1.1"}
I0726 03:40:46.595564       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0726 03:40:46.595579       1 utils.go:98] GRPC request: {}
I0726 03:40:46.596134       1 identityserver-default.go:53] Using default capabilities
I0726 03:40:46.596141       1 utils.go:103] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}}]}
I0726 03:40:46.597643       1 utils.go:97] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0726 03:40:46.597657       1 utils.go:98] GRPC request: {}
I0726 03:40:46.598341       1 controllerserver-default.go:62] Using default ControllerGetCapabilities
I0726 03:40:46.598348       1 utils.go:103] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0726 03:40:46.599948       1 utils.go:97] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0726 03:40:46.599958       1 utils.go:98] GRPC request: {}
I0726 03:40:46.601157       1 controllerserver-default.go:62] Using default ControllerGetCapabilities
I0726 03:40:46.601164       1 utils.go:103] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0726 03:40:47.121684       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I0726 03:40:47.121704       1 utils.go:98] GRPC request: {}
I0726 03:40:47.122214       1 identityserver-default.go:32] Using default GetPluginInfo
I0726 03:40:47.122220       1 utils.go:103] GRPC response: {"name":"ch.ctrox.csi.s3-driver","vendor_version":"v1.1.1"}
I0726 03:40:48.441612       1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetInfo
I0726 03:40:48.441648       1 utils.go:98] GRPC request: {}
I0726 03:40:48.442169       1 nodeserver-default.go:40] Using default NodeGetInfo
I0726 03:40:48.442176       1 utils.go:103] GRPC response: {"node_id":"shtl009063226"}
I0726 03:41:18.921095       1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0726 03:41:18.921114       1 utils.go:98] GRPC request: {}
I0726 03:41:18.921505       1 utils.go:103] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0726 03:41:18.928505       1 utils.go:97] GRPC call: /csi.v1.Node/NodeStageVolume
I0726 03:41:18.928521       1 utils.go:98] GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bb66f0cc-e6c9-4bc9-9662-a7bd40b2dc09/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"mounter":"rclone","storage.kubernetes.io/csiProvisionerIdentity":"1627269615740-8081-ch.ctrox.csi.s3-driver"},"volume_id":"pvc-bb66f0cc-e6c9-4bc9-9662-a7bd40b2dc09"}
E0726 03:41:18.953948       1 utils.go:101] GRPC error: The specified key does not exist.

examples description:

$ kubectl  get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
csi-s3-pvc           Bound    pvc-bb66f0cc-e6c9-4bc9-9662-a7bd40b2dc09   5Gi        RWO            csi-s3                 14m

$ kubectl  get pod
NAME                                                             READY   STATUS              RESTARTS   AGE
csi-s3-test-nginx                                                0/1     ContainerCreating   0          14m

$ kubectl  describe pods csi-s3-test-nginx
...
Events:
  Type     Reason                  Age                  From                     Message
  ----     ------                  ----                 ----                     -------
  Normal   Scheduled               17m                  default-scheduler        Successfully assigned default/csi-s3-test-nginx to shtl009063226
  Normal   SuccessfulAttachVolume  17m                  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-bb66f0cc-e6c9-4bc9-9662-a7bd40b2dc09"
  Warning  FailedMount             4m26s (x6 over 15m)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[webroot default-token-pxlf7]: timed out waiting for the condition
  Warning  FailedMount             2m10s                kubelet                  Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[default-token-pxlf7 webroot]: timed out waiting for the condition
  Warning  FailedMount             82s (x16 over 17m)   kubelet                  MountVolume.MountDevice failed for volume "pvc-bb66f0cc-e6c9-4bc9-9662-a7bd40b2dc09" : rpc error: code = Unknown desc = The specified key does not exist.

Thanks for any help.

New PVC doesn't persist after pvc recreated

Setting reclaimPolicy to Retain in my storageclass (backed by a MinIO server) under csi-s3 doesn't seem to work, even though the mounts appear properly in pods with s3fs.

Every time a new pvc is created, a new pvc-xxx-xxxxx-xxxxxx-xxxxxx object is created in MinIO inside bucket mybucket, which means the data is not persistent. (I would have thought it should pick up the existing pvc instead of generating a new one?)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-minio-permvol
  namespace: dev
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: s3bucket-permstore

and storageclass:

parameters:
  bucket: mybucket
  csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  mounter: s3fs
provisioner: ch.ctrox.csi.s3-driver
reclaimPolicy: Retain
volumeBindingMode: Immediate

Any idea why ?

Unable to attach or mount volumes:

Hi, can you help me solve this problem?
Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[istio-token istio-podinfo kube-api-access-g9wh7 webroot istiod-ca-cert istio-data istio-envoy]: timed out waiting for the condition

I think there is no problem with my PVC

Name: csi-s3-existing-bucket
Namespace: default
StorageClass: csi-s3-existing-bucket
Status: Bound
Volume: pvc-68bdbede-e355-48ce-9ce7-7d47a9219da8
Labels:
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: ch.ctrox.csi.s3-driver
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: csi-s3-test-nginx
Events:

apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: default
spec:
  containers:
    - name: csi-s3-test-nginx
      image: nginx
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: webroot
  volumes:
    - name: webroot
      persistentVolumeClaim:
        claimName: csi-s3-existing-bucket
        readOnly: false

Attacher Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock

The provisioner is working fine and creating buckets in S3. However, the daemonset pod sits in ContainerCreating and the attacher is erroring.

$ kubectl logs -l app=csi-provisioner-s3 -c csi-s3 -n kube-system
I0425 15:19:35.175361       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0425 15:19:35.175367       1 driver.go:93] Enabling volume access mode: SINGLE_NODE_WRITER
I0425 15:19:35.175571       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock", Net:"unix"}
I0425 15:19:35.571124       1 utils.go:97] GRPC call: /csi.v1.Identity/Probe
I0425 15:19:35.572567       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I0425 15:19:35.573690       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0425 15:19:35.574246       1 utils.go:97] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0425 15:20:48.500878       1 utils.go:97] GRPC call: /csi.v1.Controller/CreateVolume
I0425 15:20:48.500900       1 controllerserver.go:87] Got a request to create volume pvc-0050921d-b7f2-4158-aab9-118231645848
I0425 15:20:48.847037       1 controllerserver.go:133] create volume pvc-0050921d-b7f2-4158-aab9-118231645848
$ kubectl logs pod/csi-attacher-s3-0 -n kube-system
I0425 15:19:28.175346       1 main.go:91] Version: v2.2.0-0-g97411fa7
I0425 15:19:28.177151       1 connection.go:153] Connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:19:38.177314       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:19:48.177288       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:19:58.177282       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:08.177357       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:18.177324       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:28.178425       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:38.177322       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:48.177307       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
$ kubectl get all -A
NAMESPACE     NAME                                           READY   STATUS              RESTARTS   AGE
kube-system   pod/calico-node-crbmq                          1/1     Running             0          141m
kube-system   pod/coredns-64c6478b6c-w99ts                   1/1     Running             0          141m
kube-system   pod/calico-kube-controllers-75b46474ff-lnlhw   1/1     Running             0          141m
kube-system   pod/csi-attacher-s3-0                          1/1     Running             0          137m
kube-system   pod/csi-provisioner-s3-0                       2/2     Running             0          137m
default       pod/csi-s3-test-nginx                          0/1     ContainerCreating   0          134m
kube-system   pod/hostpath-provisioner-7764447d7c-5xn8q      1/1     Running             0          133m
kube-system   pod/csi-s3-2wshf                               0/2     ContainerCreating   0          133m

NAMESPACE     NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes           ClusterIP   10.152.183.1     <none>        443/TCP                  141m
kube-system   service/kube-dns             ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   141m
kube-system   service/csi-provisioner-s3   ClusterIP   10.152.183.22    <none>        65535/TCP                137m
kube-system   service/csi-attacher-s3      ClusterIP   10.152.183.217   <none>        65535/TCP                137m

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   141m
kube-system   daemonset.apps/csi-s3        1         1         0       1            0           <none>                   137m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns                   1/1     1            1           141m
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           141m
kube-system   deployment.apps/hostpath-provisioner      1/1     1            1           133m

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-64c6478b6c                   1         1         1       141m
kube-system   replicaset.apps/calico-kube-controllers-75b46474ff   1         1         1       141m
kube-system   replicaset.apps/hostpath-provisioner-7764447d7c      1         1         1       133m

NAMESPACE     NAME                                  READY   AGE
kube-system   statefulset.apps/csi-attacher-s3      1/1     137m
kube-system   statefulset.apps/csi-provisioner-s3   1/1     137m

Windows support

With CSI being supported on Windows in 1.16, would it be possible to get this supported on Windows with rclone?

PVC is not getting bound

The PVC remains in the Pending state.

waiting for a volume to be created, either by external provisioner "ch.ctrox.csi.s3-driver" or manually created by system administrator

There are no errors in the provisioner logs:

I0104 12:13:45.113578       1 driver.go:73] Driver: ch.ctrox.csi.s3-driver
I0104 12:13:45.113867       1 driver.go:74] Version: v1.2.0-rc.1
I0104 12:13:45.113884       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0104 12:13:45.113896       1 driver.go:93] Enabling volume access mode: SINGLE_NODE_WRITER
I0104 12:13:45.114286       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock", Net:"unix"}
I0104 12:13:45.609226       1 utils.go:97] GRPC call: /csi.v1.Identity/Probe
I0104 12:13:45.611077       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I0104 12:13:45.611584       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0104 12:13:45.611991       1 utils.go:97] GRPC call: /csi.v1.Controller/ControllerGetCapabilities

secret.yaml file (I even tried commenting out the region). I have created the secret with MinIO as the endpoint:

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: csi-s3-secret
stringData:
  accessKeyID: *
  secretAccessKey: *
  # For AWS set it to "https://s3.<region>.amazonaws.com"
  endpoint: https://*
  # If not on S3, set it to ""
#  region: ""

Why not add s3ql

I have seen in the history that s3ql was removed; to me it sounded like the perfect fit feature-wise.

Any reason why it was removed?

Make CSI-S3 compatible with kubernetes 1.12/1.13

Hi,

I get the following error: MountVolume.NewMounter initialization failed for volume "pvc-bfdf223a-1065-11e9-ba2d-2654ae5646b0" : driver name ch.ctrox.csi.s3-driver not found in the list of registered CSI drivers

Before I got the claiming working I needed to update the following images from 0.2.0 to:

quay.io/k8scsi/driver-registrar:v0.4.2
quay.io/k8scsi/csi-provisioner:v1.0.1
quay.io/k8scsi/csi-attacher:v0.4.2

I think those problems are caused by the CSI registration not being compatible with 1.12+; more functionality needs to be implemented to make it work. See intel/pmem-csi#61 and kubernetes/kubernetes#68688

Kind regards,
Lennard Westerveld

BTW: awesome work 👍! I hope we can make this work with 1.12+ that would be nice :) !

csi.sock: connect: connection refused

Events:
Type Reason Age From Message


Normal   SuccessfulAttachVolume  40m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-e97dce00-460a-4e09-bfb1-a6b07ca3eab8"
Warning  FailedMount             38m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[test-image-pvc], unattached volumes=[test-image-pvc sample model-test test-code dynamic-test-code test ev-sdk-log output-path default-token-pf7wf test-log clean-data]: timed out waiting for the condition
Warning  FailedMount             9m37s (x22 over 40m)  kubelet                  MountVolume.MountDevice failed for volume "pvc-e97dce00-460a-4e09-bfb1-a6b07ca3eab8" : rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock: connect: connection refused"
Warning  FailedMount             4m14s (x9 over 20m)   kubelet                  (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[test-image-pvc], unattached volumes=[test-image-pvc default-token-pf7wf model-test sample test-log test-code clean-data ev-sdk-log output-path test dynamic-test-code]: timed out waiting for the condition

Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock

kubernetes version: v1.20.6
attacher.yaml image: quay.io/k8scsi/csi-attacher:canary

[root@master01 minio]# kubectl logs pod/csi-attacher-s3-0 -n kube-system
I0928 08:59:46.006784 1 main.go:96] Version: v3.1.0-15-g31ad351b
W0928 08:59:56.010802 1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0928 09:00:06.010778 1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0928 09:00:16.010769 1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0928 09:00:26.010897 1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock

What does csi.sock refer to? How do I get this file?

Non working deployment on OKD

Hello,

I'm trying to deploy csi-s3 on OKD 3.11, and so far it is failing.

I've configured the docker daemon with the MountFlags=shared parameter in its systemd service configuration.

OKD's kubelet comes preconfigured to allow privileged containers; the privileged SCC takes care of that.

So far I've:

  1. Created a dedicated namespace
  2. Assigned privileged SCC to the following SAs: csi-attacher-sa, csi-provisioner-sa, csi-s3.
  3. Created the secret with the required info
  4. Created provisioner.yaml, attacher.yaml and csi-s3.yaml

After doing that, I end up with three pods:

csi-attacher-s3-0 - CrashLoopBackOff

I0204 16:46:07.096818       1 connection.go:88] Connecting to /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
I0204 16:46:07.097102       1 connection.go:115] Still trying, connection is CONNECTING
I0204 16:46:07.097374       1 connection.go:115] Still trying, connection is TRANSIENT_FAILURE
I0204 16:46:08.097574       1 connection.go:115] Still trying, connection is TRANSIENT_FAILURE
I0204 16:46:09.139837       1 connection.go:115] Still trying, connection is TRANSIENT_FAILURE
I0204 16:46:10.316272       1 connection.go:115] Still trying, connection is TRANSIENT_FAILURE
I0204 16:46:11.382153       1 connection.go:115] Still trying, connection is TRANSIENT_FAILURE
I0204 16:46:12.358325       1 connection.go:115] Still trying, connection is TRANSIENT_FAILURE
I0204 16:46:13.330933       1 connection.go:115] Still trying, connection is CONNECTING
I0204 16:46:13.331496       1 connection.go:115] Still trying, connection is TRANSIENT_FAILURE
...
I0204 16:47:07.097028       1 connection.go:108] Connection timed out
E0204 16:47:07.097140       1 main.go:95] rpc error: code = Unavailable desc = all SubConns are in TransientFailure

csi-s3-tctsq[driver-registrar] - CrashLoopBackOff

I0204 16:51:49.977366       1 main.go:75] Attempting to open a gRPC connection with: %!q(*string=0xc4202ba110)
I0204 16:51:49.977499       1 connection.go:68] Connecting to /csi/csi.sock
I0204 16:51:49.977762       1 connection.go:95] Still trying, connection is CONNECTING
I0204 16:51:49.978625       1 connection.go:95] Still trying, connection is TRANSIENT_FAILURE
I0204 16:51:50.978946       1 connection.go:95] Still trying, connection is TRANSIENT_FAILURE
I0204 16:51:52.020579       1 connection.go:95] Still trying, connection is TRANSIENT_FAILURE
I0204 16:51:53.197619       1 connection.go:95] Still trying, connection is TRANSIENT_FAILURE
...
I0204 16:52:49.978627       1 connection.go:88] Connection timed out
I0204 16:52:49.978746       1 main.go:83] Calling CSI driver to discover driver name.
E0204 16:52:49.978974       1 main.go:88] rpc error: code = Unavailable desc = all SubConns are in TransientFailure

csi-s3-tctsq[csi-s3] - Running

I0204 16:03:18.870619       1 s3-driver.go:92] Driver: ch.ctrox.csi.s3-driver 
I0204 16:03:18.871028       1 s3-driver.go:93] Version: 1.0.1-alpha 
I0204 16:03:18.871043       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0204 16:03:18.871049       1 driver.go:93] Enabling volume access mode: SINGLE_NODE_WRITER
I0204 16:03:18.871391       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}

csi-provisioner-s3-0[csi-provisioner] - Running

I0204 15:55:02.982618       1 csi-provisioner.go:70] Building kube configs for running in cluster...
I0204 15:55:03.055013       1 controller.go:93] Connecting to /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
I0204 15:55:03.055256       1 controller.go:120] Still trying, connection is CONNECTING
I0204 15:55:03.055519       1 controller.go:120] Still trying, connection is TRANSIENT_FAILURE
I0204 15:55:04.055632       1 controller.go:120] Still trying, connection is TRANSIENT_FAILURE
I0204 15:55:05.232138       1 controller.go:117] Connected
I0204 15:55:05.234955       1 controller.go:492] Starting provisioner controller 3d210477-2895-11e9-9dab-0a580a800013!
I0204 15:55:05.235582       1 reflector.go:202] Starting reflector *v1.StorageClass (15s) from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498
I0204 15:55:05.235621       1 reflector.go:240] Listing and watching *v1.StorageClass from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498
I0204 15:55:05.238102       1 reflector.go:202] Starting reflector *v1.PersistentVolumeClaim (15s) from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496
I0204 15:55:05.238129       1 reflector.go:240] Listing and watching *v1.PersistentVolumeClaim from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496
I0204 15:55:05.238576       1 reflector.go:202] Starting reflector *v1.PersistentVolume (15s) from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497
I0204 15:55:05.238591       1 reflector.go:240] Listing and watching *v1.PersistentVolume from github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497
I0204 15:55:20.248633       1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497: forcing resync
...

csi-provisioner-s3-0[csi-s3] - Running

I0204 16:03:13.375632       1 s3-driver.go:92] Driver: ch.ctrox.csi.s3-driver 
I0204 16:03:13.375879       1 s3-driver.go:93] Version: 1.0.1-alpha 
I0204 16:03:13.375943       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0204 16:03:13.375950       1 driver.go:93] Enabling volume access mode: SINGLE_NODE_WRITER
I0204 16:03:13.376572       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock", Net:"unix"}

Any ideas?

Thanks in advance,

defunct processes, the possible explanation

introduction

The following explanation focuses on using csi-s3 with goofys as the backend. All components are at their latest version.
The issue I stumbled upon is a growing number of goofys zombie processes.
The exact count is not important for the explanation.

explanation

I looked into the csi-s3 code, most importantly at the FuseUnmount function and then at waitForProcess:

func waitForProcess(p *os.Process, backoff int) error {
	if backoff == 20 {
		return fmt.Errorf("Timeout waiting for PID %v to end", p.Pid)
	}
	cmdLine, err := getCmdLine(p.Pid)
	if err != nil {
		glog.Warningf("Error checking cmdline of PID %v, assuming it is dead: %s", p.Pid, err)
		return nil
	}
	if cmdLine == "" {
		// ignore defunct processes
		// TODO: debug why this happens in the first place
		// seems to only happen on k8s, not on local docker
		glog.Warning("Fuse process seems dead, returning")
		return nil
	}
	if err := p.Signal(syscall.Signal(0)); err != nil {
		glog.Warningf("Fuse process does not seem active or we are unprivileged: %s", err)
		return nil
	}
	glog.Infof("Fuse process with PID %v still active, waiting...", p.Pid)
	time.Sleep(time.Duration(backoff*100) * time.Millisecond)
	return waitForProcess(p, backoff+1)
}

Given the function's name, I expected to see a wait4 syscall to reap the child process, in our case goofys.
Looking at the outputs below:

  • we have a goofys Zombie process with pid=32767
$ ps aux | grep goofys
root     32767  0.0  0.0      0     0 ?        Zs   Jun14   0:00 [goofys] <defunct>
  • its parent process is the s3driver
$ pstree -s 32767
systemd───containerd-shim───s3driver───goofys

Since s3driver launches the goofys backend (I assume the same holds for the other backends 🤷🏼‍♂️), s3driver is the parent process. As a good parent 😃 it should wait4 its child to collect the child's exit status.

In other words, child processes leak on termination. The fix should be trivial: in waitForProcess, when cmdLine is empty, call syscall.Wait4 on the given PID.

if cmdLine == "" {
	// ignore defunct processes
	// TODO: debug why this happens in the first place
	// seems to only happen on k8s, not on local docker
	glog.Warning("Fuse process seems dead, returning")
	return nil
}
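The proposed fix could be sketched as follows. This is a hypothetical helper (reapZombie is my name, not part of csi-s3) showing the wait4(2) call that would reap the defunct child, assuming Linux and Go's syscall package:

```go
package main

import (
	"fmt"
	"syscall"
)

// reapZombie collects the exit status of a defunct child process so the
// kernel can release its process table entry. Intended to be called from
// the cmdLine == "" branch of waitForProcess.
func reapZombie(pid int) error {
	var status syscall.WaitStatus
	// WNOHANG: reap the child if it has already exited, return
	// immediately instead of blocking otherwise.
	wpid, err := syscall.Wait4(pid, &status, syscall.WNOHANG, nil)
	if err != nil {
		return fmt.Errorf("wait4 on PID %d failed: %w", pid, err)
	}
	if wpid == pid {
		fmt.Printf("reaped PID %d, exit status %d\n", pid, status.ExitStatus())
	}
	return nil
}

func main() {
	// PID 1 is not our child, so wait4 fails with ECHILD; this only
	// demonstrates the call shape.
	if err := reapZombie(1); err != nil {
		fmt.Println("error:", err)
	}
}
```

Because of WNOHANG the call never blocks: it reaps the child if it has already exited and is a no-op otherwise, so it should be safe to call right before the early return in that branch.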

wdyt @ctrox?

s3 csi for a nomad cluster

I'm looking for a CSI solution for a Nomad cluster that would use MinIO to provision the storage.

Can I use the ctrox/csi-s3:v1.2.0-rc.2 image, configure it to connect to my MinIO, then bind to the socket? I don't expect full support for this; I just want to ask whether it's reasonable before jumping into it. This is the most maintained solution I could find so far, but Kubernetes is not an option for what I'm doing here.

Is there any way to influence the owner, group or permissions of a mounted volume?

As per the title. I've got this all working with DO Spaces but can't figure out whether there's any way of mounting into containers with permissions other than root:root 755.

So far I've tried:

  • chmod-ing and chown-ing the mount in a running container, which does nothing
  • Ensuring the mount point exists with a different UID, GID and permissions in the base image, which gets replaced with root:root 775

If I've missed anything obvious let me know.

Cheers 👌

Error when creating PVC

After applying examples/pvc.yaml, kubectl describe pvc shows:

  Normal   Provisioning          105s (x11 over 15m)  ch.ctrox.csi.s3-driver_csi-provisioner-s3-0_11352194-f196-4f57-af40-e1fdd0e57ee3  External provisioner is provisioning volume for claim "default/csi-s3-pvc"
  Warning  ProvisioningFailed    105s (x11 over 15m)  ch.ctrox.csi.s3-driver_csi-provisioner-s3-0_11352194-f196-4f57-af40-e1fdd0e57ee3  failed to provision volume with StorageClass "csi-s3": rpc error: code = Unknown desc = failed to check if bucket ui-root-2/pvc-6131922c-2466-4bd2-a3b9-a0bd1433eaf1 exists: 400 Bad Request
  Normal   ExternalProvisioning  9s (x62 over 15m)    persistentvolume-controller                                                       waiting for a volume to be created, either by external provisioner "ch.ctrox.csi.s3-driver" or manually created by system administrator

provisioner log:

I0421 09:02:26.482248       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"csi-s3-pvc", UID:"6131922c-2466-4bd2-a3b9-a0bd1433eaf1", APIVersion:"v1", ResourceVersion:"261004", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/csi-s3-pvc"
I0421 09:02:26.670216       1 controller.go:1102] Final error received, removing PVC 6131922c-2466-4bd2-a3b9-a0bd1433eaf1 from claims in progress
W0421 09:02:26.670285       1 controller.go:961] Retrying syncing claim "6131922c-2466-4bd2-a3b9-a0bd1433eaf1", failure 10
E0421 09:02:26.670340       1 controller.go:984] error syncing claim "6131922c-2466-4bd2-a3b9-a0bd1433eaf1": failed to provision volume with StorageClass "csi-s3": rpc error: code = Unknown desc = failed to check if bucket ui-root-2/pvc-6131922c-2466-4bd2-a3b9-a0bd1433eaf1 exists: 400 Bad Request
I0421 09:02:26.670874       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"csi-s3-pvc", UID:"6131922c-2466-4bd2-a3b9-a0bd1433eaf1", APIVersion:"v1", ResourceVersion:"261004", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "csi-s3": rpc error: code = Unknown desc = failed to check if bucket ui-root-2/pvc-6131922c-2466-4bd2-a3b9-a0bd1433eaf1 exists: 400 Bad Request
I0421 09:04:18.773154       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync

I cloned the master branch of the code. Kubernetes v1.19.7.
