cvmfs-contrib / cvmfs-csi

CSI driver for CernVM-FS

License: Apache License 2.0

Makefile 4.18% Go 86.92% Dockerfile 1.85% Shell 2.88% Mustache 4.16%
csi cvmfs kubernetes

cvmfs-csi's Introduction

CVMFS CSI plugin

CVMFS Container Storage Interface (CSI) plugin provides read-only mounting of CernVM-FS (CVMFS) repositories in CSI-enabled container orchestrators.

See project documentation at docs/.

Reporting bugs

Please report issues at https://github.com/cvmfs-contrib/cvmfs-csi/issues/new.

Contributing

Please read the Contributing document first.

You can submit patches using GitHub pull requests at https://github.com/cvmfs-contrib/cvmfs-csi/pulls. For larger changes please open an Issue to discuss them first before submitting patches.

cvmfs-csi's People

Contributors

dimm0, gman0, jacksgt, jblomer, jcpunk, jtorrex, mansalu, netscruff, nuwang, rochaporto


cvmfs-csi's Issues

Prometheus metrics for cvmfs-csi?

I'm not really sure what metrics would be useful, but I tend to dislike running applications I can't get good telemetry out of.

Have you considered producing a metrics endpoint compatible with the prometheus-operator framework? In theory that could provide you with some information on systems where folks are showing performance weirdness.

Add mount reconciler for automount-runner

If the cvmfs2 process exits unexpectedly, it leaves its respective mount in a broken state (ENOTCONN). The automount daemon then cannot clean up this mount, blocking applications' access.

Implement a mount reconciler component for this scenario: detect broken mounts in autofs-managed /cvmfs and umount them. Re-mount will be done automatically by the automount daemon, restoring access to the CVMFS repo(s).

Comment out repositories value as it is being merged with custom values

When the cvmfs-csi Helm chart is handled by Argo CD, the values specified in repositories are merged with the custom values I am passing. This means that:

helm:
      values: |
        repositories:
          mykey: myvalue

is merged with https://github.com/cernops/cvmfs-csi/blob/master/deployments/helm/cvmfs-csi/values.yaml#L41, with the result being:

mykey: myvalue
cern-repo: repository.cern.ch

I assume cern-repo was added as an example; I would suggest deleting this parameter.

As a workaround in Argo CD, I added the following to get rid of cern-repo:

helm:
      values: |
        repositories:
          mykey: myvalue
          cern-repo: null

Won't mount repositories

I've been trying to deploy cvmfs-csi on a Kubernetes cluster, but listing the cvmfs folder shows nothing.

I followed the instructions for deploying here: https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/docs/deploying.md and for testing here: https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/docs/how-to-use.md.

If I exec into the cvmfs-demo pod and run ls -l /my-cvmfs/atlas.cern.ch, there is a delay but I just get a "file or directory not found" error.

My system is:
OS: CentOS 8
Kubernetes: 1.26.0

These are the logs from a csi nodeplugin pod:

Defaulted container "registrar" out of: registrar, nodeplugin
I0427 16:22:32.995552 589994 main.go:166] Version: v2.5.1
I0427 16:22:32.995594 589994 main.go:167] Running node-driver-registrar in mode=registration
I0427 16:22:32.996079 589994 main.go:191] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0427 16:22:32.996117 589994 connection.go:154] Connecting to unix:///csi/csi.sock
W0427 16:22:42.996364 589994 connection.go:173] Still connecting to unix:///csi/csi.sock
W0427 16:22:52.997075 589994 connection.go:173] Still connecting to unix:///csi/csi.sock
I0427 16:22:54.955114 589994 main.go:198] Calling CSI driver to discover driver name
I0427 16:22:54.955147 589994 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginInfo
I0427 16:22:54.955152 589994 connection.go:184] GRPC request: {}
I0427 16:22:54.960901 589994 connection.go:186] GRPC response: {"name":"cvmfs.csi.cern.ch","vendor_version":"v2.1.2"}
I0427 16:22:54.960968 589994 connection.go:187] GRPC error:
I0427 16:22:54.960977 589994 main.go:208] CSI driver name: "cvmfs.csi.cern.ch"
I0427 16:22:54.961008 589994 node_register.go:53] Starting Registration Server at: /registration/cvmfs.csi.cern.ch-reg.sock
I0427 16:22:54.961162 589994 node_register.go:62] Registration Server started at: /registration/cvmfs.csi.cern.ch-reg.sock
I0427 16:22:54.961417 589994 node_register.go:92] Skipping HTTP server because endpoint is set to: ""
I0427 16:22:55.685156 589994 main.go:102] Received GetInfo call: &InfoRequest{}
I0427 16:22:55.685580 589994 main.go:109] "Kubelet registration probe created" path="/var/lib/kubelet/plugins/cvmfs.csi.cern.ch/registration"
I0427 16:22:55.705064 589994 main.go:120] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}

I also had the same behaviour after changing CVMFS_HTTP_PROXY='http://ca-proxy.cern.ch:3128' to DIRECT.

When trying to mount a repository using the repository storage class parameter, I get a FailedMount error in the demo pod.

Would appreciate any ideas on how to debug this.

ClusterRoleBinding missing namespace in csi-provisioner-rbac.yaml

Applying this YAML fails because the ClusterRoleBinding subject is missing a required namespace:
https://github.com/cernops/cvmfs-csi/blob/master/deployments/kubernetes/csi-provisioner-rbac.yaml#L58

...
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cvmfs-csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: cvmfs-csi-provisioner
    namespace: cvmfs
...

I guess this won't matter once the Helm chart is available for installation.

FIX: Helm values file on master branch still points to the old "magnum" registry endpoint

Hi,

I was previously using an old release of the driver working properly on an EKS cluster.

Now I'm trying to upgrade it to the latest version (2.3.2), and I saw that the master branch still references the 'old' magnum endpoint in several places in the values.yaml file:

https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/deployments/helm/cvmfs-csi/values.yaml#L72C1-L73C1
https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/deployments/helm/cvmfs-csi/values.yaml#L80C52-L80C52
etc

This endpoint is not working anymore:

$ helm pull oci://registry.cern.ch/magnum/cvmfs-csi --version latest
Error: Unable to locate any tags in provided repository: oci://registry.cern.ch/magnum/cvmfs-csi

Should they point to the newer registry endpoint instead?

helm pull oci://registry.cern.ch/kubernetes/charts/cvmfs-csi --version 2.3.2
Pulled: registry.cern.ch/kubernetes/charts/cvmfs-csi:2.3.2
Digest: sha256:f016d4a1488705d8cddd67f7fb48caffbe564e61f1d0544cf7a33c4285a591cf

Command line flags provided to cvmfsplugin container fail at runtime

The command line arguments found below are rejected by the application:

https://github.com/cernops/cvmfs-csi/blob/cf010f9cd957ea0b7875cd2966fbba2a23710685/deployments/kubernetes/csi-cvmfsplugin.yaml#L51

https://github.com/cernops/cvmfs-csi/blob/cf010f9cd957ea0b7875cd2966fbba2a23710685/deployments/kubernetes/csi-cvmfsplugin.yaml#L52

https://github.com/cernops/cvmfs-csi/blob/cf010f9cd957ea0b7875cd2966fbba2a23710685/deployments/kubernetes/csi-cvmfsplugin.yaml#L56

https://github.com/cernops/cvmfs-csi/blob/cf010f9cd957ea0b7875cd2966fbba2a23710685/deployments/kubernetes/csi-cvmfsplugin.yaml#L57

This causes both deployments to enter CrashLoopBackOff.

Each flag produces its own version of the following error message in the container:

flag provided but not defined: -metadatastorage
Usage of /csi-cvmfsplugin:
  -alsologtostderr
    	log to standard error as well as files
  -drivername string
    	name of the driver (default "csi-cvmfs")
  -endpoint string
    	CSI endpoint (default "unix://tmp/csi.sock")
  -log_backtrace_at value
    	when logging hits line file:N, emit a stack trace
  -log_dir string
    	If non-empty, write log files in this directory
  -logtostderr
    	log to standard error instead of files
  -nodeid string
    	node id
  -stderrthreshold value
    	logs at or above this threshold go to stderr
  -v value
    	log level for V logs
  -vmodule value
    	comma-separated list of pattern=N settings for file-filtered logging

This can be reproduced by building the binary and container using the README make instructions.

Likewise this error also happens in the csi-cvmfsplugin-provisioner.yaml deployment.

https://github.com/cernops/cvmfs-csi/blob/cf010f9cd957ea0b7875cd2966fbba2a23710685/deployments/kubernetes/csi-cvmfsplugin-provisioner.yaml#L72

https://github.com/cernops/cvmfs-csi/blob/cf010f9cd957ea0b7875cd2966fbba2a23710685/deployments/kubernetes/csi-cvmfsplugin-provisioner.yaml#L73

https://github.com/cernops/cvmfs-csi/blob/cf010f9cd957ea0b7875cd2966fbba2a23710685/deployments/kubernetes/csi-cvmfsplugin-provisioner.yaml#L77

https://github.com/cernops/cvmfs-csi/blob/cf010f9cd957ea0b7875cd2966fbba2a23710685/deployments/kubernetes/csi-cvmfsplugin-provisioner.yaml#L78

Unable to run Helm chart v2.2.0 on OpenShift

Hi,

I'm trying to upgrade from v2.1.2 to v2.2.0 on our OKD cluster, but the controllerplugin is stuck with:

$ oc logs cvmfs-csi-controllerplugin-786ccbd9cc-vbffq -c provisioner
I0614 07:39:30.215802       1 feature_gate.go:249] feature gates: &{map[]}
I0614 07:39:30.216116       1 csi-provisioner.go:154] Version: v3.5.0
I0614 07:39:30.216144       1 csi-provisioner.go:177] Building kube configs for running in cluster...
W0614 07:39:40.217914       1 connection.go:183] Still connecting to unix:///csi/csi.sock
W0614 07:39:50.217133       1 connection.go:183] Still connecting to unix:///csi/csi.sock
W0614 07:40:00.218146       1 connection.go:183] Still connecting to unix:///csi/csi.sock
W0614 07:40:10.217237       1 connection.go:183] Still connecting to unix:///csi/csi.sock
W0614 07:40:20.217113       1 connection.go:183] Still connecting to unix:///csi/csi.sock
W0614 07:40:30.218064       1 connection.go:183] Still connecting to unix:///csi/csi.sock
W0614 07:40:40.217546       1 connection.go:183] Still connecting to unix:///csi/csi.sock
W0614 07:40:50.217479       1 connection.go:183] Still connecting to unix:///csi/csi.sock
W0614 07:41:00.217731       1 connection.go:183] Still connecting to unix:///csi/csi.sock
W0614 07:41:10.217187       1 connection.go:183] Still connecting to unix:///csi/csi.sock

$ oc logs cvmfs-csi-controllerplugin-786ccbd9cc-vbffq -c controllerplugin
panic: mkdir /var/lib/cvmfs.csi.cern.ch: permission denied

goroutine 1 [running]:
github.com/cernops/cvmfs-csi/internal/cvmfs/singlemount.init.0()
	/builds/kubernetes/storage/cvmfs-csi/internal/cvmfs/singlemount/sharedmount.go:108 +0x45

It looks like the controller plugin is trying to write into the root filesystem of the container image.
I wanted to mount an emptyDir at /var/lib/cvmfs.csi.cern.ch, but the Helm chart doesn't support extraVolumeMounts for the controllerplugin.plugin section (even though it's mentioned in the Helm values).

If the controllerplugin needs to write into this directory, should the Helm chart mount an emptyDir there by default?
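For reference, a sketch of the pod-spec fragment the chart could render for this; the volume name is illustrative, and honoring the documented extraVolumeMounts would achieve the same thing:

```yaml
# Hypothetical fragment of the rendered controllerplugin pod spec;
# the volume name "singlemount-state" is illustrative.
spec:
  containers:
    - name: controllerplugin
      volumeMounts:
        - name: singlemount-state
          mountPath: /var/lib/cvmfs.csi.cern.ch
  volumes:
    - name: singlemount-state
      emptyDir: {}
```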

Consider using go modules

Upstream Go has created a system called modules for dependency tracking, so users don't need to vendor dependencies. Consider using it instead of Gopkg.lock and vendoring.

Cut the next release?

Would it be possible to make another release with some of the recent changes that were merged? Or is there a defined timeline/milestone when a release will happen?

Very slow CVMFS client startup when rlimit_nofile is too high

CVMFS client startup time is proportional to rlimit_nofile. When it forks to run exec(), it first tries to close all file descriptors, from 3 up to the maximum possible fd number. If rlimit_nofile is too high, this may take a very long time.

xref: cvmfs/cvmfs#3158
See details in https://cernvm-forum.cern.ch/t/cvmfs-config-setup-hangs/275

This issue will be kept open until there is a more permanent fix in the client itself. Until then, these are some of the possible workarounds:

  • set node-wide rlimit_nofile to a reasonable number
  • set rlimit_nofile in container runtime
  • run ulimit -n <value> inside cvmfs-csi nodeplugin container (#58)

Suggested fixes to kubernetes manifests

Hi,
thanks for developing and sharing this great tool!

While trying it out I noticed some lines in the manifests that break kubectl apply.
They look like typos to me, so you may want to fix them.

Also, I had to change the HTTP proxy to "DIRECT" in the ConfigMap to run from outside CERN; http://ca-proxy.cern.ch:3128 is probably intended to be the default in this repo. Maybe consider adding a more explicit note about this in the docs.

Thanks again!

Implement the LIST_VOLUMES_PUBLISHED_NODES method

As mentioned in kubernetes/kubernetes#84169 (comment), CSI drivers now have to implement the LIST_VOLUMES_PUBLISHED_NODES method, which significantly decreases the load on the controller when the number of volumes is large.

I might make a PR soon, but if somebody else can do it quicker, it would be appreciated. Currently we can't use the driver, because volume attach times increase to tens of minutes (see the ticket).

PodSecurityPolicy: unable to admit pod

While trying to deploy this CSI driver, I encounter an error at startup:

NAME                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR         AGE
daemonset.apps/cvmfs-csi-nodeplugin   0         0         0       0            0           metallb_nfs_lb=true   25h
Warning  FailedCreate  41s (x109 over 25h)  daemonset-controller  Error creating: pods "cvmfs-csi-nodeplugin-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[1].securityContext.capabilities.add: Invalid value: "SYS_ADMIN": capability may not be added]

I don't see a PodSecurityPolicy in the helm chart.

What should I do?
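For clusters still running the PodSecurityPolicy admission plugin (removed in Kubernetes 1.25), a permissive policy matching the rejected fields might look like the sketch below; the policy name is illustrative, and it must additionally be granted to the nodeplugin ServiceAccount via a Role/RoleBinding with the `use` verb:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: cvmfs-csi-nodeplugin
spec:
  # The two fields rejected in the event above:
  hostPID: true
  allowedCapabilities:
    - SYS_ADMIN
  volumes: ["*"]
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```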

volumeoptions.go defaults logic broken

I came across this problem with a Docker image built from the cvmfs-csi master branch code.

Hash and tag options are supposed to be optional, so I didn't set them. However, I was getting "missing required field tag" or "missing required field hash" errors.

I don't know Go, but I think the problem is in the newVolumeOptions function: the call to validate, which sets a default for tag, needs to be moved up before the extractOption(tag) call.

https://github.com/cernops/cvmfs-csi/blob/b39da4aed09e234bc6da7b1821b1bb8f341962c1/pkg/cvmfs/volumeoptions.go#L82

https://github.com/cernops/cvmfs-csi/blob/b39da4aed09e234bc6da7b1821b1bb8f341962c1/pkg/cvmfs/volumeoptions.go#L73

When setting both tag and hash instead, the error "specifying both hash and tag is not allowed" is raised, so the image is unusable.
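The suggested reordering can be illustrated with a small Go sketch (field and option names are illustrative, not the driver's actual API): apply defaults first, then validate mutually exclusive options.

```go
package main

import (
	"errors"
	"fmt"
)

type volumeOptions struct {
	Repository string
	Tag        string
	Hash       string
}

// newVolumeOptions sketches the fix suggested above: defaulting runs before
// tag/hash are treated as required or checked for mutual exclusion.
func newVolumeOptions(opts map[string]string) (*volumeOptions, error) {
	v := &volumeOptions{
		Repository: opts["repository"],
		Tag:        opts["tag"],
		Hash:       opts["hash"],
	}
	// Defaulting must run first: tag falls back to "trunk" when neither
	// tag nor hash is set, so neither field is spuriously "required".
	if v.Tag == "" && v.Hash == "" {
		v.Tag = "trunk"
	}
	// Only then validate the mutually exclusive options.
	if v.Tag != "" && v.Hash != "" {
		return nil, errors.New("specifying both hash and tag is not allowed")
	}
	return v, nil
}

func main() {
	v, err := newVolumeOptions(map[string]string{"repository": "atlas.cern.ch"})
	fmt.Println(v.Tag, err) // trunk <nil>
}
```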

Is it still alive?

Hi,
Should we still use this CSI driver if we want to mount CVMFS in k8s, or should we switch to something else?
Cheers.

Skip attach?

I was looking at the attacher container logs and saw this line: main.go:165] CSI driver does not support ControllerPublishUnpublish, using trivial handler. It might be worth implementing skip-attach: https://kubernetes-csi.github.io/docs/skip-attach.html. Isn't it relevant here?
I don't know what Kubernetes version this project supports...

using default.local configmaps

Following
https://github.com/cernops/cvmfs-csi/blob/master/README.md#configuration

I tried to create a simple default.local configmap like this:

CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch
CVMFS_HTTP_PROXY=DIRECT

This does not work.

The cvmfs-csi driver merges the options from my default.local and default.conf into a new config file that is passed to the mount command. But there are a couple of mandatory options that (at least in the cases I've seen) come from /etc/cvmfs/domain.d/cern.ch.

CVMFS_SERVER_URL="http://cvmfs-stratum-one.cern.ch/cvmfs/@fqrn@;http://cernvmfs.gridpp.rl.ac.uk/cvmfs/@fqrn@;http://cvmfs-s1bnl.opensciencegrid.org/cvmfs/@fqrn@;http://cvmfs-s1fnal.opensciencegrid.org/cvmfs/@fqrn@"
CVMFS_KEYS_DIR=/etc/cvmfs/keys/cern.ch
CVMFS_USE_GEOAPI=yes

In addition, the cvmfs-csi driver requires running with CVMFS_USER=root.

It works in the CERN OpenStack cluster because your default.local includes these options:

[root@cern-extension-kubernetes-1-15-tybqdknmjtvh-node-58 cvmfs]# cat default.local
CVMFS_USER=root
CVMFS_SERVER_URL="http://cvmfs-stratum-one.cern.ch/cvmfs/@fqrn@;http://cernvmfs.gridpp.rl.ac.uk/cvmfs/@fqrn@;http://cvmfs.racf.bnl.gov/cvmfs/@fqrn@;http://cvmfs.fnal.gov/cvmfs/@fqrn@"
CVMFS_KEYS_DIR=/etc/cvmfs/keys/cern.ch
CVMFS_USE_GEOAPI=yes

CVMFS_HTTP_PROXY="http://ca-proxy.cern.ch:3128"
CVMFS_QUOTA_LIMIT=20000

Publish Helm chart to artifacthub.io

Can an official version of this be published at https://artifacthub.io/?

I see that #14 was merged, but I'm not seeing it in the repo right now...

If you could also note anything required for the Pod Security Admission recommended levels, that would be great! I'd love to set things to restricted, but I'm not sure how much that would break when using this...

example for nomad

Please give an example of using cvmfs-csi in HashiCorp Nomad.

Nomad is a very flexible alternative to k8s, and it is also very suitable for use in HPC environments. I hope the documentation's support for Nomad can be improved.

Helm Chart

Since the original PR-ed chart (#3), our chart has evolved and been tested in the wild for a few months now, becoming more configurable with support for local vs alien cache (with alien cache tested on NFS), as well as cache preloading capabilities. The current version that we are using is at: https://github.com/CloudVE/galaxy-cvmfs-csi-helm
I was looking to PR the chart upstream so that other users might use it, and hopefully build a community beyond our project for developing/maintaining it. Would you recommend that I close that PR and open a new one that similarly puts the chart under deploy/helm, or is there another way you'd prefer it to be contributed? I can make the default values more generic so they don't point to our repositories, if that is desirable. Let me know what the best way would be to contribute this back to the community!

support for nomad

I am trying to use CVMFS through the CSI plugin in Nomad, but I run into problems when creating the volume: it seems that the access_mode parameter configured in Nomad is not supported.

here is my volume config file:

type = "csi"
id   = "cvmfs-volume"
name = "cvmfs-volume"

plugin_id = "cvmfs0"

capability {
  access_mode     = "multi-node-reader-only"
  attachment_mode = "file-system"
}

mount_options {
  fs_type = "cvmfs2"
}

secrets {}

the error info:

root@ubuntu:~/nomad-jobs# nomad volume create cvmfs.volume.hcl 
Error creating volume: Unexpected response code: 500 (1 error occurred:
        * controller create volume: CSI.ControllerCreateVolume: volume "cvmfs-volume" snapshot source &{"" ""} is not compatible with these parameters: rpc error: code = InvalidArgument desc = volume accessibility requirements are not supported)

Can you provide an example of using cvmfs-csi in Nomad?
Ref: #51

start using `app.kubernetes.io/` labels

The app.kubernetes.io/ labels are becoming part of the standard way of labeling things within the infrastructure. Can this be updated to start applying those labels to various elements?

"Too many levels of symbolic links issue" when working with Jupyterhub

I have successfully deployed cvmfs-csi and can access the CVMFS repos, but when I tried to mount them in a JupyterHub instance, it always failed with a "Too many levels of symbolic links" error:
jovyan@jupyter-ee069xxx ~$ ls /my-cvmfs/
atlas.cern.ch cvmfs-config.cern.ch
jovyan@jupyter-ee069xxx ~$ ls /my-cvmfs/atlas.cern.ch
ls: cannot open directory '/my-cvmfs/atlas.cern.ch': Too many levels of symbolic links

I am wondering whether this is a CVMFS issue or a JupyterHub one.
Some people on the CVMFS side reported the same error, suspected it was automount/autofs's fault, and suggested disabling autofs. But in a working pod/container I checked, autofs is also present and accessing the CVMFS repos works fine.

Below is the relevant stanza of the JupyterHub config:

singleuser:
  storage:
    ...
    extraVolumes:
      - name: cvmfs-jhub-shared
        persistentVolumeClaim:
          claimName: cvmfs-jhub-shared
    extraVolumeMounts:
      - name: cvmfs-jhub-shared
        mountPath: /my-cvmfs
And here is the PVC, which works fine for a pod created manually in the same namespace:

cvmfs-jhub-shared   Bound   pvc-e3f1a126-ed9e-4d77-8d49-5c8c8ce9b93a   1   ROX   cvmfs   3h56m

I may need to raise this issue with JupyterHub, but before doing that I wanted to see if there is any insight from this channel.

Fails to access large directories

Hi,

We have a case where a user tries to access a large directory (200+ GB), but it fails with a generic error message.
Reproducer:

$ kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cvmfs
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      # Volume size value has no effect and is ignored
      # by the driver, but must be non-zero.
      storage: 1
  storageClassName: cvmfs
---
apiVersion: v1
kind: Pod
metadata:
  name: cvmfs-demo
spec:
  containers:
    - name: demo
      image: busybox
      imagePullPolicy: IfNotPresent
      command: [ "/bin/sh", "-c", "trap : TERM INT; (while true; do sleep 1000; done) & wait" ]
      volumeMounts:
        - name: cvmfs
          mountPath: /cvmfs
          # CVMFS automount volumes must be mounted with HostToContainer mount propagation.
          mountPropagation: HostToContainer
  volumes:
    - name: cvmfs
      persistentVolumeClaim:
        claimName: cvmfs
EOF

$ kubectl exec -it cvmfs-demo -- /bin/sh
/ # ls /cvmfs/atlas.cern.ch/repo/sw/software/23.0/Athena/23.0.26
ls: /cvmfs/atlas.cern.ch/repo/sw/software/23.0/Athena/23.0.26: Input/output error
/ # ls /cvmfs/atlas.cern.ch/repo/sw/software/23.0/
AthSimulation           Athena                  DetCommon               atlas                   tdaq
AthSimulationExternals  AthenaExternals         Geant4                  sw                      tdaq-common
/ # ls /cvmfs/atlas.cern.ch/repo/sw/software/23.0/Athena/
ls: can't open '/cvmfs/atlas.cern.ch/repo/sw/software/23.0/Athena/': Input/output error

Logs from the nodeplugin:

$ kubectl -n kube-system logs cern-magnum-cvmfs-csi-nodeplugin-fn6zh -c nodeplugin
I0614 08:00:30.816995    7836 main.go:95] Running CVMFS CSI plugin with [/csi-cvmfsplugin -v=4 --nodeid=jack-test-cvmfs-dik24wgoresr-node-0 --endpoint=unix:///var/lib/kubelet/plugins/cvmfs.csi.cern.ch/csi.sock --drivername=cvmfs.csi.cern.ch --start-automount-daemon=true --automount-startup-timeout=5 --automount-unmount-timeout=600 --role=identity,node --has-alien-cache=false]
I0614 08:00:30.817074    7836 driver.go:149] Driver: cvmfs.csi.cern.ch
I0614 08:00:30.817078    7836 driver.go:151] Version: v2.1.1 (commit: a90329e7bf6bfe1d936168d286af6219f9a36a80; build time: 2023-06-14 08:00:30.815615071 +0000 UTC m=+0.001888489; metadata: )
I0614 08:00:30.817105    7836 driver.go:165] Registering Identity server
I0614 08:00:30.817144    7836 driver.go:220] Exec-ID 1: Running command env=[] prog=/usr/bin/cvmfs2 args=[cvmfs2 --version]
I0614 08:00:30.820747    7836 driver.go:220] Exec-ID 1: Process exited: exit status 0
I0614 08:00:30.820766    7836 driver.go:180] CernVM-FS version 2.10.0
I0614 08:00:30.820813    7836 driver.go:237] Exec-ID 2: Running command env=[] prog=/usr/bin/cvmfs_config args=[cvmfs_config setup]
I0614 08:00:31.837351    7836 driver.go:237] Exec-ID 2: Process exited: exit status 0
I0614 08:00:31.837518    7836 driver.go:292] Exec-ID 3: Running command env=[] prog=/usr/sbin/automount args=[automount --verbose]
I0614 08:00:31.851252    7836 driver.go:292] Exec-ID 3: Process exited: exit status 0
I0614 08:00:31.851301    7836 driver.go:250] Exec-ID 4: Running command env=[] prog=/usr/bin/mount args=[mount --make-shared /cvmfs]
I0614 08:00:31.852237    7836 driver.go:250] Exec-ID 4: Process exited: exit status 0
I0614 08:00:31.852251    7836 driver.go:197] Registering Node server with capabilities []
I0614 08:00:31.852415    7836 grpcserver.go:106] Listening for connections on /var/lib/kubelet/plugins/cvmfs.csi.cern.ch/csi.sock
I0614 08:00:32.903601    7836 grpcserver.go:136] Call-ID 1: Call: /csi.v1.Identity/GetPluginInfo
I0614 08:00:32.904568    7836 grpcserver.go:137] Call-ID 1: Request: {}
I0614 08:00:32.904612    7836 grpcserver.go:143] Call-ID 1: Response: {"name":"cvmfs.csi.cern.ch","vendor_version":"v2.1.1"}
I0614 08:00:33.559631    7836 grpcserver.go:136] Call-ID 2: Call: /csi.v1.Node/NodeGetInfo
I0614 08:00:33.559685    7836 grpcserver.go:137] Call-ID 2: Request: {}
I0614 08:00:33.559758    7836 grpcserver.go:143] Call-ID 2: Response: {"node_id":"jack-test-cvmfs-dik24wgoresr-node-0"}
I0614 08:03:49.148476    7836 grpcserver.go:136] Call-ID 3: Call: /csi.v1.Node/NodeGetCapabilities
I0614 08:03:49.148551    7836 grpcserver.go:137] Call-ID 3: Request: {}
I0614 08:03:49.148668    7836 grpcserver.go:143] Call-ID 3: Response: {}
I0614 08:03:49.149891    7836 grpcserver.go:136] Call-ID 4: Call: /csi.v1.Node/NodeGetCapabilities
I0614 08:03:49.149954    7836 grpcserver.go:137] Call-ID 4: Request: {}
I0614 08:03:49.149979    7836 grpcserver.go:143] Call-ID 4: Response: {}
I0614 08:03:49.150370    7836 grpcserver.go:136] Call-ID 5: Call: /csi.v1.Node/NodeGetCapabilities
I0614 08:03:49.150425    7836 grpcserver.go:137] Call-ID 5: Request: {}
I0614 08:03:49.150444    7836 grpcserver.go:143] Call-ID 5: Response: {}
I0614 08:03:49.151368    7836 grpcserver.go:136] Call-ID 6: Call: /csi.v1.Node/NodeGetCapabilities
I0614 08:03:49.151430    7836 grpcserver.go:137] Call-ID 6: Request: {}
I0614 08:03:49.151457    7836 grpcserver.go:143] Call-ID 6: Response: {}
I0614 08:03:49.153965    7836 grpcserver.go:136] Call-ID 7: Call: /csi.v1.Node/NodePublishVolume
I0614 08:03:49.154125    7836 grpcserver.go:137] Call-ID 7: Request: {"target_path":"/var/lib/kubelet/pods/ef9506a5-a038-4bc8-a483-5efb21a6da36/volumes/kubernetes.io~csi/pvc-0a836218-b8e6-4f24-ac82-ee00b03e444c/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":3}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1686729550118-8081-cvmfs.csi.cern.ch"},"volume_id":"pvc-0a836218-b8e6-4f24-ac82-ee00b03e444c"}
I0614 08:03:49.155342    7836 mountutil.go:75] Exec-ID 5: Running command env=[] prog=/usr/bin/mount args=[mount /cvmfs /var/lib/kubelet/pods/ef9506a5-a038-4bc8-a483-5efb21a6da36/volumes/kubernetes.io~csi/pvc-0a836218-b8e6-4f24-ac82-ee00b03e444c/mount --rbind --make-slave]
I0614 08:03:49.156541    7836 mountutil.go:75] Exec-ID 5: Process exited: exit status 0
I0614 08:03:49.156587    7836 grpcserver.go:143] Call-ID 7: Response: {}
I0614 08:04:32.359839    7836 grpcserver.go:136] Call-ID 8: Call: /csi.v1.Node/NodeGetCapabilities
I0614 08:04:32.360061    7836 grpcserver.go:137] Call-ID 8: Request: {}
I0614 08:04:32.360101    7836 grpcserver.go:143] Call-ID 8: Response: {}
I0614 08:05:29.260958    7836 grpcserver.go:136] Call-ID 9: Call: /csi.v1.Node/NodeUnpublishVolume
I0614 08:05:29.261201    7836 grpcserver.go:137] Call-ID 9: Request: {"target_path":"/var/lib/kubelet/pods/ef9506a5-a038-4bc8-a483-5efb21a6da36/volumes/kubernetes.io~csi/pvc-0a836218-b8e6-4f24-ac82-ee00b03e444c/mount","volume_id":"pvc-0a836218-b8e6-4f24-ac82-ee00b03e444c"}
I0614 08:05:29.261355    7836 mountutil.go:99] Exec-ID 6: Running command env=[] prog=/usr/bin/umount args=[umount --recursive /var/lib/kubelet/pods/ef9506a5-a038-4bc8-a483-5efb21a6da36/volumes/kubernetes.io~csi/pvc-0a836218-b8e6-4f24-ac82-ee00b03e444c/mount]
I0614 08:05:29.265108    7836 mountutil.go:99] Exec-ID 6: Process exited: exit status 0
I0614 08:05:29.265294    7836 grpcserver.go:143] Call-ID 9: Response: {}
I0614 08:05:29.362572    7836 grpcserver.go:136] Call-ID 10: Call: /csi.v1.Node/NodeGetCapabilities
I0614 08:05:29.362783    7836 grpcserver.go:137] Call-ID 10: Request: {}
I0614 08:05:29.362833    7836 grpcserver.go:143] Call-ID 10: Response: {}
I0614 08:06:04.227396    7836 grpcserver.go:136] Call-ID 11: Call: /csi.v1.Node/NodeGetCapabilities
I0614 08:06:04.227436    7836 grpcserver.go:137] Call-ID 11: Request: {}
I0614 08:06:04.227448    7836 grpcserver.go:143] Call-ID 11: Response: {}
I0614 08:06:04.229122    7836 grpcserver.go:136] Call-ID 12: Call: /csi.v1.Node/NodeGetCapabilities
I0614 08:06:04.229149    7836 grpcserver.go:137] Call-ID 12: Request: {}
I0614 08:06:04.229159    7836 grpcserver.go:143] Call-ID 12: Response: {}
I0614 08:06:04.229797    7836 grpcserver.go:136] Call-ID 13: Call: /csi.v1.Node/NodeGetCapabilities
I0614 08:06:04.229949    7836 grpcserver.go:137] Call-ID 13: Request: {}
I0614 08:06:04.230027    7836 grpcserver.go:143] Call-ID 13: Response: {}
I0614 08:06:04.230514    7836 grpcserver.go:136] Call-ID 14: Call: /csi.v1.Node/NodeGetCapabilities
I0614 08:06:04.230609    7836 grpcserver.go:137] Call-ID 14: Request: {}
I0614 08:06:04.230679    7836 grpcserver.go:143] Call-ID 14: Response: {}
I0614 08:06:04.231181    7836 grpcserver.go:136] Call-ID 15: Call: /csi.v1.Node/NodePublishVolume
I0614 08:06:04.231238    7836 grpcserver.go:137] Call-ID 15: Request: {"target_path":"/var/lib/kubelet/pods/4d81609b-5fba-4c63-a1a7-265a33b13729/volumes/kubernetes.io~csi/pvc-74ad843d-083b-4f9b-ae66-4ed691a37322/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":3}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1686729550118-8081-cvmfs.csi.cern.ch"},"volume_id":"pvc-74ad843d-083b-4f9b-ae66-4ed691a37322"}
I0614 08:06:04.231692    7836 mountutil.go:75] Exec-ID 7: Running command env=[] prog=/usr/bin/mount args=[mount /cvmfs /var/lib/kubelet/pods/4d81609b-5fba-4c63-a1a7-265a33b13729/volumes/kubernetes.io~csi/pvc-74ad843d-083b-4f9b-ae66-4ed691a37322/mount --rbind --make-slave]
I0614 08:06:04.233541    7836 mountutil.go:75] Exec-ID 7: Process exited: exit status 0
I0614 08:06:04.233563    7836 grpcserver.go:143] Call-ID 15: Response: {}

Please advise how to troubleshoot the issue further.

failed to create subPath directory for volumeMount "foo" of container "bar"

Seemingly at random but pretty frequently, we are seeing pods fail to mount a volume with the following CreateContainerConfigError:

failed to create subPath directory for volumeMount "refdata-gxy" of container "galaxy-db-init"

The volumeMounts definition is as follows, and this works until it doesn't:

    - mountPath: /cvmfs/data.galaxyproject.org
      mountPropagation: HostToContainer
      name: refdata-gxy
      subPath: data.galaxyproject.org

When the issue shows up, restarting the nodeplugin DaemonSet pod will most of the time resolve the issue.

Probably the most reproducible way we see this is by restarting one of the workload Deployment pods that mounts the CVMFS volume. The issue does happen independently as well when Jobs run. For example, we run automated tests via Github Actions, and about 70% of the jobs/clusters exhibit this issue (evidenced here by the failing Action after the 6hr timeout is reached: https://github.com/anvilproject/galaxy-tests/actions/workflows/edgetest.yaml).

We started seeing this with cvmfs-csi 2.0.0. We tried the latest release (2.1.1), but the same issue is occurring. There is nothing in the nodeplugin pod log that would indicate what is going on, but we'd be happy to provide any additional info that might be helpful. If it would help for debugging purposes, we can also give access to one of our temp dev clusters where the issue has occurred.

Attacher requires "patch" permission

Additional permissions are required that are not demonstrated in the example deployments.

Attacher cluster role:

...
  rule {
    api_groups = ["storage.k8s.io"]
    resources  = ["volumeattachments"]
    verbs      = ["get", "list", "watch", "update", "patch"]
  }
  rule {
    api_groups = ["storage.k8s.io"]
    resources  = ["volumeattachments/status"]
    verbs      = ["patch"]
  }
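
For deployments that use plain Kubernetes manifests rather than Terraform, the equivalent grant would look like the following (the ClusterRole name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cvmfs-csi-attacher   # illustrative name
rules:
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
```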

Repositories not mounting

I've been trying to upgrade to cvmfs-csi v2 and get it working with the galaxyproject repos, but listing the CVMFS repository folder shows nothing. A slight delay can be observed, suggesting that autofs may be trying to mount the folder.

I then backtracked and tried to recreate the basic deployment examples by following the instructions here: https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/docs/deploying.md and here: https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/docs/how-to-use.md. Same issue.

If I exec into the cvmfs-demo pod and run ls -l /my-cvmfs/atlas.cern.ch, I can see an additional delay, but then it errors out with "No such file or directory". I didn't see anything of note in the CSI pod logs. I wasn't quite sure how to get to the autofs logs.

Would appreciate some insights into how to debug this issue.
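
One thing worth ruling out first is whether the automount daemon is actually managing /cvmfs inside the nodeplugin container: the mount table should contain an autofs entry for /cvmfs, and cvmfs_config chksetup should report OK. Autofs debug output can also be obtained by running automount in the foreground with -f -d. A small sketch of the mount-table check (the helper name is illustrative):

```shell
# Sketch: check whether a directory is managed by autofs by looking for
# an autofs entry in /proc/mounts. Helper name is illustrative.
check_autofs_root() {
  mnt="$1"
  if awk -v m="$mnt" '$2 == m && $3 == "autofs" { found = 1 } END { exit !found }' /proc/mounts; then
    echo "$mnt is autofs-managed"
  else
    echo "$mnt is NOT autofs-managed (is the automount daemon running?)"
  fi
}

check_autofs_root /cvmfs
```

If /cvmfs is not autofs-managed, lookups like ls /cvmfs/atlas.cern.ch fail with "No such file or directory" rather than triggering a mount.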

Environment:
Provider: RKE2
Kubernetes Version: v1.24.4
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.2 LTS"

node plugin logs:

Sun, Dec 11 2022 11:37:56 am | I1211 06:07:56.403738 482051 main.go:92] Running CVMFS CSI plugin with [/csi-cvmfsplugin -v=5 --nodeid=ip-10-0-35-178.ec2.internal --endpoint=unix:///var/lib/kubelet/plugins/cvmfs.csi.cern.ch/csi.sock --drivername=cvmfs.csi.cern.ch --start-automount-daemon=true --role=identity,node]
Sun, Dec 11 2022 11:37:56 am | I1211 06:07:56.403824 482051 driver.go:139] Driver: cvmfs.csi.cern.ch
Sun, Dec 11 2022 11:37:56 am | I1211 06:07:56.403833 482051 driver.go:141] Version: v2.0.0 (commit: 1e30ea639263f74179f971aea1cfad170a6b5c89; build time: 2022-12-11 06:07:56.402341425 +0000 UTC m=+0.002192591; metadata: )
Sun, Dec 11 2022 11:37:56 am | I1211 06:07:56.403860 482051 driver.go:155] Registering Identity server
Sun, Dec 11 2022 11:37:56 am | I1211 06:07:56.403939 482051 driver.go:210] Exec-ID 1: Running command env=[] prog=/usr/bin/cvmfs2 args=[cvmfs2 --version]
Sun, Dec 11 2022 11:37:56 am | I1211 06:07:56.407309 482051 driver.go:210] Exec-ID 1: Process exited: exit status 0
Sun, Dec 11 2022 11:37:56 am | I1211 06:07:56.407333 482051 driver.go:170] CernVM-FS version 2.9.4
Sun, Dec 11 2022 11:37:56 am | I1211 06:07:56.407360 482051 driver.go:227] Exec-ID 2: Running command env=[] prog=/usr/bin/cvmfs_config args=[cvmfs_config setup]
Sun, Dec 11 2022 11:37:57 am | I1211 06:07:57.982777 482051 driver.go:227] Exec-ID 2: Process exited: exit status 0
Sun, Dec 11 2022 11:37:57 am | I1211 06:07:57.982831 482051 driver.go:233] Exec-ID 3: Running command env=[] prog=/usr/sbin/automount args=[automount]
Sun, Dec 11 2022 11:37:58 am | I1211 06:07:58.001347 482051 driver.go:233] Exec-ID 3: Process exited: exit status 0
Sun, Dec 11 2022 11:37:58 am | I1211 06:07:58.001409 482051 driver.go:240] Exec-ID 4: Running command env=[] prog=/usr/bin/mount args=[mount --make-shared /cvmfs]
Sun, Dec 11 2022 11:37:58 am | I1211 06:07:58.002743 482051 driver.go:240] Exec-ID 4: Process exited: exit status 0
Sun, Dec 11 2022 11:37:58 am | I1211 06:07:58.002760 482051 driver.go:187] Registering Node server with capabilities []
Sun, Dec 11 2022 11:37:58 am | I1211 06:07:58.002900 482051 grpcserver.go:106] Listening for connections on /var/lib/kubelet/plugins/cvmfs.csi.cern.ch/csi.sock
Sun, Dec 11 2022 11:37:58 am | I1211 06:07:58.408741 482051 grpcserver.go:136] Call-ID 1: Call: /csi.v1.Identity/GetPluginInfo
Sun, Dec 11 2022 11:37:58 am | I1211 06:07:58.410334 482051 grpcserver.go:137] Call-ID 1: Request: {}
Sun, Dec 11 2022 11:37:58 am | I1211 06:07:58.410419 482051 grpcserver.go:143] Call-ID 1: Response: {"name":"cvmfs.csi.cern.ch","vendor_version":"v2.0.0"}
Sun, Dec 11 2022 11:37:58 am | I1211 06:07:58.707310 482051 grpcserver.go:136] Call-ID 2: Call: /csi.v1.Node/NodeGetInfo
Sun, Dec 11 2022 11:37:58 am | I1211 06:07:58.707353 482051 grpcserver.go:137] Call-ID 2: Request: {}
Sun, Dec 11 2022 11:37:58 am | I1211 06:07:58.707445 482051 grpcserver.go:143] Call-ID 2: Response: {"node_id":"ip-10-0-35-178.ec2.internal"}
Sun, Dec 11 2022 11:38:08 am | I1211 06:08:08.440837 482051 grpcserver.go:136] Call-ID 3: Call: /csi.v1.Node/NodeUnpublishVolume
Sun, Dec 11 2022 11:38:08 am | I1211 06:08:08.440918 482051 grpcserver.go:137] Call-ID 3: Request: {"target_path":"/var/lib/kubelet/pods/03a7dca9-87d7-4085-9871-88bda5597ab6/volumes/kubernetes.io~csi/pvc-ae299e01-0ec5-4925-a2bf-b440eb423aef/mount","volume_id":"pvc-ae299e01-0ec5-4925-a2bf-b440eb423aef"}
Sun, Dec 11 2022 11:38:08 am | I1211 06:08:08.440982 482051 mountutil.go:99] Exec-ID 5: Running command env=[] prog=/usr/bin/umount args=[umount --recursive /var/lib/kubelet/pods/03a7dca9-87d7-4085-9871-88bda5597ab6/volumes/kubernetes.io~csi/pvc-ae299e01-0ec5-4925-a2bf-b440eb423aef/mount]
Sun, Dec 11 2022 11:38:08 am | I1211 06:08:08.448234 482051 mountutil.go:99] Exec-ID 5: Process exited: exit status 0
Sun, Dec 11 2022 11:38:08 am | I1211 06:08:08.448356 482051 grpcserver.go:143] Call-ID 3: Response: {}
Sun, Dec 11 2022 11:38:08 am | I1211 06:08:08.555011 482051 grpcserver.go:136] Call-ID 4: Call: /csi.v1.Node/NodeGetCapabilities
Sun, Dec 11 2022 11:38:08 am | I1211 06:08:08.555049 482051 grpcserver.go:137] Call-ID 4: Request: {}
Sun, Dec 11 2022 11:38:08 am | I1211 06:08:08.555138 482051 grpcserver.go:143] Call-ID 4: Response: {}
Sun, Dec 11 2022 11:38:17 am | I1211 06:08:17.355531 482051 grpcserver.go:136] Call-ID 5: Call: /csi.v1.Node/NodeUnpublishVolume
Sun, Dec 11 2022 11:38:17 am | I1211 06:08:17.355589 482051 grpcserver.go:137] Call-ID 5: Request: {"target_path":"/var/lib/kubelet/pods/25145b26-8019-4ce4-9195-c50dc0d69035/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount","volume_id":"pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5"}
Sun, Dec 11 2022 11:38:17 am | I1211 06:08:17.355645 482051 mountutil.go:99] Exec-ID 6: Running command env=[] prog=/usr/bin/umount args=[umount --recursive /var/lib/kubelet/pods/25145b26-8019-4ce4-9195-c50dc0d69035/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount]
Sun, Dec 11 2022 11:38:17 am | I1211 06:08:17.359151 482051 mountutil.go:99] Exec-ID 6: Process exited: exit status 0
Sun, Dec 11 2022 11:38:17 am | I1211 06:08:17.359227 482051 grpcserver.go:143] Call-ID 5: Response: {}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.675670 482051 grpcserver.go:136] Call-ID 6: Call: /csi.v1.Node/NodeUnpublishVolume
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.675743 482051 grpcserver.go:136] Call-ID 7: Call: /csi.v1.Node/NodeUnpublishVolume
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.675741 482051 grpcserver.go:137] Call-ID 6: Request: {"target_path":"/var/lib/kubelet/pods/7d15c20f-5d95-4b82-a2d4-fdb518d40f1f/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount","volume_id":"pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5"}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.675818 482051 mountutil.go:99] Exec-ID 7: Running command env=[] prog=/usr/bin/umount args=[umount --recursive /var/lib/kubelet/pods/7d15c20f-5d95-4b82-a2d4-fdb518d40f1f/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount]
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.675802 482051 grpcserver.go:137] Call-ID 7: Request: {"target_path":"/var/lib/kubelet/pods/b483ac10-c6e3-444d-96f0-3bd3ec9e0e8b/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount","volume_id":"pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5"}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.675922 482051 mountutil.go:99] Exec-ID 8: Running command env=[] prog=/usr/bin/umount args=[umount --recursive /var/lib/kubelet/pods/b483ac10-c6e3-444d-96f0-3bd3ec9e0e8b/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount]
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.675945 482051 grpcserver.go:136] Call-ID 9: Call: /csi.v1.Node/NodeUnpublishVolume
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.675962 482051 grpcserver.go:136] Call-ID 10: Call: /csi.v1.Node/NodeUnpublishVolume
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.675999 482051 grpcserver.go:137] Call-ID 9: Request: {"target_path":"/var/lib/kubelet/pods/b3e59a3a-12db-48bb-afdc-7c106d1de576/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount","volume_id":"pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5"}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.676007 482051 grpcserver.go:137] Call-ID 10: Request: {"target_path":"/var/lib/kubelet/pods/d5679f5d-8ff2-41d2-8869-e36e81e7f6da/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount","volume_id":"pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5"}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.675920 482051 grpcserver.go:136] Call-ID 8: Call: /csi.v1.Node/NodeUnpublishVolume
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.676044 482051 mountutil.go:99] Exec-ID 9: Running command env=[] prog=/usr/bin/umount args=[umount --recursive /var/lib/kubelet/pods/d5679f5d-8ff2-41d2-8869-e36e81e7f6da/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount]
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.676049 482051 mountutil.go:99] Exec-ID 10: Running command env=[] prog=/usr/bin/umount args=[umount --recursive /var/lib/kubelet/pods/b3e59a3a-12db-48bb-afdc-7c106d1de576/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount]
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.676112 482051 grpcserver.go:137] Call-ID 8: Request: {"target_path":"/var/lib/kubelet/pods/3863a604-5200-4648-bf81-ae82cfa168eb/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount","volume_id":"pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5"}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.676157 482051 mountutil.go:99] Exec-ID 11: Running command env=[] prog=/usr/bin/umount args=[umount --recursive /var/lib/kubelet/pods/3863a604-5200-4648-bf81-ae82cfa168eb/volumes/kubernetes.io~csi/pvc-486c1c4b-0be4-4fbe-983d-32a245b731b5/mount]
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.679577 482051 mountutil.go:99] Exec-ID 9: Process exited: exit status 0
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.679660 482051 grpcserver.go:143] Call-ID 10: Response: {}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.680455 482051 mountutil.go:99] Exec-ID 7: Process exited: exit status 0
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.680514 482051 grpcserver.go:143] Call-ID 6: Response: {}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.680709 482051 mountutil.go:99] Exec-ID 8: Process exited: exit status 0
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.680787 482051 grpcserver.go:143] Call-ID 7: Response: {}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.687334 482051 mountutil.go:99] Exec-ID 10: Process exited: exit status 0
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.687382 482051 mountutil.go:99] Exec-ID 11: Process exited: exit status 0
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.687442 482051 grpcserver.go:143] Call-ID 9: Response: {}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.687470 482051 grpcserver.go:143] Call-ID 8: Response: {}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.780016 482051 grpcserver.go:136] Call-ID 11: Call: /csi.v1.Node/NodeGetCapabilities
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.780087 482051 grpcserver.go:137] Call-ID 11: Request: {}
Sun, Dec 11 2022 11:38:18 am | I1211 06:08:18.780109 482051 grpcserver.go:143] Call-ID 11: Response: {}
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.342514 482051 grpcserver.go:136] Call-ID 12: Call: /csi.v1.Node/NodeGetCapabilities
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.342568 482051 grpcserver.go:137] Call-ID 12: Request: {}
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.342585 482051 grpcserver.go:143] Call-ID 12: Response: {}
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.343382 482051 grpcserver.go:136] Call-ID 13: Call: /csi.v1.Node/NodeGetCapabilities
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.343428 482051 grpcserver.go:137] Call-ID 13: Request: {}
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.343445 482051 grpcserver.go:143] Call-ID 13: Response: {}
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.344110 482051 grpcserver.go:136] Call-ID 14: Call: /csi.v1.Node/NodeGetCapabilities
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.344155 482051 grpcserver.go:137] Call-ID 14: Request: {}
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.344173 482051 grpcserver.go:143] Call-ID 14: Response: {}
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.344765 482051 grpcserver.go:136] Call-ID 15: Call: /csi.v1.Node/NodeGetCapabilities
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.344804 482051 grpcserver.go:137] Call-ID 15: Request: {}
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.344818 482051 grpcserver.go:143] Call-ID 15: Response: {}
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.345580 482051 grpcserver.go:136] Call-ID 16: Call: /csi.v1.Node/NodePublishVolume
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.345744 482051 grpcserver.go:137] Call-ID 16: Request: {"target_path":"/var/lib/kubelet/pods/110e7ec0-1c52-4bfe-b8f6-871d0b768115/volumes/kubernetes.io~csi/pvc-b08b6c81-c533-40a6-b29e-3a68016977a5/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":3}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1670738877264-8081-cvmfs.csi.cern.ch"},"volume_id":"pvc-b08b6c81-c533-40a6-b29e-3a68016977a5"}
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.346563 482051 mountutil.go:75] Exec-ID 12: Running command env=[] prog=/usr/bin/mount args=[mount /cvmfs /var/lib/kubelet/pods/110e7ec0-1c52-4bfe-b8f6-871d0b768115/volumes/kubernetes.io~csi/pvc-b08b6c81-c533-40a6-b29e-3a68016977a5/mount --rbind --make-slave]
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.348105 482051 mountutil.go:75] Exec-ID 12: Process exited: exit status 0
Sun, Dec 11 2022 11:41:24 am | I1211 06:11:24.348169 482051 grpcserver.go:143] Call-ID 16: Response: {}
Sun, Dec 11 2022 11:41:27 am | I1211 06:11:27.056814 482051 grpcserver.go:136] Call-ID 17: Call: /csi.v1.Node/NodeGetCapabilities
Sun, Dec 11 2022 11:41:27 am | I1211 06:11:27.056860 482051 grpcserver.go:137] Call-ID 17: Request: {}
Sun, Dec 11 2022 11:41:27 am | I1211 06:11:27.056875 482051 grpcserver.go:143] Call-ID 17: Response: {}
Sun, Dec 11 2022 11:42:41 am | I1211 06:12:41.003951 482051 grpcserver.go:136] Call-ID 18: Call: /csi.v1.Node/NodeGetCapabilities

Requesting help debugging a default installation

Hi,

I'm trying to test CVMFS automounts with the default deployment before modifying it to configure for another CVMFS repository.

I am testing this on a Kubernetes cluster in the Jetstream2 Openstack environment with a containerd runtime. I used the Helm chart to deploy the latest version of cvmfs-csi and am trying to test automounting CVMFS repositories as instructed in https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/docs/how-to-use.md.

However, I keep getting a "No such file or directory" error when I try to access the cern.ch CVMFS repositories from the cvmfs-demo pod:

/ # ls -l /my-cvmfs/atlas.cern.ch
ls: /my-cvmfs/atlas.cern.ch: No such file or directory

I can verify that all the cvmfs-csi resources are running and do not see any errors in the nodeplugin or controllerplugin pods:

ubuntu@terraform-ubuntu20-leader:~$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/cvmfs-cvmfs-csi-controllerplugin-5648d9b87d-hmsqc 2/2 Running 0 23s
pod/cvmfs-cvmfs-csi-nodeplugin-4jzwj 2/2 Running 0 23s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 443/TCP 79m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/cvmfs-cvmfs-csi-nodeplugin 1 1 1 1 1 23s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cvmfs-cvmfs-csi-controllerplugin 1/1 1 1 23s

NAME DESIRED CURRENT READY AGE
replicaset.apps/cvmfs-cvmfs-csi-controllerplugin-5648d9b87d 1 1 1 23s

Could you please point me to any troubleshooting steps I can perform to check why the repositories cannot be accessed? I have also tried other versions (2.0.0 and 2.1.0) of the Helm chart and run into the same issue.

Thanks!

configMap for default.d directory

Plugin version used: v2.0.0

In the latest release, only two configMaps are supported: one for defining the default.local file, and one for files in the config.d directory.

I'm trying to set up the plugin and need to use the CVMFS_CONFIG_REPOSITORY parameter.

I tried defining it in the default.local file, but running cvmfs_config chksetup produces this error:

Error: CVMFS_CONFIG_REPOSITORY can only be set in /etc/cvmfs/default.conf and /etc/cvmfs/default.d/*.conf (not in /etc/cvmfs/default.local)

In previous versions, there was a configMap for defining files inside the default.d directory.

  • Why was the default.d configMap deprecated?
  • Is there an alternative or workaround for setting CVMFS_CONFIG_REPOSITORY using the default.local and config.d configMaps?
  • Can a configMap for files inside the default.d directory be added back?
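
For reference, a sketch of what such a configMap could look like (file and object names are illustrative; the deployment would still need to mount it at /etc/cvmfs/default.d for CVMFS to pick it up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cvmfs-csi-default-d   # illustrative name
data:
  80-custom.conf: |
    # CVMFS_CONFIG_REPOSITORY is only accepted in default.conf
    # and default.d/*.conf, not in default.local.
    CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch
```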
