
synology-csi's Introduction

Synology CSI Driver for Kubernetes

The official Container Storage Interface driver for Synology NAS.

Container Images & Kubernetes Compatibility

Driver Name: csi.san.synology.com

Driver Version   Image                 Supported K8s Version
v1.1.3           synology-csi:v1.1.3   1.20+

The Synology CSI driver supports:

  • Access Modes: Read/Write Multiple Pods
  • Cloning
  • Expansion
  • Snapshot

Installation

Prerequisites

  • Kubernetes versions 1.19 or above
  • Synology NAS running:
    • DSM 7.0 or above
    • DSM UC 3.1 or above
  • Go version 1.21 or above is recommended
  • (Optional) Both Volume Snapshot CRDs and the common snapshot controller must be installed in your Kubernetes cluster if you want to use the Snapshot feature
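
If the Volume Snapshot CRDs and the common snapshot controller are not yet installed, one common way to add them is from the upstream kubernetes-csi/external-snapshotter repository. The commands below are a hedged sketch; the repository paths and the release-6.3 ref are assumptions, so check the external-snapshotter documentation for the release matching your cluster.

    # Hedged sketch: install the VolumeSnapshot CRDs and the snapshot controller
    # from kubernetes-csi/external-snapshotter (paths and ref are assumptions).
    kubectl apply -k "github.com/kubernetes-csi/external-snapshotter/client/config/crd?ref=release-6.3"
    kubectl apply -k "github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/snapshot-controller?ref=release-6.3"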

Notice

  1. Before installing the CSI driver, make sure you have created and initialized at least one storage pool and one volume on your DSM.
  2. Make sure that all the worker nodes in your Kubernetes cluster can connect to your DSM.
  3. After you complete the steps below, the full deployment of the CSI driver, including the snapshotter, will be installed. If you don’t need the Snapshot feature, you can install the basic deployment of the CSI driver instead.

Procedure

  1. Clone the git repository. git clone https://github.com/SynologyOpenSource/synology-csi.git

  2. Enter the directory. cd synology-csi

  3. Copy the client-info-template.yml file. cp config/client-info-template.yml config/client-info.yml

  4. Edit config/client-info.yml to configure the connection information for DSM. You can specify one or more storage systems on which the CSI volumes will be created. Change the following parameters as needed:

    • host: The IPv4 address of your DSM.
    • port: The port for connecting to DSM. The default port is 5000 for HTTP and 5001 for HTTPS. Only change this if you use a custom port.
    • https: Set to "true" to use HTTPS for secure connections. Make sure the port is configured accordingly.
    • username, password: The credentials for connecting to DSM.
  5. Install

    • YAML Run ./scripts/deploy.sh run to install the driver. This will be a full deployment, which means you'll be building and running all CSI services as well as the snapshotter. If you want a basic deployment, which doesn't include installing a snapshotter, change the command as instructed below.

      • full: ./scripts/deploy.sh run
      • basic: ./scripts/deploy.sh build && ./scripts/deploy.sh install --basic

      If you don’t need to build the driver locally and want to pull the image from Docker Hub instead, run the command as instructed below.

      • full: ./scripts/deploy.sh install --all
      • basic: ./scripts/deploy.sh install --basic

      Running the bash script will:

      • Create a namespace named "synology-csi". This is where the driver will be installed.
      • Create a secret named "client-info-secret" using the credentials from the client-info.yml you configured in the previous step.
      • Build a local image and deploy the CSI driver.
      • Create a default storage class named "synology-iscsi-storage" that uses the "Retain" policy.
      • Create a volume snapshot class named "synology-snapshotclass" that uses the "Delete" policy. (Full deployment only)
    • HELM (Local Development)

      1. kubectl create ns synology-csi
      2. kubectl create secret -n synology-csi generic client-info-secret --from-file=./config/client-info.yml
      3. cd deploy/helm; make up
  6. Check if the status of all pods of the CSI driver is Running. kubectl get pods -n synology-csi
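
As a convenience, you can also wait until every pod in the namespace reports Ready instead of polling manually. This is a hedged example that assumes the default "synology-csi" namespace created by deploy.sh.

    # Hedged example: block until all synology-csi pods are Ready, or time out after 5 minutes
    kubectl -n synology-csi wait --for=condition=Ready pod --all --timeout=300s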

CSI Driver Configuration

Storage classes and the secret are required for the CSI driver to function properly. This section explains how to do the following things:

  1. Create the storage system secret (not mandatory, because deploy.sh completes this configuration when you set up the config file mentioned previously)
  2. Configure storageclasses
  3. Configure volumesnapshotclasses

Creating a Secret

Create a secret to specify the storage system address and credentials (username and password). Normally deploy.sh creates this secret from the config file, but if you want to create or recreate it manually, follow the instructions below:

  1. Edit the config file config/client-info.yml or create a new one like the example shown here:

    clients:
    - host: 192.168.1.1
      port: 5000
      https: false
      username: <username>
      password: <password>
    - host: 192.168.1.2
      port: 5001
      https: true
      username: <username>
      password: <password>
    

    The clients field can contain more than one Synology NAS. Separate the entries with a leading - (standard YAML list syntax).

  2. Create the secret using the following command (usually done by deploy.sh):

    kubectl create secret -n <namespace> generic client-info-secret --from-file=config/client-info.yml
    
    • Make sure to replace <namespace> with synology-csi. This is the default namespace. Change it to your custom namespace if needed.
    • If you change the secret name "client-info-secret" to a different one, make sure that all files at deploy/kubernetes/<k8s version>/ are using the secret name you set.
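
To confirm the secret was created as expected, you can inspect it with kubectl. The command below is a hedged example that assumes the default namespace and secret name used above; describe shows only key names and sizes, not the credentials themselves.

    # Hedged example: verify the secret exists and contains the client-info.yml key
    kubectl -n synology-csi describe secret client-info-secret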

Creating Storage Classes

Create and apply StorageClasses with the properties you want.

  1. Create YAML files using the one at deploy/kubernetes/<k8s version>/storage-class.yml as an example. Its content is shown below:

    iSCSI Protocol

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"
      name: synostorage
    provisioner: csi.san.synology.com
    parameters:
      fsType: 'btrfs'
      dsm: '192.168.1.1'
      location: '/volume1'
      formatOptions: '--nodiscard'
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    

    SMB/CIFS Protocol

    Before creating an SMB/CIFS storage class, you must create a secret and specify the DSM user whom you want to give permissions to.

    apiVersion: v1
    kind: Secret
    metadata:
      name: cifs-csi-credentials
      namespace: default
    type: Opaque
    stringData:
      username: <username>  # DSM user account accessing the shared folder
      password: <password>  # DSM user password accessing the shared folder
    

    After creating the secret, create a storage class and reference the secret in the node-stage-secret parameters. This step is required for SMB; otherwise, errors will occur when staging volumes.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: synostorage-smb
    provisioner: csi.san.synology.com
    parameters:
      protocol: "smb"
      dsm: '192.168.1.1'
      location: '/volume1'
      csi.storage.k8s.io/node-stage-secret-name: "cifs-csi-credentials"
      csi.storage.k8s.io/node-stage-secret-namespace: "default"
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    
  2. Configure the StorageClass properties by assigning the parameters in the table below. You can leave a parameter blank if you don't have a preference:

    Name | Type | Description | Default | Supported protocols
    dsm | string | The IPv4 address of your DSM, which must be included in client-info.yml so that the CSI driver can log in to DSM | - | iSCSI, SMB
    location | string | The location (/volume1, /volume2, ...) on DSM where the LUN for the PersistentVolume will be created | - | iSCSI, SMB
    fsType | string | The file system used to format PersistentVolumes when they are mounted on pods. This parameter only works with iSCSI; for SMB, the fsType is always 'cifs' | 'ext4' | iSCSI
    protocol | string | The storage backend protocol. Enter 'iscsi' to create LUNs or 'smb' to create shared folders on DSM | 'iscsi' | iSCSI, SMB
    formatOptions | string | Additional options/arguments passed to the mkfs.* command. See the Linux manual page for your chosen file system | - | iSCSI
    csi.storage.k8s.io/node-stage-secret-name | string | The name of the node-stage-secret. Required when a DSM shared folder is accessed via SMB | - | SMB
    csi.storage.k8s.io/node-stage-secret-namespace | string | The namespace of the node-stage-secret. Required when a DSM shared folder is accessed via SMB | - | SMB

    Notice

    • If you leave the parameter location blank, the CSI driver will choose a volume on DSM with available storage to create the volumes.
    • All iSCSI volumes created by the CSI driver are Thin Provisioned LUNs on DSM. This will allow you to take snapshots of them.
  3. Apply the YAML files to the Kubernetes cluster.

    kubectl apply -f <storageclass_yaml>
    
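
Once a StorageClass is applied, workloads request storage from it through a PersistentVolumeClaim. The manifest below is a minimal hedged example that assumes the "synostorage" class from the iSCSI sample above; the claim name and size are placeholders for illustration.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: synostorage-test-claim      # hypothetical name, for illustration only
    spec:
      storageClassName: synostorage     # must match the StorageClass you applied
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi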

Creating Volume Snapshot Classes

Create and apply VolumeSnapshotClasses with the properties you want.

  1. Create YAML files using the one at deploy/kubernetes/<k8s version>/snapshotter/volume-snapshot-class.yml as an example. Its content is shown below:

    apiVersion: snapshot.storage.k8s.io/v1beta1    # v1 for kubernetes v1.20 and above
    kind: VolumeSnapshotClass
    metadata:
      name: synology-snapshotclass
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"
    driver: csi.san.synology.com
    deletionPolicy: Delete
    # parameters:
    #   description: 'Kubernetes CSI'
    #   is_locked: 'false'
    
  2. Configure the VolumeSnapshotClass properties by assigning the following parameters; all parameters are optional:

    Name | Type | Description | Default | Supported protocols
    description | string | The description of the snapshot on DSM | "" | iSCSI
    is_locked | string | Whether to lock the snapshot on DSM | 'false' | iSCSI, SMB
  3. Apply the YAML files to the Kubernetes cluster.

    kubectl apply -f <volumesnapshotclass_yaml>
    
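
After the VolumeSnapshotClass is applied, snapshots are requested through VolumeSnapshot objects. The manifest below is a hedged sketch that assumes the "synology-snapshotclass" name above and a hypothetical existing PVC called "synostorage-test-claim"; adjust both names to your setup.

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: synostorage-test-snapshot            # hypothetical name, for illustration only
    spec:
      volumeSnapshotClassName: synology-snapshotclass
      source:
        persistentVolumeClaimName: synostorage-test-claim   # an existing PVC to snapshot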

Building & Manually Installing

By default, the CSI driver will pull the latest image from Docker Hub.

If you want to use images you built locally for installation, edit all files under deploy/kubernetes/<k8s version>/ and make sure imagePullPolicy: IfNotPresent is included in every csi-plugin container.

Building

  • To build the CSI driver, execute make.
  • To build the synocli dev tool, execute make synocli. The output binary will be at bin/synocli.
  • To run unit tests, execute make test.
  • To build a docker image, run ./scripts/deploy.sh build. Afterwards, run docker images to check the newly created image.
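
For example, a typical local build-and-check sequence might look like the following. This is a hedged sketch; the exact image name and tag produced by the script may differ in your environment.

    ./scripts/deploy.sh build            # build the synology-csi image locally
    docker images | grep synology-csi    # confirm the freshly built image is present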

Installation

  • To install all pods of the CSI driver, run ./scripts/deploy.sh install --all
  • To install pods of the CSI driver without the snapshotter, run ./scripts/deploy.sh install --basic
  • Run ./scripts/deploy.sh --help to see more information on the usage of the commands.

Uninstallation

If you are no longer using the CSI driver, make sure that no other resources in your Kubernetes cluster are using storage managed by Synology CSI driver before uninstalling it.

  • ./scripts/uninstall.sh
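
Before uninstalling, it can help to confirm that no PersistentVolumes provisioned by the driver remain. The command below is a hedged example of one way to list them by filtering on the CSI driver name.

    # Hedged example: list PVs whose CSI driver is csi.san.synology.com
    kubectl get pv -o jsonpath='{range .items[?(@.spec.csi.driver=="csi.san.synology.com")]}{.metadata.name}{"\n"}{end}'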

synology-csi's People

Contributors

chihyuwu, golgautier, inductor, jeanfabrice, kincl, kuougly, outductor, ressu, rtim75, synologyopensource, vaskozl


synology-csi's Issues

SMB Permission Errors

Containers using non-zero UIDs hit permission errors, and there seems to be no setting to remedy this. iSCSI has the 10-PVC limit, which is not viable.

This can usually be fixed by setting:

securityContext:
  runAsUser: 0

But ideally any user in the Pod should be able to write, or this should be configurable.
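
For reference, a minimal sketch of the workaround described above, assuming a generic busybox Pod and a hypothetical SMB-backed PVC named "smb-test-claim" (all names are illustrative only):

apiVersion: v1
kind: Pod
metadata:
  name: smb-permissions-test        # hypothetical name, for illustration only
spec:
  securityContext:
    runAsUser: 0                    # the workaround above: run as root so writes to the SMB mount succeed
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "touch /data/ok && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: smb-test-claim   # hypothetical PVC backed by an SMB storage class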

What are the proper steps to update

Could you add the proper steps to update to the README? I just ran the installer again and ended up with a few errors.

./scripts/deploy.sh install --all
==== Creates namespace and secrets, then installs synology-csi ====
Deploy Version: v1.20
Error from server (AlreadyExists): namespaces "synology-csi" already exists
error: failed to create secret secrets "client-info-secret" already exists
mkdir: cannot create directory ‘/var/lib/kubelet’: Permission denied
serviceaccount/csi-controller-sa unchanged
clusterrole.rbac.authorization.k8s.io/synology-csi-controller-role configured
clusterrolebinding.rbac.authorization.k8s.io/synology-csi-controller-role unchanged
statefulset.apps/synology-csi-controller configured
csidriver.storage.k8s.io/csi.san.synology.com unchanged
namespace/synology-csi unchanged
serviceaccount/csi-node-sa unchanged
clusterrole.rbac.authorization.k8s.io/synology-csi-node-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/synology-csi-node-role unchanged
daemonset.apps/synology-csi-node configured
The StorageClass "synology-iscsi-storage" is invalid: parameters: Forbidden: updates to parameters are forbidden.
serviceaccount/csi-snapshotter-sa unchanged
clusterrole.rbac.authorization.k8s.io/synology-csi-snapshotter-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/synology-csi-snapshotter-role unchanged
statefulset.apps/synology-csi-snapshotter configured
error: unable to recognize "/home/username/git/kubernetes/synology-csi/deploy/kubernetes/v1.20/snapshotter/volume-snapshot-class.yml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"

The iSCSI remove policy

After successfully creating a PVC, an iSCSI LUN is generated on my NAS.
I tried to remove the PVC, but the iSCSI LUN was not removed after the PVC deletion succeeded.

Are there any rules for removing the iSCSI LUN, or do I have to remove it manually?

Upgrading to 1.1.0 breaks existing storageclasses with `RPC error: rpc error: code = InvalidArgument desc = Unknown protocol`

Since upgrading to 1.1.0 mounting no longer works. The daemonset csi-plugin logs:

2022-04-28T08:34:31Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = InvalidArgument desc = Unknown protocol

I tried adding protocol: iscsi to my storage classes, but Kubernetes forbids me on the grounds that parameters can't be edited after storage class creation.

I expect the csi-plugin to default to iSCSI and to be backwards compatible.

Unable to log in, getting 402

Hi, all

I tried an admin account and also created a new account, but the controller always returns a "Failed to login" error.
Does anyone know more about how to set up the CSI?

Volume directory on host quickly disappears after mount

Hi,

the volume on my DiskStation (2x 920+ in HA) is provisioned correctly and attached as a block device (/dev/sdd in my case) on the host system. But it is expected to be mounted in /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d1e722ba-35e7-4222-a797-1e66eb40c755/globalmount, which does not exist. Only /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv exists.

On further investigation I found out that this directory briefly appears and then disappears (for roughly 1 s) around this log entry:

[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/utils.go:105] GRPC request: {} 
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/utils.go:110] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}}]} 
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Node/NodeStageVolume 
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/utils.go:105] GRPC request: {"staging_target_path":"/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d1e722ba-35e7-4222-a797-1e66eb40c755/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"dsm":"192.168.2.210","storage.kubernetes.io/csiProvisionerIdentity":"1633291144215-8081-csi.san.synology.com"},"volume_id":"080af020-d433-4ea3-aa2a-1773a9132e3f"} 
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/initiator.go:109] Session[iqn.2000-01.com.synology:Hossnercloud-HA.pvc-d1e722ba-35e7-4222-a797-1e66eb40c755] already exists. 
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:53Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = stat /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d1e722ba-35e7-4222-a797-1e66eb40c755/globalmount: no such file or directory 

At this point I do not know how to debug this further. Possibly a hidden error during mounting.

I am on MicroK8S 1.21 by the way.

Edit: The block device should already be formatted with ext4 at this point, but it is not. This is probably the cause of the mount failing. What might have skipped formatting during provisioning?
Edit2: Manually formatting /dev/sdd with ext4 did not help the csi-plugin with mounting, but I was able to mount it manually. This does not seem to be recognized by K8s though.

Compatibility with Nomad

Hello! I was wondering if synology-csi works with Nomad? At first glance it would appear there is only support for Kubernetes, but I just wanted to double check. Thank you

standard_init_linux.go:228: exec user process caused: exec format error

I have installed csi synology driver but I am getting this error: "standard_init_linux.go:228: exec user process caused: exec format error"
All pods in csi-synology namespace are crashing :( with that message. Can anyone assist me? I am running a cluster k3s kubernetes on 6 raspberry pi v4 with ubuntu installed.

User permission for csi

Hi all,

is there a way to use the synology-csi without an admin account/permission, i.e. with reduced permissions?
It does not feel good that an admin account/service user with username/password is available in the Kubernetes cluster and potentially usable by others.

Limit of 10 Volumes

The Synology NAS has a limit of 10 iSCSI Targets that can be created. Every time a new volume is created, the CSI driver creates a LUN/Target pair, so only 10 volumes can be created. For use cases where the NAS should serve as the storage provider for a whole cluster, this number is far too small.

In theory the maximum number of volumes could be 320, because every Target can handle 32 LUNs. So I suggest changing the volume creation mechanism to reuse existing Targets once the maximum number of Targets is reached.

Support for securityContext for pods

Installing the prometheus operator helm chart with defaults (https://prometheus-community.github.io/helm-charts, kube-prometheus-stack) is by default setting this for the prometheus instance:

securityContext:
  runAsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 2000

This makes the "prometheus-kube-prometheus-stack-prometheus-0" pod go into a crash loop with the error "unable to create mmap-ed active query log" in its logs.

Changing the prometheusSpec securityContext like this:

securityContext:
  runAsGroup: 0
  runAsNonRoot: true
  runAsUser: 0
  fsGroup: 2000

makes it all work, but then it is most likely running with root permissions on the file system.

This seems to be an issue with the CSI implementation, which doesn't appear to support fsGroup handling. For example, Longhorn does this with "fsGroupPolicy: ReadWriteOnceWithFSType", which makes each volume be examined at mount time to determine whether permissions should be recursively applied.
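
For context, CSI drivers generally declare how fsGroup should be handled through the fsGroupPolicy field of their CSIDriver object. The manifest below is only a hedged illustration of what such a declaration looks like in general; it does not reflect the current behavior of synology-csi, and the chosen policy value and other spec fields are assumptions.

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.san.synology.com
spec:
  attachRequired: true              # assumption for illustration
  podInfoOnMount: true              # assumption for illustration
  # "File" asks the kubelet to always apply fsGroup ownership on mount;
  # "ReadWriteOnceWithFSType" (the Longhorn example above) applies it conditionally.
  fsGroupPolicy: File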

Please add support for kubernetes 1.22

Kubernetes 1.22 dropped the v1beta1 API for VolumeAttachment and moved it to stable. As a result the csi-attacher container throws the following logs:

I0913 07:45:26.832278       1 reflector.go:188] Listing and watching *v1beta1.VolumeAttachment from k8s.io/client-go/informers/factory.go:135
E0913 07:45:26.852873       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource
I0913 07:45:27.853155       1 reflector.go:188] Listing and watching *v1beta1.VolumeAttachment from k8s.io/client-go/informers/factory.go:135
E0913 07:45:27.857884       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource

Please update the CSI attacher code to make it compatible with the latest versions of Kubernetes.

CHAP configuration for iscsi targets

Is there documentation on how/where to configure CHAP settings when creating the iscsi targets/luns? Is there a way to store that in a secret and point to it from the storageclass parameter key?

I have the CSI driver setup and working, and it creates the targets and luns as expected, but I don't see where to provide CHAP credentials. If I create the PVC in my k8s cluster, it will create the target and luns, but will fail to connect because my nodes are passing CHAP credentials to the Synology NAS. If I edit the target in DSM, then the lun mounts as expected.

This is a new install, so I am not sure what will happen with scaling up and down yet, but mobility between nodes should be fine.

For references, my cluster is using Ubuntu 22.04LTS on Raspberry Pi 4B Nodes, which are PXE Booting to an iSCSI root off the Synology. So the iscsi configuration on the nodes and the synology are configured and working correctly. And as stated, after the PVCs are deployed and the PVs are created, I can see the LUNs and Targets in the DSM. If I add the CHAP configuration at that point, then the volumes mount and run as expected.

Issue when using synology-csi

Hello, I'm trying to use the synology-csi driver with my Kubernetes cluster and I ran into this problem.

My NAS DSM version is 7.0, but my k8s cluster version is v1.22.4.

This is my issue seen in the synology-csi-controller pod

(mers/factory.go:135 2022-01-16T22:09:13.211693168+01:00 csi-attacher E0116 21:09:13.211663       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource)

Do you think it's because of the version of my cluster?

Thanks

Helm Chart

Can we get a Helm chart? Helm seems to be the de facto way to deploy things in Kubernetes these days.

Couldn't find any host available to create Volume

After spending many hours trying to solve the problem myself, I need help please...

I'm already using multiple LUNs (9 targets and 9 LUNs) on the DS918+ with my Kubernetes cluster, but via manually defined iSCSI PVs, not with the Synology CSI.

Versions:

Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2
Synology CSI: latest

client-info.yml

clients:
  - host: 192.168.2.224
    port: 5000
    https: false
    username: user
    password: password

storageclass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi-storage
  annotations:
    storage-class.kubernetes.io/is-default-class: "false"
provisioner: csi.san.synology.com
parameters:
  dsm: '192.168.2.224'
  location: '/volume1'
  fsType: 'btrfs'
  formatOptions: '--nodiscard'
  type: thin
reclaimPolicy: Retain
allowVolumeExpansion: true

claim:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  namespace: test
spec:
  storageClassName: synology-iscsi-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

job:

apiVersion: batch/v1
kind: Job
metadata:
  name: write
  namespace: test
spec:
  template:
    metadata:
      name: write
    spec:
      containers:
        - name: write
          image: registry.access.redhat.com/ubi8/ubi-minimal:latest
          command: ["dd","if=/dev/zero","of=/mnt/pv/test.img","bs=1G","count=1","oflag=dsync"]
          volumeMounts:
            - mountPath: "/mnt/pv"
              name: test-volume
      volumes:
        - name: test-volume
          persistentVolumeClaim:
            claimName: test-claim
      restartPolicy: Never

Controller log:

controller.go:1279] provision "test/test-claim" class "synology-iscsi-storage": started
connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"test", Name:"test-claim", UID:"3024a15d-4557-4c40-ba86-e0ebdf7d1ac9", APIVersion:"v1", ResourceVersion:"30936475", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "test/test-claim"
connection.go:184] GRPC request: {"capacity_range":{"required_bytes":4294967296},"name":"pvc-3024a15d-4557-4c40-ba86-e0ebdf7d1ac9","parameters":{"dsm":"192.168.2.224","formatOptions":"--nodiscard","fsType":"btrfs","location":"/volume1"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"btrfs"}},"access_mode":{"mode":1}}]}
controller.go:956] error syncing claim "3024a15d-4557-4c40-ba86-e0ebdf7d1ac9": failed to provision volume with StorageClass "synology-iscsi-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume

What does "couldn't find any host..." mean? It can't find any DSM? It can't find any node?

I have read all the log files 1000 times... I have no more ideas for resolving the issue and can't see the forest for the trees anymore... Please help ;-)

Unable to create SMB PVs - "Already existing volume name with different capacity"

I am entirely unable to create SMB PVs. After creating the secret, storage class, and PVC, the PV is never created, and the error below is logged:

  Type     Reason                Age              From                                                             Message
  ----     ------                ----             ----                                                             -------
  Normal   ExternalProvisioning  7s (x3 over 8s)  persistentvolume-controller                                      waiting for a volume to be created, either by external provisioner "csi.san.synology.com" or manually created by system administrator
  Normal   Provisioning          4s (x3 over 8s)  csi.san.synology.com_mpc01_c4b84a4f-d2b0-46f6-95f8-ae3f75e7ad4f  External provisioner is provisioning volume for claim "default/ubuntu-test"
  Warning  ProvisioningFailed    3s (x3 over 7s)  csi.san.synology.com_mpc01_c4b84a4f-d2b0-46f6-95f8-ae3f75e7ad4f  failed to provision volume with StorageClass "synostorage-smb": rpc error: code = AlreadyExists desc = Already existing volume name with different capacity

However, something does actually get created on the Synology device: if I go to Control Panel / Shared Folders, I see the k8s-csi-pvc-.... folders. But no corresponding PV shows up in K8s.

The manifests I am using are below:


apiVersion: v1
kind: Secret
metadata:
  name: cifs-csi-credentials
  namespace: synology-csi
type: Opaque
stringData:
  username: testuser  # DSM user account accessing the shared folder
  password: testpass  # DSM user password accessing the shared folder
  
---

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synostorage-smb
provisioner: csi.san.synology.com
parameters:
  protocol: smb
  csi.storage.k8s.io/node-stage-secret-name: cifs-csi-credentials
  csi.storage.k8s.io/node-stage-secret-namespace: synology-csi
reclaimPolicy: Delete
allowVolumeExpansion: true

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ubuntu-test
  labels:
    app: containerized-data-importer
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-arm64.img
spec:
  storageClassName: synostorage-smb
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Support for k8s volume-populators appears to be missing

It appears this CSI driver does not support Kubernetes' volume populators, which allow Custom Resources to populate volumes. In my case, I have an up-and-running k8s cluster with a Synology storage backend, and the synology-csi driver is able to create persistent volumes without any issues.

However, I have trouble populating volumes using the containerized-data-importer CRD, which relies heavily on the volume populator and dataSourceRef features of the k8s volume specification.

Steps to reproduce:

  • Assume an up-and-running k8s cluster with a Synology storage backend and a fully functional synology-csi driver.
  • Deploy the CDI CRD following these steps into your cluster.
  • Deploy the DataVolume example provided in the first yaml example here into your cluster.
  • The dataVolume CDI CR controller creates an associated pvc with the same name (example-import-dv) which remains in pending state indefinitely:
$ k describe pvc example-import-dv
Name:          example-import-dv
Namespace:     default
StorageClass:  synology-csi-retain
Status:        Pending
Volume:
Labels:        alerts.k8s.io/KubePersistentVolumeFillingUp=disabled
               app=containerized-data-importer
               app.kubernetes.io/component=storage
               app.kubernetes.io/managed-by=cdi-controller
Annotations:   cdi.kubevirt.io/storage.contentType: kubevirt
               cdi.kubevirt.io/storage.pod.phase: Pending
               cdi.kubevirt.io/storage.preallocation.requested: false
               cdi.kubevirt.io/storage.usePopulator: true
               volume.beta.kubernetes.io/storage-provisioner: csi.san.synology.com
               volume.kubernetes.io/storage-provisioner: csi.san.synology.com     
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
DataSource:
  APIGroup:  cdi.kubevirt.io
  Kind:      VolumeImportSource
  Name:      volume-import-source-22af4f80-2646-4e85-b19b-90d7006f29e5
Used By:     <none>
Events:
  Type    Reason                       Age                    From                                                                 Message
  ----    ------                       ----                   ----                                                                 -------
  Normal  CreatedPVCPrimeSuccessfully  6m21s                  import-populator                                                     PVC Prime created successfully
  Normal  Provisioning                 3m15s (x4 over 6m21s)  csi.san.synology.com_yvr5-lf09_cdf5a02e-5e95-475b-a438-d40632de8ac3  External provisioner is provisioning volume for claim "default/example-import-dv"
  Normal  Provisioning                 3m15s (x4 over 6m21s)  external-provisioner                                                 Assuming an external populator will provision the volume
  Normal  ExternalProvisioning         56s (x26 over 6m21s)   persistentvolume-controller                                          waiting for a volume to be created, either by external provisioner "csi.san.synology.com" or manually created by system administrator

The CRD creates an intermediate data-importer pod, and this pod in turn creates intermediate volumes. The data-importer pod fails to start up because it fails to attach its intermediate volume. Here are the events visible on the importer pod:

  Warning  FailedMount             2m34s                kubelet                  Unable to attach or mount volumes: unmounted volumes=[cdi-data-vol], unattached volumes=[cdi-data-vol kube-api-access-7h9n6]: timed out waiting for the condition
  Warning  FailedMapVolume         65s (x13 over 11m)   kubelet                  MapVolume.SetUpDevice failed for volume "pvc-5194ebf0-8cca-4a78-9459-02afa104ac3e" : kubernetes.io/csi: blockMapper.SetUpDevice failed to get CSI client: driver name csi.san.synology.com not found in the list of registered CSI drivers
  Warning  FailedMount             19s (x4 over 9m20s)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[cdi-data-vol], unattached volumes=[kube-api-access-7h9n6 cdi-data-vol]: timed out waiting for the condition

nomad volume creation works, but not usable

Creating a volume does work, but there is still a problem with the access mode within Nomad:

$ nomad volume  status
Container Storage Interface
ID    Name  Plugin ID  Schedulable  Access Mode
test  test  synology   true         <none>

My current configuration for the Nomad CSI plugin job is like this:

job "plugin-synology" {
  type = "system"
  group "controller" {
    task "plugin" {
      driver = "docker"
      config {
        image = "docker.io/synology/synology-csi:v1.0.0"
        privileged = true
        volumes = [
          "local/csi.yaml:/etc/csi.yaml",
          "/:/host",
        ]
        args = [
          "--endpoint",
          "unix://csi/csi.sock",
          "--client-info",
          "/etc/csi.yaml",
        ]
      }
      template {
          destination = "local/csi.yaml"
          data = <<EOF
---
clients:
- host: 192.168.1.2
  port: 8443
  https: true
  username: nomad
  password: <password>
EOF
      }
      csi_plugin {
        id        = "synology"
        type      = "monolith"
        mount_dir = "/csi"
      }
      resources {
        cpu    = 256
        memory = 256
      }
    }
  }
}

And the volume definition for nomad volume create is:

id        = "test"
name      = "test"
type      = "csi"
plugin_id = "synology"

capacity_min = "1GiB"
capacity_max = "2GiB"

capability {
  access_mode = "single-node-writer"
  attachment_mode = "file-system"
}

mount_options {
  mount_flags = ["rw"]
}

Originally posted by @mabunixda in #14 (comment)

Kubelet space/inodes usage (e.g. `kubelet_volume_stats_used_bytes`) is missing

Typically metrics for volumes are available via the kubelet summary API (/stats/summary).

Monitoring solutions like Prometheus with Alertmanager scrape volume usage metrics from the kubelet and alert when a disk is filling up. This doesn't work when using synology-csi, since there are no such metrics; the CSI driver does not seem to implement them.

Missing:

  • kubelet_volume_stats_used_bytes
  • kubelet_volume_stats_inodes

There are some histogram metrics (less useful) that are available:

  • kubelet_volume_metric_collection_duration_seconds_bucket

Reporting the volume usage is critical to avoid cases where one runs out of disk space, leading ultimately to application failure.

Error creating volume using SMB protocol

Hello,

Thanks for all your work. This integration looks very good.

I have tried to use it, but I am getting the error below:

Name:          test
Namespace:     vaultwarden
StorageClass:  synology-smb-storage
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: csi.san.synology.com
               volume.kubernetes.io/storage-provisioner: csi.san.synology.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason                Age                From                                                             Message
  ----     ------                ----               ----                                                             -------
  Normal   ExternalProvisioning  14s (x3 over 26s)  persistentvolume-controller                                      waiting for a volume to be created, either by external provisioner "csi.san.synology.com" or manually created by system administrator
  Normal   Provisioning          10s (x5 over 26s)  csi.san.synology.com_node2_2fb7c8e5-b9d1-4829-9e76-d2dff23ee566  External provisioner is provisioning volume for claim "vaultwarden/test"
  Warning  ProvisioningFailed    10s (x5 over 26s)  csi.san.synology.com_node2_2fb7c8e5-b9d1-4829-9e76-d2dff23ee566  failed to provision volume with StorageClass "synology-smb-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume

I have read the documentation and have checked that the same host is configured both in the secret referenced by the storage class and in the secret that stores the clients. Here they are.

StorageClass

Name:            synology-smb-storage
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"synology-smb-storage"},"parameters":{"csi.storage.k8s.io/node-stage-secret-name":"cifs-csi-credentials","csi.storage.k8s.io/node-stage-secret-namespace":"synology-csi","dsm":"192.168.30.13","location":"/volume1/KubernetesVolumes","protocol":"smb"},"provisioner":"csi.san.synology.com","reclaimPolicy":"Retain"}

Provisioner:           csi.san.synology.com
Parameters:            csi.storage.k8s.io/node-stage-secret-name=cifs-csi-credentials,csi.storage.k8s.io/node-stage-secret-namespace=synology-csi,dsm=192.168.30.13,location=/volume1/KubernetesVolumes,protocol=smb
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Retain
VolumeBindingMode:     Immediate
Events:                <none>

StorageClass Secret

apiVersion: v1
data:
  password: xxxxx
  username: xxxxx
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{},"name":"cifs-csi-credentials","namespace":"synology-csi"},"stringData":{"password":"UGVyJmNvMTgxMDE2","username":"ampkaWF6"},"type":"Opaque"}
  creationTimestamp: "2022-06-20T19:25:35Z"
  name: cifs-csi-credentials
  namespace: synology-csi
  resourceVersion: "7344539"
  uid: f283712a-a557-4f5a-83b2-dfea269476c7
type: Opaque

Clients secret file

apiVersion: v1
data:
  client-info.yml: xxxxx
kind: Secret
metadata:
  creationTimestamp: "2022-06-20T18:44:45Z"
  name: client-info-secret
  namespace: synology-csi
  resourceVersion: "7338982"
  uid: df09b074-6008-4df2-a5e6-7a870bc840af
type: Opaque

And content of client-info.yml is

---
clients:
  - host: 192.168.30.13
    port: 5001
    https: true
    username: xxxx
    password: xxxxx

I think everything is configured properly. I can't find any error.

Logs from the pods of the synology-csi-node deployment look fine (no errors). The only error I can see is from the controller.

csi-provisioner container

I0620 19:45:43.549885       1 controller.go:1279] provision "vaultwarden/test" class "synology-smb-storage": started
I0620 19:45:43.550114       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I0620 19:45:43.550152       1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-15584bfb-4154-4d8c-9c3e-64a150d562f1","parameters":{"dsm":"192.168.30.13","location":"/volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I0620 19:45:43.550269       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"vaultwarden", Name:"test", UID:"15584bfb-4154-4d8c-9c3e-64a150d562f1", APIVersion:"v1", ResourceVersion:"7346608", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "vaultwarden/test"
I0620 19:45:43.838611       1 connection.go:186] GRPC response: {}
I0620 19:45:43.838809       1 connection.go:187] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
I0620 19:45:43.838894       1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = Internal desc = Couldn't find any host available to create Volume
I0620 19:45:43.839015       1 controller.go:1074] Final error received, removing PVC 15584bfb-4154-4d8c-9c3e-64a150d562f1 from claims in progress
W0620 19:45:43.839048       1 controller.go:933] Retrying syncing claim "15584bfb-4154-4d8c-9c3e-64a150d562f1", failure 9
E0620 19:45:43.839104       1 controller.go:956] error syncing claim "15584bfb-4154-4d8c-9c3e-64a150d562f1": failed to provision volume with StorageClass "synology-smb-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
I0620 19:45:43.839166       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"vaultwarden", Name:"test", UID:"15584bfb-4154-4d8c-9c3e-64a150d562f1", APIVersion:"v1", ResourceVersion:"7346608", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "synology-smb-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
E0620 19:46:10.337400       1 controller.go:1025] claim "325f380f-ca75-4b27-98e8-e01a85c8f5e4" in work queue no longer exists

csi-plugin container

2022-06-20T19:51:09Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:09Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:10Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:10Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:51:11Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:11Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:11Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:11Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:51:13Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:13Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:13Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:13Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:51:17Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:17Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:18Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:18Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:51:26Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:26Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:26Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:26Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:51:42Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:42Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:43Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:43Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:52:15Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:52:15Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:52:15Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:52:15Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume

I have also checked the user I have set, and it has permissions to read/write in the location /Volume1/KubernetesVolumes.

Mount issue in multinodes cluster

Hello,

I have a 3-node cluster. Creating a new PVC and attaching it to a pod works perfectly.
When the pod moves to another node, I encounter an iSCSI login failure.

The pod describe message is:

Events:
Type     Reason                  Age   From                     Message
----     ------                  ----  ----                     -------
Normal   Scheduled               6s    default-scheduler        Successfully assigned usenet/bazarr-59ff6fcc-mcq96 to kargoii
Normal   SuccessfulAttachVolume  6s    attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-7d6b036c-4248-4021-bc41-160ec4fdc704"
Warning  FailedMount             1s    kubelet                  MountVolume.MountDevice failed for volume "pvc-7d6b036c-4248-4021-bc41-160ec4fdc704" : rpc error: code = Internal desc = rpc error: code
= Internal desc = Failed to login with target iqn [iqn.2000-01.com.synology:MonNAS.pvc-7d6b036c-4248-4021-bc41-160ec4fdc704], err: iscsiadm: Could not login to [iface: default, target: iqn.2000-01.com.sy
ology:MonNAS.pvc-7d6b036c-4248-4021-bc41-160ec4fdc704, portal: 192.168.1.79,3260[].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not log into all portals
Logging in to [iface: default, target: iqn.2000-01.com.synology:MonNAS.pvc-7d6b036c-4248-4021-bc41-160ec4fdc704, portal: 192.168.1.79,3260] (multiple)
(exit status 19)                     

If I change my iSCSI config for this LUN under DSM, by allowing sharing between multiple initiators, it works well, but:

  • I am not sure it is the best way to proceed
  • I have to do it manually for each (actual & future) LUN

I am using MicroK8S with K8S 1.23.

Encryption for SMB/CIFS shares

When creating a PVC using the SMB/CIFS StorageClass, there is no way to create an encrypted share. The shares are created unencrypted. To satisfy ISO requirements, data at rest needs to be encrypted.
After the unencrypted share is created, I can manually encrypt the shared folder. However, when the volume is deleted, the synology-csi controller is not able to automatically delete the share. If I leave the share unencrypted, synology-csi correctly deletes the share when the volume is deleted.

Is it possible to either:

  1. Allow users to specify an encryption key in the storageclass to create encrypted shared folders; or
  2. Allow the user to manually encrypt shared folders but still allow synology-csi to delete the shares when the volume is deleted in K8s

Label for iSCSI volumes

Hi,

It would be great to have an identifiable name for SAN/iSCSI volumes. The ID works, but if a PVC was deleted it is hard to know which volume can be deleted safely. I know it is shown as "ready" instead of "connected", but it would be great to have some way of knowing which volume is used for what without relying on Kubernetes.

The iSCSI remove policy

Hi there,

In relation to #5 The iSCSi remove policy.

If I set the StorageClass reclaim policy to Retain, then the PVs should be retained when I delete a PVC.
But if I then delete them with kubectl delete pv, they are not removed from the Synology.

Do I have to add any rules to remove the iSCSI LUNs, or do I have to remove them manually?

LUNs successfully create but fail to mount

I am able to successfully connect with my client config. Deploying the driver is successful. But using the StorageClass results in the following errors:

6m31s       Normal    Scheduled                pod/dokuwiki-cf5bf85c9-7bsp4     Successfully assigned dokuwiki/dokuwiki-cf5bf85c9-7bsp4 to loving-kypris
6m30s       Normal    SuccessfulAttachVolume   pod/dokuwiki-cf5bf85c9-7bsp4     AttachVolume.Attach succeeded for volume "pvc-f2ecf090-4737-41b2-8644-8442f7179b00"
2m          Warning   FailedMount              pod/dokuwiki-cf5bf85c9-7bsp4     MountVolume.MountDevice failed for volume "pvc-f2ecf090-4737-41b2-8644-8442f7179b00" : rpc error: code = Internal desc = rpc error: code = Internal desc = Failed to login with target iqn [iqn.2000-01.com.synology:mother.pvc-f2ecf090-4737-41b2-8644-8442f7179b00], err: Failed to connect to bus: No data available
iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: Cannot perform discovery. Initiatorname required.
iscsiadm: Could not perform SendTargets discovery: could not connect to iscsid
 (exit status 20)
2m10s   Warning   FailedMount             pod/dokuwiki-cf5bf85c9-7bsp4     Unable to attach or mount volumes: unmounted volumes=[dokuwiki-data], unattached volumes=[kube-api-access-g4bgv dokuwiki-data]: timed out waiting for the condition

Here's an image showing the LUNs successfully created on the NAS-side:

btrfs vs ext4 PVC provisioning delay

Hi team,

Using these StorageClass definitions:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-btrfs
provisioner: csi.san.synology.com
parameters:
  location: '/volume1' # kubernetes SSD volume
  fsType: 'btrfs'
  thin_provisioning: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true

and

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-ext4
provisioner: csi.san.synology.com
parameters:
  location: '/volume1' # kubernetes SSD volume
  fsType: 'ext4'
  thin_provisioning: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true

It takes nearly 2 minutes for a pod-ext4 Debian pod to start with a 50 GB ext4 PVC:

 Warning  FailedScheduling        2m12s  default-scheduler        0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling        2m10s  default-scheduler        0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               2m8s   default-scheduler        Successfully assigned synology-csi/pod-ext4 to worker-2
  Normal   SuccessfulAttachVolume  2m7s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-67cbad0d-e431-4c7d-9342-b39e0cc175b4"
  Normal   Pulling                 18s    kubelet                  Pulling image "debian"
  Normal   Pulled                  17s    kubelet                  Successfully pulled image "debian" in 1.030883146s
  Normal   Created                 16s    kubelet                  Created container pod-ext4
  Normal   Started                 16s    kubelet                  Started container pod-ext4


While it takes only 12 seconds for a pod-btrfs Debian pod to start with a 50 GB btrfs PVC:

  Type     Reason                  Age   From                     Message
  ----     ------                  ----  ----                     -------
  Warning  FailedScheduling        23s   default-scheduler        0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling        21s   default-scheduler        0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               19s   default-scheduler        Successfully assigned synology-csi/pod-btrfs to worker-2
  Normal   SuccessfulAttachVolume  18s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-a4da0c08-ee82-4ae6-ad5c-feb3cf823ed7"
  Normal   Pulling                 6s    kubelet                  Pulling image "debian"
  Normal   Pulled                  5s    kubelet                  Successfully pulled image "debian" in 1.057510873s
  Normal   Created                 5s    kubelet                  Created container pod-btrfs
  Normal   Started                 5s    kubelet                  Started container pod-btrfs


Most of the time for ext4 is spent formatting the LUN. In the end, the two LUNs have a different shape in Synology DSM: the ext4 one looks full, while the btrfs one does not.

Is that expected behavior?

Unable to get plugin working in microk8s (1.19/stable) / Driver registration issue

Error:

MountVolume.MountDevice failed for volume "pvc-1444ade9-3341-4c73-814c-d5afb0cd404f" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name csi.san.synology.com not found in the list of registered CSI drivers

Output from CSInodes:

$ kubectl get csinodes
NAME        DRIVERS   AGE
integrate   0         67m

Logs from csi-driver-registrar:

I1117 22:44:53.454185       1 main.go:110] Version: v1.2.0-0-g6ef000ae
I1117 22:44:53.454258       1 main.go:120] Attempting to open a gRPC connection with: "/csi/csi.sock"
I1117 22:44:53.454279       1 connection.go:151] Connecting to unix:///csi/csi.sock
I1117 22:44:58.715146       1 main.go:127] Calling CSI driver to discover driver name
I1117 22:44:58.715188       1 connection.go:180] GRPC call: /csi.v1.Identity/GetPluginInfo
I1117 22:44:58.715199       1 connection.go:181] GRPC request: {}
I1117 22:44:58.722872       1 connection.go:183] GRPC response: {"name":"csi.san.synology.com","vendor_version":"1.0.0"}
I1117 22:44:58.723816       1 connection.go:184] GRPC error: <nil>
I1117 22:44:58.723830       1 main.go:137] CSI driver name: "csi.san.synology.com"
I1117 22:44:58.723907       1 node_register.go:58] Starting Registration Server at: /registration/csi.san.synology.com-reg.sock
I1117 22:44:58.724165       1 node_register.go:67] Registration Server started at: /registration/csi.san.synology.com-reg.sock

Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.15-34+c064bb32deff78", GitCommit:"c064bb32deff7823e740d5ab40f361f92908c4cd", GitTreeState:"clean", BuildDate:"2021-09-28T07:50:53Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

Migration to organization

Prometheus metrics support?

Hello,

I'm using this CSI driver in my environment, but I wonder if it supports a Prometheus metrics endpoint so that I can scrape PVC and storage usage into Prometheus and a Grafana dashboard.

Couldn't find any host available to create volume

Using general defaults for the values and updating my connection strings in the config, I am receiving this error:

Failed to create Volume: rpc error: code = Internal desc = Failed to get available location, err: DSM Api error. Error code:105
GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume

Any idea what's happening here? I saw a previous issue where updating the StorageClass parameters was the solution, but that doesn't seem to resolve the issue for me.

Thanks for any help!
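
One thing worth ruling out is that the driver cannot decide where to place the LUN. The StorageClass can pin the target DSM and volume explicitly; the sketch below is only an illustration, where the dsm address and location are placeholders for your environment and the parameter names should be checked against the example StorageClass shipped with your driver version:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi-storage-pinned    # hypothetical name
provisioner: csi.san.synology.com
parameters:
  fsType: ext4
  dsm: 192.168.1.100      # placeholder: must match a host entry in client-info.yml
  location: /volume1      # placeholder: an existing, initialized volume on that DSM
reclaimPolicy: Retain
allowVolumeExpansion: true

The wording of the error suggests the driver went through the configured DSM hosts and found none it could use, so checking connectivity and the account's permissions from a worker node is also a cheap first step.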

"volume" is apparently in use by the system; will not make a filesystem here!

Synology-CSI is installed using Helm, currently the 1.1.2 release.

I'm trying to deploy the Prometheus-Community/Prometheus chart with the following storage configuration:

server:
  persistentVolume:
    size: 320Gi
    storageClass: synology-csi-retain

Sadly, the container never comes to life because the volume mount fails:

MountVolume.MountDevice failed for volume "pvc-30b4c5c6-c7e8-4841-9da2-0164ff16107c" : rpc error: code = Internal desc = format of disk "/dev/disk/by-path/ip-<target>:3260-iscsi-iqn.2000-01.com.synology:storage.pvc-30b4c5c6-c7e8-4841-9da2-0164ff16107c-lun-1" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/csi.san.synology.com/29266ca212fe6bef686188c7d8825cf302d27c7cb96c09f019cef5c9fc84cedb/globalmount") options:("rw,defaults") errcode:(exit status 1) output:(mke2fs 1.46.5 (30-Dec-2021)
/dev/disk/by-path/ip-<target>:3260-iscsi-iqn.2000-01.com.synology:storage.pvc-30b4c5c6-c7e8-4841-9da2-0164ff16107c-lun-1 is apparently in use by the system; will not make a filesystem here!
)

On the host machine I can see that the presented path is a link to /dev/sda:

ls -la /dev/disk/by-path/ip-<target>:3260-iscsi-iqn.2000-01.com.synology:storage.pvc-30b4c5c6-c7e8-4841-9da2-0164ff16107c-lun-1
lrwxrwxrwx 1 root root 9 Jun  7 22:17 /dev/disk/by-path/ip-<target>:3260-iscsi-iqn.2000-01.com.synology:storage.pvc-30b4c5c6-c7e8-4841-9da2-0164ff16107c-lun-1 -> ../../sda

The iSCSI disk seems to be attached correctly as /dev/sda:

Disk /dev/sda: 320 GiB, 343597383680 bytes, 671088640 sectors
Disk model: Storage         
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[Security Enhancement] Don't require a user with admin rights

Appreciate the work enabling Synology on Kubernetes. It's definitely much nicer than using NFS subdirectories, with one glaring exception.

As far as I know, we have to use a user with administrator rights on the Synology. This means that if the Kubernetes credentials are compromised, the entire Synology server is compromised. That is pretty much a non-starter for businesses unless they can afford to dedicate an entire Synology unit to each cluster, and even then it's iffy. I'm just using this in a homelab environment, so I'm OK with it for now, but it definitely made me raise an eyebrow.

I'm really hoping you guys are working on a dedicated Synology-side API that can be given much more limited access.

arm64 release

First: congrats on releasing a first version.

Any plans to release an ARM64 version of the images?

I don't know what you're using for CI, but there are many options to easily build for multiple targets with GitHub Actions and GoReleaser.

Cannot control ownership and permissions for mounted volumes

First of all, thank you so much for this driver - it's a great start!

After updating images and cluster roles, I have the CSI driver working to the point that volumes are automatically created, attached, and mounted in containers. But for some reason, the ownership and permissions are not the same as they were when I manually created a PersistentVolume (though I did not specify anything to control them). As a result, all the containers I tried bail out with a permission error saying that they cannot write files to the mounted volumes.

I have SSH'ed to the nodes and verified that the ownership and permissions of the mounted directories are now root:root and 755, respectively. My containers are not running as root, so they don't have permission to write.

How can I control ownership and permissions for the mount paths?
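
A common way to handle this for CSI block-backed filesystems is to set a pod-level fsGroup, which asks kubelet to change group ownership of the mounted volume at mount time. This is only a minimal sketch, under the assumption that the driver presents a filesystem volume and that its CSIDriver object allows fsGroup-based permission changes; the names and IDs below are made up for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo                      # hypothetical name
spec:
  securityContext:
    fsGroup: 1000                         # kubelet sets group ownership of the volume to this GID
    fsGroupChangePolicy: OnRootMismatch   # optional: skip the recursive chown when already correct
  containers:
    - name: app
      image: debian
      command: ["sh", "-c", "touch /data/ok && sleep 3600"]
      securityContext:
        runAsUser: 1000                   # non-root user belonging to fsGroup 1000
        runAsGroup: 1000
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-existing-pvc        # hypothetical PVC bound to a synology-csi volume

If the directory still comes up as root:root and 755, it is worth checking the fsGroupPolicy on the CSIDriver object (kubectl get csidriver csi.san.synology.com -o yaml), since kubelet only applies fsGroup when that policy allows it.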

Provide documentation on backups, restore and disaster recovery

My Synology shows many more "volumes"/LUNs created by the Synology CSI than I have StatefulSets, and they are named arbitrarily. The result is a "black box" of storage that I am unable to reason about for backup and restore, or even for cleanup.

It would be helpful if documentation provided clear instructions for backing up and restoring volumes in cases of, for example, cluster failure.

Questions I have:

  • Given an unrecoverable cluster failure, how does a user restore data to a new StatefulSet?
  • How does a user back up their storage to, e.g., another Synology NAS? With HyperBackup? What about generic off-site storage?
  • How does a user safely clean up the created backing storage?

With Docker Compose, it is easy to reason about mounted volumes, especially when using bind mounts: such and such is the specified mounted volume and backup can be as simple as a single rsync command.
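
One building block already available from the driver side is the snapshot support included in the full deployment: a VolumeSnapshot ties a DSM snapshot to a named PVC, which at least makes the backing storage identifiable. A minimal sketch, assuming the volume snapshot CRDs and the snapshot controller are installed and using the synology-snapshotclass created by the full deployment (the PVC and snapshot names are hypothetical):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-backup-example        # hypothetical snapshot name
  namespace: default
spec:
  volumeSnapshotClassName: synology-snapshotclass   # created by the full deployment
  source:
    persistentVolumeClaimName: my-statefulset-data  # hypothetical PVC to snapshot

This does not answer the off-site or cross-cluster questions; moving LUNs to another NAS or restoring them into a new cluster would still need the documentation this issue asks for.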

Corrupted filesystem

When first provisioning, the CSI driver fills up the LUN, which takes some time. I suspect that, because of this, I experienced filesystem errors on a mounted device and lost data.

Migrating from jparklab/synology-csi

I've been using jparklab/synology-csi up until recently, but after an upgrade to DSM 7, that stopped working and I'm unable to mount my volumes anymore (entirely my fault for not checking that before upgrading 😅 ).

I'm wondering if there's any way to get the csi.san.synology.com CSI plugin to mount the volumes that were provisioned by that other plugin. It looks like the naming scheme for the resources it actually creates on the Synology side is slightly different.

I'm happy with just something that lets me create a PersistentVolume manually pointing to an existing iSCSI volume on the Synology side, but at the moment it looks like this plugin makes quite a few assumptions about how things are named there.
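
For reference, a statically provisioned PersistentVolume for a CSI driver generally looks like the sketch below. Whether csi.san.synology.com accepts a hand-written volumeHandle for a LUN it did not create is exactly the open question here, so treat the handle format (and the rest of the csi block) as an assumption rather than something the driver is documented to support:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: migrated-data                         # hypothetical PV name
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: synology-iscsi-storage    # default class created by deploy.sh
  csi:
    driver: csi.san.synology.com
    volumeHandle: "<existing-volume-id>"      # format is driver-specific and not documented here
    fsType: ext4

A PVC can then bind to it by name via spec.volumeName, but if the driver cannot resolve the handle to the existing LUN, attach and mount calls will still fail.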

deploy/helm make test fails

make test in deploy/helm currently fails because the StorageClass used in the test script does not match the one predefined in values.yaml. After renaming the storage class synology-iscsi-storage-delete to synology-csi-delete, as required by the test, the test passes.

Failed to map target

I got most of it working: I can see the LUN, and the iSCSI target gets created, even though the controller logs report an error.
But it can't map the two together, and if I try it myself through the web interface from that LUN, I get:
lun [xxxx] is not available due to load fail
If I create a LUN with the same name and map it to the target myself, it seems I can get it to create a PV in the cluster and bind the PVC.
So maybe something changed in the API with DSM 7.0.1-42218 Update 2.

DSM 7.2: 2FA is now mandatory, which is apparently not yet supported by this version

Looks like 2FA is now mandatory, and my CSI user with admin group rights fails to connect to DSM because it only passes the first phase of 2FA authentication, as seen in the logs....

I'm trying to make my Synology CSI iSCSI setup work but not getting through.

I0906 20:28:00.271711 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-pvc-tshoot-iscsi-03", UID:"6aa6fcf4-50b0-43ab-bd6c-xxxxxxxx", APIVersion:"v1", ResourceVersion:"637548", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "synostorage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
I0906 20:28:00.272002 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-pvc-tshoot-iscsi-01", UID:"8a9e2772-49f5-402a-a7ad-b32034xxxxxxx", APIVersion:"v1", ResourceVersion:"637588", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "synology-iscsi-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume

# Below is the Synology log showing my worker node trying to connect to the DSM. Only the first authentication passed, via password.
09/06/2023 13:59:27 Info synology02 synology-k3s-csi Connection User [synology-k3s-csi] from [192.168.0.39] has successfully passed the first authentication of 2FA via [password]
09/06/2023 13:59:26 Info synology02 synology-k3s-csi Connection User [synology-k3s-csi] from [192.168.0.39] has successfully passed the first authentication of 2FA via [password]
09/06/2023 13:59:26 Info synology02 synology-k3s-csi Connection User [synology-k3s-csi] from [192.168.0.39] has successfully passed the first authentication of 2FA via [password]
09/06/2023 13:59:25 Info synology02 synology-k3s-csi Connection User [synology-k3s-csi] from [192.168.0.39] has successfully passed the first authentication of 2FA via [password]
09/06/2023 13:58:40 Info synology02 SYSTEM System System successfully stopped [SSH service].

Error when Attaching Volume

I am getting this error when attaching a volume:

I1101 13:06:26.838634 1 reflector.go:188] Listing and watching *v1beta1.VolumeAttachment from k8s.io/client-go/informers/factory.go:135
E1101 13:06:26.848581 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource

Does anybody have an idea of how to debug this?

FEATURE: docker swarm compatibility

Once this is stable with k8s (I get where the priorities lie), it would be great if this were also an installable Docker volume driver (no, Swarm is not dead).

Hope you can consider this.

k8s version check: --short no longer supported

Apparently, newer versions of kubectl no longer support the --short argument for the version check.

Output:

$ ./scripts/deploy.sh install --basic
==== Creates namespace and secrets, then installs synology-csi ====
error: unknown flag: --short
See 'kubectl version --help' for usage.
Version not supported:

Possible fix:
Remove the argument --short from https://github.com/SynologyOpenSource/synology-csi/blob/main/scripts/deploy.sh#L13

Example output from kubectl:

$ kubectl version
Client Version: v1.28.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.4+k3s1

This should be fine.
