synologyopensource / synology-csi
License: Apache License 2.0
Appreciate the work enabling Synology on Kubernetes. It's definitely much nicer than using NFS subdirectories, with one glaring exception.
As far as I know, we have to use a user with administrator rights on the Synology. This means that if the Kubernetes credentials are compromised, the entire Synology server is compromised. That is pretty much a non-starter for businesses unless they can afford to dedicate an entire Synology unit to each cluster, and even then it's iffy. I'm just using this in a homelab environment, so I'm OK with it for now, but it definitely made me raise an eyebrow.
I'm really hoping you are working on a dedicated Synology-side API that can be given much more limited access.
First: congrats on releasing a first version.
Any plans on releasing an ARM64 version of the images?
I don't know what you're using for CI, but there are many options to easily build multiple targets with GitHub Actions and goreleaser.
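For reference, a minimal sketch of what a multi-arch image build could look like with GitHub Actions and Docker Buildx; the workflow name, trigger, and registry details are assumptions, not anything from the existing CI:

# Hypothetical .github/workflows/images.yml
name: build-images
on:
  push:
    tags: ["v*"]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3        # QEMU emulation so arm64 layers can be built on amd64 runners
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          platforms: linux/amd64,linux/arm64     # one build producing both targets
          push: false                            # registry login and tagging omitted in this sketch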
After spending many hours trying to solve the problem myself, I need help please...
I'm already using multiple LUNs (9 targets and 9 LUNs) on the DS918+ with my Kubernetes cluster, but via plain iSCSI PVs, not with the Synology CSI.
Versions:
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2
Synology CSI: latest
client-info.yml
clients:
- host: 192.168.2.224
port: 5000
https: false
username: user
password: password
storageclass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: synology-iscsi-storage
annotations:
storage-class.kubernetes.io/is-default-class: "false"
provisioner: csi.san.synology.com
parameters:
dsm: '192.168.2.224'
location: '/volume1'
fsType: 'btrfs'
formatOptions: '--nodiscard'
type: thin
reclaimPolicy: Retain
allowVolumeExpansion: true
claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
namespace: test
spec:
storageClassName: synology-iscsi-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
job:
apiVersion: batch/v1
kind: Job
metadata:
name: write
namespace: test
spec:
template:
metadata:
name: write
spec:
containers:
- name: write
image: registry.access.redhat.com/ubi8/ubi-minimal:latest
command: ["dd","if=/dev/zero","of=/mnt/pv/test.img","bs=1G","count=1","oflag=dsync"]
volumeMounts:
- mountPath: "/mnt/pv"
name: test-volume
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: test-claim
restartPolicy: Never
Controller log:
controller.go:1279] provision "test/test-claim" class "synology-iscsi-storage": started
connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"test", Name:"test-claim", UID:"3024a15d-4557-4c40-ba86-e0ebdf7d1ac9", APIVersion:"v1", ResourceVersion:"30936475", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "test/test-claim"
connection.go:184] GRPC request: {"capacity_range":{"required_bytes":4294967296},"name":"pvc-3024a15d-4557-4c40-ba86-e0ebdf7d1ac9","parameters":{"dsm":"192.168.2.224","formatOptions":"--nodiscard","fsType":"btrfs","location":"/volume1"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"btrfs"}},"access_mode":{"mode":1}}]}
controller.go:956] error syncing claim "3024a15d-4557-4c40-ba86-e0ebdf7d1ac9": failed to provision volume with StorageClass "synology-iscsi-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
What does "Couldn't find any host..." mean? Couldn't find any DSM? Couldn't find any node?
I've read all the log files a thousand times and have no more ideas for resolving the issue; I can't see the forest for the trees anymore... Please help ;-)
Kubernetes 1.22 dropped the v1beta1 API for VolumeAttachment, which is now only served as stable v1. As a result, the csi-attacher container throws the following logs:
I0913 07:45:26.832278 1 reflector.go:188] Listing and watching *v1beta1.VolumeAttachment from k8s.io/client-go/informers/factory.go:135
E0913 07:45:26.852873 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource
I0913 07:45:27.853155 1 reflector.go:188] Listing and watching *v1beta1.VolumeAttachment from k8s.io/client-go/informers/factory.go:135
E0913 07:45:27.857884 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource
Please update the CSI attacher code to make it compatible with the latest versions of Kubernetes.
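For anyone hitting this before an official fix, a minimal sketch of the change implied here, assuming the default controller StatefulSet from the deploy manifests: bump the external-attacher sidecar to a release that watches the storage.k8s.io/v1 VolumeAttachment API (v3.x or later). The container name and tag below are assumptions:

# Hypothetical excerpt of the synology-csi-controller pod spec
containers:
  - name: csi-attacher
    # external-attacher v3.x and later use the stable v1 VolumeAttachment API,
    # so the "Failed to list *v1beta1.VolumeAttachment" errors disappear on 1.22+
    image: registry.k8s.io/sig-storage/csi-attacher:v3.5.0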
Hi all,
is there a way to use the synology-csi without an admin account, i.e. with reduced permissions?
It does not feel good that an admin account/service user with username/password is available in the Kubernetes cluster and potentially usable by others.
Hello! I was wondering if synology-csi works with Nomad? At first glance it would appear there is only support for Kubernetes, but I just wanted to double check. Thank you
I am getting this error when attaching a volume:
I1101 13:06:26.838634 1 reflector.go:188] Listing and watching *v1beta1.VolumeAttachment from k8s.io/client-go/informers/factory.go:135 E1101 13:06:26.848581 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource
Does anybody have an idea how to debug this?
Hi,
the volume on my DiskStation (2x 920+ in HA) is provisioned correctly and attached as a block device (/dev/sdd in my case) on the host system. But it is expected to be mounted at /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d1e722ba-35e7-4222-a797-1e66eb40c755/globalmount, which does not exist. Only /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv exists.
On further investigation I found that this directory briefly appears and then disappears (for roughly 1s) around this log entry:
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/utils.go:105] GRPC request: {}
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/utils.go:110] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}}]}
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Node/NodeStageVolume
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/utils.go:105] GRPC request: {"staging_target_path":"/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d1e722ba-35e7-4222-a797-1e66eb40c755/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"dsm":"192.168.2.210","storage.kubernetes.io/csiProvisionerIdentity":"1633291144215-8081-csi.san.synology.com"},"volume_id":"080af020-d433-4ea3-aa2a-1773a9132e3f"}
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:52Z [INFO] [driver/initiator.go:109] Session[iqn.2000-01.com.synology:Hossnercloud-HA.pvc-d1e722ba-35e7-4222-a797-1e66eb40c755] already exists.
[synology-csi-node-ppzwq csi-plugin] 2021-10-03T21:32:53Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = stat /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d1e722ba-35e7-4222-a797-1e66eb40c755/globalmount: no such file or directory
At this point I do not know how to debug this further. Possibly a hidden error during mounting.
I am on MicroK8s 1.21, by the way.
Edit: The block device should already be formatted with ext4 at this point, but it is not. This is probably the cause of the mount failing. What might have skipped formatting during provisioning?
Edit 2: Manually formatting /dev/sdd with ext4 did not help the csi-plugin with mounting, but I was able to mount it manually. This does not seem to be recognized by K8s, though.
Hi,
it would be great to have an identifiable name for SAN/iSCSI volumes. The ID works, but once a PVC has been deleted it is hard to know which volume can be safely removed. I know it is shown as ready instead of connected, but it would be great to have some way of knowing which volume is used for what without relying on Kubernetes.
Hi there,
In relation to #5, the iSCSI removal policy:
If I set the StorageClass to Retain, then the PVs should be retained when I delete a PVC.
But if I then delete them with kubectl delete pv, they are not removed from the Synology.
Do I have to add any rules to remove the iSCSI drives, or do I have to remove them manually?
Since upgrading to 1.1.0, mounting no longer works. The daemonset csi-plugin logs:
2022-04-28T08:34:31Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = InvalidArgument desc = Unknown protocol
I tried adding protocol: iscsi to my storage classes, but Kubernetes forbids it on the grounds that parameters can't be edited after a StorageClass is created.
I expect the csi-plugin to default to iSCSI and to be backwards compatible.
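Because StorageClass parameters are immutable, the only way I can see to set this explicitly is to delete and recreate the class; a minimal sketch is below (class name and filesystem are placeholders, and whether this works around the 1.1.0 behaviour is an assumption):

# Hypothetical recreated StorageClass with an explicit protocol
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi-storage
provisioner: csi.san.synology.com
parameters:
  protocol: iscsi      # explicit, so the plugin no longer reports "Unknown protocol"
  fsType: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true

PVs that are already bound keep working when the class is deleted and recreated; only new provisioning is affected.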
Hello, I'm trying to use the synology-csi driver for my Kubernetes cluster and I ran into this problem.
My NAS DSM version is 7.0, and my k8s cluster version is v1.22.4.
This is the issue seen in the synology-csi-controller pod:
csi-attacher E0116 21:09:13.211663 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource
Do you think it's because of the version of my cluster?
Thanks
Could you add the proper steps for updating to the README? I just ran the installer again and ended up with a few errors.
./scripts/deploy.sh install --all
==== Creates namespace and secrets, then installs synology-csi ====
Deploy Version: v1.20
Error from server (AlreadyExists): namespaces "synology-csi" already exists
error: failed to create secret secrets "client-info-secret" already exists
mkdir: cannot create directory ‘/var/lib/kubelet’: Permission denied
serviceaccount/csi-controller-sa unchanged
clusterrole.rbac.authorization.k8s.io/synology-csi-controller-role configured
clusterrolebinding.rbac.authorization.k8s.io/synology-csi-controller-role unchanged
statefulset.apps/synology-csi-controller configured
csidriver.storage.k8s.io/csi.san.synology.com unchanged
namespace/synology-csi unchanged
serviceaccount/csi-node-sa unchanged
clusterrole.rbac.authorization.k8s.io/synology-csi-node-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/synology-csi-node-role unchanged
daemonset.apps/synology-csi-node configured
The StorageClass "synology-iscsi-storage" is invalid: parameters: Forbidden: updates to parameters are forbidden.
serviceaccount/csi-snapshotter-sa unchanged
clusterrole.rbac.authorization.k8s.io/synology-csi-snapshotter-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/synology-csi-snapshotter-role unchanged
statefulset.apps/synology-csi-snapshotter configured
error: unable to recognize "/home/username/git/kubernetes/synology-csi/deploy/kubernetes/v1.20/snapshotter/volume-snapshot-class.yml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
I am entirely unable to create SMB PVs. After creating the secret, storage class, and PVC, the PV is never created, and the error below is logged:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 7s (x3 over 8s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi.san.synology.com" or manually created by system administrator
Normal Provisioning 4s (x3 over 8s) csi.san.synology.com_mpc01_c4b84a4f-d2b0-46f6-95f8-ae3f75e7ad4f External provisioner is provisioning volume for claim "default/ubuntu-test"
Warning ProvisioningFailed 3s (x3 over 7s) csi.san.synology.com_mpc01_c4b84a4f-d2b0-46f6-95f8-ae3f75e7ad4f failed to provision volume with StorageClass "synostorage-smb": rpc error: code = AlreadyExists desc = Already existing volume name with different capacity
However, something does actually get created on the Synology device - if I go to Control Panel / Shared Folders, I see the k8s-csi-pvc-... folders. But no corresponding PV shows up in K8s.
The manifests I am using are below:
apiVersion: v1
kind: Secret
metadata:
name: cifs-csi-credentials
namespace: synology-csi
type: Opaque
stringData:
username: testuser # DSM user account accessing the shared folder
password: testpass # DSM user password accessing the shared folder
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: synostorage-smb
provisioner: csi.san.synology.com
parameters:
protocol: smb
csi.storage.k8s.io/node-stage-secret-name: cifs-csi-credentials
csi.storage.k8s.io/node-stage-secret-namespace: synology-csi
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ubuntu-test
labels:
app: containerized-data-importer
annotations:
cdi.kubevirt.io/storage.import.endpoint: https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-arm64.img
spec:
storageClassName: synostorage-smb
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
Typically, metrics for volumes are available via the kubelet summary API (/stats/summary).
Monitoring solutions like Prometheus with Alertmanager scrape volume-usage metrics from the kubelet and alert when a disk is filling up. This doesn't work with the synology-csi, since the CSI driver does not seem to implement these metrics.
Missing:
kubelet_volume_stats_used_bytes
kubelet_volume_stats_inodes
There are some (less useful) histogram metrics that are available.
Reporting volume usage is critical to avoid cases where a disk runs full and ultimately causes application failure.
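For context, this is the kind of alerting rule those metrics would enable; a minimal sketch assuming the standard kubelet volume-stats metric names and an arbitrary threshold:

# Hypothetical Prometheus rule file relying on the missing kubelet volume metrics
groups:
  - name: pvc-usage
    rules:
      - alert: PersistentVolumeFillingUp
        expr: kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes > 0.90
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "PVC {{ $labels.persistentvolumeclaim }} is over 90% full"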
Hi team,
Using these StorageClass definitions:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: test-btrfs
provisioner: csi.san.synology.com
parameters:
location: '/volume1' # kubernetes SSD volume
fsType: 'btrfs'
thin_provisioning: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true
and
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: test-ext4
provisioner: csi.san.synology.com
parameters:
location: '/volume1' # kubernetes SSD volume
fsType: 'ext4'
thin_provisioning: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true
It takes nearly 2 minutes for a pod-ext4 Debian pod to start with a 50 GB ext4 PVC:
Warning FailedScheduling 2m12s default-scheduler 0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 2m10s default-scheduler 0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 2m8s default-scheduler Successfully assigned synology-csi/pod-ext4 to worker-2
Normal SuccessfulAttachVolume 2m7s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-67cbad0d-e431-4c7d-9342-b39e0cc175b4"
Normal Pulling 18s kubelet Pulling image "debian"
Normal Pulled 17s kubelet Successfully pulled image "debian" in 1.030883146s
Normal Created 16s kubelet Created container pod-ext4
Normal Started 16s kubelet Started container pod-ext4
While it takes only 12 seconds for a pod-btrfs Debian pod to start with a 50 GB btrfs PVC:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 23s default-scheduler 0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 21s default-scheduler 0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 19s default-scheduler Successfully assigned synology-csi/pod-btrfs to worker-2
Normal SuccessfulAttachVolume 18s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-a4da0c08-ee82-4ae6-ad5c-feb3cf823ed7"
Normal Pulling 6s kubelet Pulling image "debian"
Normal Pulled 5s kubelet Successfully pulled image "debian" in 1.057510873s
Normal Created 5s kubelet Created container pod-btrfs
Normal Started 5s kubelet Started container pod-btrfs
Most of the time for ext4 is spent formatting the LUN. In the end, both LUNs have a different shape in Synology DSM: the ext4 one looks full, while the btrfs one does not.
Is that expected behavior?
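If the time really goes into mkfs.ext4 discarding the whole 50 GB LUN, one thing that might be worth trying (an assumption on my part, not a confirmed explanation) is passing mkfs options through the formatOptions parameter the driver already exposes:

# Hypothetical ext4 StorageClass that skips the initial discard pass at format time
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-ext4-nodiscard
provisioner: csi.san.synology.com
parameters:
  location: '/volume1'
  fsType: 'ext4'
  formatOptions: '-E nodiscard'   # mke2fs extended option: do not discard blocks while formatting
  thin_provisioning: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true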
Hello,
I'm using this CSI driver in my environment, but I wonder if it supports a Prometheus metrics endpoint, so that I can scrape PVC and storage usage into Prometheus and a Grafana dashboard.
Creating a volume does work, but there is still a problem with the access mode within Nomad:
$ nomad volume status
Container Storage Interface
ID Name Plugin ID Schedulable Access Mode
test test synology true <none>
My current configuration for the Nomad CSI plugin job looks like this:
job "plugin-synology" {
type = "system"
group "controller" {
task "plugin" {
driver = "docker"
config {
image = "docker.io/synology/synology-csi:v1.0.0"
privileged = true
volumes = [
"local/csi.yaml:/etc/csi.yaml",
"/:/host",
]
args = [
"--endpoint",
"unix://csi/csi.sock",
"--client-info",
"/etc/csi.yaml",
]
}
template {
destination = "local/csi.yaml"
data = <<EOF
---
clients:
- host: 192.168.1.2
port: 8443
https: true
username: nomad
password: <password>
EOF
}
csi_plugin {
id = "synology"
type = "monolith"
mount_dir = "/csi"
}
resources {
cpu = 256
memory = 256
}
}
}
}
and the volume definition for nomad volume create is:
id = "test"
name = "test"
type = "csi"
plugin_id = "synology"
capacity_min = "1GiB"
capacity_max = "2GiB"
capability {
access_mode = "single-node-writer"
attachment_mode = "file-system"
}
mount_options {
mount_flags = ["rw"]
}
Originally posted by @mabunixda in #14 (comment)
Once this is stable with k8s (I get where priorities lie), it would be great if this were also an installable Docker volume driver (no, Swarm is not dead).
Hope you can consider this.
I've been using jparklab/synology-csi up until recently, but after an upgrade to DSM 7 that stopped working, and I'm unable to mount my volumes anymore (entirely my fault for not checking that before upgrading 😅).
I'm wondering if there's any way to get the csi.san.synology.com CSI plugin to mount the volumes that were provisioned by that other plugin? It looks like the naming scheme is slightly different for the resources it actually creates on the Synology side.
I'd be happy with just something that lets me create a PersistentVolume manually, pointing at an existing iSCSI volume on the Synology side, but at the moment it looks like this plugin makes quite a few assumptions about how things are named on the Synology side.
Synology CSI is installed using Helm, currently the 1.1.2 release.
I'm trying to deploy the prometheus-community/prometheus chart with the following storage configuration:
server:
persistentVolume:
size: 320Gi
storageClass: synology-csi-retain
Sadly the container never comes to life, because the volume mount fails:
MountVolume.MountDevice failed for volume "pvc-30b4c5c6-c7e8-4841-9da2-0164ff16107c" : rpc error: code = Internal desc = format of disk "/dev/disk/by-path/ip-<target>:3260-iscsi-iqn.2000-01.com.synology:storage.pvc-30b4c5c6-c7e8-4841-9da2-0164ff16107c-lun-1" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/csi.san.synology.com/29266ca212fe6bef686188c7d8825cf302d27c7cb96c09f019cef5c9fc84cedb/globalmount") options:("rw,defaults") errcode:(exit status 1) output:(mke2fs 1.46.5 (30-Dec-2021)
/dev/disk/by-path/ip-<target>:3260-iscsi-iqn.2000-01.com.synology:storage.pvc-30b4c5c6-c7e8-4841-9da2-0164ff16107c-lun-1 is apparently in use by the system; will not make a filesystem here!
)
On the host machine I can see that the presented path is a link to /dev/sda:
ls -la /dev/disk/by-path/ip-<target>:3260-iscsi-iqn.2000-01.com.synology:storage.pvc-30b4c5c6-c7e8-4841-9da2-0164ff16107c-lun-1
lrwxrwxrwx 1 root root 9 Jun 7 22:17 /dev/disk/by-path/ip-<target:3260-iscsi-iqn.2000-01.com.synology:storage.pvc-30b4c5c6-c7e8-4841-9da2-0164ff16107c-lun-1 -> ../../sda
and the iSCSI disk seems to be attached correctly as /dev/sda:
Disk /dev/sda: 320 GiB, 343597383680 bytes, 671088640 sectors
Disk model: Storage
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
I have installed the Synology CSI driver, but I am getting this error: "standard_init_linux.go:228: exec user process caused: exec format error"
All pods in the csi-synology namespace are crashing :( with that message. Can anyone assist me? I am running a k3s Kubernetes cluster on 6 Raspberry Pi 4s with Ubuntu installed.
Hi, all
I have tried both an admin account and a newly created account, but the controller always returns a "Failed to login" error.
Does anyone have more information on setting up the CSI?
Using general defaults for the values and updating my connection strings in the config, I am receiving this error:
Failed to create Volume: rpc error: code = Internal desc = Failed to get available location, err: DSM Api error. Error code:105
GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
Any idea what's happening here? I saw a previous issue where updating the StorageClass parameters was the solution, but that doesn't seem to resolve it for me.
Thanks for any help!
My Synology shows many more "volumes"/LUNs created by the Synology CSI than I have stateful sets, and they are named arbitrarily. The result is a "black box" of storage that I am unable to reason about for purposes of backup and restore, or even cleanup.
It would be helpful if documentation provided clear instructions for backing up and restoring volumes in cases of, for example, cluster failure.
Questions I have:
With Docker Compose, it is easy to reason about mounted volumes, especially when using bind mounts: such and such is the specified mounted volume and backup can be as simple as a single rsync command.
Hello,
I have a 3-node cluster. Creating a new PVC and attaching it to a pod works perfectly.
When the pod moves to another node, I encounter an iSCSI login failure.
Pod describe message is :
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6s default-scheduler Successfully assigned usenet/bazarr-59ff6fcc-mcq96 to kargoii
Normal SuccessfulAttachVolume 6s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-7d6b036c-4248-4021-bc41-160ec4fdc704"
Warning FailedMount 1s kubelet MountVolume.MountDevice failed for volume "pvc-7d6b036c-4248-4021-bc41-160ec4fdc704" : rpc error: code = Internal desc = rpc error: code
= Internal desc = Failed to login with target iqn [iqn.2000-01.com.synology:MonNAS.pvc-7d6b036c-4248-4021-bc41-160ec4fdc704], err: iscsiadm: Could not login to [iface: default, target: iqn.2000-01.com.sy
ology:MonNAS.pvc-7d6b036c-4248-4021-bc41-160ec4fdc704, portal: 192.168.1.79,3260[].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not log into all portals
Logging in to [iface: default, target: iqn.2000-01.com.synology:MonNAS.pvc-7d6b036c-4248-4021-bc41-160ec4fdc704, portal: 192.168.1.79,3260] (multiple)
(exit status 19)
If I change my iSCSI config for this LUN in DSM by allowing sharing between multiple initiators, it works well, but:
I am using MicroK8S with K8S 1.23.
The Synology NAS has a limit of 10 targets that can be created. The CSI driver creates a LUN/target pair every time a new volume is created, so only 10 volumes can be created. For use cases where the NAS should be the storage provider for a whole cluster, this number is far too small.
In theory the maximum number of volumes could be 320, because every target can handle 32 LUNs. So I suggest changing the way volumes are created so that existing targets are reused once the maximum number is reached.
Hello,
I'm experimenting with Synology as storage for a K8s cluster and I was happy to find out there is an official CSI. When reviewing the manifests, I noticed that the cluster role synology-csi-node-role gives the service account csi-node-sa permission to read all secrets in the whole cluster. Is this really needed to operate the persistent volumes?
It would be awesome if the LUN description field were populated with the name of the PersistentVolumeClaim it was created from. It could be in the form '[namespace]-[PVC name]', because PVCs are namespaced objects.
Apparently newer versions of kubectl no longer support the --short argument for the version check.
Output:
$ ./scripts/deploy.sh install --basic
==== Creates namespace and secrets, then installs synology-csi ====
error: unknown flag: --short
See 'kubectl version --help' for usage.
Version not supported:
Possible fix: remove the --short argument from https://github.com/SynologyOpenSource/synology-csi/blob/main/scripts/deploy.sh#L13
Example output from kubectl:
$ kubectl version
Client Version: v1.28.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.4+k3s1
This should be fine.
Dear @SynologyOpenSource team, @jasonchiu-syno, @zyli-syno, @chihyuwu,
Is it possible to migrate the personal account to an organization?
Converting:
It is possible to rename this account, create the organization, and move/transfer the repository to the organization.
Rename account:
Create an organization:
Transfer a repository:
Like:
Can we get a Helm chart? Helm seems to be the de facto way to deploy things to Kubernetes these days.
Is there documentation on how/where to configure CHAP settings when creating the iSCSI targets/LUNs? Is there a way to store them in a secret and point to it from a StorageClass parameter key?
I have the CSI driver set up and working, and it creates the targets and LUNs as expected, but I don't see where to provide CHAP credentials. If I create the PVC in my k8s cluster, it will create the target and LUNs, but the nodes will fail to connect because they are passing CHAP credentials to the Synology NAS. If I edit the target in DSM, then the LUN mounts as expected.
This is a new install, so I am not sure what will happen with scaling up and down yet, but mobility between nodes should be fine.
For reference, my cluster uses Ubuntu 22.04 LTS on Raspberry Pi 4B nodes, which PXE boot to an iSCSI root off the Synology, so the iSCSI configuration on the nodes and the Synology is set up and working correctly. And as stated, after the PVCs are deployed and the PVs are created, I can see the LUNs and targets in DSM. If I add the CHAP configuration at that point, the volumes mount and run as expected.
https://www.synology.com/en-uk/releaseNote/ScsiTarget
Added support for FUA and Sync Cache SCSI commands to lower the risk of data loss or file system crash.
This option is disabled for LUNs created by the CSI driver.
As I understand it, these commands decrease the likelihood of corruption, and we should be able to enable them.
First of all, thank you so much for this driver - it's a great start!
After updating images and cluster roles, I have the CSI working to the point that volumes are automatically created, attached, and mounted in containers. But for some reason the ownership and permissions are not exactly the same as when I manually created a PersistentVolume (though I did not specify anything to control them). As a result, all the containers I tried bail out with a permission error saying that they cannot write files to the mounted volumes.
I have SSH'ed into the nodes and verified that the ownership and permissions of the mounted directories are root:root and 755, respectively. My containers are not running as root, so they don't have permission.
How can I control ownership and permissions for the mount paths?
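For reference, the standard Kubernetes knob for this is the pod-level fsGroup; whether this driver honours it depends on its fsGroupPolicy (see the later issue about fsGroup support), so the snippet below is only a sketch with placeholder names:

# Hypothetical pod asking the kubelet to chown the mounted volume to GID 1000
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 1000            # volume contents become group 1000 and group-writable on mount
  containers:
    - name: app
      image: debian
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-claim     # placeholder claim name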
After successfully creating a PVC, an iSCSI target and LUN are generated on my NAS.
I tried to remove the PVC, but the iSCSI resources were not removed after the PVC removal succeeded.
Are there any rules for removing the iSCSI resources, or do I have to remove them manually?
Installing the prometheus operator helm chart with defaults (https://prometheus-community.github.io/helm-charts, kube-prometheus-stack) sets the following for the prometheus instance by default:
securityContext:
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
fsGroup: 2000
This makes the "prometheus-kube-prometheus-stack-prometheus-0" pod go into a crash loop with this error in the logs: "unable to create mmap-ed active query log"
Changing the prometheusSpec securityContext like this:
securityContext:
runAsGroup: 0
runAsNonRoot: true
runAsUser: 0
fsGroup: 2000
makes it all work, but then it is most likely running with root permissions on the file system.
This seems to be an issue with the CSI implementation, which doesn't support fsGroup or similar. For example, Longhorn handles this with fsGroupPolicy: ReadWriteOnceWithFSType, which makes each volume be examined at mount time to determine whether permissions should be applied recursively.
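The field in question lives on the CSIDriver object; a minimal sketch of what enabling it could look like is below. Whether the plugin itself supports the chown/chmod behaviour behind it is an assumption, and the other spec fields are placeholders, not copied from the shipped manifest:

# Hypothetical CSIDriver object with fsGroup handling enabled
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.san.synology.com
spec:
  attachRequired: true
  podInfoOnMount: true
  fsGroupPolicy: ReadWriteOnceWithFSType   # kubelet applies fsGroup to RWO volumes that have an fsType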
Hello,
Thanks for all your work. This integration looks very good.
I have tried to use it, but I am getting the error below:
Name: test
Namespace: vaultwarden
StorageClass: synology-smb-storage
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner: csi.san.synology.com
volume.kubernetes.io/storage-provisioner: csi.san.synology.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 14s (x3 over 26s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi.san.synology.com" or manually created by system administrator
Normal Provisioning 10s (x5 over 26s) csi.san.synology.com_node2_2fb7c8e5-b9d1-4829-9e76-d2dff23ee566 External provisioner is provisioning volume for claim "vaultwarden/test"
Warning ProvisioningFailed 10s (x5 over 26s) csi.san.synology.com_node2_2fb7c8e5-b9d1-4829-9e76-d2dff23ee566 failed to provision volume with StorageClass "synology-smb-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
I have read the documentation and checked that the same host is configured in the secret referenced by the storage class as well as in the secret that stores the clients. Here they are.
StorageClass
Name: synology-smb-storage
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"synology-smb-storage"},"parameters":{"csi.storage.k8s.io/node-stage-secret-name":"cifs-csi-credentials","csi.storage.k8s.io/node-stage-secret-namespace":"synology-csi","dsm":"192.168.30.13","location":"/volume1/KubernetesVolumes","protocol":"smb"},"provisioner":"csi.san.synology.com","reclaimPolicy":"Retain"}
Provisioner: csi.san.synology.com
Parameters: csi.storage.k8s.io/node-stage-secret-name=cifs-csi-credentials,csi.storage.k8s.io/node-stage-secret-namespace=synology-csi,dsm=192.168.30.13,location=/volume1/KubernetesVolumes,protocol=smb
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Retain
VolumeBindingMode: Immediate
Events: <none>
StorageClass Secret
apiVersion: v1
data:
password: xxxxx
username: xxxxx
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{},"name":"cifs-csi-credentials","namespace":"synology-csi"},"stringData":{"password":"UGVyJmNvMTgxMDE2","username":"ampkaWF6"},"type":"Opaque"}
creationTimestamp: "2022-06-20T19:25:35Z"
name: cifs-csi-credentials
namespace: synology-csi
resourceVersion: "7344539"
uid: f283712a-a557-4f5a-83b2-dfea269476c7
type: Opaque
Clients secret file
apiVersion: v1
data:
client-info.yml: xxxxx
kind: Secret
metadata:
creationTimestamp: "2022-06-20T18:44:45Z"
name: client-info-secret
namespace: synology-csi
resourceVersion: "7338982"
uid: df09b074-6008-4df2-a5e6-7a870bc840af
type: Opaque
And content of client-info.yml is
---
clients:
- host: 192.168.30.13
port: 5001
https: true
username: xxxx
password: xxxxx
I think everything is configured properly; I can't find any error.
Logs from the pods of the synology-csi-node daemonset look fine (no errors). The only error I can see is from the controller.
csi-provisioner container
I0620 19:45:43.549885 1 controller.go:1279] provision "vaultwarden/test" class "synology-smb-storage": started
I0620 19:45:43.550114 1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I0620 19:45:43.550152 1 connection.go:184] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-15584bfb-4154-4d8c-9c3e-64a150d562f1","parameters":{"dsm":"192.168.30.13","location":"/volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I0620 19:45:43.550269 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"vaultwarden", Name:"test", UID:"15584bfb-4154-4d8c-9c3e-64a150d562f1", APIVersion:"v1", ResourceVersion:"7346608", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "vaultwarden/test"
I0620 19:45:43.838611 1 connection.go:186] GRPC response: {}
I0620 19:45:43.838809 1 connection.go:187] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
I0620 19:45:43.838894 1 controller.go:767] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = Internal desc = Couldn't find any host available to create Volume
I0620 19:45:43.839015 1 controller.go:1074] Final error received, removing PVC 15584bfb-4154-4d8c-9c3e-64a150d562f1 from claims in progress
W0620 19:45:43.839048 1 controller.go:933] Retrying syncing claim "15584bfb-4154-4d8c-9c3e-64a150d562f1", failure 9
E0620 19:45:43.839104 1 controller.go:956] error syncing claim "15584bfb-4154-4d8c-9c3e-64a150d562f1": failed to provision volume with StorageClass "synology-smb-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
I0620 19:45:43.839166 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"vaultwarden", Name:"test", UID:"15584bfb-4154-4d8c-9c3e-64a150d562f1", APIVersion:"v1", ResourceVersion:"7346608", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "synology-smb-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
E0620 19:46:10.337400 1 controller.go:1025] claim "325f380f-ca75-4b27-98e8-e01a85c8f5e4" in work queue no longer exists
csi-plugin container
2022-06-20T19:51:09Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:09Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:10Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:10Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:51:11Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:11Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:11Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:11Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:51:13Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:13Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:13Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:13Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:51:17Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:17Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:18Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:18Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:51:26Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:26Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:26Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:26Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:51:42Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:51:42Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:51:43Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:51:43Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
2022-06-20T19:52:15Z [INFO] [driver/utils.go:104] GRPC call: /csi.v1.Controller/CreateVolume
2022-06-20T19:52:15Z [INFO] [driver/utils.go:105] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-605b08d1-2a1d-4803-a0da-1a79687bfa6a","parameters":{"dsm":"192.168.30.13","location":"/Volume1/KubernetesVolumes","protocol":"smb"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
2022-06-20T19:52:15Z [ERROR] [service/dsm.go:474] [192.168.30.13] Failed to create Volume: rpc error: code = Internal desc = Failed to create share, err: Share API error. Error code: 3300
2022-06-20T19:52:15Z [ERROR] [driver/utils.go:108] GRPC error: rpc error: code = Internal desc = Couldn't find any host available to create Volume
I have also checked the user I have set, and it has permission to read/write in the location /Volume1/KubernetesVolumes.
Hej,
when will you release a v1.1.2 that contains the NodeServiceCapability_RPC_VOLUME_MOUNT_GROUP (278b4cb) fix?
Error:
MountVolume.MountDevice failed for volume "pvc-1444ade9-3341-4c73-814c-d5afb0cd404f" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name csi.san.synology.com not found in the list of registered CSI drivers
Output from CSInodes:
$ kubectl get csinodes
NAME DRIVERS AGE
integrate 0 67m
Logs from csi-driver-registrar:
I1117 22:44:53.454185 1 main.go:110] Version: v1.2.0-0-g6ef000ae
I1117 22:44:53.454258 1 main.go:120] Attempting to open a gRPC connection with: "/csi/csi.sock"
I1117 22:44:53.454279 1 connection.go:151] Connecting to unix:///csi/csi.sock
I1117 22:44:58.715146 1 main.go:127] Calling CSI driver to discover driver name
I1117 22:44:58.715188 1 connection.go:180] GRPC call: /csi.v1.Identity/GetPluginInfo
I1117 22:44:58.715199 1 connection.go:181] GRPC request: {}
I1117 22:44:58.722872 1 connection.go:183] GRPC response: {"name":"csi.san.synology.com","vendor_version":"1.0.0"}
I1117 22:44:58.723816 1 connection.go:184] GRPC error: <nil>
I1117 22:44:58.723830 1 main.go:137] CSI driver name: "csi.san.synology.com"
I1117 22:44:58.723907 1 node_register.go:58] Starting Registration Server at: /registration/csi.san.synology.com-reg.sock
I1117 22:44:58.724165 1 node_register.go:67] Registration Server started at: /registration/csi.san.synology.com-reg.sock
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.15-34+c064bb32deff78", GitCommit:"c064bb32deff7823e740d5ab40f361f92908c4cd", GitTreeState:"clean", BuildDate:"2021-09-28T07:50:53Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
I am able to successfully connect with my client config. Deploying the driver is successful. But using the StorageClass results in the following errors:
6m31s Normal Scheduled pod/dokuwiki-cf5bf85c9-7bsp4 Successfully assigned dokuwiki/dokuwiki-cf5bf85c9-7bsp4 to loving-kypris
6m30s Normal SuccessfulAttachVolume pod/dokuwiki-cf5bf85c9-7bsp4 AttachVolume.Attach succeeded for volume "pvc-f2ecf090-4737-41b2-8644-8442f7179b00"
2m Warning FailedMount pod/dokuwiki-cf5bf85c9-7bsp4 MountVolume.MountDevice failed for volume "pvc-f2ecf090-4737-41b2-8644-8442f7179b00" : rpc error: code = Internal desc = rpc error: code = Internal desc = Failed to login with target iqn [iqn.2000-01.com.synology:mother.pvc-f2ecf090-4737-41b2-8644-8442f7179b00], err: Failed to connect to bus: No data available
iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: Cannot perform discovery. Initiatorname required.
iscsiadm: Could not perform SendTargets discovery: could not connect to iscsid
(exit status 20)
2m10s Warning FailedMount pod/dokuwiki-cf5bf85c9-7bsp4 Unable to attach or mount volumes: unmounted volumes=[dokuwiki-data], unattached volumes=[kube-api-access-g4bgv dokuwiki-data]: timed out waiting for the condition
Here's an image showing the LUNs successfully created on the NAS-side:
is this compatible with nomad?
https://learn.hashicorp.com/tutorials/nomad/stateful-workloads-csi-volumes
It appears this CSI does not support Kubernetes volume populators, which allow custom resources to populate volumes. In my case, I have an up-and-running k8s cluster with a Synology storage backend, and the synology-csi driver is able to create persistent volumes without any issues.
However, I have trouble populating volumes using the containerized-data-importer CRD, which relies heavily on the volume populator and dataSourceRef features of the k8s volume specification.
Steps to reproduce:
$ k describe pvc example-import-dv
Name: example-import-dv
Namespace: default
StorageClass: synology-csi-retain
Status: Pending
Volume:
Labels: alerts.k8s.io/KubePersistentVolumeFillingUp=disabled
app=containerized-data-importer
app.kubernetes.io/component=storage
app.kubernetes.io/managed-by=cdi-controller
Annotations: cdi.kubevirt.io/storage.contentType: kubevirt
cdi.kubevirt.io/storage.pod.phase: Pending
cdi.kubevirt.io/storage.preallocation.requested: false
cdi.kubevirt.io/storage.usePopulator: true
volume.beta.kubernetes.io/storage-provisioner: csi.san.synology.com
volume.kubernetes.io/storage-provisioner: csi.san.synology.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
DataSource:
APIGroup: cdi.kubevirt.io
Kind: VolumeImportSource
Name: volume-import-source-22af4f80-2646-4e85-b19b-90d7006f29e5
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreatedPVCPrimeSuccessfully 6m21s import-populator PVC Prime created successfully
Normal Provisioning 3m15s (x4 over 6m21s) csi.san.synology.com_yvr5-lf09_cdf5a02e-5e95-475b-a438-d40632de8ac3 External provisioner is provisioning volume for claim "default/example-import-dv"
Normal Provisioning 3m15s (x4 over 6m21s) external-provisioner Assuming an external populator will provision the volume
Normal ExternalProvisioning 56s (x26 over 6m21s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi.san.synology.com" or manually created by system administrator
The CRD creates an intermediate data-importer pod, and this pod in turn creates intermediate volumes. The data-importer pod fails to start because it fails to attach its intermediate volume. Here are the events visible on the importer pod:
Warning FailedMount 2m34s kubelet Unable to attach or mount volumes: unmounted volumes=[cdi-data-vol], unattached volumes=[cdi-data-vol kube-api-access-7h9n6]: timed out waiting for the condition
Warning FailedMapVolume 65s (x13 over 11m) kubelet MapVolume.SetUpDevice failed for volume "pvc-5194ebf0-8cca-4a78-9459-02afa104ac3e" : kubernetes.io/csi: blockMapper.SetUpDevice failed to get CSI client: driver name csi.san.synology.com not found in the list of registered CSI drivers
Warning FailedMount 19s (x4 over 9m20s) kubelet Unable to attach or mount volumes: unmounted volumes=[cdi-data-vol], unattached volumes=[kube-api-access-7h9n6 cdi-data-vol]: timed out waiting for the condition
make test in deploy/helm currently fails because the StorageClass used in the test script does not match the one predefined in values.yaml. After renaming the storageClass synology-iscsi-storage-delete to synology-csi-delete, as required by the test, the test passes.
It looks like 2FA is now mandatory, and my CSI user with admin group rights fails to connect to DSM because it only passes the first phase of the 2FA authentication, as seen in the logs...
I'm trying to make my Synology CSI iSCSI setup work, but not getting it through.
I0906 20:28:00.271711 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-pvc-tshoot-iscsi-03", UID:"6aa6fcf4-50b0-43ab-bd6c-xxxxxxxx", APIVersion:"v1", ResourceVersion:"637548", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "synostorage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
I0906 20:28:00.272002 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-pvc-tshoot-iscsi-01", UID:"8a9e2772-49f5-402a-a7ad-b32034xxxxxxx", APIVersion:"v1", ResourceVersion:"637588", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "synology-iscsi-storage": rpc error: code = Internal desc = Couldn't find any host available to create Volume
Below is the Synology log showing my worker node trying to connect to the DSM. Only the first authentication, via password, passed.
09/06/2023 13:59:27 Info synology02 synology-k3s-csi Connection User [synology-k3s-csi] from [192.168.0.39] has successfully passed the first authentication of 2FA via [password] 09/06/2023 13:59:26 Info synology02 synology-k3s-csi Connection User [synology-k3s-csi] from [192.168.0.39] has successfully passed the first authentication of 2FA via [password] 09/06/2023 13:59:26 Info synology02 synology-k3s-csi Connection User [synology-k3s-csi] from [192.168.0.39] has successfully passed the first authentication of 2FA via [password] 09/06/2023 13:59:25 Info synology02 synology-k3s-csi Connection User [synology-k3s-csi] from [192.168.0.39] has successfully passed the first authentication of 2FA via [password] 09/06/2023 13:58:40 Info synology02 SYSTEM System System successfully stopped [SSH service].
When creating a PVC using the SMB/CIFS StorageClass, there is no way to create an encrypted share; the shares are created unencrypted. To satisfy ISO requirements, data at rest needs to be encrypted.
After the unencrypted share is created, I can manually encrypt the shared folder. However, when the volume is deleted, the synology-csi controller is then not able to automatically delete the share. If I leave the share unencrypted, synology-csi can correctly delete the share when the volume is deleted.
Is it possible to either:
I got most of it to work.
I can see the LUN, and the iSCSI target gets created, even though the controller logs say there was an error.
But it can't map the two together, and if I try it myself through the web interface with that LUN, I get:
lun [xxxx] is not available due to load fail
If I create a LUN with the same name and map it to the target myself, it seems I can get it to create a PV in the cluster and bind the PVC.
So maybe something changed in the API for DSM 7.0.1-42218 Update 2.
See
and quite a few other places. Please do not use Always for tags representing stable versions like 1.0.1 in this case. While Always is very convenient for you to use during development, it may create unreproducible builds downstream if you ever push an update to a supposedly stable image.
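A minimal sketch of the alternative (container and image names are placeholders): pin the stable tag and use a pull policy that does not re-fetch it on every start:

# Hypothetical container spec for a stable release
containers:
  - name: csi-plugin
    image: synology/synology-csi:v1.0.1
    imagePullPolicy: IfNotPresent   # never silently pick up a re-pushed "stable" tag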
Containers using non-zero UIDs run into permission errors, and there seem to be no settings to remedy this. iSCSI has the 10-PVC limit, which is not viable.
This can usually be fixed by setting:
securityContext:
  runAsUser: 0
But optimally, any user in that pod should be able to write, or it should be configurable.