metal-stack / csi-lvm

Kubernetes CSI for bare metal deployments; uses local storage.

License: MIT License


csi-lvm's Introduction

CSI LVM Provisioner


Overview

This driver has been replaced by csi-driver-lvm; all further development happens there.

CSI LVM Provisioner utilizes local storage of Kubernetes nodes to provide persistent storage for pods.

It automatically creates hostPath-based persistent volumes on the nodes and makes use of the Local Persistent Volume feature introduced in Kubernetes 1.10, but it is simpler to use than the built-in local volume feature.

Underneath, it creates an LVM logical volume on the local disks. The disks to use can be specified with a grok pattern.
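Conceptually, provisioning a volume boils down to standard LVM and mount operations on the node. The following is only a rough sketch, not the exact commands the provisioner runs: the volume group name csi-lvm and the mount path /tmp/csi-lvm/<pv> appear in the deployment shown later in this document, while ext4 and the example device names are assumptions.

# one-time: build a volume group from the devices matching the grok pattern
vgcreate csi-lvm /dev/nvme0n1 /dev/nvme1n1
# per PVC: create a logical volume, format it and mount it under the hostPath
lvcreate -n pvc-<uid> -L 50m csi-lvm
mkfs.ext4 /dev/csi-lvm/pvc-<uid>
mount /dev/csi-lvm/pvc-<uid> /tmp/csi-lvm/pvc-<uid>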

This Provisioner is derived from the Local Path Provisioner.

Comparison to Local Path Provisioner

Pros

Dynamic provisioning of volumes using host path.

  • Currently the Kubernetes Local Volume provisioner cannot do dynamic provisioning for host path volumes.
  • Support for volume capacity limits.
  • Performance speedup if more than one local disk is available, because it can create LVs that are striped across all physical volumes.

Requirement

Kubernetes v1.12+.

Deployment

Installation

The deployment consists of two parts:

  • A controller deployment, which is registered as a storage controller and schedules the creation and deletion of volumes
  • A reviver daemonset, which is responsible for re-creating the mount structure after a reboot

In this setup, the directory /tmp/csi-lvm/<name of the pv> will be used across all the nodes as the path for provisioning (i.e., to store the persistent volume data). The provisioner will be installed in the csi-lvm namespace by default.

The default grok pattern for disks to use is /dev/nvme[0-9]n*. Please check whether this matches your setup; otherwise, copy controller.yaml to your local machine and modify the value of CSI_LVM_DEVICE_PATTERN accordingly.

- name: CSI_LVM_DEVICE_PATTERN
  value: "/dev/nvme[0-9]n*"

Once this is set, you can install csi-lvm with:

kubectl apply -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/deploy/controller.yaml
kubectl apply -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/deploy/reviver.yaml

After installation, you should see something like the following:

$ kubectl -n csi-lvm get pod
NAME                                     READY     STATUS    RESTARTS   AGE
csi-lvm-controller-d744ccf98-xfcbk       1/1       Running   0          7m
csi-lvm-reviver-ndh46                    1/1       Running   0          7m

Check and follow the provisioner log using:

$ kubectl -n csi-lvm logs -f csi-lvm-controller-d744ccf98-xfcbk
I1021 14:09:31.108535       1 main.go:132] Provisioner started
I1021 14:09:31.108830       1 leaderelection.go:235] attempting to acquire leader lease  csi-lvm/metal-stack.io-csi-lvm...
I1021 14:09:31.121305       1 leaderelection.go:245] successfully acquired lease csi-lvm/metal-stack.io-csi-lvm
I1021 14:09:31.124339       1 controller.go:770] Starting provisioner controller metal-stack.io/csi-lvm_csi-lvm-controller-7f94749d78-t5nh8_17d2f7ef-1375-4e36-aa71-82e237430881!
I1021 14:09:31.126248       1 event.go:258] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"csi-lvm", Name:"metal-stack.io-csi-lvm", UID:"04da008c-36ec-4966-a4f6-c2028e69cdd5", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' csi-lvm-controller-7f94749d78-t5nh8_17d2f7ef-1375-4e36-aa71-82e237430881 became leader
I1021 14:09:31.225917       1 controller.go:819] Started provisioner controller metal-stack.io/csi-lvm_csi-lvm-controller-7f94749d78-t5nh8_17d2f7ef-1375-4e36-aa71-82e237430881!

Usage

Create a hostPath-backed persistent volume claim and a pod that uses it:

kubectl create -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pvc.yaml
kubectl create -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pod.yaml
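For reference, the example PVC requests 50Mi with storage class csi-lvm (the same manifest is shown again further below), and the example pod mounts it at /data. A minimal sketch of such a pod follows; the busybox image and the sleep command are assumptions, the authoritative manifest is example/pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: busybox                  # assumption: any image with a shell works
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data              # the path written to in the steps below
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: lvm-pvc            # the PVC created by example/pvc.yaml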

You should see the PV has been created:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                    STORAGECLASS   REASON    AGE
pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   50Mi       RWO            Delete           Bound     default/lvm-pvc          csi-lvm                  4s

The PVC has been bound:

$ kubectl get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-pvc          Bound     pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   50Mi       RWO            csi-lvm        16s

And the Pod started running:

$ kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
volume-test   1/1       Running   0          3s

Write something into the pod:

kubectl exec volume-test -- sh -c "echo lvm-test > /data/test"

Now delete the pod using:

kubectl delete -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pod.yaml

After confirming that the pod is gone, recreate it using:

kubectl create -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pod.yaml

Check the volume content:

$ kubectl exec volume-test -- cat /data/test
lvm-test

Delete the pod and the PVC:

kubectl delete -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pvc.yaml
kubectl delete -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/example/pod.yaml

The volume content stored on the node will be automatically cleaned up. You can check the log of csi-lvm-controller-xxx for details.
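If you want to watch the cleanup as it happens, you can follow the controller log by addressing the deployment instead of the generated pod name, for example:

kubectl -n csi-lvm logs -f deploy/csi-lvm-controller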

Now you've verified that the provisioner works as expected.

Configuration

The configuration of the csi-lvm-controller is done via environment variables:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-lvm-controller
  namespace: csi-lvm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: csi-lvm-controller
  template:
    metadata:
      labels:
        app: csi-lvm-controller
    spec:
      serviceAccountName: csi-lvm-controller
      containers:
      - name: csi-lvm-controller
        image: ghcr.io/metal-stack/csi-lvm-controller:v0.6.3
        imagePullPolicy: IfNotPresent
        command:
        - /csi-lvm-controller
        args:
        - start
        env:
        - name: CSI_LVM_PROVISIONER_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CSI_LVM_PROVISIONER_IMAGE
          value: "ghcr.io/metal-stack/csi-lvm-provisioner:v0.6.3"
        - name: CSI_LVM_DEVICE_PATTERN
          value: "/dev/loop[0,1]"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-lvm-reviver
  namespace: csi-lvm
spec:
  selector:
    matchLabels:
      app: csi-lvm-reviver
  template:
    metadata:
      labels:
        app: csi-lvm-reviver
    spec:
      serviceAccountName: csi-lvm-reviver
      containers:
      - name: csi-lvm-reviver
        image: ghcr.io/metal-stack/csi-lvm-provisioner:v0.6.3
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        env:
          - name: CSI_LVM_MOUNTPOINT
            value: "/tmp/csi-lvm"
        command:
        - /csi-lvm-provisioner
        args:
        - revivelvs
        volumeMounts:
          - mountPath: /tmp/csi-lvm
            name: data
            mountPropagation: Bidirectional
          - mountPath: /dev
            name: devices
          - mountPath: /lib/modules
            name: modules
      volumes:
        - hostPath:
            path: /tmp/csi-lvm
            type: DirectoryOrCreate
          name: data
        - hostPath:
            path: /dev
            type: DirectoryOrCreate
          name: devices
        - hostPath:
            path: /lib/modules
            type: DirectoryOrCreate
          name: modules

Definition

CSI_LVM_DEVICE_PATTERN is a grok pattern that specifies which block devices on the node to use for LVM. This can be, for example, /dev/sd[bcde] if you want to use only /dev/sdb - /dev/sde. IMPORTANT: no wildcard (*) is currently allowed.
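As an illustration, restricting the provisioner to /dev/sdb - /dev/sde would look like this in the controller deployment (example values, adjust to your hardware):

- name: CSI_LVM_DEVICE_PATTERN
  value: "/dev/sd[bcde]"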

PVC Striped, Mirrored

By default, the LVs are created in linear mode on the devices specified by the grok pattern, beginning with the first device found. If that device is full, the next LV will be created on the next device, and so forth.

If more than one device is found with the given pattern, two more options for the created LVs are available (example PVCs follow below):

  • mirror: all blocks will be mirrored with one additional copy on an additional disk, if more than one disk is present.
  • striped: the PVC will be striped across all block devices found by the above grok pattern. If, for example, 4 disks were found, all written blocks are spread across the 4 devices in chunks. This gives roughly 4 times the read/write performance for the volume, but also a 4 times higher risk of data loss if a single disk fails.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-lvm
  resources:
    requests:
      storage: 50Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc-striped
  namespace: default
  annotations:
    csi-lvm.metal-stack.io/type: "striped"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-lvm
  resources:
    requests:
      storage: 50Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc-mirrored
  namespace: default
  annotations:
    csi-lvm.metal-stack.io/type: "mirror"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-lvm
  resources:
    requests:
      storage: 50Mi

Uninstall

Before uninstalling, make sure the PVs created by the provisioner have already been deleted. Use kubectl get pv and make sure no PV with StorageClass csi-lvm remains.
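A quick way to check this is to list all PVs and filter for the csi-lvm storage class, for example:

kubectl get pv | grep csi-lvm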

To uninstall, execute:

kubectl delete -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/deploy/controller.yaml
kubectl delete -f https://raw.githubusercontent.com/metal-stack/csi-lvm/master/deploy/reviver.yaml

Migration

If you want to migrate your existing PVCs to or from csi-driver-lvm, you can use korb.

csi-lvm's People

Contributors

gerrit91, majst01, mwennrich, mwindower, robertvolkmann


csi-lvm's Issues

Support arm64 and arm docker images

Thanks for this project, it is great. I would like to run this on some IoT devices.
Can you please also provide Docker images for arm64 and arm?

controller tries to provision deletion pods to non-existing workers

After a rolling replacement of the cluster workers, the controller tries to schedule deletion pods on the old, now non-existing, worker:

apiVersion: v1
kind: Pod
metadata:
  name: delete-pvc-06aa7d26-114f-4527-8de4-28eabde6a66b
  namespace: csi-lvm
spec:
  containers:
  - args:
    - deletelv
    - --lvname
    - pvc-06aa7d26-114f-4527-8de4-28eabde6a66b
    - --vgname
    - csi-lvm
    - --directory
    - /tmp/csi-lvm
    command:
    - /csi-lvm-provisioner
(...)
  nodeName: shoot--test--fra-equ01-default-worker-7b46975cfb-cwsnx
status:
  phase: Pending

This pod will stay pending since the node doesn't exist.

$ kubectl get nodes
NAME                                                STATUS   ROLES   AGE    VERSION
shoot--test--fra-equ01-default-worker-5f9b5-75z79   Ready    node    3d5h   v1.18.10
I1106 12:51:07.335238       1 controller.go:392] provisioner pod status:Pending
I1106 12:51:07.452912       1 controller.go:392] provisioner pod status:Pending
I1106 12:51:08.232826       1 controller.go:205] clean up volume pvc-70102302-474e-4218-abbb-743146afe8fa failed: create process timeout after 20 seconds
E1106 12:51:08.232868       1 controller.go:1346] delete "pvc-70102302-474e-4218-abbb-743146afe8fa": volume deletion failed: create process timeout after 20 seconds

The controller will start the delete-provision pod over and over again, until somebody deletes the PV.

Maybe we should check here whether the node really exists, and if not, either report an error and abort, or just report the volume as successfully deleted?

reviver didn't revive

image: r.metal-stack.io/csi-lvm-provisioner:v0.6.3

I0509 07:33:08.838192      42 main.go:40] starting csi-lvm-provisioner
I0509 07:33:08.838374      42 revivelvs.go:68] starting reviver
I0509 07:33:08.913316      42 createlv.go:221] compare vg:csi-lvm with:csi-lvm
I0509 07:33:09.102402      42 revivelvs.go:95] unable to list existing logicalvolumes:strconv.ParseUint: parsing "-1": invalid syntax
I0509 07:33:09.176831      42 revivelvs.go:57] vgs output:  VG      #PV #LV #SN Attr   VSize VFree
  csi-lvm   2  36   0 wz--n- 2.91t 1.79t
I0509 07:33:09.268702      42 revivelvs.go:63] lvs output:  LV                                               VG      Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pvc-001e1d09-9673-48f0-b1e8-2ecf72ad0393         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-00bf0c07-37f5-4a55-b03e-21a9be004d45         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-06e0a84e-9a2b-4312-ab9e-e425a1fdcb7d         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-09c76b4b-3b54-46ee-ac46-c54f94f9ae2c         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-0ac3cde9-819b-43d2-966d-c0a06c0690c9         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-13bc3e1e-e7b0-4091-af73-7ae87d009d60         csi-lvm Rwi-aor--- 10.00g                                    100.00
  pvc-1768ea7f-b934-4cb1-b8c2-5e0f51348de4         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-1e93adf7-d80b-46fb-985f-766f5cb8fdfd         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-1f35dec2-f628-4d89-9c1a-1c8c2ad01f7a         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-1fa75c11-5ef1-4c77-8eb7-2e46d6a4c09b         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-220a770f-9c0c-4dc9-97bc-4462916206f4         csi-lvm Rwi-aor--- 10.00g                                    100.00
  pvc-2b02006d-c5b1-4da5-8612-5aba18769484         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-3d8438ce-7d53-4f05-819c-d931256e19ae         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-3f9b8ca9-c4e3-462d-a49f-ffd11f242ae9         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-59a4f792-a88b-486e-b681-098f0e92f515         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-629b7cff-33d3-4703-b11b-d1ad42b309e4         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-646a1aeb-a6fc-4e52-95e7-a9192c8a3523         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-6641133f-b3cd-4a3b-ad11-50867de9be73         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-70b9d12e-509f-4117-a90d-e93f246dd100         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-8b12d9f0-714b-4590-b64a-cc188ae003ea         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-8b81a791-41de-422a-89ab-99a0d40a5b9b         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-a80e0a2c-f550-4a3d-93cc-04854b5ae7c6         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-a9eff2c9-65e0-4eaf-8bb8-69b21d82a111         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-b2d7daeb-a603-48ad-8608-fc74d9c8d40a         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-b9c4e19b-aa82-4599-8db2-eaa4b78699b3         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-ba82b013-7bcf-46f4-b891-781eaa655a8f         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-c16a52f9-9802-4c5f-a9bd-e309b5d4372d         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-c3f501e3-46ec-46a6-b74e-e44333c30a7c         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-c5fe94d1-6f21-4fe2-b0f8-f0d4f7a51e5a         csi-lvm Rwi-aor--- 10.00g                                    100.00
  pvc-c64c6c34-b42c-4d96-9dee-10dbf33685c4         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-d901d784-1503-43b5-ba5f-0472d72638e4         csi-lvm Rwi-aor--- 16.00g                                    100.00
  pvc-e34b6094-6397-4a90-91ab-e5ea1b980d5c         csi-lvm Rwi---r--k 10.00g
  pvc-e34b6094-6397-4a90-91ab-e5ea1b980d5c_rmeta_0 csi-lvm ewi-a-r-r-  4.00m
  pvc-e34b6094-6397-4a90-91ab-e5ea1b980d5c_rmeta_1 csi-lvm ewi-a-r-r-  4.00m
  pvc-ef8dca68-f033-4a9d-bfc2-6451d073d559         csi-lvm Rwi-aor--- 20.00g                                    100.00
  pvc-f49c05b6-5e34-49f3-865e-fb1418170726         csi-lvm Rwi-aor--- 16.00g                                    100.00

volume group backup file must be persisted

By default lvm2 stores a copy of the metadata in /etc/lvm/backup, but this directory is ephemeral in our current setup. To solve this, we must mount the hostPath /etc/lvm/backup to the same location in the provisioner and reviver pods.

This enables us to repair a volume with corrupted metadata, should that occur for whatever reason.
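A sketch of what this could look like in the reviver DaemonSet, mirroring the existing hostPath mounts (the volume name lvm-backup is made up here, and the provisioner pods would need the same addition):

        volumeMounts:
          - mountPath: /etc/lvm/backup
            name: lvm-backup
      volumes:
        - hostPath:
            path: /etc/lvm/backup
            type: DirectoryOrCreate
          name: lvm-backup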

Check if the usage of PriorityClasses will help to ensure that the reviver gets started before all other Pods

We suspect that with the current setup there is a chance that a pod with PVs will be scheduled on a node before the reviver has started there and all PVCs have been activated and mounted.

Kubernetes has the concept of PriorityClasses; this might be the right approach.

Documentation:
https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/

There is a predefined PriorityClass which already seems to be the right one:
system-node-critical
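If that approach is taken, it would amount to setting priorityClassName in the reviver DaemonSet pod spec, roughly like this:

    spec:
      priorityClassName: system-node-critical   # predefined class for node-critical workloads
      serviceAccountName: csi-lvm-reviver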

block devices do not get fully unmounted

delete-pvc-18285b91-fdf0-408c-82e5-c6be034caef0 csi-lvm-delete I0317 15:16:03.102191       1 deletelv.go:61] delete lv pvc-18285b91-fdf0-408c-82e5-c6be034caef0 vg:csi-lvm dir:/tmp/csi-lvm block:true
delete-pvc-18285b91-fdf0-408c-82e5-c6be034caef0 csi-lvm-delete E0317 15:16:03.105274       1 deletelv.go:83] unable to umount /dev/csi-lvm/pvc-18285b91-fdf0-408c-82e5-c6be034caef0 from /tmp/csi-lvm/pvc-18285b91-fdf0-408c-82e5-c6be034caef0 output:umount: /dev/csi-lvm/pvc-18285b91-fdf0-408c-82e5-c6be034caef0: not mounted.
delete-pvc-18285b91-fdf0-408c-82e5-c6be034caef0 csi-lvm-delete  err:exit status 32
delete-pvc-18285b91-fdf0-408c-82e5-c6be034caef0 csi-lvm-delete E0317 15:16:03.105342       1 deletelv.go:87] unable to remove mount directory:/tmp/csi-lvm/pvc-18285b91-fdf0-408c-82e5-c6be034caef0 err:remove /tmp/csi-lvm/pvc-18285b91-fdf0-408c-82e5-c6be034caef0: device or resource busy
delete-pvc-18285b91-fdf0-408c-82e5-c6be034caef0 csi-lvm-delete I0317 15:16:03.725838       1 deletelv.go:72] lv pvc-18285b91-fdf0-408c-82e5-c6be034caef0 vg:csi-lvm deleted
udev on /tmp/csi-lvm/pvc-df8b3c9c-57d0-4830-abac-2cdce07429de type devtmpfs (rw,nosuid,relatime,size=48779228k,nr_inodes=12194807,mode=755)
udev on /tmp/csi-lvm/pvc-bc5dd492-5101-4310-89db-6df1a3c1f4ca type devtmpfs (rw,nosuid,relatime,size=48779228k,nr_inodes=12194807,mode=755)
udev on /tmp/csi-lvm/pvc-18285b91-fdf0-408c-82e5-c6be034caef0 type devtmpfs (rw,nosuid,relatime,size=48779228k,nr_inodes=12194807,mode=755)

/ # lvs
/ # vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  csi-lvm   1   0   0 wz--n- 745.21g 745.21g

Filesystem           1K-blocks      Used Available Use% Mounted on
udev                  48779228         0  48779228   0% /tmp/csi-lvm/pvc-df8b3c9c-57d0-4830-abac-2cdce07429de
udev                  48779228         0  48779228   0% /tmp/csi-lvm/pvc-bc5dd492-5101-4310-89db-6df1a3c1f4ca
udev                  48779228         0  48779228   0% /tmp/csi-lvm/pvc-18285b91-fdf0-408c-82e5-c6be034caef0

pv-deletion fails if the lv does not exist anymore

delete-pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 csi-lvm-delete I0113 07:34:53.055746       1 main.go:40] starting csi-lvm-provisioner
delete-pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 csi-lvm-delete I0113 07:34:53.055916       1 deletelv.go:61] delete lv pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 vg:csi-lvm dir:/tmp/csi-lvm block:false
delete-pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 csi-lvm-delete E0113 07:34:53.057827       1 deletelv.go:83] unable to umount /tmp/csi-lvm/pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 from /dev/csi-lvm/pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 output:umount: /tmp/csi-lvm/pvc-0ca659b5-f472-4619-8ff7-c993d23fc800: no mount point specified.
delete-pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 csi-lvm-delete  err:%!w(*exec.ExitError=&{0xc00012e810 []})
delete-pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 csi-lvm-delete E0113 07:34:53.057914       1 deletelv.go:87] unable to remove mount directory:/tmp/csi-lvm/pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 err:%!w(*fs.PathError=&{remove /tmp/csi-lvm/pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 2})
delete-pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 csi-lvm-delete F0113 07:34:53.450816       1 deletelv.go:38] Error deleting lv: unable to delete lv: failed to list LVs: exit status 5 output:
delete-pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 csi-lvm-delete goroutine 1 [running]:
delete-pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 csi-lvm-delete k8s.io/klog/v2.stacks(0xc000136001, 0xc000214140, 0x80, 0x12f)
delete-pvc-0ca659b5-f472-4619-8ff7-c993d23fc800 csi-lvm-delete  /go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1020 +0xb9

Support multiple volume groups via storage classes

Thanks for this project, very useful!

At the moment, the disks to use are hardcoded via an env var in the controller.

This is a somewhat limiting design.
Concretely, I need support for two separate volume groups: fast (SSDs) and slow (HDDs).

Proposed design: the StorageClass contains all the necessary mapping in its parameters.

The disks shouldn't need to be hardcoded at all by default.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-arbitrary-name
parameters:
  volumeGroup: fast-disks
  # Purely optional, for restricting:
  disks: ["sdc", "sde"]
provisioner: metal-stack.io/csi-lvm
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

Deploy on Production environments

Hi,

I have been working with this CSI over a week. I have some questions:

  1. Is the Docker image not stored in any public registry?
  2. Could the location /tmp/csi-lvm be changed, for example to /var/lib/csi-lvm? Because /tmp is erased after a reboot.

I have attached a new disk to my virtual machine (/dev/sdb) and replaced /dev/loop[0,1] with it in the CSI LVM controller.

 - name: CSI_LVM_DEVICE_PATTERN
   value: "/dev/sdb1"

I have deployed the controller, reviver, PVC and pod. It worked; however, after rebooting the virtual machine, the data was erased.

I would like to know the steps that I need in order to deploy this tool on production environments.

Thanks in advance,

Regards

Support custom filesystem type and mount options

such as:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-lvm
parameters:
  fsType: ext4/xfs/btrfs
  mkfsOptions: xxxxx
  mountOptions: xxxxx
provisioner: metal-stack.io/csi-lvm
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
