openebs-archive / device-localpv
CSI Driver for using Local Block Devices
License: Apache License 2.0
There has been no update for 9 months, and the README says the project is "alpha".
What are your plans for the future?
Or do you recommend an alternative? Which one?
Hi folks, I see one place (it needs a more exhaustive look) where the error handling is not done correctly. Can anyone pick it up?
Returning a nil error from findBestPart and GetFreeCapacity results in invoking the following parted command (note the empty device name after /dev/):
device-util.go:312] Device LocalPV: could not Run command [parted /dev/ mkpart c44356ce-0fe9-472b-9a44-64385d8e6c1e 0MiB 9060352MiB --script]
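A hedged sketch of the fix: return an explicit error when no suitable free slot is found, so the caller never builds a parted command line from an empty result. The freeSlot type and field names here are assumptions for illustration; the real signatures in device-util.go may differ.

```go
package main

import "fmt"

// freeSlot is a hypothetical stand-in for a parsed parted free-space entry.
type freeSlot struct {
	DiskName string
	SizeMiB  uint64
}

// findBestPart sketch: instead of returning a nil error together with an
// empty result, report the failure explicitly so the caller can bail out
// before invoking parted with an empty device name.
func findBestPart(slots []freeSlot, sizeMiB uint64) (freeSlot, error) {
	for _, s := range slots {
		if s.SizeMiB >= sizeMiB {
			return s, nil
		}
	}
	return freeSlot{}, fmt.Errorf("no free slot of %d MiB found", sizeMiB)
}

func main() {
	_, err := findBestPart(nil, 9060352)
	fmt.Println(err != nil) // true: the caller gets an error, not a nil
}
```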
With k8s 1.21 and device-localpv, WaitForFirstConsumer PVCs are not getting bound, while the same setup works on k8s 1.20.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 3m34s (x5 over 4m23s) persistentvolume-controller waiting for first consumer to be created before binding
Normal WaitForPodScheduled 4s (x14 over 3m19s) persistentvolume-controller waiting for pod app-busybox-55569b5cc8-nwkxn to be scheduled
The pod stays Pending with this error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 16s default-scheduler 0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) did not have enough free storage.
Warning FailedScheduling 14s default-scheduler 0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) did not have enough free storage.
On the nodes we can see we have a 50G disk (/dev/sde), while the PVC asks for only 4G. There are also no logs in the controller.
k8s@lvm-node1:~$ lsblk | grep sd
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 99.5G 0 part /
sdb 8:16 0 50G 0 disk
sdc 8:32 0 50G 0 disk
sdd 8:48 0 50G 0 disk
sde 8:64 0 50G 0 disk
└─sde1 8:65 0 9M 0 part
sdf 8:80 0 50G 0 disk
I created 5 volumes on a single node and a single disk, /dev/sdc; 5 partitions were created, /dev/sdc2 to /dev/sdc6.
When I deleted these 5 volumes at the same time, I see that all 5 volumes were deleted, but a device partition (/dev/sdc2) is still present on the node.
It doesn't happen every time, but I hit this issue very frequently.
It looks like when multiple deletion requests occur at the same time, some partitions are skipped from deletion.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
loop0 7:0 0 55.5M 1 loop /snap/core18/1988
loop1 7:1 0 51M 1 loop /snap/snap-store/518
loop2 7:2 0 64.8M 1 loop /snap/gtk-common-themes/1514
loop3 7:3 0 55.5M 1 loop /snap/core18/1997
loop4 7:4 0 219M 1 loop /snap/gnome-3-34-1804/66
loop5 7:5 0 65.1M 1 loop /snap/gtk-common-themes/1515
loop6 7:6 0 32.3M 1 loop /snap/snapd/11402
loop7 7:7 0 32.3M 1 loop /snap/snapd/11588
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 99.5G 0 part /
sdb 8:16 0 50G 0 disk
sdc 8:32 0 50G 0 disk
├─sdc1 8:33 0 9M 0 part
└─sdc2 8:34 0 1G 0 part
We can see here that no volume is present in the cluster.
$ k get pvc -A
No resources found
$ k get pv
No resources found
$ k get devicevol -A
No resources found
but on the node, we can see:
$ sudo parted /dev/sdc unit b print --script
Model: VMware Virtual disk (scsi)
Disk /dev/sdc: 53687091200B
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1048576B 10485759B 9437184B e2e-device
2 10485760B 1084227583B 1073741824B 4c1d2a6d-6984-420d-8cfa-f028613e12fd
When I tried deleting the partition manually, it was deleted successfully.
$ sudo parted /dev/sdc rm 2 --script
$ lsblk
sdb 8:16 0 50G 0 disk
sdc 8:32 0 50G 0 disk
└─sdc1 8:33 0 9M 0 part
sdd 8:48 0 50G 0 disk
How to reproduce the issue
Create several volumes on one disk, then delete them all at once with
kubectl delete pvc --all
and check the disk partitions on the nodes. You may not hit it every time, but you will likely hit the issue within 2-3 attempts.

I found two scheduling algorithms, getCapacityWeightedMap among them, in the project. According to actual needs, I plan to implement a scheduling algorithm, getFreeCapacityWeightedMap, which scores nodes by free disk capacity, something like this:
device-localpv/pkg/driver/schd_helper.go:

// key-value struct for creating the node list
type kv struct {
	Key   string
	Value int64
}

func getFreeCapacityWeightedMap(deviceName string) (map[string]int64, error) {
	nmap := map[string]int64{}
	nodeList, err := nodebuilder.NewKubeclient().
		WithNamespace(device.DeviceNamespace).
		List(metav1.ListOptions{})
	if err != nil {
		return nmap, err
	}
	// create the map of the free capacity for the given deviceName
	nFreeMap := map[string]int64{}
	for _, n := range nodeList.Items {
		for _, dev := range n.Devices {
			i, ok := dev.Free.AsInt64()
			if !ok {
				klog.Infof("Disk: free capacity of %s could not be converted to int64", dev.Name)
				continue
			}
			devRegex, err := regexp.Compile(dev.Name)
			if err != nil {
				klog.Infof("Disk: regex compile failure %s, %+v", dev.Name, err)
				continue
			}
			if devRegex.MatchString(deviceName) {
				nFreeMap[n.Name] += i
			}
		}
	}
	var nList []kv
	for k, v := range nFreeMap {
		nList = append(nList, kv{k, v})
	}
	// sort the nodes by free capacity, descending
	sort.Slice(nList, func(i, j int) bool {
		return nList[i].Value > nList[j].Value
	})
	// score nodes by their rank in the sorted list (0 = most free capacity)
	for i, v := range nList {
		nmap[v.Key] = int64(i)
	}
	return nmap, nil
}
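The sort-and-rank core of the proposed scheduler can be exercised in isolation; the node names and capacities below are made up for illustration. Note that with this scheme the node with the most free capacity gets rank 0 (assuming, as in the snippet, that a lower value is preferred).

```go
package main

import (
	"fmt"
	"sort"
)

// scoreByFree ranks nodes by free capacity: the node with the most free
// space gets rank 0, mirroring the sort used in getFreeCapacityWeightedMap.
func scoreByFree(free map[string]int64) map[string]int64 {
	type kv struct {
		key   string
		value int64
	}
	var list []kv
	for k, v := range free {
		list = append(list, kv{k, v})
	}
	// descending by free capacity
	sort.Slice(list, func(i, j int) bool { return list[i].value > list[j].value })
	scores := map[string]int64{}
	for i, e := range list {
		scores[e.key] = int64(i)
	}
	return scores
}

func main() {
	// node2 has the most free space, so it gets rank 0
	fmt.Println(scoreByFree(map[string]int64{"node1": 10 << 30, "node2": 50 << 30, "node3": 30 << 30}))
	// map[node1:2 node2:0 node3:1]
}
```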
It queries the free disk capacity on all nodes and scores them so that nodes with larger free disk capacity are preferred.
To do this, I will also modify the code that creates the DeviceNode CRD:
device-localpv/pkg/mgmt/devicenode/devicenode.go:

// syncNode is the function which tries to converge to a desired state for the
// DeviceNode
func (c *NodeController) syncNode(namespace string, name string) error {
	...
	if node == nil { // if it doesn't exist, create the device node object
		if devices == nil {
			devices = []apis.Device{}
		}
		if node, err = nodebuilder.NewBuilder().
			WithNamespace(namespace).WithName(name).
			WithDevices(devices).
			WithOwnerReferences(c.ownerRef).
			Build(); err != nil {
			return err
		}
		klog.Infof("device node controller: creating new node object for %+v", node)
		if node, err = nodebuilder.NewKubeclient().WithNamespace(namespace).Create(node); err != nil {
			return fmt.Errorf("create device node %s/%s: %v", namespace, name, err)
		}
		klog.Infof("device node controller: created node object %s/%s", namespace, name)
		return nil
	}
	...
}
The added code is:

	if devices == nil {
		devices = []apis.Device{}
	}

This avoids DeviceNode creation failure when devices == nil. The purpose is to let getFreeCapacityWeightedMap also query nodes without free capacity; when scoring by free capacity, such nodes get a very low score, which prevents PVs from being scheduled onto them.
That's my plan; I'm looking forward to your suggestions and requests.
Describe the bug
Provisioned a fio application with the CSI operator YAML; the pod goes into Pending state.
Ran the command parted /dev/nvme1n1 unit b print --script -m inside the daemonset pod openebs-device-node, and on the node against the attached volume.
$kubectl exec -it openebs-device-node-phwqw -c openebs-device-plugin -n kube-system -- sh
/ # parted /dev/nvme1n1 unit b print --script -m
BYT;
/dev/nvme1n1:107374182400B:nvme:512:512:gpt:Amazon Elastic Block Store:;
1:1048576B:10485759B:9437184B::test-device:msftdata;
On the node, against the attached volume:
root@ip-172-31-40-216 ~]# parted /dev/nvme1n1 unit b print --script -m
BYT;
/dev/nvme1n1:107374182400B:nvme:512:512:gpt:NVMe Device:;
1:1048576B:10485759B:9437184B::test-device:;
The output differs between the two places.
Expected behavior
The parted /dev/nvme1n1 unit b print --script -m command output should be the same in both places, without the msftdata flag data.
Environment:
NAME="Amazon Linux"
ID="amzn"
ID_LIKE="centos rhel fedora"
Screenshot :
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
fio 0/1 Pending 0 87m
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-devicepv openebs-device-sc 89m
I think there might be a problem with how the free part size is calculated; my disk partitions look like this:
$ sudo parted /dev/sdb unit b print free --script -m
BYT;
/dev/sdb:10737418240B:scsi:512:512:gpt:ATA VBOX HARDDISK:;
1:17408B:1048575B:1031168B:free;
1:1048576B:10485759B:9437184B::test-device:;
2:10485760B:1084227583B:1073741824B:ext4:301d32a8-b9b0-4c60-9585-aa4ee7364e2b:;
3:1084227584B:4305453055B:3221225472B:ext4:f9ee1a0a-e2b9-498f-9d30-97ece3195e81:;
1:4305453056B:7526678527B:3221225472B:free;
5:7526678528B:8600420351B:1073741824B:ext4:ab6a7db4-cc74-4e72-b6aa-f5a09b0ef58b:;
1:8600420352B:10737401343B:2136980992B:free;
In fact, the free part size should be calculated as 3221225472B = 7526678527B - 4305453056B + 1B, since parted prints inclusive end offsets.
Then, in findBestPart, the judgment if tmp.SizeMiB > partSize ignores the case of exactly equal capacity.
These two problems combine into a bug: I cannot apply for another 3Gi PVC:
➜ kubectl get pvc csi-devicepv1-hello1-1 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: "2022-07-22T07:18:07Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: hello1
name: csi-devicepv1-hello1-1
namespace: default
resourceVersion: "10121"
uid: df90aafe-504d-40b8-bb5d-0c85458a2e68
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
storageClassName: openebs-device-sc
volumeMode: Filesystem
status:
phase: Pending
The PVC will always be in a Pending state.
This pull request is used to fix this bug: #57.
Hello, does this project only support Ubuntu? What about CentOS 7.9? I installed it on CentOS 7.9 but haven't been able to run a successful experiment. Do I have to use the Ubuntu operating system?
This is more of a feature request than an issue or bug.
When I create a StatefulSet or another component and just specify a volumeClaimTemplate in order to automatically create the BDC, PV, and PVC, OpenEBS creates PVCs with the storage size specified in the volumeClaimTemplate, which leads to inefficient usage of the block device.
Let's say I have 4 block devices in my Kubernetes cluster, each 5G in size.
When I create, for example, a minio cluster with 4 nodes with the following volumeClaimTemplate:
volumeClaimTemplate:
  apiVersion: v1
  kind: persistentvolumeclaims
  metadata:
    creationTimestamp: null
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: minio-openebs-local-device
it will create 4 PVCs with 1Gi size, each connected to one block device, and will 'lock' the block devices so I will not be able to create other PVs/PVCs.
This means I will use only 1Gi of each of my 5Gi drives.
So my request is the following:
since we can create only one PV/PVC per block device, it would be more efficient to get all the storage we have.
In my example, even though I requested 1Gi, my volumes should be 5Gi in size.
I see we are not propagating the volume provisioning error from the node agent to the controller to enable volume rescheduling on some other node (as we do in the LVM LocalPV plugin). Can we fix this so that a provisioning failure triggers rescheduling to another node?
I followed the README.md settings and observed that the final code executes MountFilesystem
. How can I use MountBlock
?
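In Kubernetes, a raw block volume is requested by setting volumeMode: Block on the PVC; if the driver supports raw block volumes, the node plugin then takes the block-publish path (MountBlock) instead of MountFilesystem. A minimal sketch, with the storage class name as a placeholder:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-devicepv-block
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block          # Block instead of the default Filesystem
  resources:
    requests:
      storage: 4Gi
  storageClassName: openebs-device-sc   # placeholder; use your device SC
```

The consuming pod then mounts it via volumeDevices (with a devicePath) rather than volumeMounts.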
We observed a Go runtime panic within the device-localpv node plugin. Just putting it out here to see if anyone can take a look at it.
2021-06-07T16:57:10.817343241+05:30 stderr F I0607 16:57:10.817191 1 device-util.go:170] GetPart Error, {DiskName:sda Size:10737418240}
2021-06-07T16:57:10.91741737+05:30 stderr F I0607 16:57:10.917210 1 device-util.go:288] Disk: DiskName not correct
2021-06-07T16:57:10.917434131+05:30 stderr F I0607 16:57:10.917232 1 device-util.go:170] GetPart Error, {DiskName:sda Size:10737418240}
2021-06-07T16:57:13.019507828+05:30 stderr F I0607 16:57:13.019321 1 device-util.go:288] Disk: DiskName not correct
2021-06-07T16:57:13.019534179+05:30 stderr F I0607 16:57:13.019352 1 device-util.go:170] GetPart Error, {DiskName:sdd Size:3189013217280}
2021-06-07T16:57:13.019604142+05:30 stderr F I0607 16:57:13.019531 1 device-util.go:288] Disk: DiskName not correct
2021-06-07T16:57:13.019619593+05:30 stderr F I0607 16:57:13.019555 1 device-util.go:170] GetPart Error, {DiskName:sdd Size:3189013217280}
2021-06-07T16:57:13.618848367+05:30 stderr F I0607 16:57:13.618659 1 device-util.go:288] Disk: DiskName not correct
2021-06-07T16:57:13.618869735+05:30 stderr F I0607 16:57:13.618686 1 device-util.go:170] GetPart Error, {DiskName:sde Size:12000136527872}
2021-06-07T16:57:13.618873316+05:30 stderr F I0607 16:57:13.618697 1 device-util.go:107] Running WipeFS for Partition sdb 2
2021-06-07T16:57:13.919818946+05:30 stderr F I0607 16:57:13.919580 1 device-util.go:288] Disk: DiskName not correct
2021-06-07T16:57:13.919837104+05:30 stderr F I0607 16:57:13.919603 1 device-util.go:170] GetPart Error, {DiskName:sde Size:12000136527872}
2021-06-07T16:57:13.919840384+05:30 stderr F E0607 16:57:13.919664 1 runtime.go:73] Observed a panic: runtime.boundsError{x:0, y:0, signed:true, code:0x0} (runtime error: index out of range [0] with length 0)
2021-06-07T16:57:13.919843152+05:30 stderr F goroutine 103 [running]:
2021-06-07T16:57:13.919845434+05:30 stderr F k8s.io/apimachinery/pkg/util/runtime.logPanic(0x14b5ca0, 0xc000393a80)
2021-06-07T16:57:13.91985613+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:69 +0x7b
2021-06-07T16:57:13.919858808+05:30 stderr F k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
2021-06-07T16:57:13.919861079+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51 +0x89
2021-06-07T16:57:13.919863579+05:30 stderr F panic(0x14b5ca0, 0xc000393a80)
2021-06-07T16:57:13.919865765+05:30 stderr F /usr/local/opt/go/libexec/src/runtime/panic.go:969 +0x1b9
2021-06-07T16:57:13.919868366+05:30 stderr F github.com/openebs/device-localpv/pkg/device.wipefsAndCreatePart(0xc000cc8cd5, 0x3, 0x2, 0xc0004ac2a4, 0x24, 0x8a4000, 0xc000d085e0, 0xa, 0x0, 0x1)
2021-06-07T16:57:13.919870885+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/device/device-util.go:107 +0x72f
2021-06-07T16:57:13.919873027+05:30 stderr F github.com/openebs/device-localpv/pkg/device.CreateVolume(0xc00073e780, 0x1, 0xc00073e780)
2021-06-07T16:57:13.91987519+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/device/device-util.go:89 +0x3fd
2021-06-07T16:57:13.919877345+05:30 stderr F github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).syncVol(0xc00048cae0, 0xc00073e780, 0x0, 0x0)
2021-06-07T16:57:13.919879533+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:91 +0x4b
2021-06-07T16:57:13.919882012+05:30 stderr F github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).syncHandler(0xc00048cae0, 0xc0005a2340, 0x37, 0xc00083ddb0, 0x11e8954)
2021-06-07T16:57:13.919884183+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:57 +0x13d
2021-06-07T16:57:13.919886428+05:30 stderr F github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).processNextWorkItem.func1(0xc00048cae0, 0x1362b20, 0xc0006ea560, 0x0, 0x0)
2021-06-07T16:57:13.919888691+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:230 +0xd7
2021-06-07T16:57:13.91989105+05:30 stderr F github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).processNextWorkItem(0xc00048cae0, 0x1)
2021-06-07T16:57:13.919893251+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:240 +0x4d
2021-06-07T16:57:13.91989537+05:30 stderr F github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).runWorker(0xc00048cae0)
2021-06-07T16:57:13.919897546+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:191 +0x2b
2021-06-07T16:57:13.91989968+05:30 stderr F k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0006a24f0)
2021-06-07T16:57:13.919901849+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5f
2021-06-07T16:57:13.919903992+05:30 stderr F k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006a24f0, 0x3b9aca00, 0x0, 0x1, 0xc00011c780)
2021-06-07T16:57:13.919906101+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0x105
2021-06-07T16:57:13.91990823+05:30 stderr F k8s.io/apimachinery/pkg/util/wait.Until(0xc0006a24f0, 0x3b9aca00, 0xc00011c780)
2021-06-07T16:57:13.9199107+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
2021-06-07T16:57:13.919912858+05:30 stderr F created by github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).Run
2021-06-07T16:57:13.919914971+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:177 +0x265
2021-06-07T16:57:13.921975666+05:30 stderr F panic: runtime error: index out of range [0] with length 0 [recovered]
2021-06-07T16:57:13.921990187+05:30 stderr F panic: runtime error: index out of range [0] with length 0
2021-06-07T16:57:13.922018166+05:30 stderr F
2021-06-07T16:57:13.922022345+05:30 stderr F goroutine 103 [running]:
2021-06-07T16:57:13.922025998+05:30 stderr F k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
2021-06-07T16:57:13.922030533+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x10c
2021-06-07T16:57:13.922035537+05:30 stderr F panic(0x14b5ca0, 0xc000393a80)
2021-06-07T16:57:13.922039044+05:30 stderr F /usr/local/opt/go/libexec/src/runtime/panic.go:969 +0x1b9
2021-06-07T16:57:13.922046513+05:30 stderr F github.com/openebs/device-localpv/pkg/device.wipefsAndCreatePart(0xc000cc8cd5, 0x3, 0x2, 0xc0004ac2a4, 0x24, 0x8a4000, 0xc000d085e0, 0xa, 0x0, 0x1)
2021-06-07T16:57:13.922050847+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/device/device-util.go:107 +0x72f
2021-06-07T16:57:13.922054326+05:30 stderr F github.com/openebs/device-localpv/pkg/device.CreateVolume(0xc00073e780, 0x1, 0xc00073e780)
2021-06-07T16:57:13.922057886+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/device/device-util.go:89 +0x3fd
2021-06-07T16:57:13.922264779+05:30 stderr F github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).syncVol(0xc00048cae0, 0xc00073e780, 0x0, 0x0)
2021-06-07T16:57:13.922272982+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:91 +0x4b
2021-06-07T16:57:13.922278953+05:30 stderr F github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).syncHandler(0xc00048cae0, 0xc0005a2340, 0x37, 0xc00083ddb0, 0x11e8954)
2021-06-07T16:57:13.922283195+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:57 +0x13d
2021-06-07T16:57:13.922287216+05:30 stderr F github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).processNextWorkItem.func1(0xc00048cae0, 0x1362b20, 0xc0006ea560, 0x0, 0x0)
2021-06-07T16:57:13.922291221+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:230 +0xd7
2021-06-07T16:57:13.922295171+05:30 stderr F github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).processNextWorkItem(0xc00048cae0, 0x1)
2021-06-07T16:57:13.922299196+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:240 +0x4d
2021-06-07T16:57:13.922303001+05:30 stderr F github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).runWorker(0xc00048cae0)
2021-06-07T16:57:13.92230684+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:191 +0x2b
2021-06-07T16:57:13.922310849+05:30 stderr F k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0006a24f0)
2021-06-07T16:57:13.922314835+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5f
2021-06-07T16:57:13.922318811+05:30 stderr F k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006a24f0, 0x3b9aca00, 0x0, 0x1, 0xc00011c780)
2021-06-07T16:57:13.922322745+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0x105
2021-06-07T16:57:13.922326806+05:30 stderr F k8s.io/apimachinery/pkg/util/wait.Until(0xc0006a24f0, 0x3b9aca00, 0xc00011c780)
2021-06-07T16:57:13.922330838+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
2021-06-07T16:57:13.92233473+05:30 stderr F created by github.com/openebs/device-localpv/pkg/mgmt/volume.(*VolController).Run
2021-06-07T16:57:13.922338818+05:30 stderr F /Users/praveen.gt/go/src/github.com/openebs/device-localpv/pkg/mgmt/volume/volume.go:177 +0x265
Even though we have enough capacity on the node (correctly published under the CSI storage capacity resource), PVC creation fails with the following error:
2021-08-26T01:25:12.850848431+05:30 stderr F E0826 01:25:12.850660 1 device-util.go:161] findBestPart Failed
2021-08-26T01:25:12.850852115+05:30 stderr F E0826 01:25:12.850687 1 volume.go:243] error syncing 'openebs-device/pvc-9c9d05b5-3dc1-4201-84bb-6f416532e508': free space of 9060352MiB not found on disk name: xxxx-*, requeuing
Capacity requested by the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
device.csi.openebs.io/csi-volume-name: pvc-9c9d05b5-3dc1-4201-84bb-6f416532e508
volume.beta.kubernetes.io/storage-provisioner: device.csi.openebs.io
volume.kubernetes.io/selected-node: node-xyz
creationTimestamp: "2021-08-25T18:56:08Z"
name: ...
namespace: ...
resourceVersion: "105909663"
uid: 9c9d05b5-3dc1-4201-84bb-6f416532e508
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 9500G
storageClassName: hdd-xxxx-part-ext4-0
volumeMode: Filesystem
status:
phase: Pending
Relevant CSI storage capacity resource -
apiVersion: storage.k8s.io/v1alpha1
capacity: 9060351Mi
kind: CSIStorageCapacity
...
The issue is happening due to the conversion of the PV size to the nearest Gi unit here. Is there any specific reason for doing so? Can we remove the Gi unit conversion?
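The mismatch in the log is consistent with that rounding: 9500G (decimal) rounded up to the nearest Gi is 8848Gi = 9060352MiB, one MiB more than the 9060351Mi the CSIStorageCapacity object advertises. A quick check of the arithmetic:

```go
package main

import "fmt"

func main() {
	const gib = int64(1 << 30)
	requested := int64(9500) * 1000 * 1000 * 1000 // PVC asks for 9500G (decimal GB)
	roundedGi := (requested + gib - 1) / gib      // round up to the nearest Gi
	fmt.Println(roundedGi)        // 8848
	fmt.Println(roundedGi * 1024) // 9060352 MiB requested > 9060351 Mi available
}
```

So the rounded-up request exceeds the advertised capacity by exactly one MiB, and findBestPart can never satisfy it.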