openebs-archive / jiva-operator

Kubernetes Operator for managing Jiva Volumes via custom resource.

Home Page: https://openebs.io

License: Apache License 2.0

Makefile 3.60% Dockerfile 2.13% Shell 5.37% Go 85.83% Mustache 1.07% Jinja 1.45% Python 0.55%
go hacktoberfest kubernetes openebs storage

jiva-operator's People

Contributors

abhilashshetty04, abhinandan-purkait, abhisheksinghbaghel, adamcharnock, asquare14, dbaker-rh, fossabot, ianroberts, kmova, niladrih, nsathyaseelan, payes, prateekpandey14, rajasahil, shazadbrohi, shovanmaity, shubham14bajpai, somesh2905, soniasingla, surajssd, utkarshmani1997, w3aman

jiva-operator's Issues

jiva volume status shows unknown upon continuous restart of replica in a specific node

What steps did you take and what happened:

  • Continuously restarted the replica that was scheduled on a specific node; after that, checking the replica status in the jivavolume CR no longer returns the status of the volume.
d2iq@rack2:~/e2e-konvoy/openebs-konvoy-e2e/stages/5-infra-chaos$ kubectl get jivavolume -n openebs
NAME                                       REPLICACOUNT   PHASE     STATUS
pvc-4870269c-2423-4279-8be9-978d68ac59ac   3              Ready     RW
pvc-5180e0ee-61fb-4685-bb95-a0f3c695032d   3              Ready     RW
pvc-b67872c9-0e9e-4073-b346-2fde70031600                  Unknown  
d2iq@rack2:~/e2e-konvoy/openebs-konvoy-e2e/stages/5-infra-chaos$ kubectl get po -n openebs  -o wide
NAME                                                              READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
containerd-chaos-69cvc                                            1/1     Running   0          29m     192.168.22.156    d2iq-node2   <none>           <none>
containerd-chaos-gcdm8                                            1/1     Running   1          29m     192.168.49.244    d2iq-node5   <none>           <none>
containerd-chaos-jsd58                                            1/1     Running   0          29m     192.168.42.188    d2iq-node3   <none>           <none>
containerd-chaos-l6spd                                            1/1     Running   0          29m     192.168.144.238   d2iq-node1   <none>           <none>
containerd-chaos-xjjzl                                            1/1     Running   0          29m     192.168.205.238   d2iq-node4   <none>           <none>
jiva-operator-5b8b6d668f-lb8v6                                    1/1     Running   1          46m     192.168.22.145    d2iq-node2   <none>           <none>
maya-apiserver-bcfb957d7-gml5s                                    1/1     Running   0          50m     192.168.144.204   d2iq-node1   <none>           <none>
openebs-admission-server-75b69bc554-vgw24                         1/1     Running   0          50m     192.168.205.206   d2iq-node4   <none>           <none>
openebs-jiva-csi-controller-0                                     5/5     Running   0          46m     192.168.205.208   d2iq-node4   <none>           <none>
openebs-jiva-csi-node-6pmnp                                       3/3     Running   4          46m     10.43.1.115       d2iq-node5   <none>           <none>
openebs-jiva-csi-node-6w688                                       3/3     Running   0          46m     10.43.1.113       d2iq-node3   <none>           <none>
openebs-jiva-csi-node-99lb6                                       3/3     Running   0          46m     10.43.1.111       d2iq-node1   <none>           <none>
openebs-jiva-csi-node-gf84p                                       3/3     Running   0          46m     10.43.1.112       d2iq-node2   <none>           <none>
openebs-jiva-csi-node-rkdtr                                       3/3     Running   0          46m     10.43.1.114       d2iq-node4   <none>           <none>
openebs-localpv-provisioner-864958cb7b-8wpfg                      1/1     Running   0          50m     192.168.144.206   d2iq-node1   <none>           <none>
openebs-ndm-44tlh                                                 1/1     Running   0          47m     10.43.1.111       d2iq-node1   <none>           <none>
openebs-ndm-djw9p                                                 1/1     Running   0          47m     10.43.1.112       d2iq-node2   <none>           <none>
openebs-ndm-operator-6f4bc6c84d-dcmvs                             1/1     Running   0          48m     192.168.144.207   d2iq-node1   <none>           <none>
openebs-ndm-qkct4                                                 1/1     Running   0          47m     10.43.1.114       d2iq-node4   <none>           <none>
openebs-ndm-smj5f                                                 1/1     Running   2          47m     10.43.1.115       d2iq-node5   <none>           <none>
openebs-ndm-wgsj5                                                 1/1     Running   0          47m     10.43.1.113       d2iq-node3   <none>           <none>
openebs-provisioner-cff4c4454-nvtfb                               1/1     Running   0          50m     192.168.205.205   d2iq-node4   <none>           <none>
openebs-snapshot-operator-6c5d68548c-7c9nc                        2/2     Running   0          50m     192.168.22.144    d2iq-node2   <none>           <none>
pvc-4870269c-2423-4279-8be9-978d68ac59ac-jiva-ctrl-787f746729f9   1/1     Running   0          41m     192.168.42.148    d2iq-node3   <none>           <none>
pvc-4870269c-2423-4279-8be9-978d68ac59ac-jiva-rep-0               1/1     Running   0          41m     192.168.42.150    d2iq-node3   <none>           <none>
pvc-4870269c-2423-4279-8be9-978d68ac59ac-jiva-rep-1               1/1     Running   2          41m     192.168.22.154    d2iq-node2   <none>           <none>
pvc-4870269c-2423-4279-8be9-978d68ac59ac-jiva-rep-2               1/1     Running   1          41m     192.168.49.236    d2iq-node5   <none>           <none>
pvc-4d85b5c2-f526-431d-a24d-8b450dfbcf07-jiva-ctrl-7b4dd699fbpg   1/1     Running   1          8m15s   192.168.49.250    d2iq-node5   <none>           <none>
pvc-4d85b5c2-f526-431d-a24d-8b450dfbcf07-jiva-rep-0               1/1     Running   3          8m15s   192.168.22.170    d2iq-node2   <none>           <none>
pvc-4d85b5c2-f526-431d-a24d-8b450dfbcf07-jiva-rep-1               1/1     Running   1          8m15s   192.168.49.195    d2iq-node5   <none>           <none>
pvc-4d85b5c2-f526-431d-a24d-8b450dfbcf07-jiva-rep-2               1/1     Running   4          8m15s   192.168.42.136    d2iq-node3   <none>           <none>
pvc-5180e0ee-61fb-4685-bb95-a0f3c695032d-jiva-ctrl-5f5fbfdcnq6n   1/1     Running   0          40m     192.168.42.160    d2iq-node3   <none>           <none>
pvc-5180e0ee-61fb-4685-bb95-a0f3c695032d-jiva-rep-0               1/1     Running   0          40m     192.168.42.164    d2iq-node3   <none>           <none>
pvc-5180e0ee-61fb-4685-bb95-a0f3c695032d-jiva-rep-1               1/1     Running   0          40m     192.168.205.212   d2iq-node4   <none>           <none>
pvc-5180e0ee-61fb-4685-bb95-a0f3c695032d-jiva-rep-2               1/1     Running   0          40m     192.168.144.220   d2iq-node1   <none>           <none>
pvc-b67872c9-0e9e-4073-b346-2fde70031600-jiva-ctrl-cd54f5br7txm   1/1     Running   1          32m     192.168.49.249    d2iq-node5   <none>           <none>
pvc-b67872c9-0e9e-4073-b346-2fde70031600-jiva-rep-0               1/1     Running   4          26m     192.168.49.239    d2iq-node5   <none>           <none>
pvc-b67872c9-0e9e-4073-b346-2fde70031600-jiva-rep-1               1/1     Running   1          32m     192.168.42.184    d2iq-node3   <none>           <none>
pvc-b67872c9-0e9e-4073-b346-2fde70031600-jiva-rep-2               1/1     Running   2          32m     192.168.22.130    d2iq-node2   <none>           <none>

What did you expect to happen:

  • The JivaVolume status should be RW and the phase should be Ready.

jv-oyaml.txt
current_ctrl_pod.log
controller-pod.log
jiva-operator.log

Automated replacement of replica in cases where a node with replicas goes out of the cluster.

Describe the problem/challenge you have
When a node with replicas goes out of the cluster, the jiva volume replica STS pod on that node stays in a Pending state.

$ kubectl -n openebs get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
openebs-jiva-csi-controller-0                                     5/5     Running   0          25m
openebs-jiva-csi-node-pmc9z                                       3/3     Running   0          25m
openebs-jiva-csi-node-vfhh4                                       3/3     Running   0          9m6s
openebs-jiva-csi-node-ztz9h                                       3/3     Running   0          25m
openebs-jiva-operator-7c89b45d4c-9lvh8                            1/1     Running   0          25m
openebs-localpv-provisioner-7f7469574c-dbln7                      1/1     Running   0          25m
pvc-83a1ecd0-ec40-448d-8ba3-18ef7efd8073-jiva-ctrl-69dfdb4l5ddz   2/2     Running   0          15m
pvc-83a1ecd0-ec40-448d-8ba3-18ef7efd8073-jiva-rep-0               1/1     Running   1          15m
pvc-83a1ecd0-ec40-448d-8ba3-18ef7efd8073-jiva-rep-1               0/1     Pending   0          10m
pvc-83a1ecd0-ec40-448d-8ba3-18ef7efd8073-jiva-rep-2               1/1     Running   1          3m27s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  4m7s (x3 over 5m1s)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
  Warning  FailedScheduling  65s (x5 over 4m3s)   default-scheduler  0/3 nodes are available: 3 node(s) had volume node affinity conflict.

This happens as the PVC created by the STS has the volume node affinity for the node which is no longer in the cluster.

Describe the solution you'd like
As the storageClass used by the replica STS is host-path, the data cannot be recovered from the removed node. An easy manual solution is to delete the PVC corresponding to the STS pod that is stuck in a Pending state and then restart the pending pod. This results in a new PVC on another node, and the data is rebuilt using the other replicas.

This could also be automated from the jiva-operator itself by performing the above steps when a replica STS node is removed from the cluster, by checking the status of the STS and the nodes on which its pods are scheduled.
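For reference, the manual workaround amounts to something like the following (names in angle brackets are placeholders; this is a sketch of the manual steps, not the proposed automation):

# Find the PVC bound to the pending replica pod (listed under ClaimName in the pod description)
kubectl -n openebs describe pod <pending-jiva-rep-pod> | grep ClaimName
# Delete that PVC and the pending pod; the STS recreates both on a schedulable node
# and the replica rebuilds its data from the healthy replicas
kubectl -n openebs delete pvc <replica-pvc>
kubectl -n openebs delete pod <pending-jiva-rep-pod>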

Environment:

  • OpenEBS version: 2.9.0

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.8-eks-96780e", GitCommit:"96780e1b30acbf0a52c38b6030d7853e575bcdf3", GitTreeState:"clean", BuildDate:"2021-03-10T21:32:29Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes installer & version:

  • Cloud provider or hardware configuration: AWS

  • OS (e.g. from /etc/os-release): ubuntu 20.04

pvc-xxx-jiva-ctrl-yyy depends on openebs/jiva:ci

What steps did you take and what happened:
I have an installation based on Helm, with all images in a private registry:
k get all
NAME READY STATUS RESTARTS AGE
pod/openebs-jiva-csi-controller-0 5/5 Running 0 110m
pod/openebs-jiva-csi-node-9pv94 3/3 Running 0 110m
pod/openebs-jiva-csi-node-jt2ds 3/3 Running 0 110m
pod/openebs-jiva-csi-node-nz96j 3/3 Running 0 110m
pod/openebs-jiva-csi-node-tjtdx 3/3 Running 0 110m
pod/openebs-jiva-csi-node-vdj9t 3/3 Running 0 110m
pod/openebs-jiva-operator-98df8b7b5-f98pp 1/1 Running 0 110m
pod/openebs-localpv-provisioner-84cb775f46-xxbmv 1/1 Running 0 110m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/openebs-jiva-csi-node 5 5 5 5 5 110m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/openebs-jiva-operator 1/1 1 1 110m
deployment.apps/openebs-localpv-provisioner 1/1 1 1 110m

NAME DESIRED CURRENT READY AGE
replicaset.apps/openebs-jiva-operator-98df8b7b5 1 1 1 110m
replicaset.apps/openebs-localpv-provisioner-84cb775f46 1 1 1 110m

NAME READY AGE
statefulset.apps/openebs-jiva-csi-controller 1/1 110m

The helm values for the images look like the following:
replica:
  image:
    registry: brbs2p.ros.czso.cz:5000/
    repository: openebs/jiva
    tag: 3.0.0

What did you expect to happen:
To be able to create a Jiva volume.

During jiva volume initialization I see the following error:
Events:
Type Reason Age From Message


Normal Scheduled 5m56s default-scheduler Successfully assigned openebs/pvc-523cbfc1-1d35-4abf-ab73-940b41de94db-jiva-ctrl-6d5f59dkqk5t to arbs1p.ros.czso.cz
Normal Pulled 5m46s kubelet Container image "brbs2p.ros.czso.cz:5000/openebs/m-exporter:3.0.0" already present on machine
Normal Created 5m44s kubelet Created container maya-volume-exporter
Normal Started 5m44s kubelet Started container maya-volume-exporter
Warning Failed 5m5s (x3 over 5m46s) kubelet Failed to pull image "openebs/jiva:ci": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 10.143.68.16:53: lame referral
Warning Failed 5m5s (x3 over 5m46s) kubelet Error: ErrImagePull
Warning Failed 4m27s (x6 over 5m43s) kubelet Error: ImagePullBackOff
Normal Pulling 4m12s (x4 over 5m55s) kubelet Pulling image "openebs/jiva:ci"
Normal BackOff 48s (x21 over 5m43s) kubelet Back-off pulling image "openebs/jiva:ci"

I'm not able to specify something like the following in the helm values:
image:
  registry: brbs2p.ros.czso.cz:5000/
  repository: openebs/jiva:ci

Moreover, openebs/jiva:ci is a very old image.

The next problem is that after the volume deletion, in
init-pvc-13ca2cf4-f81d-4511-94a1-c2fcf5b8107b
I can see another problem:
Events:
Type Reason Age From Message


Normal Scheduled 60s default-scheduler Successfully assigned openebs/init-pvc-13ca2cf4-f81d-4511-94a1-c2fcf5b8107b to arbs1p.ros.czso.cz
Normal Pulling 14s (x3 over 58s) kubelet Pulling image "openebs/linux-utils:3.0.0"
Warning Failed 14s (x3 over 58s) kubelet Failed to pull image "openebs/linux-utils:3.0.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 10.143.68.16:53: lame referral
Warning Failed 14s (x3 over 58s) kubelet Error: ErrImagePull
Normal BackOff 0s (x4 over 57s) kubelet Back-off pulling image "openebs/linux-utils:3.0.0"
Warning Failed 0s (x4 over 57s) kubelet Error: ImagePullBackOff

Again, I can't specify that openebs/linux-utils:3.0.0 should be taken from the private registry.
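As a stopgap until the image references are configurable, the images can be made available to the nodes without reaching docker.io, for example by pulling from the private registry and re-tagging locally on each node (a sketch; it assumes the image was pushed to the private registry and only helps if the container's imagePullPolicy is not Always):

docker pull brbs2p.ros.czso.cz:5000/openebs/jiva:3.0.0
docker tag  brbs2p.ros.czso.cz:5000/openebs/jiva:3.0.0 openebs/jiva:ci
# with the openebs/jiva:ci tag present locally, kubelet can start the container without pulling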

Regards
Vlado Hudec

Prometheus Metric for Client Disk Usage

Is this a BUG REPORT or FEATURE REQUEST?

FEATURE REQUEST

Currently, in the maya volume exporter, the only exported disk usage metrics are OpenEBS_logical_size and OpenEBS_actual_used, which show disk usage on the actual host machines. I would like to allow Prometheus alerting rules on the usage from a client's perspective as well, but there is no exported metric that shows this information.

What you expected to happen:
I would like to have an exported Prometheus metric that shows client usage of an OpenEBS volume.
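One possible interim source of client-side usage is the kubelet's volume stats metrics (kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes), assuming the kubelet endpoints are scraped and the volume plugin reports filesystem stats for the mounted PVC. A sketch of a Prometheus rule using them (rule name and threshold are illustrative):

groups:
- name: jiva-volume-usage
  rules:
  - alert: JivaVolumeAlmostFull
    # fires when a PVC's filesystem is more than 90% full, as seen from the mounting node
    expr: kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes > 0.9
    for: 10m
    labels:
      severity: warning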

Anything else we need to know?:

Environment:

  • Maya version (use maya version): v0.6.0
  • M-apiserver version (use m-apiserver version): v0.6.0
  • OS (e.g. from /etc/os-release): CoreOS 1800.5.0
  • Kernel (e.g. uname -a): 4.14.59-coreos
  • Install tools: openebs-operator.yaml

Jiva volume mount into pod fails with `driver name jiva.csi.openebs.io not found in the list of registered CSI drivers`

What steps did you take and what happened:
I am trying to configure Jiva to create volumes with replication support on Kubernetes v1.24.9. I have followed both the helm chart installation and the operator-based installation mentioned in the user guide docs: https://openebs.io/docs/3.3.x/user-guides/jiva/jiva-prerequisites
All the OpenEBS components are up and running, and I am able to create the PV as well. But when the pod comes up, it is unable to mount the PV. Below is the exception I see. I would appreciate any assistance with this.

kubelet MountVolume.MountDevice failed for volume "pvc-aa8cec7a-2e64-4522-8538-929780487241" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name jiva.csi.openebs.io not found in the list of registered CSI drivers

CSIDriver looks fine,

[root@k8s-master~]# k get csidrivers.storage.k8s.io 
NAME                  ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
jiva.csi.openebs.io   false            true             false             <unset>         false               Persistent   5m39s

What did you expect to happen:
The PV should be successfully mounted in the pod.

The output of the following commands will help us better understand what's going on:

  • kubectl logs <jiva-operator pod name> -n openebs
[root@k8s-master~]# kubectl logs -n openebs jiva-operator-766dbdb4bd-zrnjz               
time="2023-04-04T04:37:10Z" level=info msg="Go Version: go1.17.6"
time="2023-04-04T04:37:10Z" level=info msg="Go OS/Arch: linux/amd64"
time="2023-04-04T04:37:10Z" level=info msg="Version of jiva-operator: 3.4.0"
time="2023-04-04T04:37:10Z" level=info msg="starting manager"
time="2023-04-04T04:39:50Z" level=info msg="not able to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:39:51Z" level=info msg="start bootstraping jiva componentsJivaVolume: pvc-aa8cec7a-2e64-4522-8538-929780487241"
time="2023-04-04T04:39:51Z" level=info msg="Creating a new serviceService.NamespaceopenebsService.Namepvc-aa8cec7a-2e64-4522-8538-929780487241-jiva-ctrl-svc"
time="2023-04-04T04:39:52Z" level=info msg="Updating JivaVolume with iscsi specISCSISpec{10.111.116.215 3260 iqn.2016-09.com.openebs.jiva:pvc-aa8cec7a-2e64-4522-8538-929780487241}"
time="2023-04-04T04:39:52Z" level=info msg="Creating a new deploymentDeploy.NamespaceopenebsDeploy.Namepvc-aa8cec7a-2e64-4522-8538-929780487241-jiva-ctrl"
time="2023-04-04T04:39:52Z" level=info msg="Creating a new StatefulsetStatefulset.NamespaceopenebsSts.Namepvc-aa8cec7a-2e64-4522-8538-929780487241-jiva-rep"
time="2023-04-04T04:39:52Z" level=info msg="Creating a new pod disruption budgetPdb.NamespaceopenebsPdb.Namepvc-aa8cec7a-2e64-4522-8538-929780487241-pdb"
time="2023-04-04T04:39:52Z" level=info msg="not able to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:39:54Z" level=info msg="failed to get volume stats errGet \"http://10.111.116.215:9501/v1/stats\": dial tcp 10.111.116.215:9501: connect: connection refused"
time="2023-04-04T04:39:54Z" level=info msg="failed to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:39:55Z" level=info msg="not able to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:39:57Z" level=info msg="failed to get volume stats errGet \"http://10.111.116.215:9501/v1/stats\": dial tcp 10.111.116.215:9501: connect: connection refused"
time="2023-04-04T04:39:57Z" level=info msg="failed to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:39:58Z" level=info msg="not able to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:00Z" level=info msg="failed to get volume stats errGet \"http://10.111.116.215:9501/v1/stats\": dial tcp 10.111.116.215:9501: connect: connection refused"
time="2023-04-04T04:40:00Z" level=info msg="failed to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:01Z" level=info msg="not able to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:03Z" level=info msg="failed to get volume stats errGet \"http://10.111.116.215:9501/v1/stats\": dial tcp 10.111.116.215:9501: connect: connection refused"
time="2023-04-04T04:40:03Z" level=info msg="failed to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:04Z" level=info msg="not able to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:06Z" level=info msg="failed to get volume stats errGet \"http://10.111.116.215:9501/v1/stats\": dial tcp 10.111.116.215:9501: connect: connection refused"
time="2023-04-04T04:40:06Z" level=info msg="failed to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:07Z" level=info msg="not able to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:09Z" level=info msg="failed to get volume stats errGet \"http://10.111.116.215:9501/v1/stats\": dial tcp 10.111.116.215:9501: connect: connection refused"
time="2023-04-04T04:40:09Z" level=info msg="failed to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:11Z" level=info msg="not able to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:13Z" level=info msg="failed to get volume stats errGet \"http://10.111.116.215:9501/v1/stats\": dial tcp 10.111.116.215:9501: connect: connection refused"
time="2023-04-04T04:40:13Z" level=info msg="failed to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:15Z" level=info msg="not able to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:17Z" level=info msg="failed to get volume stats errGet \"http://10.111.116.215:9501/v1/stats\": dial tcp 10.111.116.215:9501: connect: connection refused"
time="2023-04-04T04:40:17Z" level=info msg="failed to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:26Z" level=info msg="not able to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:28Z" level=info msg="failed to get volume stats errGet \"http://10.111.116.215:9501/v1/stats\": dial tcp 10.111.116.215:9501: connect: connection refused"
time="2023-04-04T04:40:28Z" level=info msg="failed to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
time="2023-04-04T04:40:29Z" level=info msg="not able to get controller pod ip for volume pvc-aa8cec7a-2e64-4522-8538-929780487241: expected 1 controller pod got 0"
  • kubectl get jv <jiva volume cr name> -n openebs -o yaml
apiVersion: openebs.io/v1
kind: JivaVolume
metadata:
  annotations:
    openebs.io/volume-policy: example-jivavolumepolicy
  creationTimestamp: "2023-04-04T04:39:50Z"
  generation: 6
  labels:
    openebs.io/component: jiva-volume
    openebs.io/persistent-volume: pvc-aa8cec7a-2e64-4522-8538-929780487241
    openebs.io/persistent-volume-claim: example-jiva-csi-pvc
  name: pvc-aa8cec7a-2e64-4522-8538-929780487241
  namespace: openebs
  resourceVersion: "1272451"
  uid: 24af2b27-8267-44d4-b4f7-59189e4b926d
spec:
  accessType: mount
  capacity: 4Gi
  desiredReplicationFactor: 2
  iscsiSpec:
    iqn: iqn.2016-09.com.openebs.jiva:pvc-aa8cec7a-2e64-4522-8538-929780487241
    targetIP: 10.111.116.215
    targetPort: 3260
  mountInfo: {}
  policy:
    replica:
      resources:
        limits:
          cpu: "0"
          memory: "0"
        requests:
          cpu: "0"
          memory: "0"
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/notReady
        operator: Exists
      - effect: NoExecute
        key: node.cloudprovider.kubernetes.io/uninitialized
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/unschedulable
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/out-of-disk
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/memory-pressure
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/disk-pressure
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/network-unavailable
        operator: Exists
    replicaSC: openebs-hostpath
    target:
      auxResources:
        limits:
          cpu: "0"
          memory: "0"
        requests:
          cpu: "0"
          memory: "0"
      replicationFactor: 2
      resources:
        limits:
          cpu: "0"
          memory: "0"
        requests:
          cpu: "0"
          memory: "0"
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/notReady
        operator: Exists
        tolerationSeconds: 0
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 0
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 0
  pv: pvc-aa8cec7a-2e64-4522-8538-929780487241
status:
  phase: Ready
  replicaCount: 2
  replicaStatus:
  - address: tcp://10.244.180.155:9502
    mode: RW
  - address: tcp://10.244.56.90:9502
    mode: RW
  status: RW
versionDetails:
  desired: 3.4.0
  status:
    current: 3.4.0
    dependentsUpgraded: true
    lastUpdateTime: null
  • kubectl get jvp <jiva volume policy> -n openebs -o yaml
apiVersion: openebs.io/v1
kind: JivaVolumePolicy
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"openebs.io/v1alpha1","kind":"JivaVolumePolicy","metadata":{"annotations":{},"name":"example-jivavolumepolicy","namespace":"openebs"},"spec":{"replicaSC":"openebs-hostpath","target":{"replicationFactor":2}}}
  creationTimestamp: "2023-04-04T04:38:05Z"
  generation: 1
  name: example-jivavolumepolicy
  namespace: openebs
  resourceVersion: "1271950"
  uid: a3965e22-61b1-4286-a84e-f5b2bb42c821
spec:
  replicaSC: openebs-hostpath
  target:
    replicationFactor: 2
  • kubectl logs <jiva csi node pod> -n openebs -o yaml
Defaulted container "csi-node-driver-registrar" out of: csi-node-driver-registrar, jiva-csi-plugin, liveness-probe
I0404 04:37:03.528549       1 main.go:164] Version: v2.3.0
I0404 04:37:03.528653       1 main.go:165] Running node-driver-registrar in mode=registration
I0404 04:37:03.529471       1 main.go:189] Attempting to open a gRPC connection with: "/plugin/csi.sock"
I0404 04:37:03.529538       1 connection.go:154] Connecting to unix:///plugin/csi.sock
I0404 04:37:12.845403       1 main.go:196] Calling CSI driver to discover driver name
I0404 04:37:12.845435       1 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginInfo
I0404 04:37:12.845442       1 connection.go:184] GRPC request: {}
I0404 04:37:12.849168       1 connection.go:186] GRPC response: {"name":"jiva.csi.openebs.io","vendor_version":"3.4.0"}
I0404 04:37:12.849300       1 connection.go:187] GRPC error: <nil>
I0404 04:37:12.849309       1 main.go:206] CSI driver name: "jiva.csi.openebs.io"
I0404 04:37:12.849340       1 node_register.go:52] Starting Registration Server at: /registration/jiva.csi.openebs.io-reg.sock
I0404 04:37:12.849535       1 node_register.go:61] Registration Server started at: /registration/jiva.csi.openebs.io-reg.sock
I0404 04:37:12.849677       1 node_register.go:91] Skipping healthz server because HTTP endpoint is set to: ""
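The registrar logs above show the registration socket being created, so it can also help to confirm that kubelet on the affected node actually reports the driver as registered; a diagnostic sketch (node name is a placeholder, and the path assumes the default kubelet root dir):

kubectl get csinode <node-name> -o yaml        # jiva.csi.openebs.io should be listed under spec.drivers
# on the affected node, with the default kubelet root dir:
ls /var/lib/kubelet/plugins_registry/          # expect a jiva.csi.openebs.io-reg.sock entry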

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Jiva version: 3.4.0
  • OpenEBS version: 3.4.0
  • Kubernetes version (use kubectl version): v1.24.9
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration: BareMetal
  • OS (e.g. from /etc/os-release): CentOS 7.9.2009

Volume is not becoming readonly when the space is completely exhausted

What Happened
Provisioned a Jiva volume using the CSI provisioner and mounted it on a busybox application. Filled the space completely, but the volume did not enter the RO state (for both ext4 and xfs file systems).

/dev/sdc                 12.0G     12.0G     40.0K 100% /busybox
/dev/sdc on /busybox type xfs (rw,relatime,attr2,inode64,noquota)

What to expect
The volume should go into the RO state after complete space exhaustion.

Quickstart setup - "permission denied" in PVC-using pod

What steps did you take and what happened:
I created a small cluster of just two machines (k0s), went through the quickstart, and created a MariaDB container that uses the PVC.
It runs for a few days or weeks, but then it starts crashing with:

/usr/sbin/mysqld: Can't change dir to '/var/lib/mysql/' (Errcode: 13 "Permission denied")

Interestingly I have:

  • 2 nodes
  • 1 PVC
  • 2 PVs with default/example-jiva-csi-pvc claim
  • 3 JVs

I think one of the PVs and one of the JVs got created again but I don't know why.

What did you expect to happen:
I expect the pod to be able to use the mounted PVC.
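One mitigation commonly used for this class of error (not confirmed as the root cause here) is to let kubelet chown the mounted volume to the pod's group via a pod-level securityContext; a sketch, assuming MariaDB runs as group 999 and reusing the claim name from above:

apiVersion: v1
kind: Pod
metadata:
  name: mariadb-example          # illustrative name
spec:
  securityContext:
    fsGroup: 999                 # assumption: the group the MariaDB process runs as
  containers:
  - name: mariadb
    image: mariadb:10.6
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-jiva-csi-pvc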

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other pastebin is fine.)

kubectl logs -n openebs jiva-operator-7df8bb6f9c-slvf9
I0323 08:08:25.432240       1 request.go:655] Throttling request took 1.047684099s, request: GET:https://172.18.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s
time="2022-03-23T08:08:25Z" level=info msg="Go Version: go1.14.7"
time="2022-03-23T08:08:25Z" level=info msg="Go OS/Arch: linux/amd64"
time="2022-03-23T08:08:25Z" level=info msg="Version of jiva-operator: 3.1.0"
apiVersion: openebs.io/v1
kind: JivaVolume
metadata:
  annotations:
    openebs.io/volume-policy: example-jivavolumepolicy
  creationTimestamp: "2022-02-01T11:36:51Z"
  generation: 64
  labels:
    nodeID: <secondary_node>
    openebs.io/component: jiva-volume
    openebs.io/persistent-volume: pvc-219fc6d9-d0d4-4524-ab82-6e15be243c21
    openebs.io/persistent-volume-claim: example-jiva-csi-pvc
  name: pvc-219fc6d9-d0d4-4524-ab82-6e15be243c21
  namespace: openebs
  resourceVersion: "6753689"
  uid: d4dca3af-df2e-476b-b330-e6e39db6bbbd
spec:
  accessType: mount
  capacity: 1Gi
  desiredReplicationFactor: 1
  iscsiSpec:
    iqn: iqn.2016-09.com.openebs.jiva:pvc-219fc6d9-d0d4-4524-ab82-6e15be243c21
    targetIP: 172.18.239.191
    targetPort: 3260
  mountInfo:
    devicePath: /dev/disk/by-path/ip-172.18.239.191:3260-iscsi-iqn.2016-09.com.openebs.jiva:pvc-219fc6d9-d0d4-4524-ab82-6e15be243c21-lun-0
    fsType: ext4
    stagingPath: /var/lib/k0s/kubelet/plugins/kubernetes.io/csi/pv/pvc-219fc6d9-d0d4-4524-ab82-6e15be243c21/globalmount
  policy:
    replica:
      resources:
        limits:
          cpu: "0"
          memory: "0"
        requests:
          cpu: "0"
          memory: "0"
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/notReady
        operator: Exists
      - effect: NoExecute
        key: node.cloudprovider.kubernetes.io/uninitialized
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/unschedulable
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/out-of-disk
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/memory-pressure
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/disk-pressure
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/network-unavailable
        operator: Exists
    replicaSC: openebs-hostpath
    target:
      auxResources:
        limits:
          cpu: "0"
          memory: "0"
        requests:
          cpu: "0"
          memory: "0"
      replicationFactor: 1
      resources:
        limits:
          cpu: "0"
          memory: "0"
        requests:
          cpu: "0"
          memory: "0"
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/notReady
        operator: Exists
        tolerationSeconds: 0
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 0
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 0
  pv: pvc-219fc6d9-d0d4-4524-ab82-6e15be243c21
status:
  phase: Ready
  replicaCount: 1
  replicaStatus:
  - address: tcp://172.17.1.176:9502
    mode: RW
  status: RW
versionDetails:
  desired: 3.1.0
  status:
    current: 3.1.0
    dependentsUpgraded: true
    lastUpdateTime: null
kubectl get jvp -n openebs -o yaml
apiVersion: v1
items:
- apiVersion: openebs.io/v1
  kind: JivaVolumePolicy
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"openebs.io/v1alpha1","kind":"JivaVolumePolicy","metadata":{"annotations":{},"name":"example-jivavolumepolicy","namespace":"openebs"},"spec":{"replicaSC":"openebs-hostpath","target":{"replicationFactor":1}}}
    creationTimestamp: "2022-01-19T14:16:33Z"
    generation: 1
    name: example-jivavolumepolicy
    namespace: openebs
    resourceVersion: "1282818"
    uid: 0ae414c3-2a21-4ba8-b354-3643576947b4
  spec:
    replicaSC: openebs-hostpath
    target:
      replicationFactor: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Environment:

  • Jiva version: 3.1.0
  • OpenEBS version: 3.0.0
  • Kubernetes version (use kubectl version):
kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"\"not_available\"", GitTreeState:"", BuildDate:"2021-11-30T17:30:31Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4+k0s", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-30T17:23:13Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes installer & version:
k0s version
v1.22.4+k0s.1
  • Cloud provider or hardware configuration: on premises
  • OS (e.g. from /etc/os-release): AlmaLinux

Generate client-go and fake-clients for CRDs

Describe the problem/challenge you have
Need fake clients and client-go methods to test the CRDs used in other components of openebs like openebsctl

Environment:

  • OpenEBS version
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):

Helm Chart Friendly with Talos Linux Specificity

Describe the problem/challenge you have
I want to use OpenEBS Jiva in my cluster and use only helm to install it. My Cluster is built with Sidero and uses Talos Linux.

In the documentation of Talos, they explain how to install OpenEBS Jiva but it requires two small patches:

  • Re-configure the ConfigMap openebs-jiva-csi-iscsiadm (to be able to use iscsiadm):
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-jiva-csi-iscsiadm
  namespace: openebs
data:
  iscsiadm: |
    #!/bin/sh
    iscsid_pid=$(pgrep iscsid)
    nsenter --mount="/proc/${iscsid_pid}/ns/mnt" --net="/proc/${iscsid_pid}/ns/net" -- /usr/local/sbin/iscsiadm "$@"
  • Re-configure the DaemonSet openebs-jiva-csi-node to have access to the host PID namespace (hostPID):
kubectl --namespace openebs patch daemonset openebs-jiva-csi-node --type=json --patch '[{"op": "add", "path": "/spec/template/spec/hostPID", "value": true}]'

Describe the solution you'd like

  • Add, in the helm template, the ability to override the value of the ConfigMap
  • Add, in the helm template, a hostPID value for the DaemonSet spec template, defaulting to false (see the sketch below)
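A hypothetical shape for such overrides in values.yaml (the key names below are illustrative only, not existing chart values):

csiNode:
  hostPID: false              # illustrative key: expose hostPID as a chart value, default false
  iscsiadmScript: |           # illustrative key: allow replacing the iscsiadm wrapper in the ConfigMap
    #!/bin/sh
    iscsid_pid=$(pgrep iscsid)
    nsenter --mount="/proc/${iscsid_pid}/ns/mnt" --net="/proc/${iscsid_pid}/ns/net" -- /usr/local/sbin/iscsiadm "$@"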

Environment:
not relevant

Thanks for your advice and help.

Failed to deploy jiva mode by referring to openebs document

ๆœๅŠก็‰ˆๆœฌ
openebs๏ผšv2.12
kubernetes๏ผš v1.18
Reference link๏ผšhttps://openebs.io/docs/user-guides/jiva-guide#create-a-pool

I participated in the above link to learn how to deploy jiva mode, and deployed pool and SC according to the document. However, when the service is used, it shows that PVC cannot be bound

Nothing new appears in the jiva-operator log, and an error is reported.

Error reported by the busybox service deployed according to the document:

Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "busybox-78c496c547-4k5v6": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "busybox-78c496c547-4k5v6": pod has unbound immediate PersistentVolumeClaims

SC configuration:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-3repl
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "3"
      - name: StoragePool
        value: jivapool
provisioner: openebs.io/provisioner-iscsi

Pool configuration and mounted disks:

apiVersion: openebs.io/v1alpha1
kind: StoragePool
metadata:
  name: jivapool
  type: hostdir
spec:
  path: "/home/openebs-jiva"
[root@k8s1 ~]# df -hT | grep /dev/sdb
/dev/sdb                ext4       20G   45M   19G    1% /home/openebs-jiva
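For diagnosing the unbound PVC with the legacy provisioner, checking the PVC events and the provisioner's own logs usually narrows it down; a diagnostic sketch (names in angle brackets are placeholders, and the deployment names are those of a typical OpenEBS 2.x install, so adjust to your release):

kubectl describe pvc <pvc-name> -n <app-namespace>      # the events usually say why binding is stuck
kubectl -n openebs logs deploy/openebs-provisioner      # external provisioner for openebs.io/provisioner-iscsi
kubectl -n openebs logs deploy/maya-apiserver           # volume provisioning requests go through maya-apiserver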

Helm chart missing CRDs

$ cat storagepool.yaml
  apiVersion: openebs.io/v1alpha1
  kind: StoragePool
  metadata:
      name: gpdpool
      type: hostdir
  spec:
      path: "/mnt/disk1"

$ kubectl -n openebs apply -f storagepool.yaml
error: unable to recognize "storagepool.yaml": no matches for kind "StoragePool" in version "openebs.io/v1alpha1"

JivaVolumePolicy - should allow to skip parameters with internal default values.

This is to see if any of the fields in the jiva volume policies are incorrectly set as mandatory. For example, "enableBufio" and "autoScaling" are good candidates for optional fields, with internally set default values.

apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: jiva-policy-mongo
  namespace: openebs
spec:
  replicaSC: openebs-hostpath
  enableBufio: false
  autoScaling: false
  target:
    replicationFactor: 1
    monitor: true
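In other words, a minimal policy like the following, with the optional fields omitted and defaulted internally, should be accepted (a sketch of the proposed behaviour, not the current one):

apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: jiva-policy-mongo
  namespace: openebs
spec:
  replicaSC: openebs-hostpath
  target:
    replicationFactor: 1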

Volume Busy, NodeStageVolume is already in progress

What steps did you take and what happened:
[A clear and concise description of what the bug is, and what commands you ran.]
Step1: Install iSCSI initiator and start service on every k8s node
sudo apt install open-iscsi
sudo systemctl enable --now iscsid
modprobe iscsi_tcp
echo iscsi_tcp >/etc/modules-load.d/iscsi-tcp.conf
systemctl start iscsid
Step2: Add extra_binds under kubelet service in cluster YAML
services:
  kubelet:
    extra_binds:
      - "/etc/iscsi:/etc/iscsi"
      - "/sbin/iscsiadm:/sbin/iscsiadm"
      - "/var/lib/iscsi:/var/lib/iscsi"
      - "/var/openebs/local:/var/openebs/local"
      - "/lib/modules"
Step3: install openebs jiva with helm3 follow the Quickstart
helm repo add openebs-jiva https://openebs.github.io/jiva-operator
helm repo update
helm install jiva openebs-jiva/jiva --namespace openebs --create-namespace
Step4: Confirm all of openebs pod is running
Step5: Create a PVC with the SC openebs-jiva-csi-default; all the PVCs reach the Bound state
Step6: Create a deployment that uses the PVC
What did you expect to happen:
I expected everything to work, but it does not.
Events:
Type Reason Age From Message


Normal Scheduled 42m default-scheduler Successfully assigned kafka/nginx-b9cd4c87-8k4sx to rke-node-219
Warning FailedMount 40m (x3 over 40m) kubelet MountVolume.MountDevice failed for volume "pvc-ba7732f8-58cb-4dd2-a682-ec4ca2cac7dc" : rpc error: code = Aborted desc = Volume Busy, NodeStageVolume is already in progress
Warning FailedMount 13m (x5 over 29m) kubelet Unable to attach or mount volumes: unmounted volumes=[vol1], unattached volumes=[default-token-xtmdg vol1]: timed out waiting for the condition
Warning FailedMount 4m13s (x12 over 40m) kubelet Unable to attach or mount volumes: unmounted volumes=[vol1], unattached volumes=[vol1 default-token-xtmdg]: timed out waiting for the condition
Warning FailedMount 4s (x13 over 40m) kubelet MountVolume.MountDevice failed for volume "pvc-ba7732f8-58cb-4dd2-a682-ec4ca2cac7dc" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
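When NodeStageVolume appears stuck like this, it can help to check for a stale iSCSI session on the node and to look at the node plugin logs; a diagnostic sketch (names in angle brackets are placeholders):

# on the node the pod was scheduled to (rke-node-219 here), look for a session to the volume's target IQN
sudo iscsiadm -m session
# from the cluster, check the node plugin and the JivaVolume status
kubectl -n openebs logs <openebs-jiva-csi-node-pod> -c jiva-csi-plugin
kubectl -n openebs get jv <pvc-name> -o yaml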
The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other pastebin is fine.)

  • kubectl logs <jiva-operator pod name> -n openebs (optional)
  • kubectl get jv <jiva volume cr name> -n openebs -o yaml
  • kubectl get jvp <jiva volume policy> -n openebs -o yaml

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Jiva version 3.4.0
  • OpenEBS version 3.4.0
  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.1", GitCommit:"4c9411232e10168d7b050c49a1b59f6df9d7ea4b", GitTreeState:"clean", BuildDate:"2023-04-14T13:21:19Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}
    Kustomize Version: v5.0.1
    Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:23:01Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes installer & version:rancher2.5.14
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):ubuntu22.04

Ensure all components have metadata.labels.name: declared

Description

I am exploring Isovalent Cilium/Hubble and when I review the openebs namespace for network flows in the Hubble UI, there are a number of components that appear as "Unknown App" (using ports 9502/9503 and 9501). jiva-operator does display the app name in the Hubble UI.

Note: the trailing "blank lines" are intentionally left in the output to show there is no name set

$  kubectl get pods -n openebs -o=jsonpath='{range .items[*]}{.metadata.labels.name}{"\n"}{end}'
openebs-jiva-csi-controller
openebs-jiva-csi-node
openebs-jiva-csi-node
openebs-jiva-csi-node
jiva-operator
openebs-localpv-provisioner
openebs-ndm
openebs-ndm
openebs-ndm
ndm-operator







$

Context

I am not (yet) certain about this, but I believe as more fine-grained access is applied to Cilium, the app-name will be relevant for controlling flows. From what I can deduce the pods "pvc-(uuid)-jiva-rep-{0-2}" and "pvc-(uuid)-jiva-ctrl-(uuid)" do not have a name label, and therefore show up as "Unknown App" in the Hubble UI

Possible Solution

I believe .metadata.labels.name needs to be declared for each pod managed by OpenEBS.
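Concretely, the request is that the pod templates rendered by the operator carry a name label, for example (value is illustrative):

metadata:
  labels:
    name: jiva-ctrl    # illustrative; e.g. jiva-rep for the replica StatefulSet pod template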

Screenshots

Screenshot 2024-03-19 at 9 20 18 AM

Prometheus metrics are not exported for Jiva Volumes

Through deployment of the Jiva Operator (v3.0.5), I found that Prometheus (v2.26) is not able to scrape metrics for the Jiva volumes that are provisioned in my Kubernetes cluster. Upon closer diagnosis, I found that each Jiva controller deployment (pvc-<uuid>-jiva-ctrl) created in response to requesting a PVC uses an invalid Prometheus scrape annotation, shown below:

prometheus.io/scrap: "true"

The correct annotation should be:

prometheus.io/scrape: "true"

After manually updating my Jiva volume deployment, I was able to validate that Prometheus could now scrape my Jiva volumes.
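For anyone hitting this before a fixed release, the manual correction amounts to patching the generated controller deployment's pod template annotation (the deployment name is a placeholder):

kubectl -n openebs patch deploy <pvc-uuid>-jiva-ctrl --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"prometheus.io/scrape":"true"}}}}}'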

Example is not working

What steps did you take and what happened:
[A clear and concise description of what the bug is, and what commands you ran.]
I followed the Quickstart deployment manual and couldn't get the example app running. The PVC didn't get bound.

What did you expect to happen:
I expected the example app to work.

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other pastebin is fine.)

[root@node1 opt]# kubectl logs -f jiva-operator-6b69b86894-5lmht -n openebs
I0615 19:16:01.931555 1 request.go:655] Throttling request took 1.046617453s, request: GET:https://10.233.0.1:443/apis/cert-manager.io/v1alpha2?timeout=32s
time="2021-06-15T19:16:02Z" level=info msg="Go Version: go1.14.7"
time="2021-06-15T19:16:02Z" level=info msg="Go OS/Arch: linux/amd64"
time="2021-06-15T19:16:02Z" level=info msg="Version of jiva-operator: 2.9.0"
time="2021-06-15T19:16:05Z" level=info msg="start bootstraping jiva componentsJivaVolume: pvc-9331eb72-6657-4145-8a97-0e7a2768d199"
time="2021-06-15T19:16:05Z" level=info msg="Creating a new serviceService.NamespaceopenebsService.Namepvc-9331eb72-6657-4145-8a97-0e7a2768d199-jiva-ctrl-svc"
time="2021-06-15T19:16:06Z" level=info msg="Updating JivaVolume with iscsi specISCSISpec{10.233.49.69 3260 iqn.2016-09.com.openebs.jiva:pvc-9331eb72-6657-4145-8a97-0e7a2768d199}"
time="2021-06-15T19:16:06Z" level=info msg="Creating a new deploymentDeploy.NamespaceopenebsDeploy.Namepvc-9331eb72-6657-4145-8a97-0e7a2768d199-jiva-ctrl"
time="2021-06-15T19:16:06Z" level=info msg="Creating a new StatefulsetStatefulset.NamespaceopenebsSts.Namepvc-9331eb72-6657-4145-8a97-0e7a2768d199-jiva-rep"
time="2021-06-15T19:16:07Z" level=info msg="Creating a new pod disruption budgetPdb.NamespaceopenebsPdb.Namepvc-9331eb72-6657-4145-8a97-0e7a2768d199-pdb"
time="2021-06-15T19:16:07Z" level=info msg="start bootstraping jiva componentsJivaVolume: pvc-5ea4b089-9617-402e-9a40-9a5d2d69be50"
time="2021-06-15T19:16:07Z" level=info msg="Creating a new serviceService.NamespaceopenebsService.Namepvc-5ea4b089-9617-402e-9a40-9a5d2d69be50-jiva-ctrl-svc"
time="2021-06-15T19:16:08Z" level=info msg="Updating JivaVolume with iscsi specISCSISpec{10.233.5.29 3260 iqn.2016-09.com.openebs.jiva:pvc-5ea4b089-9617-402e-9a40-9a5d2d69be50}"
time="2021-06-15T19:16:08Z" level=info msg="Creating a new deploymentDeploy.NamespaceopenebsDeploy.Namepvc-5ea4b089-9617-402e-9a40-9a5d2d69be50-jiva-ctrl"
time="2021-06-15T19:16:08Z" level=info msg="Creating a new StatefulsetStatefulset.NamespaceopenebsSts.Namepvc-5ea4b089-9617-402e-9a40-9a5d2d69be50-jiva-rep"
time="2021-06-15T19:16:08Z" level=info msg="Creating a new pod disruption budgetPdb.NamespaceopenebsPdb.Namepvc-5ea4b089-9617-402e-9a40-9a5d2d69be50-pdb"
time="2021-06-15T19:16:08Z" level=info msg="Failed to get volume stats errGet "http://10.233.49.69:9501/v1/stats\": dial tcp 10.233.49.69:9501: connect: connection refused"
time="2021-06-15T19:16:08Z" level=info msg="Failed to get volume stats errGet "http://10.233.5.29:9501/v1/stats\": dial tcp 10.233.5.29:9501: connect: connection refused"
time="2021-06-15T19:16:09Z" level=info msg="Failed to get volume stats errGet "http://10.233.49.69:9501/v1/stats\": dial tcp 10.233.49.69:9501: connect: connection refused"
time="2021-06-15T19:16:09Z" level=info msg="Failed to get volume stats errGet "http://10.233.5.29:9501/v1/stats\": dial tcp 10.233.5.29:9501: connect: connection refused"
time="2021-06-15T19:16:11Z" level=info msg="Failed to get volume stats errGet "http://10.233.5.29:9501/v1/stats\": dial tcp 10.233.5.29:9501: connect: connection refused"

apiVersion: openebs.io/v1alpha1
kind: JivaVolume
metadata:
  annotations:
    openebs.io/volume-policy: example-jivavolumepolicy
  creationTimestamp: "2021-06-15T19:16:05Z"
  generation: 5
  labels:
    openebs.io/component: jiva-volume
    openebs.io/persistent-volume: pvc-5ea4b089-9617-402e-9a40-9a5d2d69be50
  managedFields:
  - apiVersion: openebs.io/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:openebs.io/volume-policy: {}
        f:labels:
          .: {}
          f:openebs.io/component: {}
          f:openebs.io/persistent-volume: {}
      f:spec:
        .: {}
        f:accessType: {}
        f:capacity: {}
        f:iscsiSpec: {}
        f:mountInfo: {}
        f:policy:
          .: {}
          f:autoScaling: {}
          f:enableBufio: {}
          f:replica: {}
          f:target: {}
        f:pv: {}
      f:status: {}
      f:versionDetails:
        .: {}
        f:desired: {}
        f:status:
          .: {}
          f:current: {}
          f:dependentsUpgraded: {}
          f:lastUpdateTime: {}
    manager: jiva-csi
    operation: Update
    time: "2021-06-15T19:16:05Z"
  - apiVersion: openebs.io/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:desiredReplicationFactor: {}
        f:iscsiSpec:
          f:iqn: {}
          f:targetIP: {}
          f:targetPort: {}
        f:policy:
          f:replica:
            f:resources:
              .: {}
              f:limits:
                .: {}
                f:cpu: {}
                f:memory: {}
              f:requests:
                .: {}
                f:cpu: {}
                f:memory: {}
            f:tolerations: {}
          f:replicaSC: {}
          f:target:
            f:auxResources:
              .: {}
              f:limits:
                .: {}
                f:cpu: {}
                f:memory: {}
              f:requests:
                .: {}
                f:cpu: {}
                f:memory: {}
            f:replicationFactor: {}
            f:resources:
              .: {}
              f:limits:
                .: {}
                f:cpu: {}
                f:memory: {}
              f:requests:
                .: {}
                f:cpu: {}
                f:memory: {}
            f:tolerations: {}
      f:status:
        f:phase: {}
        f:status: {}
    manager: jiva-operator
    operation: Update
    time: "2021-06-15T19:16:11Z"
  name: pvc-5ea4b089-9617-402e-9a40-9a5d2d69be50
  namespace: openebs
  resourceVersion: "91179"
  uid: 5d514f46-185d-4e91-aa48-2f3b4edc0a51
spec:
  accessType: mount
  capacity: 4Gi
  desiredReplicationFactor: 1
  iscsiSpec:
    iqn: iqn.2016-09.com.openebs.jiva:pvc-5ea4b089-9617-402e-9a40-9a5d2d69be50
    targetIP: 10.233.5.29
    targetPort: 3260
  mountInfo: {}
  policy:
    autoScaling: false
    enableBufio: false
    replica:
      resources:
        limits:
          cpu: "0"
          memory: "0"
        requests:
          cpu: "0"
          memory: "0"
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/notReady
        operator: Exists
      - effect: NoExecute
        key: node.cloudprovider.kubernetes.io/uninitialized
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/unschedulable
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/out-of-disk
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/memory-pressure
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/disk-pressure
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/network-unavailable
        operator: Exists
    replicaSC: openebs-hostpath
    target:
      auxResources:
        limits:
          cpu: "0"
          memory: "0"
        requests:
          cpu: "0"
          memory: "0"
      replicationFactor: 1
      resources:
        limits:
          cpu: "0"
          memory: "0"
        requests:
          cpu: "0"
          memory: "0"
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/notReady
        operator: Exists
        tolerationSeconds: 0
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 0
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 0
  pv: pvc-5ea4b089-9617-402e-9a40-9a5d2d69be50
status:
  phase: Syncing
  status: RO
versionDetails:
  desired: 2.9.0
  status:
    current: 2.9.0
    dependentsUpgraded: true
    lastUpdateTime: null
apiVersion: v1
items:
- apiVersion: openebs.io/v1alpha1
  kind: JivaVolumePolicy
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"openebs.io/v1alpha1","kind":"JivaVolumePolicy","metadata":{"annotations":{},"name":"example-jivavolumepolicy","namespace":"openebs"},"spec":{"autoScaling":false,"enableBufio":false,"replicaSC":"openebs-hostpath","target":{"replicationFactor":1}}}
    creationTimestamp: "2021-06-15T18:51:32Z"
    generation: 1
    managedFields:
    - apiVersion: openebs.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:autoScaling: {}
          f:enableBufio: {}
          f:replicaSC: {}
          f:target:
            .: {}
            f:replicationFactor: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: "2021-06-15T18:51:32Z"
    name: example-jivavolumepolicy
    namespace: openebs
    resourceVersion: "83139"
    uid: 36b95f5f-3b79-48a6-bec3-1e5f54b76760
  spec:
    autoScaling: false
    enableBufio: false
    replicaSC: openebs-hostpath
    target:
      replicationFactor: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

Move jivaVolume and jivaVolumePolicy CRDs to v1

Describe the problem/challenge you have

The CRDs for jivaVolume and jivaVolumePolicy are still in v1alpha1 and can be moved to v1 to mark jiva-operator as stable.

Describe the solution you'd like

As the specs for the CRDs are now stable and no more changes to the spec are coming, we can safely add the v1 CRDs. We can still query the v1alpha1 CRDs via the v1 version; since the schema is the same for both, the conversion strategy can be None. See the PR openebs/zfs-localpv#140 for reference 🙂
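Since the schema is unchanged, the CRD can serve both versions without a conversion webhook; a sketch of the relevant apiextensions.k8s.io/v1 fragment (the per-version openAPIV3Schema is omitted here but would be identical for both):

spec:
  conversion:
    strategy: None          # identical schema across versions, so no webhook is needed
  versions:
  - name: v1alpha1
    served: true
    storage: false
  - name: v1
    served: true
    storage: true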

Anything else you would like to add:

Please verify the upgrade scenario where the v1alpha1 CRs are getting synced after migrating to v1.

How to adjust the tolerations of statefulset jiva replica

I want the jiva replicas to run on the master node; currently I can only modify the StatefulSet tolerations after installation:
kubectl edit sts pvc-e28e2827-ca73-436e-a0f8-9c8420d2c27e-jiva-rep -n openebs

How can I adjust the tolerations of the jiva replica StatefulSet via values.yaml?
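If the policy CR accepts the same replica tolerations block that the operator copies onto JivaVolume objects (an assumption, not verified here), the master-node toleration could be set at provisioning time instead of editing the STS afterwards; a sketch under that assumption:

apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: example-jivavolumepolicy
  namespace: openebs
spec:
  replicaSC: openebs-hostpath
  replica:
    tolerations:                              # assumption: honoured by the operator for newly provisioned volumes
    - key: node-role.kubernetes.io/master     # taint key varies by distro and Kubernetes version
      operator: Exists
      effect: NoSchedule
  target:
    replicationFactor: 3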

Add 'platform linux/arm' to github action

Describe the problem/challenge you have
An arm container is currently not available; only linux/amd64 and linux/arm64 are built.

Describe the solution you'd like
add all 3 platforms: linux/amd64,linux/arm64, linux/arm
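For reference, adding the extra platform to a buildx invocation is just one more entry in the platform list (a generic sketch, not the project's actual workflow file; the image tag is a placeholder):

docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm \
  --tag openebs/jiva-operator:<tag> \
  --push .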

thank you :)

Handle Affinity rules on replica STS during volume migration

Describe the problem/challenge you have
[A description of the current limitation/problem/challenge that you are experiencing.]
There might be cases where the user had set affinities which need to be cleared from the Jiva replica STS once we migrate the Jiva volume.

Jiva (CSI) - GitHub Updates

  • README Updates
    • Badges
    • Project Status - Beta
    • k8s version compatibility
    • Quickstart guide
    • Contributor Docs
    • Adopters.md with links to openebs/openebs adopters
    • Roadmap link to openebs project
    • Community Links
  • Helm Charts
  • GitHub Builds
  • Multiarch builds
  • Disable Travis
  • Downstream tagging
  • e2e tests
  • Upgrades
  • Migration from non CSI
  • Monitoring
  • Troubleshooting guide

The target pod affinity setting in tutorials cannot work

What steps did you take and what happened:
I created an example jivavolumepolicy as described in policies.md, but the target pod affinity setting didn't work; the field even comes back empty when I get the jivavolumepolicy from the server.

What did you expect to happen:
The target pod affinity setting works.

Anything else you would like to add:
I think this is a mistake in the docs. The field under affinity must be one of nodeAffinity, podAffinity, or podAntiAffinity; requiredDuringSchedulingIgnoredDuringExecution is a subfield of those.
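For illustration, the corrected shape would be something like the following — requiredDuringSchedulingIgnoredDuringExecution nested under podAntiAffinity — assuming spec.target.affinity accepts a standard core/v1 Affinity object as the reporter suggests; the label selector is illustrative:

apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: example-jivavolumepolicy
  namespace: openebs
spec:
  replicaSC: openebs-hostpath
  target:
    replicationFactor: 1
    affinity:
      podAntiAffinity:                        # one of nodeAffinity/podAffinity/podAntiAffinity
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              openebs.io/component: jiva-controller   # illustrative label
          topologyKey: kubernetes.io/hostname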

Environment:

  • Jiva version
  • OpenEBS version
  • Kubernetes version (use kubectl version): v1.20.1
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):

JivaVolume resource does not honor helm chart's namespace

What steps did you take and what happened:

  1. Installed the jiva helm chart using the command
helm install openebs-jiva openebs-jiva/jiva -n niladri --create-namespace \
	--set-string release.version="2.9.0" \
	--set-string jivaOperator.image.tag="2.9.0" \
	--set-string jivaOperator.controller.image.tag="2.9.0" \
	--set-string jivaOperator.replica.image.tag="2.9.0" \
	--set-string jivaCSIPlugin.image.tag="2.9.0" \
	--set-string localpv-provisioner.release.version="2.9.0" \
	--set-string localpv-provisioner.helperPod.image.tag="2.9.0" \
	--set-string localpv-provisioner.localpv.image.tag="2.9.0"

I used these flags to use the 2.9.0 images with the 2.8.3 chart. Once the 2.9.0 helm chart is available, this may be considered equivalent to helm install openebs-jiva openebs-jiva/jiva -n niladri --create-namespace
All pods came to RUNNING state.

  2. Created jivavolumepolicy and storageclass resources
apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: example-jivavolumepolicy
  namespace: niladri
spec:
  replicaSC: openebs-hostpath
  enableBufio: false
  autoScaling: false
  target:
    replicationFactor: 2
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-csi-sc
provisioner: jiva.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: "jiva"
  policy: "example-jivavolumepolicy"
  3. Created a PVC and a Pod to mount the volume
    https://pastebin.com/4zF4g1Jm

What did you expect to happen:
jivavolume resource to be created in the 'niladri' namespace.

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other pastebin is fine.)

  • kubectl logs <jiva-operator pod name> -n openebs (optional)
$ kubectl logs -n niladri openebs-jiva-operator-7d7f45d6fd-22dz5
time="2021-05-14T18:56:47Z" level=info msg="Go Version: go1.14.7"
time="2021-05-14T18:56:47Z" level=info msg="Go OS/Arch: linux/amd64"
time="2021-05-14T18:56:47Z" level=info msg="Version of jiva-operator: 2.9.0"
apiVersion: v1
items:
- apiVersion: openebs.io/v1alpha1
  kind: JivaVolumePolicy
  metadata:
    creationTimestamp: "2021-05-14T18:48:17Z"
    generation: 1
    name: example-jivavolumepolicy
    namespace: niladri
    resourceVersion: "2637489"
    selfLink: /apis/openebs.io/v1alpha1/namespaces/niladri/jivavolumepolicies/example-jivavolumepolicy
    uid: a2d8be93-99f9-4b76-a529-d37db131ef38
  spec:
    autoScaling: false
    enableBufio: false
    replicaSC: openebs-hostpath
    target:
      replicationFactor: 2
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Anything else you would like to add:
I have iSCSI initiator packages everywhere.

Environment:

  • Jiva version: 2.9.0
  • OpenEBS version: 2.9.0
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.9", GitCommit:"9dd794e454ac32d97cde41ae10be801ae98f75df", GitTreeState:"clean", BuildDate:"2021-03-18T01:00:06Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.21) and server (1.19) exceeds the supported minor version skew of +/-1
  • Kubernetes installer & version: Kubeadm v1.19.9
  • Cloud provider or hardware configuration: The nodes are AWS EC2 VMs. This is not EKS. Kubeadm on EC2.
  • OS (e.g. from /etc/os-release):
$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Option to add labels to replica pods

Describe the problem/challenge you have

I have velero set up in my cluster to back up everything by default unless it is labeled with velero.io/exclude-from-backup: "true". This makes sure that I don't accidentally forget to back up something important.

Since jiva replicas use normal PVCs under the hood, velero does its thing and backs up their content, even though the actual jiva volume has also been backed up. Currently there doesn't seem to be an option to add labels to the replica pods.

Describe the solution you'd like

Some way to add labels to the replica pods.
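To make the ask concrete, a purely hypothetical policy snippet showing what such an option might look like — the replica.labels field below does not exist today and is only an illustration of the desired outcome:

apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: velero-excluded-policy
  namespace: openebs
spec:
  replicaSC: openebs-hostpath
  target:
    replicationFactor: 3
  replica:
    labels:                                     # hypothetical field, not implemented
      velero.io/exclude-from-backup: "true"     # would be propagated to the replica pods/PVCs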

how-to guide on recovering data

What steps did you take and what happened:

I would like to know if there is a guide to recover data on several PVs using the Jiva Operator after the openebs namespace was deleted by mistake.

What did you expect to happen:
Reinstalling openebs and being able to re-use the existing PVs and PVCs.

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other pastebin is fine.)

  • kubectl logs <jiva-operator pod name> -n openebs (optional)
  • kubectl get jv <jiva volume cr name> -n openebs -o yaml
  • kubectl get jvp <jiva volume policy> -n openebs -o yaml

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Jiva version:
    2.12.2

  • OpenEBS version:

  • Kubernetes version (use kubectl version):
    Server and Client: 1.22

  • Kubernetes installer & version:
    microk8s

  • Cloud provider or hardware configuration:
    Custom Hardware (Linux)

  • OS (e.g. from /etc/os-release):
    Ubuntu 20.04.3 LTS

Target cannot connect to new initiator when the k8s node which the old initiator on is down

What steps did you take and what happened:
app pod is on node36
pvc-ctrl pod is on node34
pvc-rep0 is on node34
pvc-rep1 is on node35
pvc-pre2 is on node33
When I shut down node36, I waited a few minutes until the app pod was terminated by the server and then force deleted it. A new app pod was created by the controller and is running on node34.
The new app pod is stuck in ContainerCreating state because the volume is not ready. Checking the pvc-ctrl pod (target pod) shows error messages like: "rejecting connection: 10.244.2.1 target already connected at 10.244.4.0".
The CNI plugin is flannel; 10.244.4.0/32 is the flannel.1 NIC on node36 and 10.244.2.1/24 is the cni NIC on node34.

What did you expect to happen:
The target should accept the connection from the new initiator when the old initiator and its node are no longer responding.

The output of the following commands will help us better understand what's going on:
https://gist.github.com/von1994/af005cc019ab178c86dfc71bbfe25583

Anything else you would like to add:
When I force deleted the pvc-ctrl pod and waited for a new one to be running, pvc-ctrl worked and the app pod started running! So I think there is something that is not updated in the old pvc-ctrl pod.

Environment:

  • Jiva version: 2.11.0
  • OpenEBS version
  • Kubernetes version (use kubectl version): v1.20.1
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): CentOS 7
  • flannel: v0.14.0
  • jiva-operator version: 2.11.0

Move the operator to the latest operator-sdk or traditional k8s controller

Describe the problem/challenge you have
The current implementation of the jiva-operator uses an outdated operator-sdk version, which hinders it from using the latest features like resync interval optimizations.

Describe the solution you'd like
We can either move to the latest version of the operator-sdk to make use of the latest controller-runtime packages, or move to the traditional k8s controller approach we follow in other repos.

Anything else you would like to add:
This issue is open for discussion; any inputs would be appreciated. Thanks 😄

add google analytics for jiva csi volumes

Describe the solution you'd like
[A clear and concise description of what you want to happen.]
Add google analytics for jiva csi volumes: whenever a volume is provisioned or de-provisioned, the plugin will send a google analytics event with the following details:

  1. pvName (will shown as app title in google analytics)
  2. pvcName
  3. size of the volume
  4. event type : volume-provision, volume-deprovision
  5. storage type "jiva-csi"
  6. replicacount as per the provisioned volume set via StorageClass
  7. ClientId as default namespace uuid

openebs-jiva-csi-node pod gets in CrashLoopBackOff

Hi all,

I can't get my jiva volumes working; the openebs-jiva-csi-node pod gets stuck in CrashLoopBackOff.

What steps did you take and what happened:
[A clear and concise description of what the bug is, and what commands you ran.]

  1. Clean up old openebs attempts according to the uninstall guide: https://openebs.io/docs/user-guides/uninstall
  2. Install jiva as per the quickstart guide: https://github.com/openebs/jiva-operator/blob/develop/docs/quickstart.md
kubectl apply -f https://openebs.github.io/charts/hostpath-operator.yaml
kubectl apply -f https://openebs.github.io/charts/jiva-operator.yaml
  3. Edit the openebs-hostpath SC to have a different BasePath (snipped config)
metadata:
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/local-fs/virtuals/kubernetes/openebs"
  4. Check status of the jiva components (as per the quickstart guide)
$ kubectl get -n openebs all
NAME                                               READY   STATUS             RESTARTS        AGE
pod/jiva-operator-57b879cfc8-hxnhc                 1/1     Running            0               10m
pod/openebs-jiva-csi-controller-0                  5/5     Running            0               10m
pod/openebs-jiva-csi-node-bd96p                    2/3     CrashLoopBackOff   6 (4m48s ago)   10m
pod/openebs-localpv-provisioner-778d48fff8-9wmbr   1/1     Running            0               14m

NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/openebs-jiva-csi-node   1         1         0       1            0           <none>          10m

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jiva-operator                 1/1     1            1           10m
deployment.apps/openebs-localpv-provisioner   1/1     1            1           14m

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/jiva-operator-57b879cfc8                 1         1         1       10m
replicaset.apps/openebs-localpv-provisioner-778d48fff8   1         1         1       14m

NAME                                           READY   AGE
statefulset.apps/openebs-jiva-csi-controller   1/1     10m

What did you expect to happen:
Working Jiva

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other pastebin is fine.)

  • kubectl describe -n openebs pod openebs-jiva-csi-node-bd96p
Name:                 openebs-jiva-csi-node-bd96p
Namespace:            openebs
Priority:             900001000
Priority Class Name:  openebs-jiva-csi-node-critical
Node:                 rohan2013/192.168.1.2
Start Time:           Mon, 13 Jun 2022 13:22:02 +0200
Labels:               app=openebs-jiva-csi-node
                      controller-revision-hash=6799874754
                      name=openebs-jiva-csi-node
                      openebs.io/component-name=openebs-jiva-csi-node
                      openebs.io/version=3.2.0
                      pod-template-generation=1
                      role=openebs-jiva-csi
Annotations:          <none>
Status:               Running
IP:                   192.168.1.2
IPs:
  IP:           192.168.1.2
Controlled By:  DaemonSet/openebs-jiva-csi-node
Containers:
  csi-node-driver-registrar:
    Container ID:  containerd://e08df422db1a7e4e94e033a10f71b74cf53fe83d6ad5e13eedbecf42891d11e0
    Image:         k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0
    Image ID:      k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=$(ADDRESS)
      --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
    State:          Running
      Started:      Mon, 13 Jun 2022 13:22:03 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      ADDRESS:               /plugin/csi.sock
      DRIVER_REG_SOCK_PATH:  /var/lib/kubelet/plugins/jiva.csi.openebs.io/csi.sock
      KUBE_NODE_NAME:         (v1:spec.nodeName)
      NODE_DRIVER:           openebs-jiva-csi
    Mounts:
      /plugin from plugin-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9dzsn (ro)
  jiva-csi-plugin:
    Container ID:  containerd://bba1da5f70845cd77697a2e8f2f7523b589a298fd9de0ed904c2e5d8c803a036
    Image:         openebs/jiva-csi:3.2.0
    Image ID:      docker.io/openebs/jiva-csi@sha256:506a4d9ca03a956fe28e5fa8d3bc2960e69accd128e5f93c6024f7cfe0650151
    Port:          <none>
    Host Port:     <none>
    Args:
      --name=jiva.csi.openebs.io
      --nodeid=$(OPENEBS_NODE_ID)
      --endpoint=$(OPENEBS_CSI_ENDPOINT)
      --plugin=$(OPENEBS_NODE_DRIVER)
      --retrycount=20
      --metricsBindAddress=:9505
    State:          Running
      Started:      Mon, 13 Jun 2022 13:22:03 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      OPENEBS_NODE_ID:        (v1:spec.nodeName)
      OPENEBS_CSI_ENDPOINT:  unix:///plugin/csi.sock
      OPENEBS_NODE_DRIVER:   node
      OPENEBS_NAMESPACE:     openebs (v1:metadata.namespace)
      REMOUNT:               True
    Mounts:
      /dev from device-dir (rw)
      /host from host-root (rw)
      /plugin from plugin-dir (rw)
      /sbin/iscsiadm from chroot-iscsiadm (rw,path="iscsiadm")
      /var/lib/kubelet/ from pods-mount-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9dzsn (ro)
  liveness-probe:
    Container ID:  containerd://098a2ff188207d6b34f10a3524d1024762e46d86f7409e6e17c6b08e9b76b6d0
    Image:         k8s.gcr.io/sig-storage/livenessprobe:v2.3.0
    Image ID:      k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-address=/plugin/csi.sock
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 13 Jun 2022 13:33:04 +0200
      Finished:     Mon, 13 Jun 2022 13:33:04 +0200
    Ready:          False
    Restart Count:  7
    Environment:    <none>
    Mounts:
      /plugin from plugin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9dzsn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  device-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:  Directory
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  DirectoryOrCreate
  plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/jiva.csi.openebs.io/
    HostPathType:  DirectoryOrCreate
  pods-mount-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/
    HostPathType:  Directory
  chroot-iscsiadm:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      openebs-jiva-csi-iscsiadm
    Optional:  false
  host-root:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  Directory
  kube-api-access-9dzsn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  12m                   default-scheduler  Successfully assigned openebs/openebs-jiva-csi-node-bd96p to rohan2013
  Normal   Pulled     12m                   kubelet            Container image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0" already present on machine
  Normal   Created    12m                   kubelet            Created container csi-node-driver-registrar
  Normal   Started    12m                   kubelet            Started container csi-node-driver-registrar
  Normal   Pulled     12m                   kubelet            Container image "openebs/jiva-csi:3.2.0" already present on machine
  Normal   Created    12m                   kubelet            Created container jiva-csi-plugin
  Normal   Started    12m                   kubelet            Started container jiva-csi-plugin
  Normal   Pulled     11m (x4 over 12m)     kubelet            Container image "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0" already present on machine
  Normal   Created    11m (x4 over 12m)     kubelet            Created container liveness-probe
  Normal   Started    11m (x4 over 12m)     kubelet            Started container liveness-probe
  Warning  BackOff    2m34s (x48 over 12m)  kubelet            Back-off restarting failed container

As you can see, the liveness-probe is failing.

  • kubectl logs -n openebs openebs-jiva-csi-node-bd96p -c liveness-probe
I0613 11:33:04.652474       1 main.go:149] calling CSI driver to discover driver name
I0613 11:33:04.653452       1 main.go:155] CSI driver name: "jiva.csi.openebs.io"
I0613 11:33:04.653466       1 main.go:183] ServeMux listening at "0.0.0.0:9808"
F0613 11:33:04.653653       1 main.go:186] failed to start http server with error: listen tcp 0.0.0.0:9808: bind: address already in use
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000102001, 0xc0003fa000, 0x89, 0xaa)
        /workspace/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/klog/v2.(*loggingT).output(0xe78700, 0xc000000003, 0x0, 0x0, 0xc0000d6f50, 0xbef6a8, 0x7, 0xba, 0x0)
        /workspace/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/klog/v2.(*loggingT).printf(0xe78700, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0xa708f9, 0x2a, 0xc00045cd50, 0x1, ...)
        /workspace/vendor/k8s.io/klog/v2/klog.go:751 +0x191
k8s.io/klog/v2.Fatalf(...)
        /workspace/vendor/k8s.io/klog/v2/klog.go:1509
main.main()
        /workspace/cmd/livenessprobe/main.go:186 +0x86c

goroutine 18 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0xe78700)
        /workspace/vendor/k8s.io/klog/v2/klog.go:1164 +0x8b
created by k8s.io/klog/v2.init.0
        /workspace/vendor/k8s.io/klog/v2/klog.go:418 +0xdf
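Not part of the original report, but the fatal log above says 0.0.0.0:9808 is already bound, and the pod runs on the host network, so something else on the node is likely holding the default health port. A hedged sketch of moving the sidecar to a free port via a strategic-merge patch on the daemonset, assuming the stock csi livenessprobe --health-port flag; the chosen port is arbitrary:

# liveness-port.yaml (illustrative), applied with:
#   kubectl patch daemonset openebs-jiva-csi-node -n openebs --patch-file liveness-port.yaml
spec:
  template:
    spec:
      containers:
      - name: liveness-probe
        args:
        - --csi-address=/plugin/csi.sock
        - --health-port=9809      # move off 9808, which is already taken on this node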

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Jiva version: 3.2.0
  • OpenEBS version: 3.2.0
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:26:19Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:18:48Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes installer & version: kubeadm
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:24:38Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: On prem
  • OS (e.g. from /etc/os-release): Debian bullseye

Any advice or help is much appreciated :)

Feature parity with non-CSI Jiva engine

Describe the problem/challenge you have
The jiva operator does not support all of the configurable properties that the non-CSI jiva engine does.

Describe the solution you'd like
Here is a checklist of the missing tunables:

  • StoragePool/StorageClass
  • ReplicaCount
  • AuxResources
  • FSType
  • ServiceAccountName
  • VolumeMonitor
  • PriorityClass
  • Event generation in operator
  • Target
    • Tolerations
    • NodeSelector
    • Affinity
    • Resources
  • Replica
    • Tolerations
    • NodeSelector
    • Affinity
    • Resources

Need anti-affinity policies for replica pods.

Describe the problem/challenge you have

Distributed applications like mongodb require their volumes to be spread across multiple nodes - just like their own replicas. Cross-scheduling them will cause performance and high-availability issues.

Consider this case of 3 replica mongo sts. The mongo pods are neatly distributed across three different nodes:

kiran_mova_mayadata_io@kmova-dev:mongodb$ kubectl get pods -o wide | grep mongo
mongo-0                  2/2     Running   0          56m   10.0.2.15     gke-kmova-helm-default-pool-30f2c6c6-1942   <none>           <none>
mongo-1                  2/2     Running   0          55m   10.0.0.21     gke-kmova-helm-default-pool-30f2c6c6-3jsv   <none>           <none>
mongo-2                  2/2     Running   0          54m   10.0.1.12     gke-kmova-helm-default-pool-30f2c6c6-qf2w   <none>           <none>

However, the target pods are packed into single node:

kiran_mova_mayadata_io@kmova-dev:mongodb$ kubectl get pods -o wide -n openebs | grep jiva-ctrl
pvc-1b21ac95-fd9f-466f-a39b-c1e1ab6e6cb5-jiva-ctrl-75d9f46fvxng   1/1     Running   0          58m   10.0.0.22     gke-kmova-helm-default-pool-30f2c6c6-3jsv   <none>           <none>
pvc-96120cb1-0f36-4a53-9263-6af8b8cc5a66-jiva-ctrl-6c5db7d7hq6n   1/1     Running   0          59m   10.0.0.17     gke-kmova-helm-default-pool-30f2c6c6-3jsv   <none>           <none>
pvc-faa218d5-46c6-4bb7-a598-024970cf9b4c-jiva-ctrl-548585cnz9js   1/1     Running   0          59m   10.0.0.20     gke-kmova-helm-default-pool-30f2c6c6-3jsv   <none>           <none>
  • A failure to 3jsv will cause all mongo pods to go down.
  • The mongo pods on nodes other than 3jsv will have to go over the network to access their data.

A similar issue exists (but slightly more severe) with the jiva replica pods getting scheduled to the same node:

pvc-1b21ac95-fd9f-466f-a39b-c1e1ab6e6cb5-jiva-rep-0               1/1     Running   0          54m   10.0.0.24     gke-kmova-helm-default-pool-30f2c6c6-3jsv   <none>           <none>
pvc-96120cb1-0f36-4a53-9263-6af8b8cc5a66-jiva-rep-0               1/1     Running   0          55m   10.0.0.19     gke-kmova-helm-default-pool-30f2c6c6-3jsv   <none>           <none>
pvc-faa218d5-46c6-4bb7-a598-024970cf9b4c-jiva-rep-0               1/1     Running   0          55m   10.0.2.17     gke-kmova-helm-default-pool-30f2c6c6-1942   <none>           <none>
  • Two of the replicas are on 3jsv - which means data for two of the mongo pods is on only 3jsv. Failure of 3jsv will cause mongo db to be lost.

Describe the solution you'd like
Jiva Volume Policies should allow specifying an anti-affinity feature that ensures the replica pods of a given application are not co-located on the same node.

Anything else you would like to add:
This feature was supported with external-storage Jiva Volumes - using ReplicaAntiAffinityTopoKey and setting a unique value for the openebs.io/replica-anti-affinity label on all the PVCs belonging to the same application.
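For context, a sketch of how that looked with the legacy external-storage provisioner — every PVC of the application carried the same openebs.io/replica-anti-affinity label value; the PVC name, storage class, and label value here are illustrative:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-data-mongo-0                      # illustrative PVC name
  labels:
    openebs.io/replica-anti-affinity: mongo     # same value on all PVCs of the app
spec:
  storageClassName: openebs-jiva-default        # illustrative legacy jiva storage class
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5G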

Workaround
When using single replica volumes - use local storage directly.

Cannot configure pull secrets, so cannot use it at all

Hi, thanks for the lib! However, we cannot configure the pull secrets, so we cannot use it at all.

We use the openebs helm chart + jiva helm chart to deploy it. Then we create a PVC with jiva's storage class. Then new pods are created: pvc-bb6efceb-e7b0-4d1f-ae64-5dd2c0fa6aeb-jiva-ctrl-fbb5bf7dc989 and pvc-bb6efceb-e7b0-4d1f-ae64-5dd2c0fa6aeb-jiva-rep-0. However, the two pods do not have pull secrets specified, so the container image cannot be pulled at all (we cannot use the image on Docker Hub but have to use a private registry with pull secrets instead).
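Not from the original report, but a commonly used workaround while no chart option exists is to attach the pull secret to the service account that the ctrl/rep pods run under in the openebs namespace; which service account that is depends on the install, so default below is an assumption:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: default                 # assumption: the jiva ctrl/rep pods use this service account
  namespace: openebs
imagePullSecrets:
  - name: my-registry-cred      # pre-created docker-registry secret in the openebs namespace

Pods created after this change inherit the secret; ctrl/rep pods that already exist would need to be recreated to pick it up.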

Jiva Volume is not getting mounted on the pod and no error is raised

Jiva is freshly installed, but when trying to create the first JV the pod ends up mounting the host /var FS directly instead of the volume:

  • Pod:
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
  • Persistent Volume Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: jiva.csi.openebs.io
    volume.kubernetes.io/selected-node: t4srv101b4.ad04.eni.intranet
    volume.kubernetes.io/storage-provisioner: jiva.csi.openebs.io
  creationTimestamp: "2022-08-23T10:59:25Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: test-jiva
  namespace: default
  resourceVersion: "27081023"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/test-jiva
  uid: 79264c75-4275-453d-9cd8-36e01cb9c7f5
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: replicated
  volumeMode: Filesystem
  volumeName: pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  phase: Bound
  • Evidence from container's shell:
root@task-pv-pod:/# df -h
Filesystem            Size  Used Avail Use% Mounted on
overlay                38G  7.3G   31G  20% /
tmpfs                  64M     0   64M   0% /dev
tmpfs                 3.9G     0  3.9G   0% /sys/fs/cgroup
shm                    64M     0   64M   0% /dev/shm
/dev/mapper/rhel-var   38G  7.3G   31G  20% /etc/hosts
tmpfs                 7.7G   12K  7.7G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                 3.9G     0  3.9G   0% /proc/acpi
tmpfs                 3.9G     0  3.9G   0% /proc/scsi
tmpfs                 3.9G     0  3.9G   0% /sys/firmware
root@task-pv-pod:/# mount
overlay on / type overlay (rw,relatime,lowerdir=/var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/511/fs:/var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/510/fs:/var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/509/fs:/var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/508/fs:/var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/507/fs:/var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/506/fs,upperdir=/var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/523/fs,workdir=/var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/523/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (ro,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/rdma type cgroup (ro,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (ro,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,memory)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
/dev/mapper/rhel-var on /etc/hosts type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/rhel-var on /dev/termination-log type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/rhel-var on /etc/hostname type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/rhel-var on /etc/resolv.conf type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/rhel-var on /usr/share/nginx/html type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime,size=8043240k)
proc on /proc/bus type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/fs type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/irq type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
tmpfs on /proc/acpi type tmpfs (ro,relatime)
tmpfs on /proc/kcore type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/keys type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/timer_list type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/sched_debug type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/scsi type tmpfs (ro,relatime)
tmpfs on /sys/firmware type tmpfs (ro,relatime)
root@task-pv-pod:/#
  • Evidence from host shell:
[root@t4srv101b4 ~]# mount |grep aa8e5673-9d1f-4dbe-85f8-3c1c4249542a
tmpfs on /var/snap/microk8s/common/var/lib/kubelet/pods/aa8e5673-9d1f-4dbe-85f8-3c1c4249542a/volumes/kubernetes.io~projected/kube-api-access-s5tgp type tmpfs (rw,relatime,size=8043240k)
[root@t4srv101b4 ~]# lsscsi
[0:0:0:0]    disk    VMware   Virtual disk     2.0   /dev/sda
[0:0:1:0]    disk    VMware   Virtual disk     2.0   /dev/sdb
[0:0:2:0]    disk    VMware   Virtual disk     2.0   /dev/sdc
[0:0:3:0]    disk    VMware   Virtual disk     2.0   /dev/sdd
[0:0:4:0]    disk    VMware   Virtual disk     2.0   /dev/sde
[0:0:5:0]    disk    VMware   Virtual disk     2.0   /dev/sdf
[0:0:6:0]    disk    VMware   Virtual disk     2.0   /dev/sdg
[3:0:0:0]    cd/dvd  NECVMWar VMware SATA CD00 1.00  /dev/sr0
[35:0:0:0]   disk    OPENEBS  JIVA             0.1   /dev/sdj
[root@t4srv101b4 ~]# mount |grep sdj
[root@t4srv101b4 ~]#
  • Jiva Operators logs:
time="2022-08-16T17:22:31Z" level=info msg="Go Version: go1.17.6"
time="2022-08-16T17:22:31Z" level=info msg="Go OS/Arch: linux/amd64"
time="2022-08-16T17:22:31Z" level=info msg="Version of jiva-operator: develop-dev"
time="2022-08-16T17:22:31Z" level=info msg="starting manager"
<<redacted>
time="2022-08-23T11:01:47Z" level=info msg="not able to get controller pod ip for volume pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5: expected 1 controller pod got 0"
time="2022-08-23T11:01:48Z" level=info msg="start bootstraping jiva componentsJivaVolume: pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5"
time="2022-08-23T11:01:48Z" level=info msg="Creating a new serviceService.NamespaceopenebsService.Namepvc-79264c75-4275-453d-9cd8-36e01cb9c7f5-jiva-ctrl-svc"
time="2022-08-23T11:01:49Z" level=info msg="Updating JivaVolume with iscsi specISCSISpec{10.152.183.221 3260 iqn.2016-09.com.openebs.jiva:pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5}"
time="2022-08-23T11:01:49Z" level=info msg="Creating a new deploymentDeploy.NamespaceopenebsDeploy.Namepvc-79264c75-4275-453d-9cd8-36e01cb9c7f5-jiva-ctrl"
time="2022-08-23T11:01:49Z" level=info msg="Creating a new StatefulsetStatefulset.NamespaceopenebsSts.Namepvc-79264c75-4275-453d-9cd8-36e01cb9c7f5-jiva-rep"
time="2022-08-23T11:01:49Z" level=info msg="Creating a new pod disruption budgetPdb.NamespaceopenebsPdb.Namepvc-79264c75-4275-453d-9cd8-36e01cb9c7f5-pdb"
time="2022-08-23T11:01:49Z" level=info msg="not able to get controller pod ip for volume pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5: expected 1 controller pod got 0"
time="2022-08-23T11:01:50Z" level=info msg="start bootstraping jiva componentsJivaVolume: pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5"
time="2022-08-23T11:01:50Z" level=info msg="Updating JivaVolume with iscsi specISCSISpec{10.152.183.221 3260 iqn.2016-09.com.openebs.jiva:pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5}"
time="2022-08-23T11:01:50Z" level=error msg="failed to bootstrap volume pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5, due to error: failed to update JivaVolume with service info, err: Operation cannot be fulfilled on jivavolumes.openebs.io \"pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5\": the object has been modified; please apply your changes to the latest version and try again"
time="2022-08-23T11:01:50Z" level=error msg="failed to update JivaVolume, err: Operation cannot be fulfilled on jivavolumes.openebs.io \"pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5\": the object has been modified; please apply your changes to the latest version and try againfailed to update JivaVolume phase"
time="2022-08-23T11:01:50Z" level=info msg="not able to get controller pod ip for volume pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5: expected 1 controller pod got 0"
time="2022-08-23T11:01:53Z" level=error msg="failed to update JivaVolume, err: Operation cannot be fulfilled on jivavolumes.openebs.io \"pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5\": the object has been modified; please apply your changes to the latest version and try againfailed to update status"
time="2022-08-23T11:01:55Z" level=error msg="failed to update JivaVolume, err: Operation cannot be fulfilled on jivavolumes.openebs.io \"pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5\": the object has been modified; please apply your changes to the latest version and try againfailed to update status"
time="2022-08-23T11:02:04Z" level=error msg="failed to update JivaVolume, err: Operation cannot be fulfilled on jivavolumes.openebs.io \"pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5\": the object has been modified; please apply your changes to the latest version and try againfailed to update status"
  • Pv controller logs:
[root@t4srv101b4 ~]# kubectl -n openebs logs pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5-jiva-ctrl-67ff7f6xwnq5 jiva-controller
time="2022-08-23T11:01:51Z" level=info msg="RPC_READ_TIMEOUT env not set"
time="2022-08-23T11:01:51Z" level=info msg="RPC_WRITE_TIMEOUT env not set"
time="2022-08-23T11:01:51Z" level=info msg="REPLICATION_FACTOR: 3, RPC_READ_TIMEOUT: 0s, RPC_WRITE_TIMEOUT: 0s"
time="2022-08-23T11:01:51Z" level=info msg="Starting controller with frontendIP: , and clusterIP: 10.152.183.221"
time="2022-08-23T11:01:51Z" level=info msg="resetting controller"
time="2022-08-23T11:01:51Z" level=info msg="Listening on :9501"
time="2022-08-23T11:01:53Z" level=info msg="Register Replica, Address: 172.16.204.224 UUID: 771be914e8f69feec00c59682a954ada81db8f5a Uptime: 17.103004ms State: closed Type: Backend RevisionCount: 1"
time="2022-08-23T11:01:53Z" level=warning msg="No of yet to be registered replicas are less than 3 , No of registered replicas: 1"
172.16.204.192 - - [23/Aug/2022:11:01:53 +0000] "POST /v1/register HTTP/1.1" 200 0
time="2022-08-23T11:01:53Z" level=info msg="Register Replica, Address: 172.16.67.153 UUID: e461a5cfb6e95b5a50c48250092438c80a4c9448 Uptime: 22.890005ms State: closed Type: Backend RevisionCount: 1"
time="2022-08-23T11:01:53Z" level=info msg="Replica 172.16.204.224 signalled to start, registered replicas: map[string]types.RegReplica{\"172.16.204.224\":types.RegReplica{Address:\"172.16.204.224\", UUID:\"771be914e8f69feec00c59682a954ada81db8f5a\", UpTime:17103004, RevCount:1, RepType:\"Backend\", RepState:\"closed\"}, \"172.16.67.153\":types.RegReplica{Address:\"172.16.67.153\", UUID:\"e461a5cfb6e95b5a50c48250092438c80a4c9448\", UpTime:22890005, RevCount:1, RepType:\"Backend\", RepState:\"closed\"}}"
172.16.67.128 - - [23/Aug/2022:11:01:53 +0000] "POST /v1/register HTTP/1.1" 200 0
time="2022-08-23T11:01:53Z" level=info msg="resetting controller"
time="2022-08-23T11:01:53Z" level=info msg="Connecting to remote: 172.16.204.224:9502"
time="2022-08-23T11:01:53Z" level=info msg="Opening: 172.16.204.224:9502"
time="2022-08-23T11:01:53Z" level=info msg="check if replica tcp://172.16.204.224:9502 is already added"
time="2022-08-23T11:01:53Z" level=info msg="check if any WO replica available"
time="2022-08-23T11:01:53Z" level=info msg="Set replica mode of 172.16.204.224:9502 to : WO"
time="2022-08-23T11:01:53Z" level=info msg="Adding backend: tcp://172.16.204.224:9502"
time="2022-08-23T11:01:53Z" level=info msg="replicator reset false"
time="2022-08-23T11:01:53Z" level=info msg="buildreadwriters: prev: 0 0 cur: 1 0"
time="2022-08-23T11:01:53Z" level=info msg="Start monitoring tcp://172.16.204.224:9502"
time="2022-08-23T11:01:53Z" level=info msg="Get backend tcp://172.16.204.224:9502 clone status"
time="2022-08-23T11:01:53Z" level=error msg="Waiting for replica to update CloneStatus to Completed/NA, retry after 2s"
time="2022-08-23T11:01:55Z" level=info msg="Get backend tcp://172.16.204.224:9502 clone status"
time="2022-08-23T11:01:55Z" level=info msg="Set replica mode of 172.16.204.224:9502 to : RW"
time="2022-08-23T11:01:55Z" level=info msg="Set backend tcp://172.16.204.224:9502 replica mode to RW"
time="2022-08-23T11:01:55Z" level=info msg="Set replica tcp://172.16.204.224:9502 to mode RW"
time="2022-08-23T11:01:55Z" level=info msg="addr tcp://172.16.204.224:9502 m: RW prev: WO in setmode"
time="2022-08-23T11:01:55Z" level=info msg="replicator reset false"
time="2022-08-23T11:01:55Z" level=info msg="buildreadwriters: prev: 0 0 cur: 1 1"
time="2022-08-23T11:01:55Z" level=info msg="Get backend tcp://172.16.204.224:9502 revision counter 1"
time="2022-08-23T11:01:55Z" level=info msg="sending add signal to 172.16.67.153"
time="2022-08-23T11:01:55Z" level=info msg="Create Replica for address tcp://172.16.67.153:9502"
time="2022-08-23T11:01:55Z" level=info msg="Update volume status"
time="2022-08-23T11:01:55Z" level=info msg="Previously Volume RO: true, Currently: true, Total Replicas: 1, RW replicas: 1, Total backends: 1"
time="2022-08-23T11:01:55Z" level=info msg="prevCheckpoint: , currCheckpoint: "
time="2022-08-23T11:01:55Z" level=info msg="Start SCSI target"
time="2022-08-23T11:01:55Z" level=info msg="SCSI device created"
time="2022-08-23T11:01:55Z" level=info msg="iSCSI service listening on: 0.0.0.0:3260"
172.16.204.192 - - [23/Aug/2022:11:01:53 +0000] "POST /v1/volumes/cHZjLTc5MjY0Yzc1LTQyNzUtNDUzZC05Y2Q4LTM2ZTAxY2I5YzdmNQ==?action=start HTTP/1.1" 200 1045
time="2022-08-23T11:01:55Z" level=info msg="check if replica tcp://172.16.67.153:9502 is already added"
time="2022-08-23T11:01:55Z" level=info msg="check if any WO replica available"
time="2022-08-23T11:01:55Z" level=info msg="verify replication factor"
time="2022-08-23T11:01:55Z" level=info msg="Connecting to remote: 172.16.67.153:9502"
time="2022-08-23T11:01:55Z" level=info msg="Opening: 172.16.67.153:9502"
time="2022-08-23T11:01:55Z" level=info msg="Create Replica for address tcp://172.16.98.67:9502"
time="2022-08-23T11:01:55Z" level=info msg="check if replica tcp://172.16.98.67:9502 is already added"
time="2022-08-23T11:01:55Z" level=info msg="check if any WO replica available"
time="2022-08-23T11:01:55Z" level=info msg="verify replication factor"
time="2022-08-23T11:01:55Z" level=info msg="Connecting to remote: 172.16.98.67:9502"
time="2022-08-23T11:01:55Z" level=info msg="Opening: 172.16.98.67:9502"
time="2022-08-23T11:01:55Z" level=info msg="check if replica tcp://172.16.67.153:9502 is already added"
time="2022-08-23T11:01:55Z" level=info msg="check if any WO replica available"
time="2022-08-23T11:01:55Z" level=info msg="Snapshot: 172.16.204.224:9502 b212e6f0-02ff-448b-a0c9-f0b71bf2217d UserCreated false Created at 2022-08-23T11:01:55Z"
time="2022-08-23T11:01:55Z" level=info msg="successfully taken snapshots cnt 1"
time="2022-08-23T11:01:55Z" level=info msg="Snapshot: 172.16.67.153:9502 b212e6f0-02ff-448b-a0c9-f0b71bf2217d UserCreated false Created at 2022-08-23T11:01:55Z"
time="2022-08-23T11:01:55Z" level=info msg="Set replica mode of 172.16.67.153:9502 to : WO"
time="2022-08-23T11:01:55Z" level=info msg="Adding backend: tcp://172.16.67.153:9502"
time="2022-08-23T11:01:55Z" level=info msg="replicator reset false"
time="2022-08-23T11:01:55Z" level=info msg="buildreadwriters: prev: 0 1 cur: 2 1"
time="2022-08-23T11:01:55Z" level=info msg="Previously Volume RO: true, Currently: true, Total Replicas: 2, RW replicas: 1, Total backends: 2"
time="2022-08-23T11:01:55Z" level=info msg="prevCheckpoint: , currCheckpoint: "
time="2022-08-23T11:01:55Z" level=info msg="Start monitoring tcp://172.16.67.153:9502"
172.16.67.128 - - [23/Aug/2022:11:01:55 +0000] "POST /v1/replicas HTTP/1.1" 200 435
time="2022-08-23T11:01:55Z" level=info msg="check if replica tcp://172.16.98.67:9502 is already added"
time="2022-08-23T11:01:55Z" level=info msg="check if any WO replica available"
time="2022-08-23T11:01:55Z" level=info msg="Check if Replica: tcp://172.16.98.67:9502 has greater revision count"
time="2022-08-23T11:01:55Z" level=info msg="Get backend tcp://172.16.67.153:9502 revision counter 1"
time="2022-08-23T11:01:55Z" level=info msg="Revision count: 1 of New Replica: tcp://172.16.98.67:9502, Revision count: 1 of WO Replica: tcp://172.16.67.153:9502"
time="2022-08-23T11:01:55Z" level=warning msg="can have only one WO replica at a time, found WO replica: tcp://172.16.67.153:9502"
time="2022-08-23T11:01:55Z" level=info msg="addReplicaNoLock tcp://172.16.98.67:9502 from addReplica failed can only have one WO replica at a time, found WO Replica: tcp://172.16.67.153:9502"
time="2022-08-23T11:01:55Z" level=error msg="Error in request: can only have one WO replica at a time, found WO Replica: tcp://172.16.67.153:9502"
10.110.157.140 - - [23/Aug/2022:11:01:55 +0000] "POST /v1/replicas HTTP/1.1" 500 230
172.16.67.128 - - [23/Aug/2022:11:01:55 +0000] "GET /v1/replicas/dGNwOi8vMTcyLjE2LjY3LjE1Mzo5NTAy HTTP/1.1" 200 435
time="2022-08-23T11:01:55Z" level=info msg="Prepare Rebuild Replica for id tcp://172.16.67.153:9502"
time="2022-08-23T11:01:55Z" level=info msg="Synchronizing volume-head-001.img.meta@tcp://172.16.204.224:9502 to [email protected]:9700"
time="2022-08-23T11:01:58Z" level=info msg="Done synchronizing volume-head-001.img.meta to [email protected]:9700"
172.16.67.128 - - [23/Aug/2022:11:01:55 +0000] "POST /v1/replicas/dGNwOi8vMTcyLjE2LjY3LjE1Mzo5NTAy?action=preparerebuild HTTP/1.1" 200 238
172.16.67.128 - - [23/Aug/2022:11:01:58 +0000] "GET /v1/replicas/dGNwOi8vMTcyLjE2LjY3LjE1Mzo5NTAy HTTP/1.1" 200 435
time="2022-08-23T11:01:58Z" level=info msg="Verify Rebuild Replica for id tcp://172.16.67.153:9502"
time="2022-08-23T11:01:58Z" level=info msg="chain [volume-head-001.img volume-snap-b212e6f0-02ff-448b-a0c9-f0b71bf2217d.img] from rw replica tcp://172.16.204.224:9502, indx: 1"
time="2022-08-23T11:01:58Z" level=info msg="chain [volume-head-001.img volume-snap-b212e6f0-02ff-448b-a0c9-f0b71bf2217d.img] from wo replica tcp://172.16.67.153:9502, indx: 1"
time="2022-08-23T11:01:58Z" level=info msg="Get backend tcp://172.16.204.224:9502 revision counter 1"
time="2022-08-23T11:01:58Z" level=info msg="rw replica tcp://172.16.204.224:9502 revision counter 1"
time="2022-08-23T11:01:58Z" level=info msg="Set replica mode of 172.16.67.153:9502 to : RW"
time="2022-08-23T11:01:58Z" level=info msg="Set backend tcp://172.16.67.153:9502 replica mode to RW"
time="2022-08-23T11:01:58Z" level=info msg="Set revision counter of 172.16.67.153:9502 to : 1"
172.16.67.128 - - [23/Aug/2022:11:01:58 +0000] "POST /v1/replicas/dGNwOi8vMTcyLjE2LjY3LjE1Mzo5NTAy?action=verifyrebuild HTTP/1.1" 200 435
time="2022-08-23T11:01:58Z" level=info msg="Set backend tcp://172.16.67.153:9502 revision counter to 1"
time="2022-08-23T11:01:58Z" level=info msg="WO replica tcp://172.16.67.153:9502's chain verified, update replica mode to RW"
time="2022-08-23T11:01:58Z" level=info msg="Set replica tcp://172.16.67.153:9502 to mode RW"
time="2022-08-23T11:01:58Z" level=info msg="addr tcp://172.16.67.153:9502 m: RW prev: WO in setmode"
time="2022-08-23T11:01:58Z" level=info msg="replicator reset false"
time="2022-08-23T11:01:58Z" level=info msg="buildreadwriters: prev: 0 1 cur: 2 2"
time="2022-08-23T11:01:58Z" level=info msg="Previously Volume RO: true, Currently: false, Total Replicas: 2, RW replicas: 2, Total backends: 2"
time="2022-08-23T11:01:58Z" level=info msg="prevCheckpoint: , currCheckpoint: "
time="2022-08-23T11:02:01Z" level=error msg="Read msg.Version failed, Error: EOF"
time="2022-08-23T11:02:01Z" level=error msg="Error reading from wire: EOF, RemoteAddr: 172.16.98.67:9503"
time="2022-08-23T11:02:01Z" level=info msg="Exiting rpc reader, RemoteAddr:172.16.98.67:9503"
time="2022-08-23T11:02:03Z" level=info msg="Create Replica for address tcp://172.16.98.67:9502"
time="2022-08-23T11:02:03Z" level=info msg="check if replica tcp://172.16.98.67:9502 is already added"
time="2022-08-23T11:02:03Z" level=info msg="check if any WO replica available"
time="2022-08-23T11:02:03Z" level=info msg="verify replication factor"
time="2022-08-23T11:02:03Z" level=info msg="Connecting to remote: 172.16.98.67:9502"
time="2022-08-23T11:02:03Z" level=info msg="Opening: 172.16.98.67:9502"
time="2022-08-23T11:02:03Z" level=info msg="check if replica tcp://172.16.98.67:9502 is already added"
time="2022-08-23T11:02:03Z" level=info msg="check if any WO replica available"
time="2022-08-23T11:02:03Z" level=info msg="Snapshot: 172.16.67.153:9502 4ede9bde-8043-4656-b86a-0c638e46b11c UserCreated false Created at 2022-08-23T11:02:03Z"
time="2022-08-23T11:02:03Z" level=info msg="Snapshot: 172.16.204.224:9502 4ede9bde-8043-4656-b86a-0c638e46b11c UserCreated false Created at 2022-08-23T11:02:03Z"
time="2022-08-23T11:02:03Z" level=info msg="successfully taken snapshots cnt 2"
time="2022-08-23T11:02:03Z" level=info msg="Snapshot: 172.16.98.67:9502 4ede9bde-8043-4656-b86a-0c638e46b11c UserCreated false Created at 2022-08-23T11:02:03Z"
time="2022-08-23T11:02:03Z" level=info msg="Set replica mode of 172.16.98.67:9502 to : WO"
time="2022-08-23T11:02:03Z" level=info msg="Adding backend: tcp://172.16.98.67:9502"
time="2022-08-23T11:02:03Z" level=info msg="replicator reset false"
time="2022-08-23T11:02:03Z" level=info msg="buildreadwriters: prev: 0 2 cur: 3 2"
time="2022-08-23T11:02:03Z" level=info msg="Previously Volume RO: false, Currently: false, Total Replicas: 3, RW replicas: 2, Total backends: 3"
time="2022-08-23T11:02:03Z" level=info msg="prevCheckpoint: , currCheckpoint: "
time="2022-08-23T11:02:03Z" level=info msg="Start monitoring tcp://172.16.98.67:9502"
10.110.157.140 - - [23/Aug/2022:11:02:03 +0000] "POST /v1/replicas HTTP/1.1" 200 434
10.110.157.140 - - [23/Aug/2022:11:02:03 +0000] "GET /v1/replicas/dGNwOi8vMTcyLjE2Ljk4LjY3Ojk1MDI= HTTP/1.1" 200 434
time="2022-08-23T11:02:03Z" level=info msg="Prepare Rebuild Replica for id tcp://172.16.98.67:9502"
time="2022-08-23T11:02:03Z" level=info msg="Synchronizing volume-head-002.img.meta@tcp://172.16.204.224:9502 to [email protected]:9700"
time="2022-08-23T11:02:03Z" level=info msg="Exiting rpc loop for 172.16.98.67:9503 with err EOF"
time="2022-08-23T11:02:03Z" level=info msg="Closing read on RPC connection"
time="2022-08-23T11:02:03Z" level=info msg="Closing write on RPC connection"
time="2022-08-23T11:02:03Z" level=info msg="Exiting rpc writer, RemoteAddr:172.16.98.67:9503"
time="2022-08-23T11:02:03Z" level=info msg="Done synchronizing volume-head-002.img.meta to [email protected]:9700"
10.110.157.140 - - [23/Aug/2022:11:02:03 +0000] "POST /v1/replicas/dGNwOi8vMTcyLjE2Ljk4LjY3Ojk1MDI=?action=preparerebuild HTTP/1.1" 200 291
time="2022-08-23T11:02:04Z" level=info msg="connection establishing at: 172.16.98.127:3260"
time="2022-08-23T11:02:04Z" level=info msg="Target is connected to initiator: 10.110.157.140:17301"
time="2022-08-23T11:02:04Z" level=error msg="read BHS failed:EOF"
time="2022-08-23T11:02:04Z" level=warning msg="iscsi connection[0] closed"
time="2022-08-23T11:02:04Z" level=info msg="connection establishing at: 172.16.98.127:3260"
time="2022-08-23T11:02:04Z" level=info msg="Target is connected to initiator: 10.110.157.140:35608"
time="2022-08-23T11:02:04Z" level=info msg="Discovery request received from initiator: iqn.1994-05.com.redhat:e431f0a16835, Session type: Discovery, ISID: 0x23d000000"
time="2022-08-23T11:02:04Z" level=warning msg="unexpected connection state: full feature"
time="2022-08-23T11:02:04Z" level=error msg="read BHS failed:read tcp 172.16.98.127:3260->10.110.157.140:35608: read: connection reset by peer"
time="2022-08-23T11:02:04Z" level=warning msg="iscsi connection[0] closed"
time="2022-08-23T11:02:04Z" level=info msg="connection establishing at: 172.16.98.127:3260"
time="2022-08-23T11:02:04Z" level=info msg="Target is connected to initiator: 10.110.157.140:57446"
time="2022-08-23T11:02:04Z" level=info msg="Login request received from initiator: iqn.1994-05.com.redhat:e431f0a16835, Session type: Normal, Target name:iqn.2016-09.com.openebs.jiva:pvc-79264c75-4275-453d-9cd8-36e01cb9c7f5, ISID: 0x23d000006"
time="2022-08-23T11:02:04Z" level=error msg="rsa: 0h, sa:false not supported"
time="2022-08-23T11:02:04Z" level=warning msg="opcode: a3h err: check condition"
time="2022-08-23T11:02:05Z" level=warning msg="Closing RPC conn with replica: 172.16.98.67:9503"
10.110.157.140 - - [23/Aug/2022:11:02:13 +0000] "GET /v1/replicas/dGNwOi8vMTcyLjE2Ljk4LjY3Ojk1MDI= HTTP/1.1" 200 434
time="2022-08-23T11:02:13Z" level=info msg="Verify Rebuild Replica for id tcp://172.16.98.67:9502"
time="2022-08-23T11:02:13Z" level=info msg="chain [volume-head-002.img volume-snap-4ede9bde-8043-4656-b86a-0c638e46b11c.img volume-snap-b212e6f0-02ff-448b-a0c9-f0b71bf2217d.img] from rw replica tcp://172.16.204.224:9502, indx: 2"
time="2022-08-23T11:02:13Z" level=info msg="chain [volume-head-001.img volume-snap-4ede9bde-8043-4656-b86a-0c638e46b11c.img volume-snap-b212e6f0-02ff-448b-a0c9-f0b71bf2217d.img] from wo replica tcp://172.16.98.67:9502, indx: 2"
time="2022-08-23T11:02:13Z" level=info msg="Get backend tcp://172.16.204.224:9502 revision counter 85"
time="2022-08-23T11:02:13Z" level=info msg="rw replica tcp://172.16.204.224:9502 revision counter 85"
time="2022-08-23T11:02:13Z" level=info msg="Set replica mode of 172.16.98.67:9502 to : RW"
time="2022-08-23T11:02:13Z" level=info msg="Set backend tcp://172.16.98.67:9502 replica mode to RW"
time="2022-08-23T11:02:13Z" level=info msg="Set revision counter of 172.16.98.67:9502 to : 85"
time="2022-08-23T11:02:13Z" level=info msg="Set backend tcp://172.16.98.67:9502 revision counter to 85"
time="2022-08-23T11:02:13Z" level=info msg="WO replica tcp://172.16.98.67:9502's chain verified, update replica mode to RW"
time="2022-08-23T11:02:13Z" level=info msg="Set replica tcp://172.16.98.67:9502 to mode RW"
time="2022-08-23T11:02:13Z" level=info msg="addr tcp://172.16.98.67:9502 m: RW prev: WO in setmode"
time="2022-08-23T11:02:13Z" level=info msg="replicator reset false"
time="2022-08-23T11:02:13Z" level=info msg="buildreadwriters: prev: 0 2 cur: 3 3"
time="2022-08-23T11:02:13Z" level=info msg="Previously Volume RO: false, Currently: false, Total Replicas: 3, RW replicas: 3, Total backends: 3"
time="2022-08-23T11:02:13Z" level=info msg="successfully set checkpoint cnt 3"
time="2022-08-23T11:02:13Z" level=info msg="prevCheckpoint: , currCheckpoint: volume-snap-4ede9bde-8043-4656-b86a-0c638e46b11c.img"
10.110.157.140 - - [23/Aug/2022:11:02:13 +0000] "POST /v1/replicas/dGNwOi8vMTcyLjE2Ljk4LjY3Ojk1MDI=?action=verifyrebuild HTTP/1.1" 200 434
172.16.204.192 - - [23/Aug/2022:11:02:55 +0000] "GET /v1/checkpoint HTTP/1.1" 200 161

CRDs for helm chart do not match latest changes to operator

The CRDs for the helm chart were not updated with the changes for issue #128. The result is that JivaVolumes cannot be created when a PV is created after installing the operator from the helm chart, because the enableBufio and autoScaling settings are still required by the CRDs installed by helm.
