openebs-archive / e2e-tests

E2e tests for OpenEBS. The tests are run on various platforms and results can be seen at https://openebs.ci

Home Page: https://openebs.io

License: Apache License 2.0

Makefile 2.16% Shell 29.79% Python 28.33% Dockerfile 2.64% Jinja 37.09%
litmus kubernetes openebs hacktoberfest

e2e-tests's People

Contributors

a4abhishek, ashishranjan738, chandankumar4, dargasudarshan, doruboyina, epowell101, gprasath, harshita-sharma011, ibreakthecloud, kaustumbh7, kmova, nsathyaseelan, payes, prabhu43, prateekpandey14, roshanjossey, satyamz, shashank855, shubhambhattar, shyamjalan, skchoudhary, somesh2905, strikerrus, sushma1118, uditgaurav, umamukkara, vibhor995, vishnuitta, w3aman, yudaykiran


e2e-tests's Issues

Unable to get ReleaseTag String in OpenebsDeploy job

Issue Description:

  • The release version string is not present in the openebs-deploy job logs. Because of this, the Openshift and Konvoy pipelines don't contain the OpenEBS version.

[Screenshot: openebs-deploy job log missing the release version string (2020-02-26)]

Expected logs should look like:

Checking out 3a09bff2 as release-branch...
Skipping Git submodules setup
$ chmod 755 ./openebs-konvoy-e2e/pipelines/OpenEBS-base/stages/2-Infra-setup/XJGT-openebs-deploy/infra-setup
$ ./openebs-konvoy-e2e/pipelines/OpenEBS-base/stages/2-Infra-setup/XJGT-openebs-deploy/infra-setup
+ '[' '' == node ']'
+ pod
+ sshpass -p Test@1658 ssh -o StrictHostKeyChecking=no [email protected] -p 1658 'cd e2e-konvoy && bash openebs-konvoy-e2e/pipelines/OpenEBS-base/stages/2-Infra-setup/XJGT-openebs-deploy/infra-setup node '\''162411'\''' '"5326"' '"3a09bff28a273ee2e92a316626f086527885c445"' '"1.7.0"' '"master"' '"v0.4.7"' '"elastic"' '"bhgq74qtkkkk6xj5wbmm4bzz"'
Warning: Permanently added '[106.51.78.18]:1658' (ECDSA) to the list of known hosts.
+ '[' node == node ']'
+ node 162411 5326 3a09bff28a273ee2e92a316626f086527885c445 1.7.0 master v0.4.7 elastic bhgq74qtkkkk6xj5wbmm4bzz
++ echo 162411
+ job_id=162411
++ echo 5326
+ pipeline_id=5326
++ echo 3a09bff28a273ee2e92a316626f086527885c445
+ commit_id=3a09bff28a273ee2e92a316626f086527885c445
++ echo 1.7.0
+ releaseTag=1.7.0
++ echo master
+ releaseBranch=master
++ echo v0.4.7
+ ndmTag=v0.4.7
+ case_id=XJGT
++ echo elastic
+ elastic_user=elastic
++ echo bhgq74qtkkkk6xj5wbmm4bzz
+ elastic_password=bhgq74qtkkkk6xj5wbmm4bzz
+ time=date
++ eval date
+++ date
+ current_time='Fri Feb 14 20:47:14 IST 2020'
++ pwd
+ present_dir=/home/d2iq/e2e-konvoy
+ echo /home/d2iq/e2e-konvoy

Refactor the CSPC pool creation litmusbook

  • Refactor the litmusbook to create the pools on a specific number of nodes instead of on all the available nodes.
  • Obtain the block devices based on the state being Active and the claimState being Unclaimed (see the sketch below).
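
A minimal sketch of how that filter could look, assuming the litmusbook keeps using kubectl plus Ansible filters; node_name is a hypothetical variable, not the experiment's actual one:

    # Sketch: list the block devices on one node and keep only those that are
    # both Active and Unclaimed.
    - name: Get the block devices on the node
      shell: kubectl get blockdevice -n openebs -l kubernetes.io/hostname={{ node_name }} -o json
      register: bd_json

    - name: Filter for Active and Unclaimed block devices
      set_fact:
        bd_names: >-
          {{ bd_json.stdout | from_json
             | json_query("items[?status.state=='Active' && status.claimState=='Unclaimed'].metadata.name") }}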

Service selector mismatch in minio-deployment.

What happened:

  • The litmusbook for deploying the minio application creates minio-service with a wrong selector field.

What you expected to happen:

  • minio-service should have the proper selector defined for serving requests (see the sketch below).

How to reproduce it (as minimally and precisely as possible):

  • Use the minio provisioning litmusbook.
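
For reference, a Service only gets endpoints when its selector matches the labels on the application pods. A minimal sketch of the intended pairing (the label key/value and ports are illustrative, not the litmusbook's actual values):

    # Sketch: the Service selector must match the pod template labels.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: minio
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: minio
      template:
        metadata:
          labels:
            app: minio              # label the Service selects on
        spec:
          containers:
            - name: minio
              image: minio/minio
              args: ["server", "/data"]
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: minio-service
    spec:
      selector:
        app: minio                  # must match the pod label above, else no endpoints
      ports:
        - port: 9000
          targetPort: 9000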

Target affinity check util fails for CSI based cstor volumes

What happened:

2020-03-19T03:23:44.777727 (delta: 0.100552)         elapsed: 81.712128 ******* 
ok: [127.0.0.1] => {"changed": false, "resources": [{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"annotations": {"pv.kubernetes.io/provisioned-by": "cstor.csi.openebs.io"}, "creationTimestamp": "2020-03-19T03:22:41Z", "finalizers": ["kubernetes.io/pv-protection", "external-attacher/cstor-csi-openebs-io"], "name": "pvc-bd2c6fd2-f4f0-4d21-863b-fb108bd128dd", "resourceVersion": "173829", "selfLink": "/api/v1/persistentvolumes/pvc-bd2c6fd2-f4f0-4d21-863b-fb108bd128dd", "uid": "e077be50-ad36-48f5-8f11-c20460814ab1"}, "spec": {"accessModes": ["ReadWriteOnce"], "capacity": {"storage": "15Gi"}, "claimRef": {"apiVersion": "v1", "kind": "PersistentVolumeClaim", "name": "percona-mysql-claim", "namespace": "percona", "resourceVersion": "173772", "uid": "bd2c6fd2-f4f0-4d21-863b-fb108bd128dd"}, "csi": {"driver": "cstor.csi.openebs.io", "fsType": "ext4", "volumeAttributes": {"openebs.io/cas-type": "cstor", "storage.kubernetes.io/csiProvisionerIdentity": "1584550299705-8081-cstor.csi.openebs.io"}, "volumeHandle": "pvc-bd2c6fd2-f4f0-4d21-863b-fb108bd128dd"}, "persistentVolumeReclaimPolicy": "Delete", "storageClassName": "openebs-csi", "volumeMode": "Filesystem"}, "status": {"phase": "Bound"}}]}

TASK [debug] *******************************************************************
2020-03-19T03:23:45.562501 (delta: 0.784721)         elapsed: 82.496902 ******* 
fatal: [127.0.0.1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'openebs.io/cas-type'\n\nThe error appears to have been in '/utils/scm/openebs/target_affinity_check.yml': line 31, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- debug:\n  ^ here\n"}
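
The PV dump above shows why: for a CSI-provisioned volume the openebs.io/cas-type key lives under spec.csi.volumeAttributes, not under metadata.annotations, so a template that dereferences the annotation directly renders an undefined variable. A hedged sketch of a lookup that handles both cases (pv is a hypothetical variable holding one PersistentVolume dict):

    # Sketch: resolve cas-type from the annotation (non-CSI volumes) or, failing
    # that, from spec.csi.volumeAttributes (CSI volumes), without erroring out.
    - debug:
        msg: >-
          {{ (pv.metadata.annotations | default({}))['openebs.io/cas-type']
             | default((pv.spec.csi.volumeAttributes | default({}))['openebs.io/cas-type'])
             | default('unknown') }}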

Velero backup fails for cstor volume greater than 50Gi


What happened:

  • Velero backup for a cstor volume fails when the data size is greater than 50Gi.

What you expected to happen:

  • Backup should happen successfully.

How to reproduce it (as minimally and precisely as possible):

  • Provision a percona application on a cstor volume and write 50Gi of data to the application mount point. Then create a velero backup.

Anything else we need to know?:

Velero logs:
time="2020-03-20T20:36:19Z" level=info msg="setting log-level to INFO" logSource="pkg/cmd/server/server.go:171"
time="2020-03-20T20:36:19Z" level=info msg="Starting Velero server v1.2.0 (5d008491bbf681658d3e372da1a9d3a21ca4c03c)" logSource="pkg/cmd/server/server.go:173"
time="2020-03-20T20:36:19Z" level=info msg="No feature flags enabled" logSource="pkg/cmd/server/server.go:177"
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=BackupItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/pod
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=BackupItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/pv
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=BackupItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/service-account
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=RestoreItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/add-pv-from-pvc
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=RestoreItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/add-pvc-from-pod
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=RestoreItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/change-storage-class
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=RestoreItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/cluster-role-bindings
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=RestoreItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/job
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=RestoreItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/pod
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=RestoreItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/restic
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=RestoreItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/role-bindings
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=RestoreItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/service
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/velero kind=RestoreItemAction logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/service-account
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/plugins/velero-blockstore-cstor kind=VolumeSnapshotter logSource="pkg/plugin/clientmgmt/registry.go:100" name=openebs.io/cstor-blockstore
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/plugins/velero-plugin-for-aws kind=VolumeSnapshotter logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/aws
time="2020-03-20T20:36:19Z" level=info msg="registering plugin" command=/plugins/velero-plugin-for-aws kind=ObjectStore logSource="pkg/plugin/clientmgmt/registry.go:100" name=velero.io/aws
time="2020-03-20T20:36:19Z" level=info msg="Checking existence of namespace" logSource="pkg/cmd/server/server.go:337" namespace=velero
time="2020-03-20T20:36:19Z" level=info msg="Namespace exists" logSource="pkg/cmd/server/server.go:343" namespace=velero
time="2020-03-20T20:36:22Z" level=info msg="Checking existence of Velero custom resource definitions" logSource="pkg/cmd/server/server.go:372"
time="2020-03-20T20:36:22Z" level=info msg="All Velero custom resource definitions exist" logSource="pkg/cmd/server/server.go:406"
time="2020-03-20T20:36:22Z" level=info msg="Checking that all backup storage locations are valid" logSource="pkg/cmd/server/server.go:413"
time="2020-03-20T20:36:22Z" level=warning msg="Velero restic daemonset not found; restic backups/restores will not work until it's created" logSource="pkg/cmd/server/server.go:472"
time="2020-03-20T20:36:22Z" level=info msg="Starting controllers" logSource="pkg/cmd/server/server.go:520"
time="2020-03-20T20:36:22Z" level=info msg="Starting metric server at address [:8085]" logSource="pkg/cmd/server/server.go:528"
time="2020-03-20T20:36:22Z" level=info msg="Starting controller" controller=backup-deletion logSource="pkg/controller/generic_controller.go:76"
time="2020-03-20T20:36:22Z" level=info msg="Waiting for caches to sync" controller=backup-deletion logSource="pkg/controller/generic_controller.go:79"
time="2020-03-20T20:36:22Z" level=info msg="Starting controller" controller=gc-controller logSource="pkg/controller/generic_controller.go:76"
time="2020-03-20T20:36:22Z" level=info msg="Waiting for caches to sync" controller=gc-controller logSource="pkg/controller/generic_controller.go:79"
time="2020-03-20T20:36:22Z" level=info msg="Starting controller" controller=restore logSource="pkg/controller/generic_controller.go:76"
time="2020-03-20T20:36:22Z" level=info msg="Waiting for caches to sync" controller=restore logSource="pkg/controller/generic_controller.go:79"
time="2020-03-20T20:36:22Z" level=info msg="Starting controller" controller=schedule logSource="pkg/controller/generic_controller.go:76"
time="2020-03-20T20:36:22Z" level=info msg="Waiting for caches to sync" controller=schedule logSource="pkg/controller/generic_controller.go:79"
time="2020-03-20T20:36:22Z" level=info msg="Starting controller" controller=downloadrequest logSource="pkg/controller/generic_controller.go:76"
time="2020-03-20T20:36:22Z" level=info msg="Waiting for caches to sync" controller=downloadrequest logSource="pkg/controller/generic_controller.go:79"
time="2020-03-20T20:36:22Z" level=info msg="Starting controller" controller=serverstatusrequest logSource="pkg/controller/generic_controller.go:76"
time="2020-03-20T20:36:22Z" level=info msg="Waiting for caches to sync" controller=serverstatusrequest logSource="pkg/controller/generic_controller.go:79"
time="2020-03-20T20:36:22Z" level=info msg="Starting controller" controller=backup logSource="pkg/controller/generic_controller.go:76"
time="2020-03-20T20:36:22Z" level=info msg="Waiting for caches to sync" controller=backup logSource="pkg/controller/generic_controller.go:79"
time="2020-03-20T20:36:22Z" level=info msg="Backup sync period is 1m0s" logSource="pkg/controller/backup_sync_controller.go:72"
time="2020-03-20T20:36:22Z" level=info msg="Starting controller" controller=restic-repository logSource="pkg/controller/generic_controller.go:76"
time="2020-03-20T20:36:22Z" level=info msg="Waiting for caches to sync" controller=restic-repository logSource="pkg/controller/generic_controller.go:79"
time="2020-03-20T20:36:22Z" level=info msg="Server started successfully" logSource="pkg/cmd/server/server.go:780"
time="2020-03-20T20:36:22Z" level=info msg="Starting controller" controller=backup-sync logSource="pkg/controller/generic_controller.go:76"
time="2020-03-20T20:36:22Z" level=info msg="Waiting for caches to sync" controller=backup-sync logSource="pkg/controller/generic_controller.go:79"
time="2020-03-20T20:36:24Z" level=info msg="Caches are synced" controller=schedule logSource="pkg/controller/generic_controller.go:83"
time="2020-03-20T20:36:24Z" level=info msg="Caches are synced" controller=serverstatusrequest logSource="pkg/controller/generic_controller.go:83"
time="2020-03-20T20:36:25Z" level=info msg="Caches are synced" controller=backup-deletion logSource="pkg/controller/generic_controller.go:83"
time="2020-03-20T20:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-20T20:36:25Z" level=info msg="Caches are synced" controller=downloadrequest logSource="pkg/controller/generic_controller.go:83"
time="2020-03-20T20:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-20T20:36:25Z" level=info msg="Caches are synced" controller=restore logSource="pkg/controller/generic_controller.go:83"
time="2020-03-20T20:36:25Z" level=info msg="Caches are synced" controller=gc-controller logSource="pkg/controller/generic_controller.go:83"
time="2020-03-20T20:36:25Z" level=info msg="Caches are synced" controller=backup logSource="pkg/controller/generic_controller.go:83"
time="2020-03-20T20:36:25Z" level=info msg="checking for expiration of DownloadRequest" controller=downloadrequest key=velero/percona-backup-20200321020321 logSource="pkg/controller/download_request_controller.go:196"
time="2020-03-20T20:36:25Z" level=info msg="checking for expiration of DownloadRequest" controller=downloadrequest key=velero/percona-backup-20200321020352 logSource="pkg/controller/download_request_controller.go:196"
time="2020-03-20T20:36:25Z" level=info msg="Caches are synced" controller=backup-sync logSource="pkg/controller/generic_controller.go:83"
time="2020-03-20T20:36:25Z" level=info msg="Caches are synced" controller=restic-repository logSource="pkg/controller/generic_controller.go:83"
time="2020-03-20T20:37:12Z" level=info msg="Setting up backup log" backup=velero/percona-backup1 controller=backup logSource="pkg/controller/backup_controller.go:440"
time="2020-03-20T20:37:12Z" level=info msg="Setting up backup temp file" backup=velero/percona-backup1 logSource="pkg/controller/backup_controller.go:462"
time="2020-03-20T20:37:12Z" level=info msg="Setting up plugin manager" backup=velero/percona-backup1 logSource="pkg/controller/backup_controller.go:469"
time="2020-03-20T20:37:12Z" level=info msg="Getting backup item actions" backup=velero/percona-backup1 logSource="pkg/controller/backup_controller.go:473"
time="2020-03-20T20:37:12Z" level=info msg="Setting up backup store" backup=velero/percona-backup1 logSource="pkg/controller/backup_controller.go:479"
time="2020-03-20T20:37:12Z" level=info msg="Writing backup version file" backup=velero/percona-backup1 logSource="pkg/backup/backup.go:213"
time="2020-03-20T20:37:12Z" level=info msg="Including namespaces: app-percona-ns" backup=velero/percona-backup1 logSource="pkg/backup/backup.go:219"
time="2020-03-20T20:37:12Z" level=info msg="Excluding namespaces: <none>" backup=velero/percona-backup1 logSource="pkg/backup/backup.go:220"
time="2020-03-20T20:37:12Z" level=info msg="Including resources: *" backup=velero/percona-backup1 logSource="pkg/backup/backup.go:223"
time="2020-03-20T20:37:12Z" level=info msg="Excluding resources: <none>" backup=velero/percona-backup1 logSource="pkg/backup/backup.go:224"
time="2020-03-20T20:37:12Z" level=info msg="Backing up group" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/group_backupper.go:101"
time="2020-03-20T20:37:12Z" level=info msg="Backing up resource" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/resource_backupper.go:106" resource=pods
time="2020-03-20T20:37:12Z" level=info msg="Listing items" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/resource_backupper.go:227" namespace=app-percona-ns resource=pods
time="2020-03-20T20:37:12Z" level=info msg="Retrieved 1 items" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/resource_backupper.go:241" namespace=app-percona-ns resource=pods
time="2020-03-20T20:37:12Z" level=info msg="Backing up item" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/item_backupper.go:169" name=percona-755f6678bf-sj87j namespace=app-percona-ns resource=pods
time="2020-03-20T20:37:12Z" level=info msg="Executing custom action" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/item_backupper.go:330" name=percona-755f6678bf-sj87j namespace=app-percona-ns resource=pods
time="2020-03-20T20:37:12Z" level=info msg="Executing podAction" backup=velero/percona-backup1 cmd=/velero logSource="pkg/backup/pod_action.go:51" pluginName=velero
time="2020-03-20T20:37:12Z" level=info msg="Adding pvc percona-mysql-claim to additionalItems" backup=velero/percona-backup1 cmd=/velero logSource="pkg/backup/pod_action.go:67" pluginName=velero
time="2020-03-20T20:37:12Z" level=info msg="Done executing podAction" backup=velero/percona-backup1 cmd=/velero logSource="pkg/backup/pod_action.go:77" pluginName=velero
time="2020-03-20T20:37:12Z" level=info msg="Backing up item" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/item_backupper.go:169" name=percona-mysql-claim namespace=app-percona-ns resource=persistentvolumeclaims
time="2020-03-20T20:37:12Z" level=info msg="Executing custom action" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/item_backupper.go:330" name=percona-mysql-claim namespace=app-percona-ns resource=persistentvolumeclaims
time="2020-03-20T20:37:12Z" level=info msg="Executing PVCAction" backup=velero/percona-backup1 cmd=/velero logSource="pkg/backup/backup_pv_action.go:49" pluginName=velero
time="2020-03-20T20:37:12Z" level=info msg="Backing up item" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/item_backupper.go:169" name=pvc-8e734176-6a91-11ea-be34-42010a800037 namespace= resource=persistentvolumes
time="2020-03-20T20:37:12Z" level=info msg="Executing takePVSnapshot" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/item_backupper.go:395" name=pvc-8e734176-6a91-11ea-be34-42010a800037 namespace= resource=persistentvolumes
time="2020-03-20T20:37:12Z" level=info msg="label \"failure-domain.beta.kubernetes.io/zone\" is not present on PersistentVolume" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/item_backupper.go:420" name=pvc-8e734176-6a91-11ea-be34-42010a800037 namespace= persistentVolume=pvc-8e734176-6a91-11ea-be34-42010a800037 resource=persistentvolumes
time="2020-03-20T20:37:12Z" level=info msg="Initializing velero plugin for CStor" backup=velero/percona-backup1 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/snapshot/snap.go:36" pluginName=velero-blockstore-cstor
time="2020-03-20T20:37:12Z" level=info msg="Ip address of velero-plugin server: 10.4.0.26" backup=velero/percona-backup1 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:141" pluginName=velero-blockstore-cstor
time="2020-03-20T20:37:12Z" level=info msg="Got volume ID for persistent volume" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/item_backupper.go:446" name=pvc-8e734176-6a91-11ea-be34-42010a800037 namespace= persistentVolume=pvc-8e734176-6a91-11ea-be34-42010a800037 resource=persistentvolumes volumeSnapshotLocation=minio
time="2020-03-20T20:37:12Z" level=info msg="Getting volume information" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/item_backupper.go:466" name=pvc-8e734176-6a91-11ea-be34-42010a800037 namespace= persistentVolume=pvc-8e734176-6a91-11ea-be34-42010a800037 resource=persistentvolumes volumeID=pvc-8e734176-6a91-11ea-be34-42010a800037
time="2020-03-20T20:37:12Z" level=info msg="Snapshotting persistent volume" backup=velero/percona-backup1 group=v1 logSource="pkg/backup/item_backupper.go:472" name=pvc-8e734176-6a91-11ea-be34-42010a800037 namespace= persistentVolume=pvc-8e734176-6a91-11ea-be34-42010a800037 resource=persistentvolumes volumeID=pvc-8e734176-6a91-11ea-be34-42010a800037
time="2020-03-20T20:37:12Z" level=info msg="Writing to {backups/percona-backup1/-pvc-8e734176-6a91-11ea-be34-42010a800037-percona-backup1.pvc} with provider{aws} to bucket{velero}" backup=velero/percona-backup1 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:78" pluginName=velero-blockstore-cstor
time="2020-03-20T20:37:12Z" level=info msg="successfully writtern object{backups/percona-backup1/-pvc-8e734176-6a91-11ea-be34-42010a800037-percona-backup1.pvc} to {aws}" backup=velero/percona-backup1 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:98" pluginName=velero-blockstore-cstor
time="2020-03-20T20:37:12Z" level=info msg="creating snapshot{percona-backup1}" backup=velero/percona-backup1 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:334" pluginName=velero-blockstore-cstor
time="2020-03-20T20:37:14Z" level=info msg="Snapshot Successfully Created" backup=velero/percona-backup1 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:375" pluginName=velero-blockstore-cstor
time="2020-03-20T20:37:14Z" level=info msg="Uploading snapshot to  'backups/percona-backup1/-pvc-8e734176-6a91-11ea-be34-42010a800037-percona-backup1' with provider{aws} to bucket{velero}" backup=velero/percona-backup1 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/operation.go:28" pluginName=velero-blockstore-cstor
time="2020-03-20T21:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-20T21:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-20T22:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-20T22:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-20T23:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-20T23:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T00:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T00:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T01:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T01:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T02:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T02:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T03:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T03:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T04:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T04:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T05:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T05:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T06:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T06:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T07:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T07:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T08:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T08:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T09:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T09:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T10:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T10:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T11:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T11:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T12:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T12:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T13:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T13:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T14:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T14:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T15:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T15:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T16:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T16:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T17:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T17:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T18:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T18:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T19:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T19:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T20:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T20:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T21:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T21:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T22:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T22:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-21T23:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-21T23:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T00:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T00:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T01:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T01:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T02:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T02:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T03:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T03:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T04:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T04:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T05:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T05:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T06:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T06:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T07:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T07:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T08:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T08:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T08:47:18Z" level=warning msg="Failed to close file interface : blob (code=Unknown): MultipartUpload: upload multipart failed\n\tupload id: d8a3aa2a-175f-4c16-88c7-8b525675ee38\ncaused by: TotalPartsExceeded: exceeded total allowed configured MaxUploadParts (10000). Adjust PartSize to fit in this limit" backup=velero/percona-backup1 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/conn.go:242" pluginName=velero-blockstore-cstor
time="2020-03-22T08:47:18Z" level=info msg="Client{14} operation completed.. completed count{0}" backup=velero/percona-backup1 cmd=/plugins/velero-blockstore-cstor logSource="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/clouduploader/server_utils.go:160" pluginName=velero-blockstore-cstor
time="2020-03-22T09:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T09:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T10:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T10:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"
time="2020-03-22T11:36:25Z" level=info msg="Checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:456"
time="2020-03-22T11:36:25Z" level=info msg="Done checking for expired DeleteBackupRequests" controller=backup-deletion logSource="pkg/controller/backup_deletion_controller.go:484"

Velero Backup/Restore experiment doesn't verify local snapshot deletion


What happened:

  • The litmusbook for Velero backup and restore doesn't verify that the local snapshots are deleted.

What you expected to happen:

  • After taking a backup to another object storage medium (minio or S3), the local snapshot (the zfs snapshot) should get deleted; a verification sketch follows.
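
A hedged sketch of what such a verification step could look like, assuming the snapshot is checked inside the cStor pool pod with zfs; the pod/container names and variables are illustrative:

    # Sketch: assert that no local zfs snapshot remains for the volume after
    # the backup completes. pool_pod and pv_name are hypothetical variables.
    - name: Check that the local snapshot has been deleted
      shell: >
        kubectl exec -n openebs {{ pool_pod }} -c cstor-pool --
        zfs list -t snapshot -H -o name
      register: snap_list
      failed_when: pv_name in snap_list.stdout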


Implement chaos util to kill containers based on containerd.

FEATURE REQUEST

  • Implement a chaoslib util based on crictl to kill containers managed by containerd (a sketch follows).
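
A minimal sketch of such a util, assuming crictl is available on the node and the target container name is passed in; app_container is a hypothetical variable:

    # Sketch: kill a containerd-managed container via crictl.
    - name: Get the ID of the target container
      shell: crictl ps -q --name "{{ app_container }}"
      register: container_id

    - name: Stop the container with zero grace period
      shell: crictl stop --timeout 0 {{ container_id.stdout_lines | first }}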


Make it easy to debug the build (CI) failures with stateful apps

Here is my usecase:

Tim is a DevOps engineer at a large retail store who is responsible for running a complex build pipeline that involves several micro-services. The micro-services, which implement order and supply management functionality, store their state in a set of common datastores. The Jenkins CI pipeline simulates real-world interactions with the system, beginning with customers placing orders and ending with the backend systems optimizing the supply and delivery of those orders to the customers. Tim has set up the job execution pipeline in such a way that, if there are failures, the developers can trace back the state of the database and the logs associated with each stage.

  • The build (or job) logs are saved onto an OpenEBS PV, say the Logs PV.
  • The datastores are created on OpenEBS volumes, say the Datastore PVs.
  • At the end of each job, either on success or failure, snapshots are taken of the Logs PV and the Datastore PVs.
  • When there is a build failure, the volume snapshot information is sent to all the developers whose services were running when the job was executed.
  • Each developer can bring up their own debug session in their namespace by creating an environment with cloned volumes. They can either re-run the tests manually, going back to the previous state with a higher debug level, or analyze the currently available data that is causing the issue.

Unable to get unclaimed block devices in pool expansion litmus experiment

While trying to fetch an unclaimed block device on each node, the task fails:

2020-03-05T07:59:05.394204 (delta: 0.31172)         elapsed: 25.990578 ******** 
FAILED - RETRYING: Getting the Unclaimed block-device from each node (10 retries left).
FAILED - RETRYING: Getting the Unclaimed block-device from each node (9 retries left).
FAILED - RETRYING: Getting the Unclaimed block-device from each node (8 retries left).
FAILED - RETRYING: Getting the Unclaimed block-device from each node (7 retries left).
FAILED - RETRYING: Getting the Unclaimed block-device from each node (6 retries left).
FAILED - RETRYING: Getting the Unclaimed block-device from each node (5 retries left).
FAILED - RETRYING: Getting the Unclaimed block-device from each node (4 retries left).
FAILED - RETRYING: Getting the Unclaimed block-device from each node (3 retries left).
FAILED - RETRYING: Getting the Unclaimed block-device from each node (2 retries left).
FAILED - RETRYING: Getting the Unclaimed block-device from each node (1 retries left).
fatal: [127.0.0.1]: FAILED! => {"attempts": 10, "changed": true, "cmd": "kubectl get blockdevice -n openebs -l kubernetes.io/hostname=d2iq-node5.mayalabs.io -o jsonpath='{.items[?(@.status.claimState==\"Unclaimed\")].metadata.name}' | tr \" \" \"\\n\" | grep -v sparse | head -n \"2\"", "delta": "0:00:01.348516", "end": "2020-03-05 08:00:14.271413", "rc": 0, "start": "2020-03-05 08:00:12.922897", "stderr": "", "stderr_lines": [], "stdout": "blockdevice-894a9bbc1843ed383328418938f55128\nblockdevice-a06dbf5d294022e8354341fb12d74c61", "stdout_lines": ["blockdevice-894a9bbc1843ed383328418938f55128", "blockdevice-a06dbf5d294022e8354341fb12d74c61"]}
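
Note that the final attempt returns rc 0 with two block devices on stdout, so the kubectl command itself succeeds; a retry condition that never matches the registered result would produce exactly this pattern. A hedged sketch of a condition that would pass for the output above (variable names are illustrative, not the experiment's actual ones):

    - name: Getting the Unclaimed block-device from each node
      shell: >
        kubectl get blockdevice -n openebs
        -l kubernetes.io/hostname={{ node_name }}
        -o jsonpath='{.items[?(@.status.claimState=="Unclaimed")].metadata.name}'
        | tr " " "\n" | grep -v sparse | head -n "{{ disk_count }}"
      register: bd_out
      until: bd_out.stdout_lines | length == (disk_count | int)
      retries: 10
      delay: 5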

Jiva node failure test cases.

  • Check for ext4 meta/buffer sync dependency.
  • NFS on ext4 on Jiva may have additional challenges.
  • Process restart works fine for ext4.
  • Blocked on setup.

[Refactor]: Convert shell module (kubectl command) tasks into k8s / k8s_facts module tasks.

FEATURE REQUEST

WHY

  • Using the correct module for a particular task is good coding practice; although we can use the shell module, the k8s and k8s_facts modules are better suited for Kubernetes tasks.
  • Use the OpenShift Python client to perform read operations on K8s objects.
  • Access to the full range of K8s APIs.
  • The k8s and k8s_facts modules are idempotent.

HOW

  • Ansible provides two modules for this purpose: k8s and k8s_facts. These modules, along with the JSON filter, help to get the required output (see the example below).
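
For instance, a shell-module task that lists pod phases could be converted like this (the namespace and label are illustrative):

    # Before: shell module with kubectl and jsonpath.
    - shell: kubectl get pods -n openebs -l app=cstor-pool -o jsonpath='{.items[*].status.phase}'
      register: pod_phase

    # After: k8s_facts plus a Jinja filter over the returned resources.
    - k8s_facts:
        kind: Pod
        namespace: openebs
        label_selectors:
          - app=cstor-pool
      register: pool_pods

    - set_fact:
        pod_phases: "{{ pool_pods.resources | map(attribute='status.phase') | list }}"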

WHAT ELSE?

  • Although these modules have many plus points, we still need an extra task to filter the k8s / k8s_facts output and get the required fields, so there is scope for improvement in these modules for such cases.

Add AccessMode for applications other than minio.


What happened:

  • The access mode for the PVC is not a tunable value for applications other than minio.

What you expected to happen:

  • The access mode should come as a litmus env and should be reflected in the application PVC spec for all the applications (see the sketch below).
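
A hedged sketch of how the tunable could be wired through (PVC_ACCESS_MODE and the template variables are illustrative names, not the litmusbook's actual ones):

    # Sketch (litmusbook side): read the access mode from the experiment env,
    # defaulting to ReadWriteOnce when it is unset.
    - set_fact:
        access_mode: "{{ lookup('env', 'PVC_ACCESS_MODE') | default('ReadWriteOnce', true) }}"

    # Sketch (application template side): consume it in the PVC spec.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: "{{ app_pvc }}"
    spec:
      storageClassName: "{{ storage_class }}"
      accessModes:
        - "{{ access_mode }}"
      resources:
        requests:
          storage: "{{ pvc_size }}"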


Enhance the EBS disk attach playbooks to factor in existing device names on VM instance

FEATURE REQUEST

  • From @chandankumar4 :

  • Attach multiple EBS volumes to AWS instances by looking for available disk names inside the instance.

  • The recommended disk names for EBS volumes are /dev/sd[f-p].

For example:

  • At the time of attaching an EBS volume to an instance, if the /dev/sdf disk name is already present, use the next available one, like /dev/sdg.

Currently used Ansible module for creating an EBS volume:

        - name: Creating and attaching EBS Volume in AWS
          ec2_vol:
            instance: i-0820e863967d14dc0
            device_name: /dev/xvdb
            region: eu-west-2
            state: present
            volume_size: 50
            volume_type: gp2
            zone: eu-west-2a

The disk name is auto-generated by the Ansible module if device_name is not specified, but it currently does not check for an available disk name.

Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html

For details on the ec2_vol Ansible module's approach to selecting device names, see:

https://docs.ansible.com/ansible/2.6/modules/ec2_vol_module.html

For EBS device naming considerations, see:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html#available-ec2-device-names

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html#device-name-limits

Approach to solve this issue:
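
A hedged sketch of one way to do it: probe the recommended /dev/sd[f-p] range against the device names already attached to the instance and attach the volume at the first free name. The instance id mirrors the example above; using ec2_instance_facts to gather the attached devices is an assumption, not the playbook's current approach:

    - name: Get the device names already attached to the instance
      ec2_instance_facts:
        instance_ids:
          - i-0820e863967d14dc0
        region: eu-west-2
      register: instance_info

    - name: Pick the first free recommended device name
      set_fact:
        free_device: >-
          {{ 'fghijklmnop' | list
             | map('regex_replace', '^', '/dev/sd') | list
             | difference(instance_info.instances[0].block_device_mappings
                          | map(attribute='device_name') | list)
             | first }}

    - name: Creating and attaching EBS Volume in AWS
      ec2_vol:
        instance: i-0820e863967d14dc0
        device_name: "{{ free_device }}"
        region: eu-west-2
        state: present
        volume_size: 50
        volume_type: gp2
        zone: eu-west-2a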

Velero backup fails when inducing pool failure.


What happened:

devuser@mlrack1:~$ velero backup get
NAME                         STATUS                      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
newschedule-20200227233642   PartiallyFailed (1 error)   2020-02-27 23:36:42 +0000 UTC   29d       default            <none>
newschedule-20200227233442   PartiallyFailed (1 error)   2020-02-27 23:34:42 +0000 UTC   29d       default            <none>
newschedule-20200227233242   PartiallyFailed (1 error)   2020-02-27 23:32:42 +0000 UTC   29d       default            <none>
newschedule-20200227233042   PartiallyFailed (1 error)   2020-02-27 23:30:42 +0000 UTC   29d       default            <none>
newschedule-20200227232842   PartiallyFailed (1 error)   2020-02-27 23:28:42 +0000 UTC   29d       default            <none>
newschedule-20200227232642   PartiallyFailed (1 error)   2020-02-27 23:26:42 +0000 UTC   29d       default            <none>
newschedule-20200227232442   PartiallyFailed (1 error)   2020-02-27 23:24:42 +0000 UTC   29d       default            <none>
newschedule-20200227232242   PartiallyFailed (1 error)   2020-02-27 23:22:42 +0000 UTC   29d       default            <none>
newschedule-20200227232042   PartiallyFailed (1 error)   2020-02-27 23:20:42 +0000 UTC   29d       default            <none>
newschedule-20200227231842   PartiallyFailed (1 error)   2020-02-27 23:18:42 +0000 UTC   29d       default            <none>
newschedule-20200227231642   PartiallyFailed (1 error)   2020-02-27 23:16:42 +0000 UTC   29d       default            <none>
newschedule-20200227231442   PartiallyFailed (1 error)   2020-02-27 23:14:42 +0000 UTC   29d       default            <none>
newschedule-20200227231242   PartiallyFailed (1 error)   2020-02-27 23:12:42 +0000 UTC   29d       default            <none>
newschedule-20200227230853   PartiallyFailed (1 error)   2020-02-27 23:08:53 +0000 UTC   29d       default            <none>

While creating scheduled backups with Velero for a cstor volume with three replicas, bringing down two of the pool pods causes all the scheduled backups to fail.

What you expected to happen:

  • Once the pool pods come back to the Running state, the scheduled backups should complete successfully.

How to reproduce it (as minimally and precisely as possible):

  • Provision the velero server.
  • Use a GCP bucket for object storage.
  • Create a schedule.
  • Once the first backup is in the InProgress state, delete the pool pods.

Anything else we need to know?:

  • Platform: Openshift-4.2
