
backup-restore-operator's Introduction

Backup and Restore Operator

Description

  • This operator provides the ability to back up and restore Kubernetes applications (metadata) running on any cluster. It accepts a list of resources that need to be backed up for a particular application, gathers those resources by querying the Kubernetes API server, packages them into a tarball file, and pushes the tarball to the configured backup storage location. Because it gathers resources by querying the API server, it can back up applications from any type of Kubernetes cluster.
  • The operator preserves the ownerReferences on all resources, hence maintaining dependencies between objects.
  • It also provides encryption support, to encrypt user-specified resources before saving them in the backup file. It uses the same encryption configuration format that is used to enable Kubernetes Encryption at Rest (a sketch of that format follows this list). Follow the steps in this section to configure this.
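
For orientation only, the encryption configuration follows the standard Kubernetes EncryptionConfiguration format used for Encryption at Rest; the resource list and key below are illustrative placeholders, not values required by the operator:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}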

Branches and Releases

  • Tags v5.x.x are cut from the release/v5.0 branch for the Rancher v2.9.x line
  • Tags v4.x.x are cut from the release/v4.0 branch for the Rancher v2.8.x line
  • Tags v3.x.x are cut from the release/v3.0 branch for the Rancher v2.7.x line

Quickstart

If Rancher v2.5+ is installed, you can install the backup-restore-operator from the Cluster Explorer UI. Otherwise, you can install the charts from the Helm repository by executing the commands below.

First, add our charts repository.

helm repo add rancher-charts https://charts.rancher.io
helm repo update

Then, install both charts. Ensure that the CRD chart is installed first.

helm install --wait \
    --create-namespace -n cattle-resources-system \
    rancher-backup-crd rancher-charts/rancher-backup-crd
helm install --wait \
    -n cattle-resources-system \
    rancher-backup rancher-charts/rancher-backup

If you are using S3, you can set s3.credentialSecretNamespace to determine where the Backup and Restore Operator will look for the S3 backup secret. For more information on configuring backups, see the backup documentation.


Uninstallation

If you are uninstalling and want to keep backup(s), ensure that you have created Backup CR(s) and that your backups are stored in a safe location. Execute the following commands to uninstall:

helm uninstall -n cattle-resources-system rancher-backup
helm uninstall -n cattle-resources-system rancher-backup-crd
kubectl delete namespace cattle-resources-system

CRDs

The chart installs the following cluster-scoped CRDs:

Backup

A backup can be performed by creating an instance of the Backup CRD. It can be configured to perform a one-time backup, or to schedule recurring backups. For help configuring backups, see this documentation.
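
As a minimal sketch only (the metadata name and ResourceSet name below are placeholders; the fields mirror the Backup examples reproduced further down this page):

apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: example-backup
spec:
  resourceSetName: example-resource-set
  # uncomment to make the backup recurring instead of one-time:
  # schedule: "*/30 * * * *"
  # retentionCount: 3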

Restore

Creating an instance of the Restore CRD lets you restore from a backup file. For help configuring restores, see this documentation.
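
A rough sketch, assuming a backup file already exists at the configured storage location (the names below are placeholders; a real Restore CR must reference the exact backup filename, and may also carry a storageLocation block as in the cluster-migration example further down this page):

apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: example-restore
spec:
  backupFilename: example-backup-file.tar.gz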

ResourceSet

A ResourceSet specifies the Kubernetes core resources and CRDs that need to be backed up. The chart comes with a predetermined ResourceSet to be used for backing up the Rancher application; a trimmed-down sketch follows.
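
The selectors below are an illustrative, heavily trimmed sketch of that shape; the full Rancher ResourceSet reproduced in the issues further down this page is the authoritative reference:

apiVersion: resources.cattle.io/v1
kind: ResourceSet
metadata:
  name: example-resource-set
resourceSelectors:
  - apiVersion: "v1"
    kindsRegexp: "^namespaces$"
    resourceNameRegexp: "^cattle-"
  - apiVersion: "rbac.authorization.k8s.io/v1"
    kindsRegexp: "^roles$|^rolebindings$"
    namespaceRegexp: "^cattle-"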


User flow

  1. Create a ResourceSet that targets all the resources you want to back up. The ResourceSet required for backing up Rancher is provided and installed by the chart; refer to the default rancher-resourceset as an example when creating ResourceSets.
  2. Performing a backup: to take a backup, create an instance of the Backup CRD (a Backup CR). Each Backup CR must reference a ResourceSet and can be used to perform a one-time backup or recurring backups. Refer to the examples folder for sample manifests.
  3. Restoring from a backup: to restore from a backup, create an instance of the Restore CRD (a Restore CR). A Restore CR must contain the exact backup filename. Refer to the examples folder for sample manifests.

Storage Location

For help configuring the storage location, see this documentation.


S3 Credentials

If you are using S3 to store your backups, the Backup custom resource can reference an S3 credential secret in any namespace. The credentialSecretNamespace directive tells the backup application where to look for the secret:

s3:
  bucketName: ''
  credentialSecretName: ''
  credentialSecretNamespace: ''
  enabled: false
  endpoint: ''
  endpointCA: ''
  folder: ''
  insecureTLSSkipVerify: false
  region: ''
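
The referenced credential Secret takes the same shape as the one shown in the cluster-migration issue further down this page. As a sketch (the names below are placeholders, and values under data: must be base64-encoded, per standard Kubernetes Secret semantics):

apiVersion: v1
kind: Secret
metadata:
  name: s3-creds
  namespace: default
type: Opaque
data:
  accessKey: <base64-encoded access key>
  secretKey: <base64-encoded secret key>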

Developer Documentation

Refer to DEVELOPING.md for developer tips, tricks, and workflows when working with the backup-restore-operator.

Troubleshooting

Refer to troubleshooting.md for troubleshooting commands.

backup-restore-operator's People

Contributors

aiyengar2, catherineluse, dramich, eliyamlevy, ericpromislow, hezhizhen, ibrokethecloud, jasonhancock, jiaqiluo, jsoref, kalinon, macedogm, mallardduck, maxsokolovsky, mbolotsuse, mjura, mrajashree, nflynt, nickgerace, oxr463, paynejacob, raulcabello, rayandas, renovate-rancher[bot], rudimt, snasovich, strongmonkey, superseb, thedadams, yankcrime

backup-restore-operator's Issues

Error using base64 encoded s3 credentials while performing backup

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible):

  • Add the repo https://github.com/mrajashree/charts with branch backup-restore as a catalog
  • Deploy the backup-restore-operator app from this catalog
  • Create a ResourceSet using kubectl
  • Create a secret using the Dashboard
  • Create a Backup from resources.cattle.io
  • Give the ResourceSet name and the name of the backup
  • The Backup gets deployed successfully
  • Error in the backup-restore-operator logs:
getting creds secret creds from default
INFO[2020/08/25 01:36:09] invoking set s3 service client                s3-accessKey="<>=" s3-bucketName=<>t s3-endpoint=<> s3-endpoint-ca= s3-folder=ecm1 s3-region=us-west-2
ERRO[2020/08/25 01:36:10] error syncing 'default/s3-backup-demo': handler backups: failed to check s3 bucket:<>, err:400 Bad Request, requeuing
INFO[2020/08/25 01:36:10] Processing backup s3-backup-demo

Result:
The error should not be displayed in the logs; the backup should be successfully created in S3.

Other details that may be helpful:

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head - commit id: 56b819995
  • Installation option (single install/HA): HA

Not able to change storage locations from storage class 1 to storage class 2 while upgrading the rancher-backup app

On master-head - commit id: 27000be7

  • Install the rancher-backup app using the default storage class (gp2). The app is installed successfully and the user is able to take a backup.
  • Upgrade the app --> change it to another storage class (st1) and hit save.
  • Error seen:
helm upgrade --history-max=5 --install=true --namespace=cattle-resources-system --timeout=10m0s --values=/home/shell/helm/values-rancher-backup-0.1.0.yaml --version=0.1.0 --wait=true rancher-backup /home/shell/helm/rancher-backup-0.1.0.tgz
checking 5 resources for changes
Looks like there are no changes for ServiceAccount "rancher-backup"
error updating the resource "rancher-backup":
cannot patch "rancher-backup" with kind PersistentVolumeClaim: PersistentVolumeClaim "rancher-backup" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Looks like there are no changes for ClusterRoleBinding "rancher-backup"
Looks like there are no changes for Deployment "rancher-backup"
Error: UPGRADE FAILED: cannot patch "rancher-backup" with kind PersistentVolumeClaim: PersistentVolumeClaim "rancher-backup" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
  • This is expected because the upgrade tries to patch the PVC while it is already bound.
  • Deleting the PVC and the rancher-backup deployment manually, and then upgrading with storage class st1, worked.

Cron schedule for recurring backups is not working as expected

On master-head - commit id: 1b0778b1d

  • */35 * * * * and 35 * * * * are interpreted the same way

Expected Result:

  • b-star35 should NOT have taken a backup at 8:35 AM GMT. It should have taken one 35 minutes after the backup was created; the Backup CR was created around 8:33 AM.

(Screenshots attached: 2020-10-02 at 9:02 AM, 9:03 AM, and 8:45 AM.)

Update the order of restore for Admission webhooks

When restoring rancher-webhook was attempted, it failed because of the order in which the restore is done:

  1. CRDs first
  2. Cluster-scoped resources
  3. Namespaced resources

Because of this, the validating webhook configuration, which is cluster-scoped, gets restored before its service, since the service is namespaced. So the creation of any object that goes through the validating webhook will fail. This can be fixed with the following restore order:

  1. CRDs
  2. Cluster-scoped (except admission webhooks validating/mutating)
  3. Namespaced
  4. Validating/Mutating admission webhooks

Failing backup with misleading error message

I was trying to create a backup to an AWS S3 bucket. The backup was stuck in a retry state. I was checking for error messages in the pod log:

handler backups: failed to check s3 bucket:rancher-backups-xxx, err:Head https://rancher-backups-xxx.s3.dualstack.eu-west-1.amazonaws.com/: 301 response missing Location header, requeuing 

My Backup CR did contain the location:

apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: s3-backup-demo
spec:
  storageLocation:
    s3:
      ...
      region: eu-west-1
      endpoint: s3.eu-west-1.amazonaws.com

After a while I realized the bucket was in a different region, so it was clearly misconfigured, but that wasn't obvious from the error message above.

Solution:

  • Use this issue as a reference, so that anybody searching for the error text can find the solution
  • Provide a hint in the error message to check the bucket location

Background

Unfortunately minio-go uses string-based errors, so the error can only be string-matched ... Furthermore, it seems the error message is not coming from minio directly, but from the aws-sdk-go library. The S3 API actually sends a redirect (hence the 301), but that is ignored by the official SDK:
aws/aws-sdk-go#356

The SDK should not be retrying request when they error with this request. The 301 error returned is due to the request being made to the wrong region. The Go SDK does require that the request are made to the correct region.
The redirect provided by S3 is not intended to actually be followed in this case.

Prevent multiple restores from happening in parallel

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible):

  • on 2.5.0-alpha1
  • Deploy custom charts for enabling backup MCM
  • Deploy resourceset and create a backup
  • Currently, if a restore fails and is stuck in the "In Progress" state, the user is able to create another Restore CR.

Expected Result:
The user must be prevented from creating multiple Restore CRs in parallel.

Other details that may be helpful:

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): 2.5.0-alpha1
  • Installation option (single install/HA): HA

Cluster information

  • Cluster type (Hosted/Infrastructure Provider/Custom/Imported): HA rke cluster
  • Kubernetes version (use kubectl version):
1.18

User has access when user is removed from cluster

On master-head - commit id: 4911f8b

Steps:

  • On an HA setup (RKE cluster), deploy a custom (downstream) cluster
  • Add user u1 as cluster owner of the custom cluster
  • Take a backup b1 using the backup-restore operator
  • Remove the cluster owner association of the user from the cluster
  • Take a backup b2
  • Restore to backup b1 (on the same HA Rancher setup)
  • When the Rancher server has come up, delete the association of user u1 as cluster owner of the cluster
  • Log in as user u1
  • The user can list the cluster and can add namespaces/perform actions on the cluster, such as enabling monitoring or running a CIS scan

Expected
The user should NOT have access to the cluster after their association has been removed

A restore does not move namespaces back to the original project in the cluster

On master-head - commit id: 731b33e1a776, backup restore operator tag: v0.0.1-rc9

  • In a HA setup, Deploy a custom cluster --> create a project --> create a namespace --> deploy a workload in this namespace.
  • Take a backup b1 (install backup restore operator in the
  • Move the namespace to a different project say test2
  • restore from b1.
  • The namespace is still seen in project test2

Expected:
The namespace should be moved back to the original project

User is not able to create a resourceset from dashboard

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible):

  • Create a repo - https://github.com/mrajashree/charts and branch - backup-restore
  • Deploy app - backup-restore-operator from this catalog
  • From the Dashboard, create a ResourceSet
  • ResourceSet YAML file:
apiVersion: resources.cattle.io/v1
kind: ResourceSet
metadata:
  name: ecm-resource-set
resourceSelectors:
  - apiVersion: "v1"
    kindsRegexp: "^namespaces$"
    resourceNameRegexp: "^cattle-|^p-|^c-|^user-|^u-"
    resourceNames:
      - "local"
  - apiVersion: "v1"
    kindsRegexp: "^Secret$|^serviceaccounts$"
    namespaceRegexp: "^cattle-|^p-|^c-|^local$|^user-|^u-"
  - apiVersion: "rbac.authorization.k8s.io/v1"
    kindsRegexp: "^roles$|^rolebindings$"
    namespaceRegexp: "^cattle-|^p-|^c-|^local$|^user-|^u-"
  - apiVersion: "rbac.authorization.k8s.io/v1"
    kindsRegexp: "^clusterrolebindings$"
    resourceNameRegexp: "^cattle-|^clusterrolebinding-|^globaladmin-user-|^grb-u-"
  - apiVersion: "rbac.authorization.k8s.io/v1"
    kindsRegexp: "^clusterroles$"
    resourceNameRegexp: "^cattle-|^p-|^c-|^local-|^user-|^u-|^project-|^create-ns$"
  - apiVersion: "apiextensions.k8s.io/v1beta1"
    kindsRegexp: "."
    resourceNameRegexp: "management.cattle.io$|project.cattle.io$"
  - apiVersion: "management.cattle.io/v3"
    kindsRegexp: "."
  - apiVersion: "project.cattle.io/v3"
    kindsRegexp: "."
controllerReferences:
  - apiVersion: "apps/v1"
    resource: "deployments"
    name: "rancher"
    namespace: "cattle-system"
  • Click on Create
  • Error seen on UI: { "type": "error", "links": {}, "code": "NotFound", "message": "the server could not find the requested resource", "status": 404 }
  • Error in console log:
xhr.js:178 POST https://<rancher-server>/v1/resources.cattle.io.resourcesets 404
(anonymous) @ xhr.js:178
t.exports @ xhr.js:12
t.exports @ dispatchRequest.js:50
Promise.then (async)
d.request @ Axios.js:61
(anonymous) @ bind.js:9
request @ actions.js:15
(anonymous) @ vuex.esm.js:847
y.dispatch @ vuex.esm.js:512
dispatch @ vuex.esm.js:402
r.dispatch @ vuex.esm.js:775
(anonymous) @ resource-instance.js:691
(anonymous) @ ResourceYaml.vue:228
d @ runtime.js:45
(anonymous) @ runtime.js:274
O.forEach.t.<computed> @ runtime.js:97
r @ asyncToGenerator.js:3
l @ asyncToGenerator.js:25
(anonymous) @ asyncToGenerator.js:32
(anonymous) @ asyncToGenerator.js:21
save @ ResourceYaml.vue:222
Qt @ vue.runtime.esm.js:1854
n @ vue.runtime.esm.js:2179
Qt @ vue.runtime.esm.js:1854
t.$emit @ vue.runtime.esm.js:3888
save @ Footer.vue:36
Qt @ vue.runtime.esm.js:1854
n @ vue.runtime.esm.js:2179
Qt @ vue.runtime.esm.js:1854
t.$emit @ vue.runtime.esm.js:3888
clicked @ AsyncButton.vue:161
Qt @ vue.runtime.esm.js:1854
n @ vue.runtime.esm.js:2179
c._wrapper @ vue.runtime.esm.js:6917

Expected Result:
The user should be able to deploy the ResourceSet

Other details that may be helpful:
The same YAML can be deployed through kubectl

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head - commit id: 56b819995
  • Installation option (single install/HA): HA - imported k8s

Chart changes

  • Rename the chart to rancher-backup instead of backup-restore-operator. We want to present this chart more as a tool for backing up and restoring Rancher in 2.5 than as a generic utility.

Error on the Restore CR can be made better when encryptionConfigName is set in the backup, and there is no error in the backup-restore-operator logs

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible):

  • Take a backup b1 with encryptionConfigName set.
  • Verified that the secrets in the backup created in S3 are encrypted.
  • Restore from this backup b1, but do NOT provide the encryptionConfigName value.
  • The restore fails with the error: json: cannot unmarshal string into Go value of type map[string]interface
  • Status of restore CR:
status:
  conditions:
  - lastUpdateTime: "2020-08-29T21:16:48Z"
    message: 'json: cannot unmarshal string into Go value of type map[string]interface
      {}'
    reason: Error
    status: "False"
    type: Reconciling
  • Logs of backup-restore-operator:
[2020/08/29 21:17:03] Retry rancher/rancher#23: Retrying restore from default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz
INFO[2020/08/29 21:17:03] invoking set s3 service client                s3-accessKey=<> s3-bucketName=<> s3-endpoint=<> s3-endpoint-ca= s3-folder=test-new s3-region=<>
INFO[2020/08/29 21:17:03] Temporary location of backup file from s3: /tmp/default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz
INFO[2020/08/29 21:17:03] Successfully downloaded [test-new/default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz]
INFO[2020/08/29 21:17:03] Successfully downloaded [test-new/default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz]
INFO[2020/08/29 21:17:03] Successfully downloaded [test-new/default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz]
INFO[2020/08/29 21:17:03] Successfully downloaded [test-new/default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz]
INFO[2020/08/29 21:17:05] Restoring from backup default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz
INFO[2020/08/29 21:17:05] Retry rancher/rancher#24: Retrying restore from default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz
INFO[2020/08/29 21:17:05] invoking set s3 service client                s3-accessKey=<> s3-bucketName=<> s3-endpoint=<> s3-endpoint-ca= s3-folder=test-new s3-region=<>
INFO[2020/08/29 21:17:05] Temporary location of backup file from s3: /tmp/default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz
INFO[2020/08/29 21:17:05] Successfully downloaded [test-new/default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz]
INFO[2020/08/29 21:17:05] Successfully downloaded [test-new/default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz]
INFO[2020/08/29 21:17:05] Successfully downloaded [test-new/default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz]
INFO[2020/08/29 21:17:05] Successfully downloaded [test-new/default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz]
INFO[2020/08/29 21:17:08] Restoring from backup default-b-2-encryp-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T21#09#14Z.tar.gz
INFO

Expected Result:

  • The restore operator should throw an error.
  • Could the error on the Restore CR be better presented?

Other details that may be helpful:

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head - commit id: 8d9cedde9
  • Installation option (single install/HA): HA rke

RFE: Update the name for the backup file when it's encrypted

If the backend named encrypted backups with, e.g., an .encrypted suffix, then the UI could look at the filename entered and definitively tell the user that the encryption config is needed, instead of having the user guess from the warning message.

Cluster Migration : failed to check s3 bucket: invalid header field value

Hi,

I'm having trouble trying to migrate my cluster from RKE to EKS.
I followed the instructions in this documentation: https://rancher.com/docs/rancher/v2.x/en/backups/v2.5/migrating-rancher/.
After applying the Restore resource on the target cluster, I got this status message on the resource:

    Message:              failed to check s3 bucket:s3-rancher2-backups, err:Head https://<bucket_name>.s3.dualstack.eu-west-3.amazonaws.com/: net/http: invalid header field value "AWS4-HMAC-SHA256 Credential=\x00\xa2\x00X\x1d\x96@.\xc0,>\x8b@\xa1C/20201104/eu-west-3/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=<redacted>" for key Authorization

Here are the logs of the rancher-backup pod (this is looping endlessly):

INFO[2020/11/04 10:00:34] Processing Restore CR restore-migration
INFO[2020/11/04 10:00:34] Restoring from backup xxx.tar.gz
INFO[2020/11/04 10:00:34] invoking set s3 service client                s3-accessKey="\x00\xa2\x00X\x1d\x96@.\xc0,>\x8b@\xa1C" s3-bucketName=<bucket_name> s3-endpoint=s3.eu-west-3.amazonaws.com s3-endpoint-ca= s3-folder= s3-region=eu-west-3

I guess there is something buggy with the accessKey?
I added the AWS credentials following the instructions, with the accessKey and secretKey in plain text.
Here is the resource I applied:

---
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration
spec:
  backupFilename: xxx.tar.gz
  prune: false
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: default
      bucketName: <bucket_name>
      region: eu-west-3
      endpoint: s3.eu-west-3.amazonaws.com

And the secret

---
apiVersion: v1
kind: Secret
metadata:
  name: s3-creds
type: Opaque
data:
  accessKey: ABCDABCDABCD
  secretKey: ABcdAbcd

I tried to remove these and re-apply without success; I also tried to drop the whole cluster and re-create it, also without success.
I may have missed something, but I can't find what :(

Thanks for your help!

A Restore fails when there is a failed restore existing in the setup

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible):

  • Create a Restore CR from a backup file, giving an invalid bucket name.
  • The restore will fail and the CR will be stuck in "In Progress".
  • Create another Restore CR with valid details.
  • The restore starts and Rancher fails to come up.
  • backup-restore-operator logs:
INFO[2020/08/28 23:24:20] Temporary location of backup file from s3: /tmp/default-b-invalid-creds-5c9edf51-3742-4d9d-9ce4-08d7382b9b1f-2020-08-28T23#13#55Z.tar.gz 
INFO[2020/08/28 23:24:20] Successfully downloaded [test/default-b-invalid-creds-5c9edf51-3742-4d9d-9ce4-08d7382b9b1f-2020-08-28T23#13#55Z.tar.gz] 
INFO[2020/08/28 23:24:20] Successfully downloaded [test/default-b-invalid-creds-5c9edf51-3742-4d9d-9ce4-08d7382b9b1f-2020-08-28T23#13#55Z.tar.gz] 
INFO[2020/08/28 23:24:20] Successfully downloaded [test/default-b-invalid-creds-5c9edf51-3742-4d9d-9ce4-08d7382b9b1f-2020-08-28T23#13#55Z.tar.gz] 
INFO[2020/08/28 23:24:20] Successfully downloaded [test/default-b-invalid-creds-5c9edf51-3742-4d9d-9ce4-08d7382b9b1f-2020-08-28T23#13#55Z.tar.gz] 
INFO[2020/08/28 23:24:21] Processing controllerRef apps/v1/deployments/rancher 
INFO[2020/08/28 23:24:21] Scaling down controllerRef apps/v1/deployments/rancher to 0 
INFO[2020/08/28 23:24:21] Starting to restore CRDs for restore CR restore-112 
INFO[2020/08/28 23:24:21] restoreResource: Restoring apps.project.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored apps.project.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring dynamicschemas.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored dynamicschemas.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring multiclusterapps.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored multiclusterapps.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring rkek8ssystemimages.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored rkek8ssystemimages.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring clustermonitorgraphs.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Restoring from backup default-b-invalid-creds-5c9edf51-3742-4d9d-9ce4-08d7382b9b1f-2020-08-28T23#13#55Z.tar.gz 
INFO[2020/08/28 23:24:21] Retry rancher/rancher#55: Retrying restore from default-b-invalid-creds-5c9edf51-3742-4d9d-9ce4-08d7382b9b1f-2020-08-28T23#13#55Z.tar.gz 
INFO[2020/08/28 23:24:21] invoking set s3 service client                s3-accessKey= s3-bucketName=<> s3-endpoint=<> s3-endpoint-ca= s3-folder=<> s3-region=<>
INFO[2020/08/28 23:24:21] invoking set s3 service client use IAM role  
INFO[2020/08/28 23:24:21] Successfully restored clustermonitorgraphs.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring clustertemplates.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored clustertemplates.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring kontainerdrivers.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored kontainerdrivers.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring projectcatalogs.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored projectcatalogs.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring templateversions.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored templateversions.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring userattributes.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored userattributes.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring projectmonitorgraphs.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored projectmonitorgraphs.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring projectroletemplatebindings.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored projectroletemplatebindings.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring roletemplates.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored roletemplates.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring templates.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored templates.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring apprevisions.project.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored apprevisions.project.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring clusters.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored clusters.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring users.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored users.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring globaldnsproviders.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored globaldnsproviders.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring nodepools.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored nodepools.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring podsecuritypolicytemplateprojectbindings.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored podsecuritypolicytemplateprojectbindings.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring catalogtemplates.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored catalogtemplates.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring clusteralertgroups.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored clusteralertgroups.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring clusterregistrationtokens.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored clusterregistrationtokens.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring clusterroletemplatebindings.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored clusterroletemplatebindings.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring tokens.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored tokens.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring clustercatalogs.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored clustercatalogs.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring groupmembers.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored groupmembers.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring projects.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored projects.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring sourcecodeproviderconfigs.project.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored sourcecodeproviderconfigs.project.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring cisconfigs.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored cisconfigs.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring globalrolebindings.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored globalrolebindings.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring rkek8sserviceoptions.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored rkek8sserviceoptions.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring templatecontents.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored templatecontents.management.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring pipelinesettings.project.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:21] Successfully restored pipelinesettings.project.cattle.io 
INFO[2020/08/28 23:24:21] restoreResource: Restoring projectalertgroups.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored projectalertgroups.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring projectalerts.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored projectalerts.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring clusterscans.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored clusterscans.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring etcdbackups.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored etcdbackups.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring features.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored features.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring nodes.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored nodes.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring samltokens.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored samltokens.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring sourcecodecredentials.project.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored sourcecodecredentials.project.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring authconfigs.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored authconfigs.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring clusterloggings.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored clusterloggings.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring nodetemplates.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored nodetemplates.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring rkeaddons.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored rkeaddons.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring groups.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored groups.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring monitormetrics.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored monitormetrics.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring pipelines.project.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored pipelines.project.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring sourcecoderepositories.project.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored sourcecoderepositories.project.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring pipelineexecutions.project.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored pipelineexecutions.project.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring settings.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored settings.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring catalogs.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored catalogs.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring cisbenchmarkversions.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored cisbenchmarkversions.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring clusteralerts.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored clusteralerts.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring globalroles.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored globalroles.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring globaldnses.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored globaldnses.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring clustertemplaterevisions.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored clustertemplaterevisions.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring composeconfigs.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored composeconfigs.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring nodedrivers.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored nodedrivers.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring projectnetworkpolicies.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored projectnetworkpolicies.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring catalogtemplateversions.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored catalogtemplateversions.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring clusteralertrules.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored clusteralertrules.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring multiclusterapprevisions.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored multiclusterapprevisions.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring notifiers.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored notifiers.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring projectalertrules.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored projectalertrules.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring podsecuritypolicytemplates.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored podsecuritypolicytemplates.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring preferences.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored preferences.management.cattle.io 
INFO[2020/08/28 23:24:22] restoreResource: Restoring projectloggings.management.cattle.io of type apiextensions.k8s.io/v1beta1, Resource=customresourcedefinitions 
INFO[2020/08/28 23:24:22] Successfully restored projectloggings.management.cattle.io 
INFO[2020/08/28 23:24:22] Starting to restore clusterscoped resources for restore CR restore-112 
INFO[2020/08/28 23:24:22] Starting to restore namespaced resources for restore CR restore-112 
INFO[2020/08/28 23:24:22] Pruning resources that are not part of the backup for restore CR restore-112 
INFO[2020/08/28 23:24:22] Will retry pruning resources by removing finalizers in 0s 
INFO[2020/08/28 23:24:22] Retrying pruning resources by removing finalizers 
INFO[2020/08/28 23:24:22] Done restoring     

Other details that may be helpful:

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): 2.5.0-alpha1
  • Installation option (single install/HA): HA rke

region field is mandatory, and hence launching the backup-restore-operator with a minio backend fails

What kind of request is this (question/bug/enhancement/feature request): Bug

Steps to reproduce (least amount of steps as possible):

  1. Launch the backup-restore-operator app providing a minio backend. Note that the region field is not required for minio:
debug: false
global:
  systemDefaultRegistry: ''
image: rancher/backup-restore-operator
pvc: {}
s3: 
  credentialSecretName: miniocreds
  bucketName: backups
  endpoint: minio.x.yz.a.xip.io
  endpointCA: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHakNDQWdLZ0F3SUJBZ0lKQUw1QlFxR0Z1VjFWTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NakF3T0RJNE1qSXdOekkyV2hjTk1qQXhNREkzQUxOclNIYlBSSUIvN0svbm0yT1ZFWVNML2pwYzZ4R1NDVW5wdXFOTGRMVjJwQ01Mb0ZlCjQ4OUdBZk5nOGt0dkY1bzZDbVVpYlRBYXpBWTUreExreWdjTzNkTC9La0hBZUNzS1V0QThSQkZ0bkFnU0RVeWIKZEFscmdUT0ZBQlhqWW5STWQ5N0ZtRmN2UjdBYXg5T0t4U2c0WjZVVSs0NnVBbnh5UlR0OUVkaDJOVUU2WE5WUwovcXJWRTJt
tag: v0.0.1-rc4

Result:
Backup-restore-operator fails to launch with the error below:

Fri, Aug 28 2020 5:41:27 pm
helm --debug install --create-namespace=true --namespace=cattle-resources-system --values=/home/shell/helm/values-backup-restore-operator-0.0.1.yaml --version=0.0.1 --wait=true backup-restore-operator /home/shell/helm/backup-restore-operator-0.0.1.tgz
Fri, Aug 28 2020 5:41:27 pm
install.go:159: [debug] Original chart version: "0.0.1"
Fri, Aug 28 2020 5:41:27 pm
install.go:176: [debug] CHART PATH: /home/shell/helm/backup-restore-operator-0.0.1.tgz
Fri, Aug 28 2020 5:41:27 pm
Fri, Aug 28 2020 5:41:28 pm
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in Secret.stringData.region
Fri, Aug 28 2020 5:41:28 pm
helm.go:84: [debug] error validating "": error validating data: unknown object type "nil" in Secret.stringData.region
Fri, Aug 28 2020 5:41:28 pm
helm.sh/helm/v3/pkg/kube.scrubValidationError
Fri, Aug 28 2020 5:41:28 pm
/home/circleci/helm.sh/helm/pkg/kube/client.go:570
Fri, Aug 28 2020 5:41:28 pm
helm.sh/helm/v3/pkg/kube.(*Client).Build


The region field should be optional.
Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head(master-7636d25d8dd9ec)
  • Installation option (single install/HA): HA

Failed to take backup when there were about 500 secrets in local cluster

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible):

  • Create about 500 secrets in the local cluster.
  • Create a Backup CR.
  • The backup fails with an error.
  • Backup CR error:
status:
  conditions:
  - lastUpdateTime: "2020-08-29T22:34:25Z"
    message: the server was unable to return a response in the time allotted, but
      may still be processing the request
    reason: Error
    status: "False"
    type: Reconciling
  • backup-restore operator logs:
INFO[2020/08/29 22:33:25] Processing backup b2                         
INFO[2020/08/29 22:33:25] For backup CR b2, filename: default-b2-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T22#33#25Z 
INFO[2020/08/29 22:33:25] Temporary backup path for storing all contents for backup CR b2 is /tmp/default-b2-28e36ff0-c48a-41fd-9d71-5b760c5748e7-2020-08-29T22#33#25Z680481891 
INFO[2020/08/29 22:33:25] Using resourceSet ecm-resource-set for gathering resources for backup CR b2 
INFO[2020/08/29 22:33:25] Gathering resources for backup CR b2         
INFO[2020/08/29 22:33:25] Gathering resources for groupVersion: v1     
INFO[2020/08/29 22:33:25] resource kind namespaces, matched regex ^namespaces$ 
INFO[2020/08/29 22:33:25] Gathering resources for groupVersion: v1     
INFO[2020/08/29 22:33:25] resource kind secrets, matched regex ^Secret$|^serviceaccounts$ 
INFO[2020/08/29 22:33:25] resource kind serviceaccounts, matched regex ^Secret$|^serviceaccounts$ 
ERRO[2020/08/29 22:34:25] error syncing 'default/b2': handler backups: the server was unable to return a response in the time allotted, but may still be processing the request, requeuing 
INFO[2020/08/29 22:34:25] Processing backup b2                         

Expected Result:
Backup should happen with NO error.

Other details that may be helpful:
After this, the Rancher server also crashed.

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head - commit id: 8d9cedde9
  • Installation option (single install/HA): HA

Backup fails to get created when using a minio backend with base64 encoded certs

What kind of request is this (question/bug/enhancement/feature request): Bug

Steps to reproduce (least amount of steps as possible):

  1. Launch the backup-restore-operator app providing a minio backend. Provide a dummy region and provide the certs in base64-encoded form:
debug: false
global:
  systemDefaultRegistry: ''
image: rancher/backup-restore-operator
pvc: {}
s3: 
  credentialSecretName: miniocreds
  bucketName: backups
  region: us-west-2
  endpoint: minio.x.yz.a.xip.io
  endpointCA: abcdef
tag: v0.0.1-rc4

  1. Create a backup with the config below:
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: test1
  namespace: default
#  annotations:
#    key: string
#  labels:
#    key: string
spec:
#  encryptionConfigName: string
   resourceSetName: ecm-resource-set


Backup fails with an error

ERRO[2020/08/31 21:13:54] error syncing 'default/bkp3': handler backups: failed to check s3 bucket:rancherbackups, err:400 Bad Request, requeuing

backup-restore-operator logs:

ERRO[2020/08/31 21:13:54] error syncing 'default/bkp3': handler backups: failed to check s3 bucket:rancherbackups, err:400 Bad Request, requeuing
INFO[2020/08/31 21:13:54] Processing backup bkp1
INFO[2020/08/31 21:13:54] For backup CR bkp1, filename: default-bkp1-4b824d64-513b-4667-a77e-de2567128fe3-2020-08-31T21#13#54Z
INFO[2020/08/31 21:13:54] Temporary backup path for storing all contents for backup CR bkp1 is /tmp/default-bkp1-4b824d64-513b-4667-a77e-de2567128fe3-2020-08-31T21#13#54Z658372547
INFO[2020/08/31 21:13:54] Using resourceSet ecm-resource-set for gathering resources for backup CR bkp1
INFO[2020/08/31 21:13:54] Gathering resources for backup CR bkp1
INFO[2020/08/31 21:13:54] Gathering resources for groupVersion: v1
INFO[2020/08/31 21:13:54] resource kind namespaces, matched regex ^namespaces$
INFO[2020/08/31 21:13:54] Gathering resources for groupVersion: v1
INFO[2020/08/31 21:13:54] resource kind secrets, matched regex ^Secret$|^serviceaccounts$
INFO[2020/08/31 21:13:54] resource kind serviceaccounts, matched regex ^Secret$|^serviceaccounts$
INFO[2020/08/31 21:13:54] invoking set s3 service client                s3-accessKey=6EukuHPdeaY s3-bucketName=rancherbackups s3-endpoint=abcd.xip.io s3-endpoint-ca=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHakNDQWdLZ0F3SUJBZ0lKQUpscFM4Skowb2RMTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NakF3T0RNeE1UazFOakF3V2hjTk1qQXhUUVBCnVPd3pFVGpuRkpFTnNDdml6TE0rNUp4QU9pVnRqbThQY3ZkQmFGbkxXM1pFMU5LN0d6N3hobkRBOUw4dlcySFgKd0JocHp4WWRiYm1OTE9IeVlqdnlEL1lNMGhhelFMNlRYNXEzRG85VWRXZmZUbUFyaWdEQnFoUFBVhS21pMGYxaDhLVHRsUFRvbWdKVzRIQQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t s3-folder= s3-region=us-west-2
INFO[2020/08/31 21:13:54] Gathering resources for groupVersion: rbac.authorization.k8s.io/v1
INFO[2020/08/31 21:13:54] resource kind rolebindings, matched regex ^roles$|^rolebindings$
INFO[2020/08/31 21:13:54] resource kind roles, matched regex ^roles$|^rolebindings$
INFO[2020/08/31 21:13:54] Gathering resources for groupVersion: rbac.authorization.k8s.io/v1
INFO[2020/08/31 21:13:54] resource kind clusterrolebindings, matched regex ^clusterrolebindings$
ERRO[2020/08/31 21:13:55] error syncing 'default/bkp2': handler backups: failed to check s3 bucket:rancherbackups, err:400 Bad Request, requeuing
INFO[2020/08/31 21:13:55] Processing backup bkp3

Could not create Rancher Backup Using S3

Hi everyone,

I am stuck when deploying rancher-backup from the Cluster Explorer: Error: Secret in version "v1" cannot be handled as a Secret: v1.Secret.StringData: ReadString: expects " or n, but found t, error found in #10 byte of ...\|pVerify":true,"regio\|..., bigger context ...\|.com","folder":"rancher","insecureTLSSkipVerify":true,"region":"ap-southeast-1"},"type":"Opaque"}

here is the log output:


helm upgrade --install=true --namespace=cattle-resources-system --timeout=10m0s --values=/home/shell/helm/values-rancher-backup-crd-1.0.201.yaml --version=1.0.201 --wait=true rancher-backup-crd /home/shell/helm/rancher-backup-crd-1.0.201.tgz
--
Tue, Nov 17 2020 1:37:37 pm | Release "rancher-backup-crd" does not exist. Installing it now.
Tue, Nov 17 2020 1:37:39 pm | W1117 06:37:39.136818 24 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Tue, Nov 17 2020 1:37:39 pm | W1117 06:37:39.138626 24 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Tue, Nov 17 2020 1:37:39 pm | W1117 06:37:39.140769 24 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Tue, Nov 17 2020 1:37:39 pm | creating 3 resource(s)
Tue, Nov 17 2020 1:37:39 pm | W1117 06:37:39.156027 24 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Tue, Nov 17 2020 1:37:39 pm | W1117 06:37:39.162773 24 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Tue, Nov 17 2020 1:37:39 pm | W1117 06:37:39.162902 24 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Tue, Nov 17 2020 1:37:39 pm | beginning wait for 3 resources with timeout of 10m0s
Tue, Nov 17 2020 1:37:41 pm | W1117 06:37:41.174093 24 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Tue, Nov 17 2020 1:37:41 pm | NAME: rancher-backup-crd
Tue, Nov 17 2020 1:37:41 pm | LAST DEPLOYED: Tue Nov 17 06:37:38 2020
Tue, Nov 17 2020 1:37:41 pm | NAMESPACE: cattle-resources-system
Tue, Nov 17 2020 1:37:41 pm | STATUS: deployed
Tue, Nov 17 2020 1:37:41 pm | REVISION: 1
Tue, Nov 17 2020 1:37:41 pm | TEST SUITE: None
Tue, Nov 17 2020 1:37:41 pm |
Tue, Nov 17 2020 1:37:41 pm | ---------------------------------------------------------------------
Tue, Nov 17 2020 1:37:41 pm | SUCCESS: helm upgrade --install=true --namespace=cattle-resources-system --timeout=10m0s --values=/home/shell/helm/values-rancher-backup-crd-1.0.201.yaml --version=1.0.201 --wait=true rancher-backup-crd /home/shell/helm/rancher-backup-crd-1.0.201.tgz
Tue, Nov 17 2020 1:37:41 pm | ---------------------------------------------------------------------
Tue, Nov 17 2020 1:37:41 pm | helm upgrade --install=true --namespace=cattle-resources-system --timeout=10m0s --values=/home/shell/helm/values-rancher-backup-1.0.201.yaml --version=1.0.201 --wait=true rancher-backup /home/shell/helm/rancher-backup-1.0.201.tgz
Tue, Nov 17 2020 1:37:41 pm | Release "rancher-backup" does not exist. Installing it now.
Tue, Nov 17 2020 1:37:42 pm | creating 5 resource(s)
Tue, Nov 17 2020 1:37:42 pm | Error: Secret in version "v1" cannot be handled as a Secret: v1.Secret.StringData: ReadString: expects " or n, but found t, error found in #10       byte of ...\|pVerify":true,"regio\|..., bigger context ...\|.com","folder":"rancher","insecureTLSSkipVerify":true,"region":"ap-southeast-1"},"type":"Opaque"}

My configuration is attached as a screenshot.

Could you please help explain why I got the errors above, and how to solve them?
Thanks

Cluster migration: Issues when restored to a new cluster with Rancher installed

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible):

  • Create a HA on rke. Create a custom cluster, node templates, users, roles, template revisions
  • Take a backup
  • Delete this HA, bring up a new HA on EKS cluster with rancher installed
  • Restore from backup to this cluster

Issues:

  • The cluster (present earlier) is stuck in the Unavailable state. Some errors appear in the Rancher logs:
2020/08/28 22:19:17 [ERROR] failed to start cluster controllers c-pjhjc: context canceled
2020/08/28 22:19:51 [ERROR] error syncing 'c-pjhjc': handler cluster-deploy: Get "https://3.129.25.80:6443/apis/apps/v1/namespaces/cattle-system/daemonsets/cattle-node-agent": waiting for cluster [c-pjhjc] agent to connect, requeuing
2020/08/28 22:20:26 [INFO] Active TLS secret serving-cert (ver=10665) (count 9): map[field.cattle.io/projectId:local:p-fh7md listener.cattle.io/cn-10.42.0.3:10.42.0.3 listener.cattle.io/cn-10.42.1.5:10.42.1.5 listener.cattle.io/cn-10.42.2.6:10.42.2.6 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.31.23.22:172.31.23.22 listener.cattle.io/cn-172.31.31.67:172.31.31.67 listener.cattle.io/cn-172.31.45.28:172.31.45.28 listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:<>]
2020/08/28 22:21:30 [INFO] Stopping cluster agent for c-pjhjc
2020/08/28 22:21:30 [ERROR] failed to start cluster controllers c-pjhjc: context canceled
2020/08/28 22:23:30 [INFO] Stopping cluster agent for c-pjhjc
2020/08/28 22:23:30 [ERROR] failed to start cluster controllers c-pjhjc: context canceled
  • Namespaces/projects are messed up (see snapshot below)

Screen Shot 2020-08-28 at 3 22 48 PM

  • Some errors are seen in the backup-restore container:
INFO[2020/08/28 21:42:05] Will retry pruning resources by removing finalizers in 0s
INFO[2020/08/28 21:42:05] Retrying pruning resources by removing finalizers
INFO[2020/08/28 21:42:06] Processing controllerRef apps/v1/deployments/rancher
INFO[2020/08/28 21:42:06] Scaling up controllerRef apps/v1/deployments/rancher to 3
INFO[2020/08/28 21:42:06] Done restoring
E0828 21:47:57.305691       1 reflector.go:380] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to watch *v1.Backup: the server has asked for the client to provide credentials (get backups.meta.k8s.io)
E0828 21:47:57.310833       1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Backup: Unauthorized
E0828 21:47:59.021907       1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Backup: Unauthorized
E0828 21:48:02.728944       1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Backup: Unauthorized
E0828 21:48:09.755693       1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Backup: Unauthorized
E0828 21:48:26.413138       1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Backup: Unauthorized
E0828 21:49:05.208455       1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Backup: Unauthorized
E0828 21:49:38.306123       1 reflector.go:380] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to watch *v1.Restore: the server has asked for the client to provide credentials (get restores.meta.k8s.io)
E0828 21:49:38.311219       1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Restore: Unauthorized

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): 2.5.0-alpha1
  • Installation option (single install/HA): HA

Backup stuck in "In Progress" state after disabling recurring backup

What kind of request is this (question/bug/enhancement/feature request): Bug

Steps to reproduce (least amount of steps as possible):

  1. Launch backup-restore-operator app
  2. Create a recurring backup specifying the schedule and retention count
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: sjrec
  namespace: default
#  annotations:
#    key: string
#  labels:
#    key: string
spec:
#  encryptionConfigName: string
  resourceSetName: ecm-resource-set
  retentionCount: 3
  schedule: "*/3 * * * *"

  3. Edit the backup CR and disable the recurring backup by commenting out the schedule and retentionCount fields, as in the sketch below
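
A minimal sketch of the edited CR from step 3, keeping the names used above and commenting out the recurring fields:

apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: sjrec
  namespace: default
spec:
  resourceSetName: ecm-resource-set
#  retentionCount: 3
#  schedule: "*/3 * * * *"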

Result:
The backup is stuck in the "In Progress" state.

Screen Shot 2020-08-31 at 6 48 56 PM

Message seen in the UI
Screen Shot 2020-08-31 at 6 54 42 PM

No new backups are taken since the recurring backup has been disabled, but the "In Progress" state is still displayed.

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head
  • Installation option (single install/HA): HA

RFE: Change Error to Warning if rancher controller does not exist during restore DR use case

Version: master-head(master-33f755f1809e5cb14b7749af462be8b8db240278-head)

Steps:

  1. Create an HA setup using RKE nodes.
  2. On the local cluster, add a repo https://github.com/rancher/charts
  3. Create a downstream DO cluster and enable monitoring and istio
  4. Create another custom cluster and deploy monitoring
  5. Deploy BackupRestoreOperator Chart in the local cluster and take a backup bkp1
  6. Restore bkp1 on a new RKE cluster using the steps below:
a. Install the backup-restore-operator chart on the new cluster using Helm CLI
b. helm repo add rancherchart https://charts.rancher.io
c. helm repo update
d. helm install backup-restore-operator-crd rancherchart/backup-restore-operator-crd -n cattle-resources-system --create-namespace
e. helm install backup-restore-operator rancherchart/backup-restore-operator -n cattle-resources-system
f. Restore from the backup using a Restore CR with prune set to false (see the sketch after these steps).
g. If using certificates issued by Rancher's generated CA, follow the steps to install cert-manager from the Rancher HA install docs.
h. Upgrade the Rancher release. If the Rancher deployment was scaled down on the previous cluster, you can also set the correct scale (3) through the upgrade command:

helm upgrade rancher rancher-alpha/rancher --version 2.5.0-alpha1 --namespace cattle-system \
  --set hostname=<same hostname as first rancher server> --set rancherImageTag=master-head
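
A minimal sketch of the Restore CR referenced in step f, assuming the Restore spec fields backupFilename, prune, and storageLocation; the file name, secret, and bucket below are placeholders:

apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration                     # placeholder name
spec:
  backupFilename: bkp1-<timestamp>.tar.gz     # placeholder; use the actual backup file name
  prune: false
  storageLocation:
    s3:
      credentialSecretName: s3-creds          # placeholder
      credentialSecretNamespace: cattle-resources-system
      bucketName: example-bucket              # placeholder
      folder: rancher
      region: us-east-2
      endpoint: s3.us-east-2.amazonaws.com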

During restore step f, the below error is seen:

INFO[2020/09/17 04:36:54] restoreResource: Restoring library-citrix-adc-istio-ingress-gateway-1.2.0 of type management.cattle.io/v3, Resource=catalogtemplateversions 
INFO[2020/09/17 04:36:54] Getting new UID for library-citrix-adc-istio-ingress-gateway  
INFO[2020/09/17 04:36:54] Processing controllerRef apps/v1/deployments/rancher 
ERRO[2020/09/17 04:36:54] Error getting object for controllerRef rancher, skipping it 
INFO[2020/09/17 04:36:54] Done restoring           

Can we make the error a warning if we are expecting this error for the use case above?

Charts in the local cluster are not displayed after a restore

Version: master-head(master-33f755f1809e5cb14b7749af462be8b8db240278-head)

Steps:

  1. Create an HA setup using RKE nodes.
  2. On the local cluster, add a repo https://github.com/rancher/charts
  3. Create a downstream DO cluster and enable monitoring and istio
  4. Create another custom cluster and deploy monitoring
  5. Create an EKS cluster
  6. Deploy BackupRestoreOperator Chart in the local cluster and take a backup bkp1
  7. Restore bkp1 on a new RKE cluster using the steps below:
a. Install the backup-restore-operator chart on the new cluster using Helm CLI
b. helm repo add rancherchart https://charts.rancher.io
c. helm repo update
d. helm install backup-restore-operator-crd rancherchart/backup-restore-operator-crd -n cattle-resources-system --create-namespace
e. helm install backup-restore-operator rancherchart/backup-restore-operator -n cattle-resources-system
f. Restore from the backup using a Restore CR with prune set to false.
g. If using certificates issued by Rancher's generated CA, follow the steps to install cert-manager from the Rancher HA install docs.
h. Upgrade the Rancher release. If the Rancher deployment was scaled down on the previous cluster, you can also set the correct scale (3) through the upgrade command:

helm upgrade rancher rancher-alpha/rancher --version 2.5.0-alpha1 --namespace cattle-system \
  --set hostname=<same hostname as first rancher server> --set rancherImageTag=master-head

After the restore, the charts in the local cluster are not displayed. The repositories are displayed correctly.
Screen Shot 2020-09-16 at 10 13 25 PM

Screen Shot 2020-09-16 at 10 19 58 PM

Note:
Charts in the downstream clusters are displayed as expected.
Monitoring and Istio are enabled and running in the downstream clusters, and the EKS cluster is also running after the restore.

Backup CR with `s3.folder` does not allow multiple path components

Version
I have installed rancher-backup version 1.0.201 on a 2.5.4 Rancher (hosted on EKS).

Description of issue
Following the instructions here, I find that when I add a Backup CR with an s3.folder parameter containing multiple/path/components, I get the error:

mkdir /tmp/uploadpath787919661/rancher.example.com/rancher-backup: no such file or directory

In the above example rancher.example.com/rancher-backup is the configured S3 folder in the bucket for the backups.

Here is the full YAML of the Backup CR:

apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  annotations:
    meta.helm.sh/release-name: rancher-s3-recurring-backup
    meta.helm.sh/release-namespace: cattle-resources-system
  creationTimestamp: "2020-12-14T08:46:39Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: s3-recurring-backup
  resourceVersion: "94832"
  selfLink: /apis/resources.cattle.io/v1/backups/s3-recurring-backup
  uid: 1d483729-a60f-4c47-865c-6f0708562efe
spec:
  resourceSetName: rancher-resource-set
  retentionCount: 10
  schedule: '@every 1h'
  storageLocation:
    s3:
      bucketName: not-a-real-bucket
      credentialSecretName: rancher-backup-s3
      credentialSecretNamespace: cattle-resources-system
      folder: rancher.example.com/rancher-backup
      region: ap-southeast-2
      endpoint: s3.ap-southeast-2.amazon.aws.com

What did you expect to happen
I thought that the s3.folder parameter would be able to accept S3 bucket paths that contain slashes.

Ideas
Would it be possible to use mkdir -p instead of mkdir?

I think this is where the error occurs:

if err := os.Mkdir(filepath.Join(tmpBackupGzipFilepath, objectStore.Folder), os.ModePerm); err != nil {
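// Note: replacing os.Mkdir above with os.MkdirAll (Go's mkdir -p equivalent, as
// suggested in the Ideas section) would create any missing intermediate
// directories in objectStore.Folder.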

Multiple scheduled backups created for hourly recurring backup using backup-restore-operator

What kind of request is this (question/bug/enhancement/feature request):
bug

Steps to reproduce (least amount of steps as possible):

  1. Add the rancher-backups app in the Cluster Explorer - Apps. Configure it to use an S3 endpoint.
  2. Add a new scheduled backup with cron 45 * * * *, retention count set to 100
  3. Wait until 45 minutes after the hour.

Result:
Saw 13 backups; should only see one:
image
rancher-backup log:
rancher-backup-5c5bcd7f44-5vg9c.log

Other details that may be helpful:
Checked back the next day and backups were being created every hour at 45 minutes past the hour.

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI):
    v2.5.0-rc4

  • Installation option (single install/HA):
    HA

Cluster information

  • Cluster type (Hosted/Infrastructure Provider/Custom/Imported):
  • Machine type (cloud/VM/metal) and specifications (CPU/memory):
  • Kubernetes version (use kubectl version):
(paste the output here)
  • Docker version (use docker version):
(paste the output here)

Rancher does not get scaled back up if there is an error during restore

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible):

  • Create a secret using kubectl create secret generic test-encryptionconfig --from-file=./encryptionConfig.yaml -n cattle-resources-system
  • encryptionConfig.yaml is similar to the one here (a minimal sketch follows this list)
  • Create some secrets
  • Create a backup CR and set encryptionConfigName
  • Backup is created successfully. And the secrets in the backup are encrypted.
  • Create some more secrets in rancher setup
  • Now restore from this backup.
  • The rancher pods are not scaled up.
  • Logs from backup-restore-operator container:
INFO[2020/08/27 23:07:43] Will retry pruning resources by removing finalizers in 0s 
INFO[2020/08/27 23:07:43] Retrying pruning resources by removing finalizers 
INFO[2020/08/27 23:07:43] Processing controllerRef apps/v1/deployments/rancher 
INFO[2020/08/27 23:07:43] Scaling up controllerRef apps/v1/deployments/rancher to 0 
  • Bring Rancher back up to scale 3 manually.
    Error seen in the restore CR:
status:
  conditions:
  - lastUpdateTime: "2020-08-27T23:07:25Z"
    message: 'error pruning during restore: [Operation cannot be fulfilled on secrets
      "test-new": StorageError: invalid object, Code: 4, Key: /registry/secrets/p-g7jm4/test-new,
      ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition:
      e3beec7b-6d0d-4578-b021-9c748ed7ca5e, UID in object meta: ]'
    reason: Error
    status: "False"
    type: Reconciling
  - lastUpdateTime: "2020-08-27T23:07:43Z"
    status: "True"
    type: Ready
  • Notice that test-new is NOT available in the rancher setup
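
A minimal sketch of the encryptionConfig.yaml referenced in the first step, using the standard Kubernetes EncryptionConfiguration format that the operator reuses (the key value is a placeholder); the Backup CR then references the secret created from this file via encryptionConfigName: test-encryptionconfig:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder, not a real key
      - identity: {}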

Expected Result:
Rancher deployment should scale back up when restore is finished.

Other details that may be helpful:

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head - commit : 5e1b21b931a
  • Installation option (single install/HA): HA

Remove unused/un-updated numRetries field from Backup CR Status

On master-head - commit id: 731b33e1a776

  • Create a restore CR from a backup file. Give an invalid bucket name.
  • The restore will fail and the CR will be stuck in the "Error" state.
  • Create another restore CR with valid details.
  • The restore starts and completes. Rancher comes up successfully.
  • On the failed restore CR, I see numRetries: 0 and the count NOT increasing, but I see the retries happening in the backup-restore-operator logs:
status:
  backupSource: ""
  conditions:
  - lastUpdateTime: "2020-09-03T17:51:25Z"
    message: 'failed to download s3 backup: no backups found'
    reason: Error
    status: "False"
    type: Reconciling
  - lastUpdateTime: "2020-09-03T17:45:57Z"
    message: Retrying
    status: Unknown
    type: Ready
  numRetries: 0
  observedGeneration: 0
  restoreCompletionTs: ""
  summary: ""

Expected:
Remove field numRetries

Create an operator to backup and restore rancher

For the backup/restore controllers

  • Get the CRD fields and API version finalized in the design meeting
  • Backup controller should gather resources specified in resourceSet
  • Restore controller should create/update any resources present in backup
  • Restore controller should by default prune resources present on the cluster that match the resourceSet but were not a part of the backup
  • Restore controller should preserve ownerReferences that are allowed by Kubernetes design. Although, per the Kubernetes docs, cross-namespace ownerRefs and cluster-scoped objects with namespaced owners are disallowed, Kubernetes has a bug where it allows creation of such objects and enforces the ownerRefs. Such objects will get restored without ownerRefs, and the controller will log these resources.
  • Accept controllerRefs to scale down before restore and scale back up after restore
  • Support recurring backups, delete them as per the retention policy
  • Support local and s3/minio as storage locations
  • Encrypt resources such as secrets before storing at the backup location. Decrypt during restore

Clear Backup CR Reconciling condition when backup completes

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible):

  • Create a Backup CR, specifying an incorrect credential in the S3 spec (the credential does NOT exist in the Rancher setup yet)
  • The status of the CR is "Error"
  • Now create a credential in rancher setup.
  • Backup happens.
  • But the state of the backup shows "Error"
  • The backup CR status, as seen in View YAML, is:
status:
  conditions:
  - lastUpdateTime: "2020-08-29T20:39:44Z"
    message: secrets "creds2" not found
    reason: Error
    status: "False"
    type: Reconciling
  - lastUpdateTime: "2020-08-29T20:39:46Z"
    status: "True"
    type: Uploaded

Expected Result:
Backup should be seen in "Active" state

Other details that may be helpful:

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head - commit id: 8d9cedde93
  • Installation option (single install/HA): HA rke

Creating a recurring backup with incorrect schedule creates a backup in the backend

What kind of request is this (question/bug/enhancement/feature request): Bug

Steps to reproduce (least amount of steps as possible):

  1. Install backup-restore-operator using helm command providing minio/S3 configuration
  2. Create a recurring backup providing incorrect/invalid schedule
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: minrec3
  namespace: default
spec:
#  encryptionConfigName: string
  resourceSetName: ecm-resource-set
  retentionCount: 3
  schedule: "abc"

apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: minrec2
  namespace: default
spec:
#  encryptionConfigName: string
  resourceSetName: ecm-resource-set
  retentionCount: 3
  schedule: "*/2****"

Result:
With the incorrect schedule, the backup still gets created in the backend and the recurring backups continue to happen.

The UI shows the error as below:
Screen Shot 2020-09-01 at 3 54 37 PM

Screen Shot 2020-09-01 at 4 44 58 PM

Backup-restore operator logs below:

ERRO[2020/09/01 22:43:45] error syncing 'default/minrec3': handler backups: Expected exactly 5 fields, found 1: abc, requeuing

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI):master-head (master-c428bc9feb4)
  • Installation option (single install/HA): HA

No Status on the backup and restore CR when a backup is in progress

On master-head - commit id: ec04f7878

  • Create a backup CR
  • The CR's State shows Active, but there is no entry in the Status field while the backup is in progress.

Screen Shot 2020-09-29 at 9 17 57 AM

Can we have a state like "In Progress" while the backup is happening?
The same issue is seen when creating/deploying a Restore CR.

Backup restore operator fails to install when persistent storage is enabled with storage class set to "local-path"

What kind of request is this (question/bug/enhancement/feature request): Bug

Steps to reproduce (least amount of steps as possible):

  1. Install local-path-provisioner
    https://github.com/rancher/local-path-provisioner
  2. Install the backup-restore-operator chart in Rancher with persistence.enabled=true and storageClass set to "local-path":
affinity: {}
image:
  repository: rancher/backup-restore-operator
  tag: v0.0.1-rc10
nodeSelector: {}
persistence:
  enabled: true
  size: 2Gi
  storageClass: 'local-path'
  volumeName: ''
s3:
  bucketName: rancherbackups
  credentialSecretName: creds
  credentialSecretNamespace: ''
  enabled: false
  endpoint: s3.us-west-2.amazonaws.com
  endpointCA: base64 encoded CA cert
  folder: base folder
  region: us-west-2
tolerations: []

Result:
The backup-restore operator fails to install with exit code 123.

Screen Shot 2020-09-10 at 10 12 46 AM

Only the backup-restore-operator-crd chart is installed.

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head(master-198ec5bdf52d3)
  • Installation option (single install/HA): HA

Restoring in a new rancher setup fails

On master-head - commit id: ec04f78

  • On a Rancher HA setup - 1 downstream cluster with 5 nodes
  • About 200 downstream empty custom clusters.
  • About 1000 roles.
  • Take a backup
  • Bring down the Rancher server (deleted the nodes).
  • Bring up a cluster using RKE.
  • Install the backup-restore-operator chart on the new cluster using Helm CLI
  • Restore from the backup using a Restore CR with prune set to false.
  • The restore is stuck retrying
  • Error seen in rancher-backup operator logs:
WARN[2020/09/29 16:52:29] Error getting object for controllerRef rancher, skipping it 
ERRO[2020/09/29 16:52:29] Error restoring cluster-scoped resources [error restoring grb-q8tz4 of type management.cattle.io/v3, Resource=globalrolebindings: Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": service "rancher-webhook" not found error restoring grb-9f2b6 of type management.cattle.io/v3, Resource=globalrolebindings: Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": service "rancher-webhook" not found error restoring globalrolebinding-tv42r of type management.cattle.io/v3, Resource=globalrolebindings: Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": service "rancher-webhook" not found error restoring grb-qp9zx of type management.cattle.io/v3, Resource=globalrolebindings: Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": service "rancher-webhook" not found error restoring grb-fxkgj of type management.cattle.io/v3, Resource=globalrolebindings: restoreResource: err updating resource Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": service "rancher-webhook" not found error restoring grb-7hrjw of type management.cattle.io/v3, Resource=globalrolebindings: restoreResource: err updating resource Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": service "rancher-webhook" not found error restoring grb-4jfwv of type management.cattle.io/v3, Resource=globalrolebindings: restoreResource: err updating resource Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": service "rancher-webhook" not found error restoring grb-nkw6c of type management.cattle.io/v3, Resource=globalrolebindings: Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": service "rancher-webhook" not found error restoring grb-qlcsx of type management.cattle.io/v3, Resource=globalrolebindings: Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": service "rancher-webhook" not found error restoring grb-gm4nm of type management.cattle.io/v3, Resource=globalrolebindings: Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": service "rancher-webhook" not found error restoring grb-4fprt of type management.cattle.io/v3, Resource=globalrolebindings: Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post "https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s": service "rancher-webhook" not found] 
ERRO[2020/09/29 16:52:29] error syncing 'restore-migration-1': handler restore: error restoring cluster-scoped resources, check logs for exact error, requeuing 
INFO[2020/09/29 16:52:29] Restoring from backup new-a9cf8718-53fe-45a9-893e-8754ba49acb6-2020-09-29T16-16-59Z.tar.gz 
INFO[2020/09/29 16:52:29] invoking set s3 service client                

rancher v2.5.2 did not restore namespace cattle-logging-system from backup

Environment: Rancher v2.5.2, single-node k8s cluster

To recreate:

  • Install rancher-backup in Apps & Marketplace, configure local storage (local PV)
  • Install rancher-logging in Apps & Marketplace (namespace cattle-logging-system)
  • Take a backup
  • Delete the namespace cattle-logging-system
  • Restore from the backup, but nothing was restored in namespace cattle-logging-system

No error found in rancher-backup operator logs.

Question:

  • The Rancher doc says: "The backup-restore operator needs to be installed in the local cluster, and only backs up the Rancher app." So what apps/namespaces exactly does rancher-backup support? Is there a plan to support non-Rancher apps/namespaces?

Thanks very much.
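
What gets backed up is determined by the ResourceSet that the Backup CR references; the default rancher-resource-set only selects Rancher's own resources, which would explain why nothing in cattle-logging-system was restored. A rough sketch of a custom ResourceSet entry, assuming the resourceSelectors/apiVersion/kindsRegexp/namespaces fields used by the default rancher-resource-set:

apiVersion: resources.cattle.io/v1
kind: ResourceSet
metadata:
  name: logging-resource-set            # hypothetical name
resourceSelectors:
  - apiVersion: "v1"
    kindsRegexp: "."                    # all core/v1 kinds
    namespaces:
      - "cattle-logging-system"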

Unencrypted recurring S3 backup via rancher-backup operator fails with invalid AWS net/http header

What kind of request is this (question/bug/enhancement/feature request):

Bug

Steps to reproduce (least amount of steps as possible):

  1. Create the S3 bucket lalala-etcd-snapshot with AES256 server-side encryption and these allow rules in the IAM policy:
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket",
  2. Create the following plain-text secret for S3 access:
apiVersion: v1
kind: Secret
metadata:
  name: s3-backups-etcd-snapshot
  namespace: cattle-system
type: Opaque
data:
  accessKey: "AKIA..."
  secretKey: "mDzJ2SCm..."
  3. Create the backup object (I did it via the UI at first, but this is the resulting YAML):
apiVersion: resources.cattle.io/v1
type: resources.cattle.io.backup
kind: Backup
metadata:
  name: lalala-etcd-snapshot
spec:
  resourceSetName: rancher-resource-set
  retentionCount: 72
  schedule: 0 * * * *
  storageLocation:
    s3:
      bucketName: lalala-etcd-snapshot
      credentialSecretName: s3-backups-etcd-snapshot
      credentialSecretNamespace: cattle-system
      endpoint: s3.us-west-2.amazonaws.com
      insecureTLSSkipVerify: true
      region: us-west-2

Result:

INFO[2020/11/11 19:39:24] Compressing backup CR lalala-etcd-snapshot 
INFO[2020/11/11 19:39:24] invoking set s3 service client s3-accessKey="\x00\xa2\x00\\\x017,n\rd\x957\b\xfeF"
s3-bucketName=lalala-etcd-snapshot s3-endpoint=s3.us-west-2.amazonaws.com
s3-endpoint-ca= s3-folder= s3-region=us-west-2
ERRO[2020/11/11 19:40:46] error syncing 'lalala-etcd-snapshot': handler backups:
failed to check s3 bucket:lalala-etcd-snapshot,
err:Head https://lalala-etcd-snapshot.s3.dualstack.us-west-2.amazonaws.com/:
net/http: invalid header field value "AWS4-HMAC-SHA256
Credential=\x00\xa2\x00\\\x017,n\rd\x957\b\xfeF/20201111/us-west-2/s3/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date, 
Signature=8f11dff2300a6773d6bb7a1db8be452c1808cb1b7991e41a32c0f81c6b44b8d2" for key Authorization, requeuing 

Other details that may be helpful:

The documentation is not clear about a working example, so I'm not sure whether my settings are correct, to be honest.

I tried disabling the S3 bucket server-side encryption but it failed in the same way.

I tried creating the S3 secret via Rancher's UI directly too. It worked.
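
The garbage bytes in the logged access key suggest the values under data: were taken as base64 and decoded; data: requires base64-encoded values, while stringData: accepts plain text. A minimal sketch of the same secret using stringData (key values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: s3-backups-etcd-snapshot
  namespace: cattle-system
type: Opaque
stringData:
  accessKey: AKIA...        # plain-text placeholder
  secretKey: mDzJ2SCm...    # plain-text placeholder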

Environment information

  • Rancher version: 2.5.1
  • Installation option: single

Cluster information

  • Cluster type: imported AWS EC2 cluster set up manually with Terraform
  • Kubernetes version: v1.18.9

Upgrading backup-restore-operator with a new bucket name does not save backups in the new bucket

What kind of request is this (question/bug/enhancement/feature request): Bug

Steps to reproduce (least amount of steps as possible):

  1. Launch backup-restore-operator app providing S3 data
debug: false
global:
  systemDefaultRegistry: ''
image: rancher/backup-restore-operator
pvc: {}
s3: 
  credentialSecretName: test
  region: us-east-2
  bucketName: bucket1
  folder: test1
  endpoint: s3.us-east-2.amazonaws.com
tag: v0.0.1-rc4
  2. Create a backup CR "bkp1" and verify the backup is saved in S3
  3. Create a new bucket "bucket2" in S3
  4. Upgrade the backup-restore-operator and provide the new bucket "bucket2":
debug: false
global:
  systemDefaultRegistry: ''
image: rancher/backup-restore-operator
pvc: {}
s3: 
  credentialSecretName: test
  region: us-east-2
  bucketName: bucket2
  folder: test1
  endpoint: s3.us-east-2.amazonaws.com
tag: v0.0.1-rc4

backup-restore-operator upgrade succeeds and the logs specify the new bucket:

helm --debug upgrade --history-max=5 --namespace=cattle-resources-system --values=/home/shell/helm/values-backup-restore-operator-0.0.1.yaml --version=0.0.1 --wait=true backup-restore-operator /home/shell/helm/backup-restore-operator-0.0.1.tgz
upgrade.go:121: [debug] preparing upgrade for backup-restore-operator
upgrade.go:129: [debug] performing update for backup-restore-operator
upgrade.go:308: [debug] creating upgraded release for backup-restore-operator
client.go:173: [debug] checking 7 resources for changes
client.go:440: [debug] Looks like there are no changes for ServiceAccount "backup-restore-operator-serviceaccount"
client.go:440: [debug] Looks like there are no changes for ClusterRoleBinding "backup-restore-operator-installer"
client.go:440: [debug] Looks like there are no changes for Deployment "backup-restore-operator"
wait.go:53: [debug] beginning wait for 7 resources with timeout of 5m0s
upgrade.go:136: [debug] updating status for upgraded release for backup-restore-operator
Release "backup-restore-operator" has been upgraded. Happy Helming!
NAME: backup-restore-operator
LAST DEPLOYED: Thu Aug 27 23:29:46 2020
NAMESPACE: cattle-resources-system
STATUS: deployed
REVISION: 3
TEST SUITE: None
USER-SUPPLIED VALUES:
s3:
  bucketName: bucket2
  credentialSecretName: creds
  endpoint: s3.us-east-2.amazonaws.com
  folder: test1
  region: us-east-2

COMPUTED VALUES:
debug: false
global:
  systemDefaultRegistry: ""
image: rancher/backup-restore-operator
pvc: {}
s3:
  bucketName: soumyabucket3
  credentialSecretName: test
  endpoint: s3.us-east-2.amazonaws.com
  folder: test1
  region: us-east-2
tag: v0.0.1-rc4

HOOKS:
MANIFEST:
---
# Source: backup-restore-operator/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backup-restore-operator-serviceaccount
  namespace: cattle-resources-system
---
# Source: backup-restore-operator/templates/s3secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: backup-restore-operator-s3
  namespace: cattle-resources-system
type: Opaque
stringData:
  credentialSecretName: creds
  region: us-east-2
  bucketName: soumyabucket3
  folder: test1
  endpoint: s3.us-east-2.amazonaws.com

  5. Create a new backup CR.

Result:

The backup in step 5 gets saved in the older S3 bucket "bucket1" and not in "bucket2".
The backup should get saved in "bucket2" after upgrading the backup-restore-operator.

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head(master-9b6984d5f-head)
  • Installation option (single install/HA): HA

Cluster migration: User is not able to take a backup after a restore

On master-head - commit id: 5d5ef3f8f and backup-restore tag: v0.0.1-rc9

  • Take a backup into S3
  • Deploy Rancher in an HA setup on an EKS cluster
  • Deploy backup-restore app.
  • Restore from backup.
  • Rancher is restored successfully.
  • Create another backup by creating a backup CR.
  • No Backup is taken
  • Error seen in backup-restore-operator logs: E0904 03:36:51.157283 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Backup: Unauthorized

RFE: Display Backup file name on Status of the backup CR

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible):

  • Deploy backup-restore chart on master-head
  • Create a resource set
  • Create a backup CR.
  • The backup CR does not show the backup file name

Expected Result:
The Backup CR should show the name of the backup file created in S3.
Recurring backups should also have the file name in the CR.

Other details that may be helpful:

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head commit id: 5e1b21b931
  • Installation option (single install/HA): HA

Migration error: ServiceAccount "rancher" in namespace "cattle-system" exists and cannot be imported

Process:

  • deploy Rancher 2.5/latest to AKS, configure/schedule backups
  • configure Authentication, Cloud Creds, Node Templates, create cluster with node template.
  • Delete entire AKS cluster/rancher
  • Create new AKS exactly as it was (Terraform) and recover rancher

Issue: After the cluster is restored, it has NO retention of the configured Authentication, Cloud Creds, Node Templates, or the cluster that was created (the cluster is fine, but Rancher has no idea it even exists).

Expected behavior: After restoring the cluster, I expect it to look like it did when I took the backup, including retention of the configured Authentication, Cloud Creds, Node Templates, and the cluster that was created.


Useful Info
Versions: Rancher v2.5.1, UI: v2.5.0
Route: undefined

RFE: Different backup file names for one-time backup and recurring backups

What kind of request is this (question/bug/enhancement/feature request): enhancement

Steps to reproduce (least amount of steps as possible):

  • Deploy backup-restore chart on master-head
  • Create a resource set
  • Create a backup CR.
  • Currently, one-time and recurring backups do not differ in the names of the files generated and uploaded to S3.

Like RKE does, we could have a way to differentiate between one-time and recurring backups.

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head - commit id: 5e1b21b931
  • Installation option (single install/HA): HA
