kanisterio / kanister

An extensible framework for application-level data management on Kubernetes

Home Page: https://kanister.io

License: Apache License 2.0

Makefile 0.33% Shell 2.16% Go 96.30% Dockerfile 0.54% Mustache 0.32% Python 0.02% TypeScript 0.08% CSS 0.17% JavaScript 0.08%
data-protection golang kubernetes cloud-native operator

kanister's Introduction

Kanister Logo

Kanister

Go Report Card GitHub Actions

OpenSSF Best Practices OpenSSF Scorecard

Kanister is a data protection workflow management tool. It provides a set of cohesive APIs for defining and curating data operations by abstracting away tedious details around executing data operations on Kubernetes. It's extensible and easy to install, operate and scale.

Highlights

Kubernetes centric - Kanister's APIs are implemented as Custom Resource Definitions that conform to Kubernetes' declarative management, security and distribution models.

Storage agnostic - Kanister allows you to efficiently and securely transfer backup data between your services and the object storage service of your choice. Use Kanister to backup, restore, and copy your data using your storage's APIs, and Kanister won't get in the way.

Asynchronous or synchronous task execution - Kanister can schedule your data operations to run asynchronously in dedicated job pods, or synchronously via the Kubernetes apimachinery ExecStream framework.

Re-usable workflow artifacts - A Kanister blueprint can be re-used across multiple workflows to protect different environment deployments.

Extensible, atomic data operation functions - Kanister provides a collection of easy-to-use data operation functions that you can add to your blueprint to express detailed backup and restore operation steps, including pre-backup scaling down of replicas, working with all mounted volumes in a pod, etc.

Secured via RBAC - Prevent unauthorized access to your workflows via Kubernetes' role-based access control (RBAC) model.

Observability - Kanister exposes logs, events and metrics to popular observability tools like Prometheus, Grafana and Loki to provide you with operational insights into your data protection workflows.

Quickstart

Follow the instructions in the installation documentation to install Kanister on your Kubernetes cluster.

Walk through the tutorial to define, curate and run your first data protection workflow using Kanister blueprints, actionsets and profiles.

The examples directory contains many sample blueprints that you can use to define data operations for popular data services.

The Kanister architecture is documented here.

Getting Help

If you have any questions or run into issues, feel free to reach out to us on Slack.

GitHub issues or pull requests that have been inactive for more than 60 days will be labeled as stale. If they remain inactive for another 30 days, they will be automatically closed. To be exempted from the issue lifecycle, discuss the reasons behind the exemption with a maintainer, and add the frozen label to the issue or pull request.

If you discover any security issues, refer to our SECURITY.md documentation for our security policy, including steps on how to report vulnerabilities.

Community

The Kanister community meetings happen once every two weeks on Thursday, 16:00 UTC, where we discuss interesting ongoing features, issues, and pull requests. Come join us! Everyone is welcome! 🙌 (Zoom link is pinned on Slack)

If you are currently using Kanister, we would love to hear about it! Feel free to add your organization to the ADOPTERS.md by submitting a pull request.

Code of Conduct

Kanister is for everyone. We ask that our users and contributors take a few minutes to review our Code of Conduct.

Contributing to Kanister

We welcome contributions to Kanister! If you're interested in getting involved, please take a look at our guidelines:

  • BUILD.md: Contains detailed instructions on how to build and test Kanister locally or within a CI/CD pipeline. Refer to this guide if you want to make changes to Kanister's codebase.

  • CONTRIBUTING.md: Provides essential information on how to contribute code, documentation, or bug reports, as well as our coding style and commit message conventions.

License

Apache License 2.0, see LICENSE.

kanister's Issues

The controller should fail if its RBAC is misconfigured

The controller tries to access blueprints and actionsets using the watcher callbacks. If there is an access issue, these watcher goroutines will go into an infinite fail loop.

For example:

kubectl logs myrelease-kanister-operator-77dcff8694-5sz5d
time="2018-03-27T18:35:12Z" level=info msg="Getting kubernetes context"
E0327 18:35:13.178643       1 reflector.go:205] github.com/kanisterio/kanister/vendor/github.com/rook/operator-kit/watcher.go:76: Failed to list *v1alpha1.Blueprint: the server could not find the requested resource (get blueprints.cr.kanister.io)
E0327 18:35:13.178720       1 reflector.go:205] github.com/kanisterio/kanister/vendor/github.com/rook/operator-kit/watcher.go:76: Failed to list *v1alpha1.ActionSet: the server could not find the requested resource (get actionsets.cr.kanister.io)

We should check that we have access to these resources synchronously at startup, and shut down the controller if we cannot get actionsets.

Surface ActionSet errors using Status or Events

ActionSet controller failures are currently surfaced by marking the ActionSet status failed. Detailed information is only available by viewing controller logs.

We should surface detailed error information by adding an error-detail field to the ActionSet schema and/or using events.

Please add an example ActionSet with Profile field

I didn't find any documentation with an example of an ActionSet with a Profile.
It took some time to find that the Profile needs to be specified for each action, and that it's a Kubernetes object reference like this one:

spec:
  actions:
  - name: backup
    blueprint: test-bp
    object:
      kind: Deployment
      name: my-test
      namespace: default
    profile:
      name: default-profile
      namespace: kanister

Kanister does not find Secret without "kind" in reference

While trying out Kanister on my local machine with Docker Desktop for Windows, I stumbled over the following issue:
when omitting kind: Secret in the object reference of the postgres secret, I get the following error (with the "kind" key and value it works totally fine):

Failed to execute phase: v1alpha1.Phase{Name:"takeDatabaseBackup", State:"pending", Output:map[string]interface {}(nil)}: template: config:16:29: executing "config" at <.Secrets.postgres.Da...>: map has no entry for key "postgres"
github.com/kanisterio/kanister/pkg/param.renderStringArg
	/go/src/github.com/kanisterio/kanister/pkg/param/render.go:101
github.com/kanisterio/kanister/pkg/param.render
	/go/src/github.com/kanisterio/kanister/pkg/param/render.go:32
github.com/kanisterio/kanister/pkg/param.render
	/go/src/github.com/kanisterio/kanister/pkg/param/render.go:36
github.com/kanisterio/kanister/pkg/param.RenderArgs
	/go/src/github.com/kanisterio/kanister/pkg/param/render.go:18
github.com/kanisterio/kanister/pkg.(*Phase).Exec
	/go/src/github.com/kanisterio/kanister/pkg/phase.go:44
github.com/kanisterio/kanister/pkg/controller.(*Controller).runAction.func1
	/go/src/github.com/kanisterio/kanister/pkg/controller/controller.go:382
github.com/kanisterio/kanister/vendor/gopkg.in/tomb%2ev2.(*Tomb).run
	/go/src/github.com/kanisterio/kanister/vendor/gopkg.in/tomb.v2/tomb.go:163
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1333

Cutout of the objects I deployed:

apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
        ...
        command:
          - bash
          - -o
          - errexit
          - -c
          - |
...
            PGPASSWORD="{{ index .Secrets.postgres.Data "postgresql-password" | toString }}"
...
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
...
    secrets:
      postgres:
        # kind: Secret - works with this line
        name: postgres-secret
        namespace: default

Software Versions

Kanister 0.20.0
Docker Desktop 2.1.0.1
Kubernetes 1.14.3

comparison to other tools

Hi,

I just stumbled upon Kanister by chance and skimmed its documentation.
While doing so, I asked myself a couple of times how Kanister fits in with other tools.
In particular, it looks to me like Kanister has quite some overlap with KubeDB and Argo (it does a bit of both).
Could you clarify - perhaps in the README - how Kanister compares to such tools?

Prefer goreleaser for non-release images.

The current Dockerfile.In templating approach is obsoleted by goreleaser. We should use the following instead:

goreleaser --debug release --snapshot

This can replace much of the logic in the Makefile as well as build/package.sh and build/release_kanctl.sh.

[BUG] Kanister Settings will not be deleted on helm delete

Describe the bug
I installed Kasten by
helm install kasten/k10 --name=k10 --tiller-namespace=kasten-poc --set persistence.storageClass=sc-kasten-poc

The ReclaimPolicy of 'sc-kasten-poc' is 'Delete'.
I created a Mobility Profile 'export'.
After a
helm delete --purge k10 --tiller-namespace=kasten-poc
the Mobility Profile 'export' is still available.

The real problem is that I have an invalid Kanister Profile. If I try to "Create Kanister Profile", I cannot save it, as I see the error "Failed to parse profile validation error object, A profile with this name already exists." in the logs.
The Mobility Profile is just an easy way to check whether state remains after a reinstall.

To Reproduce
Steps to reproduce the behavior:

  1. install k10 via helm, using a storage class with ReclaimPolicy 'Delete'
  2. create Mobility Profile
  3. do a helm delete --purge
  4. go to the Mobility Profiles

Expected behavior
All state should be deleted.

Set `Object` in TemplateParams with unstructured content for well-known types

Kanister operates on the granularity of an Object. As of the current release, the well-known Object types are Deployment, StatefulSet, PersistentVolumeClaim, and Namespace.

TemplateParams->Object includes the unstructured representation of the underlying Kubernetes object, but this is not set for well-known types. Always setting it will allow Blueprint authors to reference fields within the Kubernetes object that Kanister does not hoist into TemplateParams.

e.g. to access a helm release label, a blueprint author can use .Object.metadata.labels.release
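The idea can be illustrated with Go's text/template: once Object holds the unstructured content, the template can reach any field of the Kubernetes object. The TemplateParams struct and render helper below are simplified stand-ins for Kanister's real types:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// TemplateParams is a stand-in for Kanister's params struct; Object holds
// the unstructured (map-based) representation of the Kubernetes object.
type TemplateParams struct {
	Object map[string]interface{}
}

// render executes a blueprint-style template against the params.
func render(tmpl string, tp TemplateParams) (string, error) {
	t, err := template.New("config").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, tp); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Unstructured content of a Deployment carrying a helm release label.
	tp := TemplateParams{Object: map[string]interface{}{
		"metadata": map[string]interface{}{
			"labels": map[string]interface{}{"release": "my-release"},
		},
	}}
	out, _ := render("{{ .Object.metadata.labels.release }}", tp)
	fmt.Println(out) // my-release
}
```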

default-profile still needs to be specified in ActionSet

I've created a Kanister profile via helm: helm install kanister/profile --name kanister-aws --namespace=kanister --set defaultProfile=false,profileName=kan-aws
This command created a default-profile.
But the ActionSet failed with the following error:

Cannot execute action without a profile. Specify a profile in the action set
github.com/kanisterio/kanister/pkg/param.fetchProfile
	/root/src/github.com/kastenhq/k10/go/src/github.com/kanisterio/kanister/pkg/param/param.go:168
github.com/kanisterio/kanister/pkg/param.New
	/root/src/github.com/kastenhq/k10/go/src/github.com/kanisterio/kanister/pkg/param/param.go:116
github.com/kanisterio/kanister/pkg/controller.(*Controller).runAction
	/root/src/github.com/kastenhq/k10/go/src/github.com/kanisterio/kanister/pkg/controller/controller.go:348
github.com/kanisterio/kanister/pkg/controller.(*Controller).handleActionSet
	/root/src/github.com/kastenhq/k10/go/src/github.com/kanisterio/kanister/pkg/controller/controller.go:323
github.com/kanisterio/kanister/pkg/controller.(*Controller).onAddActionSet
	/root/src/github.com/kastenhq/k10/go/src/github.com/kanisterio/kanister/pkg/controller/controller.go:192
github.com/kanisterio/kanister/pkg/controller.(*Controller).onAdd
	/root/src/github.com/kastenhq/k10/go/src/github.com/kanisterio/kanister/pkg/controller/controller.go:127
github.com/kanisterio/kanister/pkg/controller.(*Controller).onAdd-fm
	/root/src/github.com/kastenhq/k10/go/src/github.com/kanisterio/kanister/pkg/controller/controller.go:87
vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd
	/root/src/github.com/kastenhq/k10/go/src/vendor/k8s.io/client-go/tools/cache/controller.go:195
vendor/k8s.io/client-go/tools/cache.(*ResourceEventHandlerFuncs).OnAdd
	<autogenerated>:1
vendor/k8s.io/client-go/tools/cache.NewInformer.func1
	/root/src/github.com/kastenhq/k10/go/src/vendor/k8s.io/client-go/tools/cache/controller.go:314
vendor/k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop
	/root/src/github.com/kastenhq/k10/go/src/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:444
vendor/k8s.io/client-go/tools/cache.(*controller).processLoop
	/root/src/github.com/kastenhq/k10/go/src/vendor/k8s.io/client-go/tools/cache/controller.go:150
vendor/k8s.io/client-go/tools/cache.(*controller).processLoop-fm
	/root/src/github.com/kastenhq/k10/go/src/vendor/k8s.io/client-go/tools/cache/controller.go:124
vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/root/src/github.com/kastenhq/k10/go/src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/root/src/github.com/kastenhq/k10/go/src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
vendor/k8s.io/apimachinery/pkg/util/wait.Until
	/root/src/github.com/kastenhq/k10/go/src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
vendor/k8s.io/client-go/tools/cache.(*controller).Run
	/root/src/github.com/kastenhq/k10/go/src/vendor/k8s.io/client-go/tools/cache/controller.go:124
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:133

Everything works fine with:

  - name: backup
    blueprint: my-bp
    object:
      kind: Deployment
      name: my-test
      namespace: default
    profile:
      name: default-profile
      namespace: kanister

But I would expect the default profile to be used when a profile is needed, without having to specify it explicitly.

Continuous logs from spawned commands

KubeTask and KubeExec currently only print logs after completion, and for failures they don't print anything. This makes it difficult to determine what happened during execution.

Structured fields support for logging

In order to facilitate debugging and troubleshooting, it is desirable to include contextual information in errors and log messages in a structured manner. In particular, it is desirable to preserve semantic information about the values of request parameters or other variables in a way that makes it easy to later extract the original semantic meaning without having to perform log parsing and inference. At a high level, imagine if log messages were structured records (JSON documents) instead of simply arbitrary strings.

As a first step, it is desirable to have a simple mechanism and API that allows capturing the semantic boundaries of different fields and data types pushed through logging, or attached to errors for that matter.

One potential approach to address this issue is to include, in both logs and errors, a collection (think list) of (key, value) tuples or fields. It would be desirable to be able to attach these fields to a context.Context and extract them at logging time.

[BUG] installing kanctl results in an error

Describe the bug
Running the command $ go install -v github.com/kanisterio/kanister/cmd/kanctl to install kanctl results in the error message below:

go: github.com/kanisterio/[email protected] requires
	github.com/graymeta/[email protected]: invalid version: unknown revision 000000000000

To Reproduce
Just try to install kanctl using the command go install -v github.com/kanisterio/kanister/cmd/kanctl.

Expected behavior
kanctl should be installed correctly.

Additional context
If I run go install -v github.com/kanisterio/kanister/cmd/kanctl after cloning the kanister repo, it seems to work fine.

I have the Go version below installed on my machine:

go version go1.13.4 linux/amd64

Update for Kubernetes 1.9

We already use client-go 6.0, which supports the new API objects.

We should also upgrade to using the V1 Apps group rather than the current V1beta1.

Update comment for resourceMatcher.TypeMatcher()

// The `usageExclusion` flag should be set to true
// if the type matcher will be used as an exclude filter
func (rm ResourceMatcher) TypeMatcher(usageInclusion bool) ResourceTypeMatcher

It looks like the flag is actually usageInclusion, and it should be true when the matcher is used as an include filter, not an exclude filter.

Sample blueprint to snapshot/restore AWS RDS database instances

We should implement a sample blueprint that leverages the AWS API to snapshot/restore AWS RDS databases.

K8s applications that use a managed service such as RDS will typically create a ConfigMap with the relevant configuration (e.g. see https://github.com/kastenhq/pgtest).

This task tracks implementing a blueprint that uses the AWS snapshot/restore functionality to demonstrate how users can leverage Kanister with such applications.

Use constants for Function Names

Is your feature request related to a problem? Please describe.
While writing blueprints in Go source code, it's easy to make typos when writing function names as strings.

Describe the solution you'd like
Having constants to define the function names will avoid potential bugs.
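A minimal sketch of the proposal; the constant identifiers below are hypothetical, though the string values mirror documented Kanister function names:

```go
package main

import "fmt"

// Exported constants for Kanister function names, so Go blueprints can't
// misspell them. Identifier names are illustrative, not Kanister's API.
const (
	KubeTaskFuncName    = "KubeTask"
	KubeExecFuncName    = "KubeExec"
	PrepareDataFuncName = "PrepareData"
	BackupDataFuncName  = "BackupData"
	RestoreDataFuncName = "RestoreData"
)

func main() {
	// A blueprint phase can reference the constant instead of a raw string,
	// turning a typo into a compile error instead of a runtime failure.
	fmt.Println("phase func:", BackupDataFuncName)
}
```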

Scrubbing confidential data

This project looks very promising - I came across it while reviewing KubeCon CFPs. I have a few initial questions for you:

  • How are you performing scrubbing of confidential data such as payment information?
  • What DB back-ends are supported and what is in the roadmap for the future?
  • Do DB backups have to be stored on S3 or are other back-ends supported?

Cheers,

Alex

Refactor codebase to use helm3

Is your feature request related to a problem? Please describe.
We should refactor the codebase to use helm3 in the build and test scripts.

Higher-level changes we need to make:

  • Remove make tiller target (since helm 3 is tillerless)
  • Use helm3 in Travis CI
  • Refactor release_helm.sh
  • Add a step in the install docs to create the namespace before installing charts, since helm3 doesn't create the namespace if it's not present (https://docs.kanister.io/install.html#deploying-via-helm)

Kanister binaries work only inside the Alpine image

I faced an issue running kando inside a Debian-based image,
and digging further I found this one: rust-lang/rust#40049

ldd from alpine

22cf735ce9ad:/# ldd $(which kando)
	/lib/ld-musl-x86_64.so.1 (0x7fe89c655000)
	libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7fe89c655000)

ldd from debian:

root@c684e1008dc1:/# ldd /usr/local/bin/kando
	linux-vdso.so.1 (0x00007ffebfbfe000)
	libc.musl-x86_64.so.1 => not found

As a workaround, I installed musl-dev on Debian.

Kanister functions should have a way to configure ImagePullPolicy

As of now, there is no way to set imagePullPolicy for Kanister functions like KubeTask or PrepareData that take a container image. The imagePullPolicy is set to "Always" by default. It would save a lot of time if we could configure this, especially when the image is considerably large.

Kando can't process documented profile schema

When running kando in version 0.21.0, it can't read the profile in the documented schema.
Error message:

ERRO[0000] json: cannot unmarshal object into Go struct field KeyPair.Secret of type string
failed to unmarshal profile
github.com/kanisterio/kanister/pkg/kando.unmarshalProfileFlag
        /go/src/github.com/kanisterio/kanister/pkg/kando/location.go:53
github.com/kanisterio/kanister/pkg/kando.runLocationPush
        /go/src/github.com/kanisterio/kanister/pkg/kando/location_push.go:48
github.com/kanisterio/kanister/pkg/kando.newLocationPushCommand.func1
        /go/src/github.com/kanisterio/kanister/pkg/kando/location_push.go:36
github.com/spf13/cobra.(*Command).execute
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:762
github.com/spf13/cobra.(*Command).ExecuteC
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:852
github.com/spf13/cobra.(*Command).Execute
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:800
github.com/kanisterio/kanister/pkg/kando.Execute
        /go/src/github.com/kanisterio/kanister/pkg/kando/kando.go:31
main.main
        /go/src/github.com/kanisterio/kanister/cmd/kando/main.go:22
runtime.main
        /usr/local/go/src/runtime/proc.go:200
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1337

The JSON I try to call kando with:

{
    "apiVersion": "cr.kanister.io/v1alpha1",
    "credential": {
        "keyPair": {
            "ID": "access_key_id",
            "secret": {
                "apiVersion": "v1",
                "name": "default-profile-creds",
                "namespace": "kanister"
            },
            "secretField": "secret_access_key"
        },
        "type": "keyPair"
    },
    "kind": "Profile",
    "location": {
        "bucket": "my_bucket",
        "endpoint": "https://example.com",
        "prefix": null,
        "region": null,
        "type": "s3Compliant"
    },
    "metadata": {
        "labels": {
            "app": "profile",
            "chart": "profile-0.20.0",
            "heritage": "Tiller",
            "release": "kanister-default-profile"
        },
        "name": "default-profile",
        "namespace": "kanister"
    },
    "skipSSLVerify": false
}

When I modify the JSON to the following, it works:

{
    "apiVersion": "cr.kanister.io/v1alpha1",
    "credential": {
        "keyPair": {
            "id": "xxxxxxxxx",
            "secret": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        },
        "type": "keyPair"
    },
    "kind": "Profile",
    "location": {
        "bucket": "my_bucket",
        "endpoint": "https://example.com",
        "prefix": null,
        "region": null,
        "type": "s3Compliant"
    },
    "metadata": {
        "labels": {
            "app": "profile",
            "chart": "profile-0.20.0",
            "heritage": "Tiller",
            "release": "kanister-default-profile"
        },
        "name": "default-profile",
        "namespace": "kanister"
    },
    "skipSSLVerify": false
}

The non-working JSON for the Kubernetes resource was created by the "Profile CustomResource" helm chart contained in the repository.

Software Versions

Kanister 0.21.0
Kubernetes 1.14.6

Helm install of Kanister can't create events

The current helm chart for Kanister grants the kanister-operator SA permissions in the default edit permission group, which lacks the ability to create events. Giving the SA the default admin permission group would most likely resolve this issue. To accomplish this, the user would have to create a second SA for Tiller with at least admin permissions. This is required because the Kubernetes API prevents privilege escalation and, by default, Tiller runs with a privilege group that does not have admin rights. Setting up a new SA for Tiller, binding it to the admin permission group, and running helm init ... should configure the system properly.

This example, or any of the following ones, should be sufficient to properly configure Tiller.

Installation instructions inconsistent between README and docs

README:

Kanister Installation
Kanister is based on the operator pattern. The first step to using Kanister is to deploy the Kanister controller:

$ git clone git@github.com:kanisterio/kanister.git

# Install Kanister operator controller
$ kubectl apply -f bundle.yaml

docs:


This will install the controller in the default namespace:

# install Kanister operator controller using helm
$ helm install stable/kanister-operator

Support Cloud Object Storage

A common use case for Kanister is to send and retrieve data from Google Cloud Storage or AWS S3. This can currently be achieved by using a cloud provider's command-line tool from within a container. This mechanism may be error-prone and could instead be handled by Kanister.

Kanister has an artifact type called CloudObject. We should add Kanister Functions that support interacting with this type of Object.

[BUG] Unable to install Kanister tools (kanctl/kando) from latest release - 0.21.0

Describe the bug
Not able to install the Kanister tools - kanctl and kando - using the latest release binary.
Fails on both Linux and Mac OS X.

To Reproduce
Steps to reproduce the behavior:
Run the following command:
curl https://raw.githubusercontent.com/kanisterio/kanister/master/scripts/get.sh | bash

Expected behavior
kanctl and kando to be installed under /usr/local/bin

Screenshots
The script stops running in the function downloadFile().

Checking hash of kanister_0.21.0_darwin_amd64.tar.gz
+ pushd /var/folders/pt/sds0k5nx2wnbl_ypg10ljhp80000gn/T/kanister-installer-XXXXXX.OSrAlUqk
/var/folders/pt/sds0k5nx2wnbl_ypg10ljhp80000gn/T/kanister-installer-XXXXXX.OSrAlUqk ~/Work/Kasten/k10
+ local filtered_checksum=./kanister_0.21.0_darwin_amd64.tar.gz.sha256
+ grep kanister_0.21.0_darwin_amd64.tar.gz
pdevaraj @ ~/Work $

Additional context
From my initial debugging, the checksum.txt file in the release tarball seems incorrect. The sizes of the binaries themselves seem very different from previous releases.

Relax check for `Profile`

runAction checks to ensure the ActionSet specifies a Profile reference. The assumption here is that a remote storage Profile will be required for most actions.

This assumption is likely valid for actions such as backup, restore, and retire - but for other custom workflows, e.g. copying data between two volumes, a remote storage profile is not required.

Suggest relaxing this check. A couple of options (simple->more complex):

  1. Only check for Profile for backup, restore, retire
  2. Introspect the Action spec in the Blueprint to see if Profile is used

I think it's safe to do (1) first and (2) can be a follow-up improvement.
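Option (1) can be sketched as a simple allowlist check. The action names requiring a profile are taken from the issue text; the function name and map are hypothetical, not Kanister's actual code:

```go
package main

import "fmt"

// profileRequired lists the action names that need a remote storage
// Profile (per option (1) in the issue); everything else skips the check.
var profileRequired = map[string]bool{
	"backup":  true,
	"restore": true,
	"retire":  true,
}

// requiresProfile reports whether runAction should insist on a Profile
// reference for the given action name.
func requiresProfile(action string) bool {
	return profileRequired[action]
}

func main() {
	for _, a := range []string{"backup", "copy-volume"} {
		fmt.Printf("%s needs profile: %v\n", a, requiresProfile(a))
	}
}
```

Option (2) would replace the static map with an inspection of the Blueprint's action spec, at the cost of more complexity.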
