
Sidecar container that watches Kubernetes Snapshot CRD objects and triggers CreateSnapshot/DeleteSnapshot against a CSI endpoint.

License: Apache License 2.0


external-snapshotter's Introduction

CSI Snapshotter

The CSI snapshotter is part of the Kubernetes implementation of the Container Storage Interface (CSI).

The volume snapshot feature supports CSI v1.0 and higher. It was introduced as an Alpha feature in Kubernetes v1.12, promoted to Beta in Kubernetes 1.17, and moved to GA in Kubernetes 1.20.

⚠️ WARNING: There is a new validating webhook server which provides tightened validation on snapshot objects. This SHOULD be installed by all users of this feature. More details below.

Overview

With the promotion of Volume Snapshot to GA, the feature is enabled by default on standard Kubernetes deployments and cannot be turned off.

Blog post for the GA feature can be found here

Compatibility

This information reflects the head of this branch.

Minimum CSI Version   Recommended CSI Version   Container Image                                       Min K8s Version   Recommended K8s Version
CSI Spec v1.0.0       CSI Spec v1.5.0           k8s.gcr.io/sig-storage/csi-snapshotter                1.20              1.20
CSI Spec v1.0.0       CSI Spec v1.5.0           k8s.gcr.io/sig-storage/snapshot-controller            1.20              1.20
CSI Spec v1.0.0       CSI Spec v1.5.0           k8s.gcr.io/sig-storage/snapshot-validation-webhook    1.20              1.20

Note: snapshot-controller, snapshot-validation-webhook, and csi-snapshotter v4.1 require the v1 snapshot CRDs to be installed, but they serve both v1 and v1beta1 snapshot objects. The storage version changed from v1beta1 to v1 in 4.1.0, so v1beta1 is deprecated and will be removed in a future release.

Feature Status

The VolumeSnapshotDataSource feature gate was introduced in Kubernetes 1.12 and was enabled by default in Kubernetes 1.17 when the volume snapshot feature was promoted to Beta. Since Kubernetes 1.20, the feature gate is enabled by default on standard Kubernetes deployments and cannot be turned off.

Design

Both the snapshot controller and the CSI external-snapshotter sidecar follow the controller pattern and use informers to watch for events. The snapshot controller watches for VolumeSnapshot and VolumeSnapshotContent create/update/delete events.

The CSI external-snapshotter sidecar only watches for VolumeSnapshotContent create/update/delete events. It keeps only those objects whose associated VolumeSnapshotClass specifies Driver==<CSI driver name>, and processes these events in workqueues with exponential backoff.

The CSI external-snapshotter sidecar talks to the CSI driver over a Unix domain socket (/run/csi/socket by default, configurable with --csi-address).

Snapshot v1 APIs

In the current release, both the v1 and v1beta1 APIs are served, while the stored API version has changed from v1beta1 to v1. The v1beta1 APIs are deprecated and will be removed in a future release, so users should switch to the v1 APIs as soon as possible. Any previously created invalid v1beta1 objects have to be deleted before upgrading to version 4.1.

Usage

The Volume Snapshot feature contains the following components: the Kubernetes volume snapshot CRDs, the volume snapshot controller, the snapshot validation webhook, and the CSI driver along with the CSI snapshotter sidecar.

The Volume Snapshot feature depends on a volume snapshot controller and the volume snapshot CRDs. Both the volume snapshot controller and the CRDs are independent of any CSI driver. The CSI snapshotter sidecar must run once per CSI driver, while a single snapshot controller deployment works for all CSI drivers in a cluster. With leader election configured, the CSI sidecars and the snapshot controller elect one leader per deployment. If deployed with two or more pods and leader election is enabled, the non-leader containers attempt to acquire the lease; if the leader container dies, a non-leader takes over.

Therefore, it is strongly recommended that Kubernetes distributors bundle and deploy the controller and CRDs as part of their Kubernetes cluster management process (independent of any CSI Driver).

If your Kubernetes distribution does not bundle the snapshot controller, you may manually install these components by executing the following steps. Note that the snapshot controller YAML files in the git repository deploy into the default namespace for system testing purposes. For general use, update the snapshot controller YAMLs with an appropriate namespace prior to installing. For example, on a Vanilla Kubernetes cluster update the namespace from 'default' to 'kube-system' prior to issuing the kubectl create command.

There is a new validating webhook server which provides tightened validation on snapshot objects. The cluster admin or Kubernetes distribution admin should install the webhook alongside the snapshot controllers and CRDs. More details below.

Install Snapshot CRDs:
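
A minimal sketch of this step, assuming the CRD manifests are published under client/config/crd in this repository (verify the path for your release branch); do this once per cluster:

  kubectl kustomize client/config/crd | kubectl create -f -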

Install Common Snapshot Controller:

  • Update the namespace to an appropriate value for your environment (e.g. kube-system)
  • kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
  • Do this once per cluster (a sample of the resulting Deployment fragment is shown below)
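
For reference, a sketch of the snapshot-controller Deployment fragment that this produces, assuming two replicas with leader election enabled; the image tag is illustrative and the manifests under deploy/kubernetes/snapshot-controller are the source of truth:

  spec:
    replicas: 2
    template:
      spec:
        containers:
          - name: snapshot-controller
            image: k8s.gcr.io/sig-storage/snapshot-controller:v4.1.0   # illustrative tag
            args:
              - "--v=5"
              - "--leader-election=true"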

Install CSI Driver:
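
CSI driver installation is vendor-specific. As an illustration only (image tag, socket path, and volume name are assumptions), the csi-snapshotter sidecar is typically added as an extra container in the CSI driver's controller pod:

          - name: csi-snapshotter
            image: k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.0   # illustrative tag
            args:
              - "--csi-address=/run/csi/socket"   # matches the sidecar's default socket path
              - "--leader-election=true"
            volumeMounts:
              - name: socket-dir                  # assumed volume shared with the CSI driver container
                mountPath: /run/csi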

Validating Webhook

The snapshot validating webhook is an HTTP callback which responds to admission requests. It is part of a larger plan to tighten validation for volume snapshot objects, and it introduces a ratcheting validation mechanism working toward that tighter validation. The cluster admin or Kubernetes distribution admin should install the webhook alongside the snapshot controllers and CRDs.

Along with the validating webhook, the volume snapshot controller starts labeling invalid snapshot objects that already exist. This enables quick identification of invalid snapshot objects in the system by running:

kubectl get volumesnapshots --selector=snapshot.storage.kubernetes.io/invalid-snapshot-resource=""
kubectl get volumesnapshotcontents --selector=snapshot.storage.kubernetes.io/invalid-snapshot-content-resource=""

Users should run these commands to identify invalid objects, remove them, and correct their workflows before upgrading to v1. Once the API has been switched to the v1 type, those invalid objects will no longer be deletable from the system.

If there are no existing invalid v1beta1 objects, after upgrading to v1, the webhook and schema validation will prevent the user from creating new invalid v1 and v1beta1 objects.

If there are existing invalid v1beta1 objects, the user should make sure that the snapshot controller is upgraded to v3.0.0 or higher (v3.0.3 is the latest recommended v3.0.x release) and that the corresponding validation webhook is installed before upgrading to v1, so that those invalid objects are labeled and can be easily identified and removed.

If there are existing invalid v1beta1 objects, and the user didn't upgrade to the snapshot controller 3.0.0 or higher and install the corresponding validation webhook before upgrading to v1, those existing invalid v1beta1 objects will not be labeled by the snapshot controller.

So the recommendation is that, before upgrading to the v1 CRDs and upgrading the snapshot controller and validation webhook to v4.0, the user should upgrade the snapshot controller to v3.0.0 or higher (v3.0.3 is the latest recommended 3.0.x release) and install the corresponding validation webhook, so that all existing invalid objects are labeled and can be easily identified and deleted.

⚠️ WARNING: Cluster admins choosing not to install the webhook server and participate in the phased release process can run into problems when upgrading from the v1beta1 to the v1 VolumeSnapshot API if objects that fail the new, stricter validation are currently persisted. Potential impacts include being unable to delete invalid snapshot objects.

Read more about how to install the example webhook here.

Validating Webhook Command Line Options

  • --tls-cert-file: File containing the x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). Required.

  • --tls-private-key-file: File containing the x509 private key matching --tls-cert-file. Required.

  • --port: Secure port that the webhook listens on (default 443)

  • --kubeconfig <path>: Path to Kubernetes client configuration that the webhook uses to connect to the Kubernetes API server. When omitted, the default token provided by Kubernetes will be used. This option is useful only when the webhook does not run as a Kubernetes pod, e.g. for debugging.

  • --prevent-volume-mode-conversion: Boolean that prevents an unauthorised user from modifying the volume mode when creating a PVC from an existing VolumeSnapshot. Introduced as an alpha feature in v6.0.0; having graduated to beta, it now defaults to true.

Validating Webhook Validations

Volume Snapshot
  • Spec.VolumeSnapshotClassName must not be an empty string or nil on creation
  • Spec.Source.PersistentVolumeClaimName must not be changed on update requests
  • Spec.Source.VolumeSnapshotContentName must not be changed on update requests
Volume Snapshot Content
  • Spec.VolumeSnapshotRef.Name must not be an empty string on creation
  • Spec.VolumeSnapshotRef.Namespace must not be an empty string on creation
  • Spec.Source.VolumeHandle must not be changed on update requests
  • Spec.Source.SnapshotHandle must not be changed on update requests
  • Spec.SourceVolumeMode must not be changed on update requests
Volume Snapshot Classes
  • There can only be a single default volume snapshot class for a particular driver (see the example below).
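
For reference, a VolumeSnapshotClass is marked as the default for its driver with the snapshot.storage.kubernetes.io/is-default-class annotation. A minimal sketch, in which the class name, driver name, and deletion policy are illustrative:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-example-snapclass              # illustrative name
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: example.csi.k8s.io                 # illustrative driver name
deletionPolicy: Delete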

Distributed Snapshotting

The distributed snapshotting feature handles snapshot operations for local volumes. To use this functionality, the snapshotter sidecar should be deployed along with the CSI driver on each node so that every node manages the snapshot operations only for the volumes local to that node. This feature is enabled by setting the following command line options to true:

Snapshot controller option

  • --enable-distributed-snapshotting: This option lets the snapshot controller know that distributed snapshotting is enabled and the snapshotter sidecar will be running on each node. Off by default.

CSI external snapshotter sidecar option

  • --node-deployment: Enables the snapshotter sidecar to handle snapshot operations for the volumes local to the node on which it is deployed. Off by default.

In addition, the NODE_NAME environment variable must be set in the pod where the CSI snapshotter sidecar is deployed. Its value should be the name of the node where the sidecar is running (see the sketch below).
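
A minimal sketch of the sidecar configuration for distributed snapshotting, with NODE_NAME populated via the downward API; the image tag and container layout are illustrative:

          - name: csi-snapshotter
            image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.0.0   # illustrative tag
            args:
              - "--csi-address=/run/csi/socket"
              - "--node-deployment=true"
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName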

Snapshot controller command line options

Important optional arguments that are highly recommended to be used

  • --leader-election: Enables leader election. This is useful when there are multiple replicas of the same snapshot controller running for the same Kubernetes deployment. Only one of them may be active (=leader). A new leader will be re-elected when current leader dies or becomes unresponsive for ~15 seconds.

  • --leader-election-namespace <namespace>: The namespace where the leader election resource exists. Defaults to the pod namespace if not set.

  • --leader-election-lease-duration <duration>: Duration, in seconds, that non-leader candidates will wait to force acquire leadership. Defaults to 15 seconds.

  • --leader-election-renew-deadline <duration>: Duration, in seconds, that the acting leader will retry refreshing leadership before giving up. Defaults to 10 seconds.

  • --leader-election-retry-period <duration>: Duration, in seconds, the LeaderElector clients should wait between tries of actions. Defaults to 5 seconds.

  • --kube-api-qps <num>: QPS for clients that communicate with the kubernetes apiserver. Defaults to 5.0.

  • --kube-api-burst <num>: Burst for clients that communicate with the kubernetes apiserver. Defaults to 10.

  • --http-endpoint: The TCP network address where the HTTP server for diagnostics, including metrics and leader election health check, will listen (example: :8080 which corresponds to port 8080 on local host). The default is empty string, which means the server is disabled.

  • --metrics-path: The HTTP path where prometheus metrics will be exposed. Default is /metrics.

  • --worker-threads: Number of worker threads. Default value is 10.

  • --retry-interval-start: Initial retry interval of failed volume snapshot creation or deletion. It doubles with each failure, up to retry-interval-max. Default value is 1 second.

  • --retry-interval-max: Maximum retry interval of failed volume snapshot creation or deletion. Default value is 5 minutes.

  • --retry-crd-interval-max: Maximum retry duration for detecting the snapshot CRDs on controller startup. Default is 30 seconds.

  • --enable-distributed-snapshotting : Enables each node to handle snapshots for the volumes local to that node. Off by default. It should be set to true only if --node-deployment parameter for the csi external snapshotter sidecar is set to true. See https://github.com/kubernetes-csi/external-snapshotter/blob/master/README.md#distributed-snapshotting for details.

  • --prevent-volume-mode-conversion: Boolean that prevents an unauthorised user from modifying the volume mode when creating a PVC from an existing VolumeSnapshot. Introduced as an alpha feature in v6.0.0; having graduated to beta, it now defaults to true.

Other recognized arguments

  • --kubeconfig <path>: Path to Kubernetes client configuration that the snapshot controller uses to connect to Kubernetes API server. When omitted, default token provided by Kubernetes will be used. This option is useful only when the snapshot controller does not run as a Kubernetes pod, e.g. for debugging.

  • --resync-period <duration>: Internal resync interval when the snapshot controller re-evaluates all existing VolumeSnapshot instances and tries to fulfill them, i.e. create / delete corresponding snapshots. It does not affect re-tries of failed calls! It should be used only when there is a bug in Kubernetes watch logic. Default is 15 minutes.

  • --version: Prints current snapshot controller version and quits.

  • All glog / klog arguments are supported, such as -v <log level> or -alsologtostderr.

CSI external snapshotter sidecar command line options

Important optional arguments that are highly recommended to be used

  • --csi-address <path to CSI socket>: This is the path to the CSI driver socket inside the pod that the external-snapshotter container will use to issue CSI operations (/run/csi/socket is used by default).

  • --leader-election: Enables leader election. This is useful when there are multiple replicas of the same external-snapshotter running for one CSI driver. Only one of them may be active (=leader). A new leader will be re-elected when current leader dies or becomes unresponsive for ~15 seconds.

  • --leader-election-namespace <namespace>: The namespace where the leader election resource exists. Defaults to the pod namespace if not set.

  • --leader-election-lease-duration <duration>: Duration, in seconds, that non-leader candidates will wait to force acquire leadership. Defaults to 15 seconds.

  • --leader-election-renew-deadline <duration>: Duration, in seconds, that the acting leader will retry refreshing leadership before giving up. Defaults to 10 seconds.

  • --leader-election-retry-period <duration>: Duration, in seconds, the LeaderElector clients should wait between tries of actions. Defaults to 5 seconds.

  • --kube-api-qps <num>: QPS for clients that communicate with the kubernetes apiserver. Defaults to 5.0.

  • --kube-api-burst <num>: Burst for clients that communicate with the kubernetes apiserver. Defaults to 10.

  • --timeout <duration>: Timeout of all calls to the CSI driver. It should be set to a value that accommodates the majority of CreateSnapshot, DeleteSnapshot, and ListSnapshots calls. 1 minute is used by default.

  • --snapshot-name-prefix: Prefix to apply to the name of a created snapshot. Default is snapshot.

  • --snapshot-name-uuid-length: Length in characters for the generated UUID of a created snapshot. The default behavior is to NOT truncate.

  • --worker-threads: Number of worker threads for running create snapshot and delete snapshot operations. Default value is 10.

  • --node-deployment: Enables deploying the sidecar controller together with a CSI driver on nodes to manage node-local volumes. Off by default. This should be set to true along with the --enable-distributed-snapshotting in the snapshot controller parameters to make use of distributed snapshotting. See https://github.com/kubernetes-csi/external-snapshotter/blob/master/README.md#distributed-snapshotting for details.

  • --retry-interval-start: Initial retry interval of failed volume snapshot creation or deletion. It doubles with each failure, up to retry-interval-max. Default value is 1 second.

  • --retry-interval-max: Maximum retry interval of failed volume snapshot creation or deletion. Default value is 5 minutes.

Other recognized arguments

  • --kubeconfig <path>: Path to Kubernetes client configuration that the CSI external-snapshotter uses to connect to Kubernetes API server. When omitted, default token provided by Kubernetes will be used. This option is useful only when the external-snapshotter does not run as a Kubernetes pod, e.g. for debugging.

  • --resync-period <duration>: Internal resync interval when the CSI external-snapshotter re-evaluates all existing VolumeSnapshotContent instances and tries to fulfill them, i.e. update / delete corresponding snapshots. It does not affect re-tries of failed CSI calls! It should be used only when there is a bug in Kubernetes watch logic. Default is 15 minutes.

  • --version: Prints current CSI external-snapshotter version and quits.

  • All glog / klog arguments are supported, such as -v <log level> or -alsologtostderr.

HTTP endpoint

The external-snapshotter optionally exposes an HTTP endpoint at address:port specified by --http-endpoint argument. When set, these two paths are exposed:

  • Metrics path, as set by --metrics-path argument (default is /metrics).

  • Leader election health check at /healthz/leader-election. It is recommended to run a liveness probe against this endpoint when leader election is used, to kill an external-snapshotter leader that fails to connect to the API server to renew its leadership (a sample probe is sketched below). See kubernetes-csi/csi-lib-utils#66 for details.
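
A sketch of such a liveness probe, assuming the container was started with --leader-election and --http-endpoint=:8080; the port and timings are illustrative:

            args:
              - "--leader-election=true"
              - "--http-endpoint=:8080"
            livenessProbe:
              httpGet:
                path: /healthz/leader-election
                port: 8080
              initialDelaySeconds: 10
              periodSeconds: 20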

Upgrade

Upgrade from v1alpha1 to v1beta1

The change from v1alpha1 to v1beta1 snapshot APIs is not backward compatible.

If you have already deployed v1alpha1 snapshot APIs and external-snapshotter sidecar controller and want to upgrade to v1beta1, you need to do the following:

  • Note: The underlying snapshots on the storage system will be deleted in the upgrade process!!!
  1. Delete volume snapshots created using v1alpha1 snapshot CRDs and external-snapshotter sidecar controller.
  2. Uninstall v1alpha1 snapshot CRDs, external-snapshotter sidecar controller, and CSI driver.
  3. Install v1beta1 snapshot CRDs, snapshot controller, CSI external-snapshotter sidecar and CSI driver.

Upgrade from v1beta1 to v1

The validating webhook should be installed before upgrading to v1. Potential impacts of not installing it before upgrading to v1 include being unable to delete invalid snapshot objects. See the section on the validating webhook for details.

  • When upgrading to 4.0, the change from v1beta1 to v1 is backward compatible because both v1 and v1beta1 are served while the stored API version is still v1beta1. Future releases will switch the stored version to v1 and gradually remove v1beta1 support.
  • When upgrading from 3.x to 4.1, the change from v1beta1 to v1 is no longer backward compatible because the stored API version changes to v1, although both v1 and v1beta1 are still served. v1beta1 is deprecated in 4.1.
  • v1beta1 support will be removed in a future release. It is recommended that users switch to v1 as soon as possible. Any previously created invalid v1beta1 objects have to be deleted before upgrading to version 4.1.

Testing

Running Unit Tests:

go test -timeout 30s  github.com/kubernetes-csi/external-snapshotter/pkg/common-controller

go test -timeout 30s  github.com/kubernetes-csi/external-snapshotter/pkg/sidecar-controller

CRDs and Client Library

Volume snapshot APIs and client library are now in a separate sub-module: github.com/kubernetes-csi/external-snapshotter/client/v4.

Use the command go get github.com/kubernetes-csi/external-snapshotter/client/v4@<version> to get the client library.

Setting Quota limits with Snapshot custom resources

ResourceQuotas are namespaced objects that can be used to set limits on objects of a particular Group.Version.Kind. Before setting a resource quota, make sure that the snapshot CRDs are installed in the cluster; if not, follow the installation steps above.

kubectl get crds | grep snapshot
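
If the CRDs are installed, the output lists the three snapshot CRDs, along these lines (the timestamps are illustrative):

volumesnapshotclasses.snapshot.storage.k8s.io    2021-01-01T00:00:00Z
volumesnapshotcontents.snapshot.storage.k8s.io   2021-01-01T00:00:00Z
volumesnapshots.snapshot.storage.k8s.io          2021-01-01T00:00:00Z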

Now create a ResourceQuota object which sets the limits on number of volumesnapshots that can be created:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: snapshot-quota
spec:
  hard:
    count/volumesnapshots.snapshot.storage.k8s.io: "10"

If you try to create more snapshots than allowed, you will see an error like the following:

Error from server (Forbidden): error when creating "csi-snapshot.yaml": volumesnapshots.snapshot.storage.k8s.io "new-snapshot-demo" is forbidden: exceeded quota: snapshot-quota, requested: count/volumesnapshots.snapshot.storage.k8s.io=1, used: count/volumesnapshots.snapshot.storage.k8s.io=10, limited: count/volumesnapshots.snapshot.storage.k8s.io=10

Dependency Management

external-snapshotter uses Go modules.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.


external-snapshotter's Issues

Remove snapshotter argument

There's no reason for snapshotter name to be different from the plugin name. We should remove this argument and make it match the plugin name.
/help

external-snapshotter should not allow annotation in template for snapshotter-secret-name

external-snapshotter supports ${volumesnapshot.annotations['ANNOTATION_KEY']} as a template for csi.storage.k8s.io/snapshotter-secret-name. It should not. It is ok for all other secrets (e.g. ListSnapshot secret) but not for the CreateSnapshot secret.

// supported tokens for name resolution:
// - ${volumesnapshotcontent.name}
// - ${volumesnapshot.namespace}
// - ${volumesnapshot.name}
// - ${volumesnapshot.annotations['ANNOTATION_KEY']} (e.g. ${pvc.annotations['example.com/snapshot-create-secret-name']})

CC @msau42 @jingxu97 @xing-yang

external-snapshotter start failed, because of the nil pointer

storageclassName := *pvc.Spec.StorageClassName

The PVC YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: test-sc
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30G

If we set the storage class with the metadata.annotations volume.beta.kubernetes.io/storage-class annotation rather than spec.storageClassName, the external-snapshotter sidecar throws a nil pointer error.

Update client-go to kubernetes-1.16.3

We use VolumeSnapshot CRD api and clients in our Stash project. We are trying to use the 1.16.3 client libraries. But we are blocked on this project, since it uses 1.14 client-go. Can you please update the client-go to 1.16.3 ?

Thanks.

Add pvcLister to snapshot controller

We should add pvcLister to the snapshot controller, and after that changes should be made to snapshot controller to use pvcLister instead of using the client. This is more efficient.

This was brought up during code reviews of #39

Problems dealing with snapshot create requests timing out

When a CSI plugin is passed a CreateSnapshot request and the caller (snapshotter sidecar) times out, the snapshotter sidecar marks this as an error and does not retry the snapshot. Further, as the call only timed out and did not fail, the storage provider may have actually created the said snapshot (although delayed).

When such a snapshot is deleted, no request is made to the CSI plugin to delete it, and the sidecar cannot issue one because it does not have the SnapID.

The end result of this is that the snapshot is leaked on the storage provider.

The question/issue hence is as follows,

Should the snapshot be retried on timeouts from the CreateSnapshot call?

Based on the ready_to_use parameter in the CSI spec [1] and possibilities of application freeze as the snapshot is taken, I would assume this operation cannot be done indefinitely. But, also as per the spec timeout errors, the behavior should be a retry, as implemented for volume create and delete operations in the provisioner sidecar [2].

So to fix the potential snapshot leak by the storage provider, should the snapshotter sidecar retry till it gets an error from the plugin or a success with a SnapID, but mark the snapshot as bad/unusable as it was not completed in time (to honor the application freeze times and such)?

[1] CSI spec ready_to_use section: https://github.com/container-storage-interface/spec/blob/master/spec.md#the-ready_to_use-parameter

[2] timeout handling in provisioner sidecar: https://github.com/kubernetes-csi/external-provisioner#csi-error-and-timeout-handling

Take advantage of `csi_secret` in CSI 1.0

CSI 1.0 decorates sensitive fields with csi_secret. Let's take advantage of this feature to programmatically ensure no sensitive fields are ever logged by this sidecar container.

Update Snapshot CRD version to v1beta1

Change in-tree feature gate to Beta and enable by default; Increment Snapshot CRD version to v1beta1; Update e2e tests snapshot CRD version to v1beta1

"dep ensure -add" does not work

I may miss out something simple, but I couldn't import this package via dep ensure -add github.com/kubernetes-csi/external-snapshotter command.

Following is the output:

Fetching sources...

Solving failure: No versions of github.com/kubernetes-csi/external-snapshotter met constraints:
	v1.0.1: Could not introduce github.com/kubernetes-csi/external-snapshotter@v1.0.1, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	v0.4.1: Could not introduce github.com/kubernetes-csi/external-snapshotter@v0.4.1, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	v0.4.0: Could not introduce github.com/kubernetes-csi/external-snapshotter@v0.4.0, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	v1.0.1-rc1: Could not introduce github.com/kubernetes-csi/external-snapshotter@v1.0.1-rc1, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	v1.0.0-rc4: Could not introduce github.com/kubernetes-csi/external-snapshotter@v1.0.0-rc4, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	v1.0.0-rc3: Could not introduce github.com/kubernetes-csi/external-snapshotter@v1.0.0-rc3, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	v1.0.0-rc2: Could not introduce github.com/kubernetes-csi/external-snapshotter@v1.0.0-rc2, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	v0.5.0-alpha.0: Could not introduce github.com/kubernetes-csi/external-snapshotter@v0.5.0-alpha.0, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	v0.4.0-rc.1: Could not introduce github.com/kubernetes-csi/external-snapshotter@v0.4.0-rc.1, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	master: Could not introduce github.com/kubernetes-csi/external-snapshotter@master, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	errorhandling: Could not introduce github.com/kubernetes-csi/external-snapshotter@errorhandling, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	k8s_1.12.0-beta.1: Could not introduce github.com/kubernetes-csi/external-snapshotter@k8s_1.12.0-beta.1, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	release-0.4: Could not introduce github.com/kubernetes-csi/external-snapshotter@release-0.4, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	release-1.0: Could not introduce github.com/kubernetes-csi/external-snapshotter@release-1.0, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	revert-72-pvclister: Could not introduce github.com/kubernetes-csi/external-snapshotter@revert-72-pvclister, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	saad-ali-patch-1: Could not introduce github.com/kubernetes-csi/external-snapshotter@saad-ali-patch-1, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	saad-ali-patch-2: Could not introduce github.com/kubernetes-csi/external-snapshotter@saad-ali-patch-2, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	test-yang: Could not introduce github.com/kubernetes-csi/external-snapshotter@test-yang, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)
	updateSize: Could not introduce github.com/kubernetes-csi/external-snapshotter@updateSize, as its subpackage github.com/kubernetes-csi/external-snapshotter does not contain usable Go code (*build.NoGoError).. (Package is required by (root).)

Any documentation / guidance will be much appreciated!

csi-address CLI argument does not accept `unix:///` prefixed sock files

When you specify a sock file to the snapshotter binary's --csi-address argument it can't be prefixed with unix:///. If it does contain this prefix you get errors like this:

I1003 17:26:54.724892       1 connection.go:111] Still trying, connection is CONNECTING
I1003 17:26:54.725159       1 connection.go:111] Still trying, connection is TRANSIENT_FAILURE
I1003 17:26:55.725272       1 connection.go:111] Still trying, connection is CONNECTING
I1003 17:26:55.725314       1 connection.go:111] Still trying, connection is TRANSIENT_FAILURE
I1003 17:26:56.767152       1 connection.go:111] Still trying, connection is CONNECTING
I1003 17:26:56.767388       1 connection.go:111] Still trying, connection is TRANSIENT_FAILURE
I1003 17:26:57.943559       1 connection.go:111] Still trying, connection is CONNECTING
I1003 17:26:57.943781       1 connection.go:111] Still trying, connection is TRANSIENT_FAILURE
I1003 17:26:59.009602       1 connection.go:111] Still trying, connection is CONNECTING

Omitting the prefix works as expected. This is inconsistent with csi-sanity and csi-provisioner which can both accept the prefix without issue.

Example yaml deployment file

        - name: csi-snapshotter
          image: quay.io/k8scsi/csi-snapshotter:v0.4.0
          args:
            - "--csi-address=$(DAT_SOCKET)"
            - "--v=5"
          env:
            - name: DAT_SOCKET
              #value: unix:///var/lib/csi/io.daterainc.csi.dsp/csi.sock  #<--- This doesn't work
              value: /var/lib/csi/io.daterainc.csi.dsp/csi.sock  # <--- This works
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/

Import existing, externally created snapshot into cluster

Hi everybody,

I have started playing around with snapshotting. My use case is to import an existing snapshot into the cluster (it has been created externally, not from a PV in the cluster). My current attempt with these two configurations fails with "the snapshot source is not specified" when creating the VolumeSnapshot:

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotContent
metadata:
  name: my-own-snapshot-content
  namespace: default
spec:
  csiVolumeSnapshotSource:
    driver: pd.csi.storage.gke.io
    snapshotHandle: projects/my-project/global/snapshots/my-snapshot
  deletionPolicy: Retain
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: my-own-snapshot-source-pvc
    namespace: default
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: my-own-snapshot-source-pvc
  namespace: default
spec:
  snapshotClassName: default-snapshot-class
  snapshotContentName: my-own-snapshot-content

( taken from this blog post )

Looking at this code it seems logical.

However, reading through the spec it says for the VolumeSnapshot struct:

// Source has the information about where the snapshot is created from.
// In Alpha version, only PersistentVolumeClaim is supported as the source.
// If not specified, user can create VolumeSnapshotContent and bind it with VolumeSnapshot manually.

This is the part that is confusing me, because the last comment describes what I am trying to do above. I assumed that the PVC created from the referenced VolumeSnapshot could be created from the snapshotHandle defined in the VolumeSnapshotContent. Instead, it seems I need to define an existing PVC in the VolumeSnapshot.

Will my approach become possible in the future (i.e. currently not yet possible, since this is an alpha feature) or is this usecase not intended at all? If not, how else could I do it in Kubernetes (it would of course be possible using the cloud provider sdk).

Best regards,
stiller-leser

P.S. This is related to this question: kubernetes-sigs/gcp-compute-persistent-disk-csi-driver#224

Static volume snapshot binding failure

I was playing with static volume snapshot binding by first creating a VolumeSnapshotContent object and then a VolumeSnapshot object that is supposed to be bound to the VolumeSnapshotContent object by the external-snapshotter. However, I'm getting the following error:

Failed to check and update snapshot: failed to get input parameters to create snapshot <volume-snapshot-name>: "the snapshot source is not specified."

The VolumeSnapshot was created with the source field unspecified because the intention is to statically bind it to the VolumeSnapshotContent object. My understanding is that the source field is optional in this scenario, according to the comment of the field copied and pasted below.

// If not specified, user can create VolumeSnapshotContent and bind it with VolumeSnapshot manually.

The VolumeSnapshot object looks like the following:

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: mongo-persistent-storage-mongo-0-vs
  namespace: default
spec:
  snapshotClassName: default-snapshot-class
  snapshotContentName: snapcontent-31dffc80-073a-11ea-830c-42010a800239

I'm using the version v1.1.0 of external-snapshotter.

@xing-yang @jingxu97 @yuxiangqian

Separate common controller logic from the sidecar

Investigate how to separate common controller logic from other logic that belongs to the sidecar. Can be in the same external-snapshotter repo. The common controller should not be deployed with the driver. It should be deployed by the cluster deployer, or we can provide a way to deploy it as a separate Statefulset, not together with the driver.

'SnapshotContentMissing' VolumeSnapshotContent is missing

I'm trying to import existing snapshot from ceph using following yamls:

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotContent
metadata:
  name: vsc-test
spec:
  snapshotClassName: csi-rbdplugin-snapclass
  volumeSnapshotSource:
    csiVolumeSnapshotSource:
      driver: rbd.csi.ceph.com
      snapshotHandle:  snapname
  volumeSnapshotRef:
    apiVersion: snapshot.storage.k8s.io/v1alpha1
    kind: VolumeSnapshot
    name: test
    namespace: test
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: test
  namespace: test
spec:
  snapshotClassName: csi-rbdplugin-snapclass
  snapshotContentName: vsc-test

but get following error:

  Type     Reason                  Age                  From                              Message
  ----     ------                  ----                 ----                              -------
  Warning  SnapshotContentMissing  46s (x6 over 4m47s)  csi-snapshotter rbd.csi.ceph.com  VolumeSnapshotContent is missing

I0822 11:49:48.704819       1 util.go:135] storeObjectUpdate updating snapshot "test/test" with version 2972558
I0822 11:49:48.704894       1 snapshot_controller.go:252] synchronizing unready snapshot[test/test]: snapshotcontent "vsc-test" requested and not found, will try again next time
E0822 11:49:48.704922       1 snapshot_controller_base.go:392] could not sync volume "test/test": snapshot test/test is bound to a non-existing content vsc-test
I0822 11:49:48.705025       1 event.go:221] Event(v1.ObjectReference{Kind:"VolumeSnapshot", Namespace:"test", Name:"test", UID:"1746e253-3a9e-46c8-8339-11f63591034e", APIVersion:"snapshot.storage.k8s.io/v1alpha1", ResourceVersion:"2972558", FieldPath:""}): type: 'Warning' reason: 'SnapshotContentMissing' VolumeSnapshotContent is missing

why?

Error creating snapshot with ceph-csi rbd plugin

When trying to create a snapshot using the examples, modified to match my environment, I'm getting this error: snapshot_controller.go:310] createSnapshot [create-default/rbd-pvc-snapshot[ba056701-d142-11e8-813b-525400123456]]: error occurred in createSnapshotOperation: failed to take snapshot of the volume, pvc-b064c0f5d06d11e8: "rpc error: code = Unknown desc = RBD key for ID: admin not found".
I can create PVCs with csi-rbdplugin so I think everything is fine on this side. ceph-csi rbd plugin has been deployed as-is from here

Here's my deployment files, I'm not sure if i'm doing something wrong here, if so please let me know. I can provide logs and more if needed.

csi-rbdplugin:

storageclass.yaml
secret.yaml
pvc.yaml
pod.yaml

external-snapshotter:

setup.yaml
snapshotclass.yaml
snapshot.yaml

Thanks in advance for your help

Requires CRD create permissions?

Hey there,

I was reviewing this PR and noticed that apparently, the "create" verb on "customresourcedefinitions" is required to run this program, seemingly because it tries to install itself on boot and doesn't handle IsUnauthorized errors when doing so.

I think it would be possible to have the necessary CRDs already be created ahead of time, and be able to run this program without the "create" verb on "customresourcedefinitions", so that the program runs with less permissions.

What do you think?

cannot update snapshot metadata

My attempt of creating a new VolumeSnapshot from a PVC source resulted in following error message in external-snapshotter:

I0717 18:09:11.377904       1 connection.go:180] GRPC call: /csi.v1.Controller/CreateSnapshot
I0717 18:09:11.377925       1 connection.go:181] GRPC request: {"name":"snapshot-bd0540a0-1912-44f8-a04c-2e69ab1c21c6","secrets":"***stripped***","source_volume_id":"b44ffa38-5d65-4346-9265-807d9c966d6f"}
I0717 18:09:11.439795       1 reflector.go:235] github.com/kubernetes-csi/external-snapshotter/pkg/client/informers/externalversions/factory.go:117: forcing resync
I0717 18:09:11.577934       1 connection.go:183] GRPC response: {"snapshot":{"creation_time":{"seconds":1560416046},"ready_to_use":true,"size_bytes":1073741824,"snapshot_id":"bfe22f76-c3af-48a8-a326-6fdc3e5a747d","source_volume_id":"b44ffa38-5d65-4346-9265-807d9c966d6f"}}
I0717 18:09:11.579576       1 connection.go:184] GRPC error: <nil>
I0717 18:09:11.584450       1 snapshotter.go:81] CSI CreateSnapshot: snapshot-bd0540a0-1912-44f8-a04c-2e69ab1c21c6 driver name [nfs.manila.csi.openstack.org] snapshot ID [bfe22f76-c3af-48a8-a326-6fdc3e5a747d] time stamp [&{1560416046 0 {} [] 0}] size [1073741824] readyToUse [true]
I0717 18:09:11.584530       1 snapshot_controller.go:640] Created snapshot: driver nfs.manila.csi.openstack.org, snapshotId bfe22f76-c3af-48a8-a326-6fdc3e5a747d, creationTime 2019-06-13 08:54:06 +0000 UTC, size 1073741824, readyToUse true
I0717 18:09:11.584563       1 snapshot_controller.go:645] createSnapshot [default/new-nfs-share-snap]: trying to update snapshot creation timestamp
I0717 18:09:11.584604       1 snapshot_controller.go:825] updating VolumeSnapshot[]default/new-nfs-share-snap, readyToUse true, timestamp 2019-06-13 08:54:06 +0000 UTC
I0717 18:09:11.588727       1 snapshot_controller.go:650] failed to update snapshot default/new-nfs-share-snap creation timestamp: snapshot controller failed to update default/new-nfs-share-snap on API server: the server could not find the requested resource (put volumesnapshots.snapshot.storage.k8s.io new-nfs-share-snap)

The snapshot is successfully created by the driver, but external-snapshotter is having trouble updating the snapshot object metadata.

I'm using external-snapshotter v1.2.0-0-gb3f591d8 in k8s 1.15.0 running with VolumeSnapshotDataSource=true feature gate. The previous version of external-snapshotter, 1.1.0, works just fine though. Is this a regression or rather mis-configuration on my part? Always happy to debug more or provide more logs! Thanks!

Improve Deletion Secret Handling

This is the same underlying issue as kubernetes-csi/external-provisioner#330 for external-provisioner. This bug is to track the work for the external-snapshotter. We should make sure the annotations used for both solution have the same name.

Today, the external-snapshotter looks at the VolumeSnapshotClass for the deletion secret. If the secret does not exist, delete is called anyway.

The problem is that the VolumeSnapshotClass may be deleted or mutated (deleted and recreated with different parameters). This will result in snapshots that cannot be deleted. Ideally we want the deletion secret on the VolumeSnapshotContent object. However, @liggitt pointed out that this would result in asymmetry of the API (provisioning is handled by a higher layer controller and specified in VolumeSnapshotClass, so having that controller then look at the VolumeSnapshotContent object for the delete secret seems wrong). That said, we do want to better handle this case.

So as a compromise, the proposal is to 1) add a reference to the deletion secret as an annotation on the VolumeSnapshotContent object (instead of a first class field), and 2) to better document why you shouldn't have deletion secrets.

For 1, the proposed change is to add a new flag to external-snapshotter that says controller requires deletion secret, if this is set by SP, the external-snapshotter should store a reference to the provision secret in an annotation on the PV object, and when deleting, if the flag is set and the VolumeSnapshotContent object has the annotation, fetch and pass the secret in the CSI DeleteSnapshot call.

And 2 is being tacked in kubernetes-csi/docs#189 (comment).

CC @jingxu97 @xing-yang

Need to create a repo for ExecutionHook

We need to create a new repo for ExecutionHook under kubernetes-sigs. Open this issue to track it. We are targeting k8s release 1.15. Code freeze for 1.15 in-tree code is May 30th. ExecutionHook code is out-of-tree so we have more time, but we still need to have the repo created as soon as possible so we can submit code there.

Hi @jingxu97, can we finalize on the repo name? Should we call it execution-hook or something else? Thanks.

Snapshotter uses ListSnapshots on drivers which do not advertise it.

The snapshotter uses a function called GetSnapshotStatus() to do the binding from a VolumeSnapshot to a VolumeSnapshotContent. GetSnapshotStatus() then uses the csi.ListSnapshots() call even when the driver does not support it. This causes an error and the binding does not occur.

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

Error when importing snapshots

Hi,

I have a problem with importing snapshots to Kubernetes. I managed to create a custom Kubernetes Cluster (version 1.13 and the nodes are VMs from Google Cloud) with GPD CSI driver (version DEV where it uses external provisioner, snapshotter v1.0.1) installed. I was able to provision volumes, create snapshots and restore volumes from snapshots.

Next, I decided to import existing snapshots that I am keeping in Google Cloud. After reading https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/, the section Importing an existing snapshot with Kubernetes, I started to import my existing snapshots in Google Cloud, with the method explained in the blog post.

Here are the steps that I applied:

I created a VolumeSnapshotContent with:

$ cat <<EOF | kubectl create -f - 
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotContent
metadata:
  name: imported-snap-content
spec:
  csiVolumeSnapshotSource:
    driver: pd.csi.storage.gke.io
    snapshotHandle: {GCP_SNAP_ID}
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: imported-snap
    namespace: temp-namespace
EOF

After that, I created a VolumeSnapshot with the same reference I used in VolumeSnapshotContent:

$ cat << EOF | kubectl create -f -
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: imported-snap
  namespace: temp-namespace
spec:
  snapshotClassName: default-snapshot-class
  snapshotContentName: imported-snap-content
EOF

default-snapshot-class is the VolumeSnapshotClass from https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/master/examples/kubernetes/demo-defaultsnapshotclass.yaml

So, I checked the VolumeSnapshotContent first:

$ kubectl get volumesnapshotcontent imported-snap-content -oyaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotContent
metadata:
  creationTimestamp: 2019-02-15T18:31:47Z
  finalizers:
  - snapshot.storage.kubernetes.io/volumesnapshotcontent-protection
  generation: 3
  name: imported-snap-content
  resourceVersion: "475063"
  selfLink: /apis/snapshot.storage.k8s.io/v1alpha1/volumesnapshotcontents/imported-snap-content
  uid: f399dfcc-314f-11e9-b4da-42010a8a004e
spec:
  csiVolumeSnapshotSource:
    driver: pd.csi.storage.gke.io
    snapshotHandle: {GCP-SNAP-ID}
  deletionPolicy: null
  persistentVolumeRef: null
  snapshotClassName: default-snapshot-class
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: imported-snap
    namespace: temp-namespace
    uid: f861fa81-314f-11e9-b4da-42010a8a004e

It looks like the snapshot controller managed to bind it to a VolumeSnapshot since uid field in volumeSnapshotRef is filled.

Then, I looked the details of VolumeSnapshot that I created before:

$ kubectl get volumesnapshot  imported-snap -n temp-namespace -oyaml 
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  creationTimestamp: 2019-02-15T18:31:55Z
  finalizers:
  - snapshot.storage.kubernetes.io/volumesnapshot-protection
  generation: 3
  name: imported-snap
  namespace: temp-namespace
  resourceVersion: "475064"
  selfLink: /apis/snapshot.storage.k8s.io/v1alpha1/namespaces/temp-namespace/volumesnapshots/imported-snap
  uid: f861fa81-314f-11e9-b4da-42010a8a004e
spec:
  snapshotClassName: default-snapshot-class
  snapshotContentName: imported-snap-content
  source: null
status:
  creationTime: null
  error:
    message: 'Failed to check and update snapshot: failed to get input parameters
      to create snapshot imported-snap: "the snapshot source is not specified."'
    time: 2019-02-15T18:31:55Z
  readyToUse: false
  restoreSize: null

The UID matches with the one in the reference. However, the controller registers a problem saying that source is not specified. When I look at the API detail, I see that source is a field for recording the originated volume in Kubernetes Cluster. It is interesting since I didn’t create a snapshot from a PVC, but I was trying to import a snapshot. In this case, I expect a null source is the normal thing.

What can I do to solve this problem? Are there some steps that I couldn’t do properly or am I missing some steps?

GetSnapshotStatus is never called

In snapshot_controller.go, the function "CreateSnapshot" is called over and over again during my development and testing. As a result, a lot of snapshots were created in my test environment.

Eventually, I found that the function "GetSnapshotStatus" is never called in external-snapshotter, so the "ListSnapshots" function is never called.

I wonder if this is a bug? If not, I do not think it is a good idea to call "CreateSnapshot" in the function "checkandUpdateBoundSnapshotStatusOperation".

Update README to indicate controller architecture changes

  1. Common controller will watch VolumeSnapshotContent and bind VolumeSnapshotContent with VolumeSnapshot.
  2. Snapshotter will watch VolumeSnapshot to invoke corresponding gRPC call, create VolumeSnapshotContent.
  3. Deletion of a snapshot should follow similar pattern

Missing 'groupName' comment in doc.go - wrong group for fake objects

The group name of the APIs is "snapshot.storage.k8s.io" as specified here:

https://github.com/kubernetes-csi/external-snapshotter/blob/master/pkg/apis/volumesnapshot/v1alpha1/register.go#L23

However, "fake" objects are generated with "volumesnapshot"

https://github.com/kubernetes-csi/external-snapshotter/blob/master/pkg/client/clientset/versioned/typed/volumesnapshot/v1alpha1/fake/fake_volumesnapshot.go#L37

It seems that doc.go is missing the comment
// +groupName=snapshot.storage.k8s.io

Due to this issue, we cannot use fake entities to do unit tests.

transport is closing

Hey there, can anybody tell me what this problem is? Any information would be appreciated.

I1018 13:25:25.835377 1 snapshot_controller.go:304] createSnapshot[default/new-snapshot-demo]: started
I1018 13:25:25.835403 1 snapshot_controller.go:279] scheduleOperation[create-default/new-snapshot-demo[45f5dfea-d2d9-11e8-b369-5254008caa0c]]
I1018 13:25:25.835507 1 snapshot_controller.go:447] createSnapshot: Creating snapshot default/new-snapshot-demo through the plugin ...
I1018 13:25:25.835531 1 snapshot_controller.go:455] createSnapshotOperation [new-snapshot-demo]: VolumeSnapshotClassName [csi-smtx-snapclass]
I1018 13:25:25.835548 1 snapshot_controller.go:769] getSnapshotClass: VolumeSnapshotClassName [csi-smtx-snapclass]
I1018 13:25:25.879883 1 snapshot_controller.go:738] getVolumeFromVolumeSnapshot: snapshot [new-snapshot-demo] PV name [pvc-18289bde-d2d9-11e8-b369-5254008caa0c]
I1018 13:25:25.879971 1 connection.go:192] CSI CreateSnapshot: snapshot-45f5dfea-d2d9-11e8-b369-5254008caa0c
I1018 13:25:25.880031 1 connection.go:259] GRPC call: /csi.v0.Identity/GetPluginInfo
I1018 13:25:25.881293 1 connection.go:261] GRPC response: name:"com.csi" vendor_version:"0.1"
I1018 13:25:25.881412 1 connection.go:262] GRPC error:
I1018 13:25:25.881432 1 connection.go:259] GRPC call: /csi.v0.Controller/CreateSnapshot
I1018 13:25:27.934872 1 connection.go:261] GRPC response:
I1018 13:25:27.935230 1 connection.go:262] GRPC error: rpc error: code = Unavailable desc = transport is closing

Probe timeout is too low

Analogous to the external-attacher issue, we have a timeout for the probe request that is set to 1 second, which is too low for many plugins that want to do a thorough health check.

We should increase this timeout to 1 minute like we did on the external-attacher.

snapshotter 1.x support for configmap leadership election type

Since Lease-based leader election is only supported in k8s 1.14+, the snapshotter leader election will not work in k8s 1.13.

Attacher/provisioner sidecars allow non-lease leadership election types in their latest 1.x releases:
https://github.com/kubernetes-csi/external-provisioner/blob/v1.3.0/cmd/csi-provisioner/csi-provisioner.go#L205-L213

https://github.com/kubernetes-csi/external-attacher/blob/release-1.2/cmd/csi-attacher/main.go#L203-L211
