kubernetes-csi / csi-test

CSI test frameworks

License: Apache License 2.0

Go 60.06% Makefile 5.89% Shell 31.96% Python 2.08%
Topics: grpc, csi, kubernetes, containers, storage, container-storage-interface, ginkgo, tests, golang, gomock


csi-test's Issues

Tests for Probe

Tests do not currently exist for Probe. They should be added in pkg/sanity/identity.go

This TODO was introduced by #43

Missing readonly field on ControllerPublishVolume call

Using csi-sanity-v0.2.0-1.linux.amd64.tar.gz to run csi-sanity against the cinderlib-CSI plugin results in the following error:

NodeUnpublishVolume [Node Server] 
  should return appropriate values (no optional values added)
  /home/lpabon/git/golang/csi-test/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:218
[BeforeEach] NodeUnpublishVolume [Node Server]
  /home/lpabon/git/golang/csi-test/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:179
[It] should return appropriate values (no optional values added)
  /home/lpabon/git/golang/csi-test/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:218
STEP: creating a single node writer volume
STEP: getting a node id
STEP: controller publishing volume

• Failure [0.187 seconds]
NodeUnpublishVolume [Node Server]
/home/lpabon/git/golang/csi-test/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:171
  should return appropriate values (no optional values added) [It]
  /home/lpabon/git/golang/csi-test/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:218

  Expected error:
      <*status.statusError | 0xc4201b1b30>: {
          Code: 3,
          Message: "Missing required fields: readonly",
          Details: nil,
      }
      rpc error: code = InvalidArgument desc = Missing required fields: readonly
  not to have occurred

I can confirm, looking at a TCP dump of the communication, that the gRPC call to ControllerPublishVolume contains only fields number 1, 2, and 3. Field number 4, which corresponds to readonly, is not present.

According to the CSI spec v0.2.0, the readonly field is REQUIRED and should be present.

message ControllerPublishVolumeRequest {
  ...

  // Whether to publish the volume in readonly mode. This field is
  // REQUIRED.
  bool readonly = 4;
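One wire-level detail is worth noting here: proto3 never encodes scalar fields that hold their default value, so a request with readonly=false arrives with field 4 absent, indistinguishable from "unset". A minimal sketch of plugin-side validation that accounts for this, using a local stand-in type rather than the real csi bindings:

```go
package main

import (
	"errors"
	"fmt"
)

// publishRequest is a local stand-in for csi.ControllerPublishVolumeRequest.
// After proto3 decoding, Readonly == false may mean either "explicitly false"
// or "absent on the wire"; the two cases cannot be told apart.
type publishRequest struct {
	VolumeId string
	NodeId   string
	Readonly bool
}

// validatePublish checks only fields whose zero value is genuinely invalid;
// it must NOT treat a false Readonly as "missing".
func validatePublish(req *publishRequest) error {
	if req.VolumeId == "" {
		return errors.New("missing required field: volume_id")
	}
	if req.NodeId == "" {
		return errors.New("missing required field: node_id")
	}
	return nil
}

func main() {
	// Readonly was never set: on the wire field 4 is absent, but the
	// request is still a valid read/write publish.
	req := &publishRequest{VolumeId: "vol-1", NodeId: "node-1"}
	fmt.Println(validatePublish(req)) // nil: request is valid
}
```

Given this, a plugin rejecting the request with "Missing required fields: readonly" is arguably enforcing something the wire format cannot express.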

Proposed E2E test for sanity check

Below is the list of tests proposed for the E2E sanity check; please review and provide feedback:

CreateVolume:

  1. Create a volume with a specific name, then create a volume with the same name (should not fail)
  2. Create a volume with a specific name, then create a volume with the same name but a different capacity (should fail with 6 ALREADY_EXISTS)
  3. Create a volume with an incompatible size (should fail with 11 OUT_OF_RANGE)

DeleteVolume:

  1. Delete volume with no volume id provided (should fail with 3 INVALID_ARGUMENT)
  2. Delete volume with a non-existent volume id (should not fail)
  3. Delete volume with an existing volume id (should not fail)

ControllerPublishVolume:

  1. Publish volume with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Publish volume with empty node id (should fail with 3 INVALID_ARGUMENT)
  3. Publish volume with empty VolumeCapability (should fail with 3 INVALID_ARGUMENT)
  4. Publish volume with empty readonly (should fail with 3 INVALID_ARGUMENT)
  5. Publish volume with a non-existent volume id (should fail with 5 NOT_FOUND)
  6. Publish volume with a non-existent node id (should fail with 5 NOT_FOUND)
  7. (If code permits) Publish an already published volume with compatible capabilities (should not fail)
  8. (If code permits) Publish an already published volume with incompatible capabilities (should fail with 6 ALREADY_EXISTS)
  9. (If code permits) Publish an already published volume with a different node id (should fail with 9 FAILED_PRECONDITION)
  10. (If code permits) Publish volume to a node that has reached "Max volumes attached" (should fail with 8 RESOURCE_EXHAUSTED)

ControllerUnpublishVolume:

  1. Unpublish volume with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Unpublish volume with empty node id (should not fail)
  3. Unpublish volume with a non-existent volume id (should fail with 5 NOT_FOUND)
  4. Unpublish volume with a non-existent node id (should fail with 5 NOT_FOUND)
  5. (If code permits) Unpublish volume with an unpublish node id that does not match the published node id (should not fail)

ValidateVolumeCapabilities:

  1. Validate with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Validate with empty VolumeCapability (should fail with 3 INVALID_ARGUMENT)
  3. Validate with a non-existent volume id (should fail with 5 NOT_FOUND)

ListVolumes:

  1. Check if list volume capabilities is available (should not fail)
  2. If 1 is true, create a couple of volumes and request to list them (should not fail)

GetCapacity:

  1. Check GetCapacity (Should not fail)

ControllerGetCapabilities:

  1. Check ControllerGetCapabilities (should not fail)

NodeStageVolume:

  1. Stage volume with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Stage volume with empty staging_target_path (should fail with 3 INVALID_ARGUMENT)
  3. Stage volume with empty volume_capability (should fail with 3 INVALID_ARGUMENT)
  4. Stage volume with a non-existent volume id (should fail with 5 NOT_FOUND)
  5. (If code permits) Stage a volume whose volume_id has already been published at the specified staging_target_path but is incompatible with the specified volume_capability (should fail with 6 ALREADY_EXISTS)
  6. Stage volume with a non-existent node id (should fail with 5 NOT_FOUND)

NodeUnstageVolume:

  1. Unstage volume with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Unstage volume with empty staging_target_path (should fail with 3 INVALID_ARGUMENT)
  3. Unstage volume with a non-existent volume id (should fail with 5 NOT_FOUND)
  4. (If code permits) Unstage a volume id with a non-matching staging_target_path (should not fail)

NodePublishVolume:

  1. Publish volume with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Publish volume with empty target_path (should fail with 3 INVALID_ARGUMENT)
  3. Publish volume with empty VolumeCapability (should fail with 3 INVALID_ARGUMENT)
  4. Publish volume with empty readonly (should fail with 3 INVALID_ARGUMENT)
  5. Publish volume with a non-existent volume id (should fail with 5 NOT_FOUND)
  6. (If code permits) Publish an already published volume with incompatible capabilities (should fail with 6 ALREADY_EXISTS)

NodeUnpublishVolume:

  1. Unpublish volume with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Unpublish volume with empty target_path (should fail with 3 INVALID_ARGUMENT)
  3. Unpublish volume with a non-existent volume id (should fail with 5 NOT_FOUND)

NodeGetId:

  1. Check NodeGetId (should not fail)

NodeGetCapabilities:

  1. Check NodeGetCapabilities (should not fail)

sanity: avoid defining unusable tests

I'm using the sanity testing in a mode where the driver gets deployed anew for each individual test. It's annoying and slow that many tests then just get skipped after querying the driver's capabilities.

This could be avoided like this:

  • extend sanity.Config so that it has static driver capabilities
  • check those capabilities before defining a test that depends on them and then don't define the tests that cannot run
  • add a test that verifies that the actual driver capabilities match
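The first two bullets could look roughly like the sketch below. The field and helper names (StaticCapabilities, hasCap, defineTests) are hypothetical, not the sanity package's actual API:

```go
package main

import "fmt"

// config is a stand-in for sanity.Config extended with the proposed static
// capability list; the field name is illustrative only.
type config struct {
	StaticCapabilities []string
}

func (c config) hasCap(want string) bool {
	for _, have := range c.StaticCapabilities {
		if have == want {
			return true
		}
	}
	return false
}

// defineTests returns the names of the tests that would be registered.
// Tests gated on a missing capability are never defined at all, instead
// of being defined and then skipped at runtime after a driver query.
func defineTests(c config) []string {
	tests := []string{"NodeGetCapabilities"}
	if c.hasCap("CREATE_DELETE_SNAPSHOT") {
		tests = append(tests, "CreateSnapshot", "DeleteSnapshot")
	}
	return tests
}

func main() {
	fmt.Println(defineTests(config{})) // snapshot tests never get defined
}
```

A separate test would then assert that the driver's actual reported capabilities match the static list.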

sanity: more flexible mount and staging directory creation

When testing against a driver that was deployed in a real cluster, creating the mount and staging directories has to happen inside the cluster on the node on which the driver under testing is running. Doing that is specific to the test, so the sanity package should leave the work to some callbacks in sanity.Config if they are defined (must remain optional).

Need test cases for unicode volume/snapshot names

Although it's not clearly stated, the CSI spec allows Unicode names to be specified for volumes and snapshots. A naively implemented plugin might take the suggested name and use it somewhere that doesn't support Unicode characters, and the bug would go uncaught if all of the test cases use ASCII names. We should send some non-ASCII names to the plugin and expect success.

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

Starting v0.3.4 release asset binaries don't work in Alpine container

Hi, working on our CI, I noticed this.

Release v0.3.0-4 works fine, but starting from release v0.3.4 the asset binaries on the releases page don't work in an Alpine container. The same is true for 1.0.2.

It seems like the assets have been built with different flags than the Makefile uses, because if I build the image from source (using the Makefile), it works in Alpine.

There are two one-liners to demonstrate the issue (I used Dockerfile.mock from the repo).
v0.3.0-4 (works):

$ curl -L https://github.com/kubernetes-csi/csi-test/releases/download/v0.3.0-4/csi-sanity-v0.3.0-4.linux.amd64.tar.gz | tar -xvz && mv csi-sanity/csi-sanity bin/mock && docker build -f Dockerfile.mock -t csi-sanity-local --no-cache . && docker run --rm -it csi-sanity-local
... curl output ...
Sending build context to Docker daemon  11.83MB
Step 1/5 : FROM alpine
... docker output ...
Successfully tagged csi-sanity-local:latest
--- FAIL: TestSanity (0.00s)
	sanity_test.go:53: --csi.endpoint must be provided with an CSI endpoint <--- IT WORKS
FAIL

v0.3.4 (doesn't work):

$ curl -L https://github.com/kubernetes-csi/csi-test/releases/download/v0.3.4/csi-sanity-v0.3.4.linux.amd64.tar.gz | tar -xvz && mv csi-sanity/csi-sanity bin/mock && docker build -f Dockerfile.mock -t csi-sanity-local --no-cache . && docker run --rm -it csi-sanity-local
... curl output ...
Sending build context to Docker daemon  11.89MB
Step 1/5 : FROM alpine
... docker output ...
Successfully tagged csi-sanity-local:latest
standard_init_linux.go:190: exec user process caused "no such file or directory" <--- DOESN'T WORK

Solutions that helped me:

  • build binaries using Makefile
  • use ubuntu container
  • manually link missed libraries in my container RUN mkdir /lib64 && ln -s /lib/libc.musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2

Not sure whether the asset binaries are supposed to work in Alpine, or whether a manual image build is the recommended way to go.

This issue is similar to #127.

Thank you.

[EDIT]

  • removed the quay.io/k8scsi/mock-driver mention; I thought it was the csi-sanity container.

ListSnapshots test makes invalid assumption about NextToken

https://github.com/kubernetes-csi/csi-test/blob/master/pkg/sanity/controller.go#L1352 says:

Expect(nextToken).To(Equal(strconv.Itoa(maxEntries)))

The ListSnapshots test assumes that the NextToken field of the ListSnapshotsResponse message is a string representation of an integer, and further that the integer is equal to MaxEntries. The CSI spec says neither of these things, so this is a bad test.

According to the CSI spec, NextToken is merely an opaque string that should be passed back to another invocation of ListSnapshots, so a better test would be to call ListSnapshots with the returned token and validate that the first page of snapshots plus the second page of snapshots equals the whole list of unpaginated snapshots.

At a minimum we need to remove the bad line from the test, because it encourages CSI driver authors to implement snapshot-list pagination in a naive way.
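The suggested replacement test, sketched against an in-memory lister so that the token stays opaque to the caller. listSnapshots here is a stand-in for the RPC, and the "off-N" token format is a private detail of the fake that the paging loop never inspects:

```go
package main

import "fmt"

// listSnapshots is an in-memory stand-in for the ListSnapshots RPC. The
// token it returns is opaque to callers; internally it encodes an offset,
// but the paging loop below never assumes that.
func listSnapshots(all []string, startingToken string, maxEntries int) (entries []string, nextToken string) {
	start := 0
	if startingToken != "" {
		fmt.Sscanf(startingToken, "off-%d", &start)
	}
	end := len(all)
	if maxEntries > 0 && start+maxEntries < end {
		end = start + maxEntries
	}
	entries = all[start:end]
	if end < len(all) {
		nextToken = fmt.Sprintf("off-%d", end)
	}
	return entries, nextToken
}

func main() {
	all := []string{"snap-1", "snap-2", "snap-3", "snap-4", "snap-5"}

	full, _ := listSnapshots(all, "", 0) // unpaginated listing

	// Page through with maxEntries=2, feeding each NextToken back verbatim.
	var paged []string
	token := ""
	for {
		page, next := listSnapshots(all, token, 2)
		paged = append(paged, page...)
		if next == "" {
			break
		}
		token = next
	}
	fmt.Println(len(paged) == len(full)) // pages must cover the full listing
}
```

The only assertions a spec-conformant test can make are that feeding NextToken back continues the listing and that the concatenated pages equal the unpaginated result.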

Sanity test requires NodeServiceCapability but CSI doesn't

The CSI spec says that NodeServiceCapability is optional:

message NodeGetCapabilitiesResponse {
  // All the capabilities that the node service supports. This field
  // is OPTIONAL.
  repeated NodeServiceCapability capabilities = 1;
}

However, the sanity test fails if the node service returns 0 or nil node capabilities:

NodeUnpublishVolume [Node Server]
/usr/local/google/home/msau/workspace/go/src/sigs.k8s.io/gcp-filestore-csi-driver/vendor/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:232
  should fail when no volume id is provided [BeforeEach]
  /usr/local/google/home/msau/workspace/go/src/sigs.k8s.io/gcp-filestore-csi-driver/vendor/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:253

  Expected
      <[]*csi.NodeServiceCapability | len:0, cap:0>: nil
  not to be nil

Need to test other access modes

Currently we only test the SINGLE_NODE_WRITER access mode. I suggest we add positive and negative test cases for every supported access mode:

SINGLE_NODE_WRITER

  • should succeed with single readonly publish
  • should succeed with single writable publish
  • should fail with multiple publishes

SINGLE_NODE_READER_ONLY

  • should succeed with single readonly publish
  • should fail with single writable publish
  • should fail with multiple readonly publishes

MULTI_NODE_READER_ONLY

  • should succeed with single readonly publish
  • should fail with single writable publish
  • should succeed with multiple readonly publishes

MULTI_NODE_SINGLE_WRITER

  • should succeed with single writable publish
  • should succeed with multiple readonly publishes
  • should fail with writable publish plus any other publish

MULTI_NODE_MULTI_WRITER

  • should succeed with multiple writable publishes
  • no negative test case possible?
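The matrix above lends itself to a table-driven test. A sketch with illustrative local types, not the sanity package's actual API (the MULTI_NODE_SINGLE_WRITER "writable plus any other publish" case needs a mixed-mode row shape not modeled here):

```go
package main

import "fmt"

// accessModeCase describes one row of the matrix: how many publishes of
// which kind are attempted, and whether the driver should reject them.
type accessModeCase struct {
	mode      string
	readonly  bool
	publishes int
	wantErr   bool
}

func matrix() []accessModeCase {
	return []accessModeCase{
		{"SINGLE_NODE_WRITER", true, 1, false},
		{"SINGLE_NODE_WRITER", false, 1, false},
		{"SINGLE_NODE_WRITER", false, 2, true},
		{"SINGLE_NODE_READER_ONLY", true, 1, false},
		{"SINGLE_NODE_READER_ONLY", false, 1, true},
		{"SINGLE_NODE_READER_ONLY", true, 2, true},
		{"MULTI_NODE_READER_ONLY", true, 1, false},
		{"MULTI_NODE_READER_ONLY", false, 1, true},
		{"MULTI_NODE_READER_ONLY", true, 2, false},
		{"MULTI_NODE_SINGLE_WRITER", false, 1, false},
		{"MULTI_NODE_SINGLE_WRITER", true, 2, false},
		{"MULTI_NODE_MULTI_WRITER", false, 2, false},
	}
}

func main() {
	for _, tc := range matrix() {
		// Each row would drive one CreateVolume plus N publish attempts.
		fmt.Printf("%s readonly=%v n=%d wantErr=%v\n",
			tc.mode, tc.readonly, tc.publishes, tc.wantErr)
	}
}
```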

Is this repo actively maintained?

I'm a senior developer at Docker and could make some time to help on this repo; it seems to me that having a simple, canonical way of testing CSI drivers' compliance would be of great value to the community.

However, it's not obvious what the state of this repo is; is it still actively maintained? And/or are more maintainers welcome? Where could I best start helping?

Thanks!

ValidateVolumeCapabilities Fails Because It Calls Non-existing Volume

Reproduction:
v1.1.0 branch

func TestDriver(t *testing.T) {
	// getDriver and Endpoint are the reporter's own test helpers (not shown).
	go func() {
		d := getDriver(t)
		if err := d.Run(); err != nil {
			t.Fatal(err)
		}
	}()
	sc := &sanity.Config{
		TargetPath:  "/mnt/my-publish",
		StagingPath: "/mnt/my-staging",
		Address:     Endpoint,
	}
	sanity.Test(t, sc)
}

All tests pass except ValidateVolumeCapabilities which fails for the below reason.

STEP: reusing connection to CSI driver at unix:///tmp/test-csi.sock
STEP: creating mount and staging directories
STEP: creating a single node writer volume
STEP: validating volume capabilities
cleanup: deleting sanity-controller-validate-053C02BC-18C78554 = CSI-sanity-controller-validate-053C02BC-18C78554

------------------------------
• Failure [2.087 seconds]
Controller Service
/home/ubuntu/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:44
  ValidateVolumeCapabilities
  /home/ubuntu/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:591
    should return appropriate values (no optional values added) [It]
    /home/ubuntu/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:618

    Expected
        <[]*csi.VolumeCapability | len:0, cap:0>: nil
    not to be empty

    /home/ubuntu/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:671
------------------------------
2018-12-05T18:12:46Z INFO GRPC -- request: /csi.v1.Controller/ValidateVolumeCapabilities -- volume_id:"some-vol-id" volume_capabilities:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > 

You can see that the volume created for this test was sanity-controller-validate-053C02BC-18C78554, but the volume id used in the ValidateVolumeCapabilities step is some-vol-id.

For some reason, the original volume id isn't being used in subsequent steps of the test, so my ValidateVolumeCapabilities function returns an error because the volume_id is invalid.

Splitting CSI Sanity unit and E2E tests

Looking at CSI Sanity, it seems that it runs unit and e2e tests in the same framework.

For example, the "NodeUnpublishVolume" suite has unit tests like "should fail when no volume id is provided", which could be exercised against a driver with mocked-out back-ends. This makes sure there are no side effects or other unrelated errors arising from the test hitting real back-ends.

And in the same "NodeUnpublishVolume" suite there is an e2e test, "should return appropriate values (no optional values added)", which exercises the entire workflow of create, controller publish, node publish, node unpublish, controller unpublish, and DeleteVolume. This exact same e2e test is also duplicated in the "NodePublishVolume" suite as its "should return appropriate values (no optional values added)" test.

I propose that we split the existing sanity testing framework into two parts: one for unit tests that exercise single RPC calls for their various error conditions and sanity-check their basic functionality against mocked-out backends, and a second e2e sanity testing framework designed to run against a production backend, exercising end-to-end workflows that create actual volumes along with the various error cases around them.

This would solve a few problems:

  1. Unit testing is sped up because it does not hit production backends
  2. E2E tests will no longer be duplicated across different suites ("should return appropriate values (no optional values added)")
  3. Unit and e2e tests can be run in different environments (for example, unit with a mock backend, e2e with a production backend)
  4. Frees up room to write more extensive e2e failure-mode tests like "controller publish before CreateVolume" or "node stage capability specified but NodeStageVolume not called"

Sanity tests may not always cleanup

There are two problems with the tests:

  1. The CreateVolume request could succeed, but subsequent validation could fail. DeleteVolume() is skipped in those cases, so the volume never gets cleaned up.
  2. Almost all the tests call CreateVolume with the volume name "sanity". It would be better to use unique names for every test case, to avoid potential false positives and situations where a previously failing test causes a subsequent test to fail because the volume already exists.
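The unique-name suggestion in point 2 could use a small helper like this sketch (the prefix and name format are arbitrary):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

var volumeNameCounter uint64

// uniqueVolumeName returns a name that is unique within the test run, so a
// volume leaked by a failed test cannot collide with a later test case.
func uniqueVolumeName(prefix string) string {
	n := atomic.AddUint64(&volumeNameCounter, 1)
	return fmt.Sprintf("%s-vol-%d", prefix, n)
}

func main() {
	fmt.Println(uniqueVolumeName("sanity"))
	fmt.Println(uniqueVolumeName("sanity"))
}
```

Using the atomic counter keeps the helper safe even if Ginkgo ever runs specs in parallel.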

Strict check of pagination in the ListSnapshots test code, although pagination is optional in the spec

  // A token to specify where to start paginating. Set this field to
  // `next_token` returned by a previous `ListSnapshots` call to get the
  // next page of entries. This field is OPTIONAL.
  // An empty string is equal to an unspecified field value.
  string starting_token = 2;

  // Identity information for the source volume. This field is OPTIONAL.
  // It can be used to list snapshots by volume.
  string source_volume_id = 3;

Pagination is an optional parameter in the CSI spec, yet the CSI tests fail if a driver doesn't implement pagination support.

For ex:

[Fail] ListSnapshots [Controller Server] [It] should return next token when a limited number of entries are requested
/../src/github.com/gluster/gluster-csi-driver/vendor/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1352

Expected result: the test should not fail if the CSI driver doesn't have the capability to support pagination.

Flaky test `ListSnapshots [Controller Server] should return empty when the specify source volume id is not exist`

The following test fails randomly in Travis:

•••••••••••••••••••••••••••••••••••••••••••STEP: connecting to CSI driver
------------------------------
• Failure in Spec Setup (BeforeEach) [60.002 seconds]
ListSnapshots [Controller Server]
/home/travis/gopath/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:44
  should return empty when the specify source volume id is not exist [BeforeEach]
  /home/travis/gopath/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1195
  Expected error:
      <*errors.errorString | 0xc420327130>: {
          s: "Connection timed out",
      }
      Connection timed out
  not to have occurred
  /home/travis/gopath/src/github.com/kubernetes-csi/csi-test/pkg/sanity/sanity.go:97
------------------------------
••••••••••••••

Need to delete snapshot before deleting volume

Hi,
It would be better to delete the snapshot before deleting the volume in the cleanup stage of the ListSnapshots test case, i.e. change the cleanup from:

By("cleaning up deleting the volume")
delVolReq := MakeDeleteVolumeReq(sc, volume.GetVolume().GetId())
_, err = c.DeleteVolume(context.Background(), delVolReq)
Expect(err).NotTo(HaveOccurred())
By("cleaning up deleting the snapshot")
delSnapReq := MakeDeleteSnapshotReq(sc, snapshot.GetSnapshot().GetId())
_, err = c.DeleteSnapshot(context.Background(), delSnapReq)
Expect(err).NotTo(HaveOccurred())

to:

By("cleaning up deleting the snapshot")
delSnapReq := MakeDeleteSnapshotReq(sc, snapshot.GetSnapshot().GetId())
_, err = c.DeleteSnapshot(context.Background(), delSnapReq)
Expect(err).NotTo(HaveOccurred())
By("cleaning up deleting the volume")
delVolReq := MakeDeleteVolumeReq(sc, volume.GetVolume().GetId())
_, err = c.DeleteVolume(context.Background(), delVolReq)
Expect(err).NotTo(HaveOccurred())

Need tests around string limits

The CSI spec says that strings should not exceed 128 bytes and string maps should not exceed 4096 bytes. It's not clear where this enforcement should happen, but it would be nice to have tests that validate that plugins don't return strings exceeding these limits, plus negative tests that send strings exceeding them.

More importantly, we need tests that create volumes/snapshots using maximum-length names, to catch plugins that use the name somewhere that can't handle 128 bytes.

Change the name of the e2e test program currently called mock

There are two things called mock: the Golang mock driver automatically generated and used by clients of CSI for unit testing, and the program used by csi-sanity for its tests. Renaming the latter would remove the confusion.

sanity: support different addresses for controller and node operations

Currently the sanity pkg connects to the CSI driver at a single address and expects to find all services there. Not all CSI drivers are deployed like that, and some might not even support all services in the same binary. Therefore I propose keeping Config.Address (as it is now) and adding an optional Config.ControllerAddress that, if set, gets used when connecting to the controller.
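A sketch of the proposed fallback behavior; the ControllerAddress field and the helper are hypothetical, not the package's current API:

```go
package main

import "fmt"

// config is a stand-in for sanity.Config; ControllerAddress is the
// proposed optional field.
type config struct {
	Address           string // node operations, and fallback for everything
	ControllerAddress string // optional; controller operations when set
}

// controllerEndpoint preserves today's single-address behavior whenever
// the new field is left empty.
func (c config) controllerEndpoint() string {
	if c.ControllerAddress != "" {
		return c.ControllerAddress
	}
	return c.Address
}

func main() {
	one := config{Address: "unix:///tmp/csi.sock"}
	split := config{
		Address:           "unix:///tmp/node.sock",
		ControllerAddress: "unix:///tmp/controller.sock",
	}
	fmt.Println(one.controllerEndpoint())
	fmt.Println(split.controllerEndpoint())
}
```

Because the field defaults to empty, existing single-endpoint deployments would be unaffected.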

pkg/sanity usage as non-root user

When following https://github.com/kubernetes-csi/csi-test/blob/master/pkg/sanity/README.md and running "go test" as a normal user (!), tests involving mounting/unmounting fail because those are privileged operations.

Running "sudo go test" isn't a good solution, because then the build also runs as root. That is undesirable and can also fail because the Go environment variables end up being different.

My workaround was mount/unmount wrapper scripts that run just the mount and unmount commands under sudo. I can submit a PR with those scripts (see below), but one still has to remember to update PATH.

I wonder whether this isn't something that the test framework itself should take care of. Any thoughts?

#!/bin/sh

# Bypass this wrapper by resetting to a standard search PATH, so the
# exec below finds the real mount/umount binary, not this script again.
PATH=/usr/bin:/bin:/usr/sbin:/sbin

if [ "$(id -u)" != 0 ]; then
   exec sudo "$(basename "$0")" "$@"
else
   exec "$(basename "$0")" "$@"
fi

Test fails if CapacityBytes is always 0

According to the CreateVolume spec, a volume may leave CapacityBytes unset (or set to 0) to indicate that its capacity is unknown.

  // The capacity of the volume in bytes. This field is OPTIONAL. If not
  // set (value of 0), it indicates that the capacity of the volume is
  // unknown (e.g., NFS share).
  // The value of this field MUST NOT be negative.
  int64 capacity_bytes = 1;

When running the test CreateVolume [Controller Server] [It] should fail when requesting to create a volume with already exisiting name and different capacity. it fails, because a capacity difference cannot be detected when the volume capacity is always 0.

The test CreateVolume [Controller Server] [It] should not fail when requesting to create a volume with already exisiting name and same capacity. also fails, but as far as I can see that can be fixed by setting TestVolumeSize: 0.

Images on quay.io can't run

$ docker run -ti quay.io/k8scsi/mock-driver:canary
standard_init_linux.go:190: exec user process caused "no such file or directory"

The reason is that the mock binary inside the container can't be dynamically linked:

$  docker run -ti --entrypoint /bin/sh quay.io/k8scsi/mock-driver:canary
(inside the container)#  ldd mock
        /lib64/ld-linux-x86-64.so.2 (0x7fa22001d000)
        libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7fa22001d000)
        libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7fa22001d000)

(inside the container)# ls /lib64/ld-linux-x86-64.so.2 
ls: /lib64/ld-linux-x86-64.so.2: No such file or directory

I've no clue how the image was built; perhaps some hiccup on Travis.

Support volume name generate interface

Some cloud providers, like Linode, have constraints on volumes (e.g. the volume id must be an integer). So it would be helpful to add a VolumeNameGenerate interface and pass it to the controller.
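A sketch of what such a hook could look like; all names here (VolumeNameGenerator, integerNameGenerator) are hypothetical, not an existing csi-test API:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// VolumeNameGenerator is a hypothetical hook the sanity config could
// accept; drivers with naming constraints plug in their own implementation.
type VolumeNameGenerator interface {
	GenerateName() string
}

// integerNameGenerator emits plain integer names, for providers that only
// accept numeric volume identifiers (the Linode-style constraint above).
type integerNameGenerator struct{ next uint64 }

func (g *integerNameGenerator) GenerateName() string {
	return fmt.Sprint(atomic.AddUint64(&g.next, 1))
}

func main() {
	var gen VolumeNameGenerator = &integerNameGenerator{}
	fmt.Println(gen.GenerateName(), gen.GenerateName())
}
```

The sanity tests would call GenerateName wherever they currently build a hard-coded volume name.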

Tests for Secrets

There are currently no tests for v0.2.0 secrets. Tests should be added for all of the secrets fields in the following calls:

CreateVolume
DeleteVolume
ControllerPublishVolume
ControllerUnpublishVolume
NodePublishVolume
NodeStageVolume

NodePublishVolume test doesn't set volume attributes returned by CreateVolume

The CSI spec is a little confusing: it says that the VolumeAttributes field is optional, but must match the attributes of the volume:

	// Attributes of the volume to publish. This field is OPTIONAL and
	// MUST match the attributes of the Volume identified by
	// `volume_id`.
	VolumeAttributes map[string]string

Broken release `v0.3.0-3`

Hi,

I was using v0.3.0-2 for https://github.com/digitalocean/csi-digitalocean with the CSI Spec v0.3.0 tagged release (see: https://github.com/container-storage-interface/spec/releases/tag/v0.3.0).

I decided to upgrade to v0.3.0-3 as it includes the latest changes. However, when I upgrade, my tests fail with the following error:

# github.com/kubernetes-csi/csi-test/pkg/sanity
../../go/pkg/mod/github.com/kubernetes-csi/csi-test@v0.3.0-3/pkg/sanity/node.go:122:10: undefined: csi.NodeServiceCapability_RPC_GET_VOLUME_STATS

When I check the CSI v0.3.0 release, there is no such identifier. It was added to CSI master later and is part of the just-released v1.0.0-rc1 (https://github.com/container-storage-interface/spec/releases/tag/v1.0.0-rc1).

When I update my CSI spec to v1.0.0-rc1 from v0.3.0, it still fails, albeit with a different error:

go: import "github.com/digitalocean/csi-digitalocean/driver" ->
        test ->
        import "github.com/kubernetes-csi/csi-test/pkg/sanity" ->
        import "github.com/container-storage-interface/spec/lib/go/csi/v0": looping trying to add package

The reason is that CSI v1.0.0-rc1 got rid of the /v0 suffix in its import path, but the csi-sanity tests still use the /v0 suffix, for example: https://github.com/kubernetes-csi/csi-test/blob/master/driver/driver.go#L32

Because of this, v0.3.0-3 is not compatible with either v0.3.0 or v1.0.0-rc1 of the CSI spec.

I think either a new v0.3.0-4 version should be released that removes the csi.NodeServiceCapability_RPC_GET_VOLUME_STATS identifier, because it's not part of v0.3.0, or a csi-test v1.0.0-rc1 should be released that is compatible with the CSI spec v1.0.0-rc1 released today.

support CreateVolumeRequest parameters

Some CSI drivers, like ceph-csi, depend on volume parameters. Unless I have missed something, those parameters currently cannot be set when using the csi-sanity command and package, and thus ceph-csi cannot be tested with it.

We should add a config file for parameters similar to what is already supported for secrets.
