
python-hpedockerplugin's Issues

Volume prune feature does not work in the legacy and managed V2 plugins.

  1. Create a few volumes. Don't mount any volume.

docker@csimbe13-b13:~$ docker volume ls
DRIVER VOLUME NAME
hpe:latest sum1
hpe:latest sum2
hpe:latest sum3

  2. Run the volume prune command.

docker@csimbe13-b13:~$ docker volume prune
WARNING! This will remove all volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0 B

  3. List all volumes.

docker@csimbe13-b13:~$ docker volume ls
DRIVER VOLUME NAME
hpe:latest sum1
hpe:latest sum2
hpe:latest sum3

Expected result: All the volumes must be removed.

Observed result: No volume was removed.

Note: This was tested and failed on both the legacy plugin and the managed V2 plugin.
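
A possible interim workaround, assuming the volumes really are unused, is to remove them explicitly with docker volume rm, which does go through the plugin. Sketch only; the dangling filter and xargs are standard Docker/GNU tooling:

docker volume ls -q -f dangling=true | xargs -r docker volume rm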

FC plugin: Volume mount operation fails intermittently with a single path when the multipath flags (use_multipath & enforce_multipath) are set to TRUE in hpe.conf.

OS platform: CentOS 7.3
Plugin tag: nilangekarswapnil/hpe3parplugin:2.0

The steps performed when the issue was observed are listed below:

  1. Created 4 volumes with different properties using the FC plugin.
  2. Mounted them and wrote data on them.
  3. Kept all the containers in a detached state.
  4. Created a binary file on each of the volumes.
  5. Took an md5 checksum of the binary file on each volume.
  6. Unmounted all volumes and disabled the plugin.
  7. Configured the iSCSI plugin in hpe.conf.
  8. Enabled the iSCSI plugin.
  9. Mounted all the volumes.
  10. Verified the checksum persistence.
  11. Created one new volume and mounted it.
  12. Unmounted all 5 volumes.
  13. Disabled the iSCSI plugin and enabled the FC plugin.
  14. Mounted all 5 volumes.
  15. Two mount requests failed with the below error:

HPE Docker Volume Plugin Makedir Failed: (u'Make directory failed exception is : %s', u'list index out of range')

Please find the detailed logs for one volume mount failure in the attachments here.
mount_failure.txt

Sample configs should show example URLs for 3PAR and VSA

The current sample config files provide no guidance on the format of the expected URLs:

[root@localhost config]# grep url hpe*
hpe.conf.sample.3par:hpe3par_api_url = <3par WSAPI URL>
hpe.conf.sample.lefthand:hpelefthand_api_url =

It would be clearer if actual URL examples were provided.

3par sample:
hpe3par_api_url = https://<3par IP address>:8080/api/v1

VSA sample:
hpelefthand_api_url = https://<VSA cluster VIP address>:8081/lhos
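
For instance, the sample files could ship with placeholder entries in exactly that format (the 192.0.2.x addresses below are documentation placeholders, not real arrays):

hpe3par_api_url = https://192.0.2.10:8080/api/v1
hpelefthand_api_url = https://192.0.2.20:8081/lhos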

Docker volume creation intermittently fails when the volume options size & dedup are specified together.

Docker volume creation intermittently fails when the volume options size & dedup are specified together:

root@cld6b3:~# docker volume create -d hpe --name vol-01 -o size=15 -o provisioning=dedup
Error response from daemon: create vol-01: a more specific error could not be determined
root@cld6b3:~#
root@cld6b3:~# docker volume create -d hpe --name vol-01 -o size=15 -o provisioning=dedup
vol-01
root@cld6b3:~#
root@cld6b3:~# docker volume inspect vol-01
[
    {
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/plugins/1123d27954d33d26190a18e8007ba4871225a41d9a5a6821779cd7a8978fab71/rootfs",
        "Name": "vol-01",
        "Options": {
            "provisioning": "dedup",
            "size": "15"
        },
        "Scope": "local"
    }
]
root@cld6b3:~#

create_dcv.txt

Mount fails when multipathing is enabled

When mounting a volume, the plug-in is trying to use mkfs to create a file system on “/dev/sdf”, but it fails. I reproduced this manually also. Mkfs succeeds manually when using the multipath device (i.e. “dm-3”), but fails when I attempt to use the individual devices (i.e. sda, sdb, etc.). I don’t know if this is new behavior in RHEL 7.2, but it seems that we should be using the multipath device when formatting the LUN.

Multipathing would be critical to any production environment, so I see this as an important enhancement.
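
As a rough manual illustration of the difference (the device names below come from this report and are host-specific), the filesystem should be created on the aggregated device-mapper device rather than on an individual path such as /dev/sdf:

# list the multipath maps and their component paths
sudo multipath -ll
# format the multipath device (dm-3 in this report) instead of /dev/sdf
sudo mkfs -F /dev/dm-3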

Repro steps:

  • Enable multipathing on RHEL 7.2.
  • Execute "docker run -it -v : --volume-driver hpe bash"

Log Output:
hpeplugin | 2016-09-16T14:58:42+0000 [twisted.python.log#info] "-" - - [16/Sep/2016:14:58:42 +0000] "POST /VolumeDriver.Capabilities HTTP/1.1" 404 233 "-" "Go-http-client/1.1"
hpeplugin | 2016-09-16T14:59:26+0000 [twisted.python.log#info] "-" - - [16/Sep/2016:14:59:26 +0000] "POST /VolumeDriver.Path HTTP/1.1" 200 29 "-" "Go-http-client/1.1"
hpeplugin | 2016-09-16 14:59:33.397 11 INFO os_brick.initiator.connectors.iscsi [-] Multipath discovery for iSCSI not enabled.
hpeplugin | 2016-09-16 14:59:33.398 11 INFO os_brick.initiator.connectors.iscsi [-] Trying to connect to iSCSI portal 192.0.2.1:3260
hpeplugin | 2016-09-16 14:59:33.430 11 INFO sh.command [-] <Command '/bin/mkdir -p /etc/h...(101 more)' call_args {'bg': False, 'timeo...(476 more)>: starting process
hpeplugin | 2016-09-16 14:59:33.445 11 INFO sh.command [-] <Command '/sbin/blkid -p -u fi...(17 more)' call_args {'bg': False, 'timeo...(476 more)>: starting process
hpeplugin | 2016-09-16 14:59:33.460 11 INFO sh.command [-] <Command '/sbin/mkfs -F /dev/s...(2 more)' call_args {'bg': False, 'timeo...(476 more)>: starting process
hpeplugin | 2016-09-16 14:59:33.475 11 ERROR hpedockerplugin.fileutil [-] (u'create file system failed exception is : %s', u"\n\n RAN: '/sbin/mkfs -F /dev/sdf'\n\n STDOUT:\n\n\n STDERR:\nmke2fs 1.42.13 (17-May-2015)\n/dev/sdf is apparently in use by the system; will not make a filesystem here!\n")
hpeplugin | 2016-09-16T14:59:33+0000 [_GenericHTTPChannelProtocol,1,] Unhandled Error
hpeplugin | Traceback (most recent call last):
hpeplugin | File "/usr/lib/python2.7/site-packages/twisted/web/server.py", line 241, in render
hpeplugin | body = resrc.render(self)
hpeplugin | File "/usr/lib/python2.7/site-packages/klein/resource.py", line 201, in render
hpeplugin | d = defer.maybeDeferred(_execute)
hpeplugin | File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
hpeplugin | result = f(*args, **kw)
hpeplugin | File "/usr/lib/python2.7/site-packages/klein/resource.py", line 195, in _execute
hpeplugin | **kwargs)
hpeplugin | --- ---
hpeplugin | File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
hpeplugin | result = f(*args, **kw)
hpeplugin | File "/usr/lib/python2.7/site-packages/klein/app.py", line 108, in execute_endpoint
hpeplugin | return endpoint_f(self._instance, *args, **kwargs)
hpeplugin | File "/usr/lib/python2.7/site-packages/klein/app.py", line 193, in _f
hpeplugin | return _call(instance, f, request, *a, **kw)
hpeplugin | File "/usr/lib/python2.7/site-packages/klein/app.py", line 35, in _call
hpeplugin | return f(instance, *args, **kwargs)
hpeplugin | File "/python-hpedockerplugin/hpedockerplugin/hpe_storage_api.py", line 445, in volumedriver_mount
hpeplugin | fileutil.create_filesystem(path.path)
hpeplugin | File "/python-hpedockerplugin/hpedockerplugin/fileutil.py", line 60, in create_filesystem
hpeplugin | raise exception.HPEPluginFileSystemException(reason=msg)
hpeplugin | hpedockerplugin.exception.HPEPluginFileSystemException: HPE Docker Volume Plugin File System error: (u'create file system failed exception is : %s', u"\n\n RAN: '/sbin/mkfs -F /dev/sdf'\n\n STDOUT:\n\n\n STDERR:\nmke2fs 1.42.13 (17-May-2015)\n/dev/sdf is apparently in use by the system; will not make a filesystem here!\n")
hpeplugin |
hpeplugin | 2016-09-16T14:59:33+0000 [twisted.python.log#info] "-" - - [16/Sep/2016:14:59:27 +0000] "POST /VolumeDriver.Mount HTTP/1.1" 500 6529 "-" "Go-http-client/1.1"
hpeplugin | 2016-09-16 14:59:33.790 11 INFO hpedockerplugin.hpe_storage_api [-] In volumedriver_unmount
hpeplugin | 2016-09-16 14:59:33.792 11 ERROR hpedockerplugin.hpe_storage_api [-](u'Volume unmount path info not found %s', u'testvol')
hpeplugin | 2016-09-16T14:59:33+0000 [_GenericHTTPChannelProtocol,1,] Unhandled Error
hpeplugin | Traceback (most recent call last):
hpeplugin | File "/usr/lib/python2.7/site-packages/twisted/web/server.py", line 241, in render
hpeplugin | body = resrc.render(self)
hpeplugin | File "/usr/lib/python2.7/site-packages/klein/resource.py", line 201, in render
hpeplugin | d = defer.maybeDeferred(_execute)
hpeplugin | File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
hpeplugin | result = f(*args, **kw)
hpeplugin | File "/usr/lib/python2.7/site-packages/klein/resource.py", line 195, in _execute
hpeplugin | **kwargs)
hpeplugin | --- ---
hpeplugin | File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
hpeplugin | result = f(*args, **kw)
hpeplugin | File "/usr/lib/python2.7/site-packages/klein/app.py", line 108, in execute_endpoint
hpeplugin | return endpoint_f(self._instance, *args, **kwargs)
hpeplugin | File "/usr/lib/python2.7/site-packages/klein/app.py", line 193, in _f
hpeplugin | return _call(instance, f, request, *a, **kw)
hpeplugin | File "/usr/lib/python2.7/site-packages/klein/app.py", line 35, in _call
hpeplugin | return f(instance, *args, **kwargs)
hpeplugin | File "/python-hpedockerplugin/hpedockerplugin/hpe_storage_api.py", line 195, in volumedriver_unmount
hpeplugin | raise exception.HPEPluginUMountException(reason=msg)
hpeplugin | hpedockerplugin.exception.HPEPluginUMountException: HPE Docker Volume Plugin Unmount Failed: (u'Volume unmount path info not found %s', u'testvol')
hpeplugin |
hpeplugin | 2016-09-16T14:59:33+0000 [twisted.python.log#info] "-" - - [16/Sep/2016:14:59:33 +0000] "POST /VolumeDriver.Unmount HTTP/1.1" 500 5283 "-" "Go-http-client/1.1"

iscsiadm logout errors when attempting to unmount volume

When stopping a container with a mounted volume, errors such as the one below are seen while trying to log out of the iSCSI session. It does appear the volume was unpresented from the 3PAR perspective (it no longer shows as exported), but it is unclear if something was left behind on the host system itself.

exceptions.TypeError: <twisted.python.failure.Failure oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
Command: iscsiadm -m node -T iqn.2000-05.com.3pardata:20210002ac019cc1 -p 172.28.30.1:3260 --logout
Exit code: 2
Stdout: u'Logging out of session [sid: 5, target: iqn.2000-05.com.3pardata:20210002ac019cc1, portal: 172.28.30.1,3260]\n'
Stderr: u'iscsiadm: Could not logout of [sid: 5, target: iqn.2000-05.com.3pardata:20210002ac019cc1, portal: 172.28.30.1,3260].\niscsiadm: initiator reported error (2 - session not found)\niscsiadm: Could not logout of all requested sessions\n'> is not JSON serializable

This was observed with a RHEL 7.2 host running the following compose file:

[root@rheldocker1 quick-start]# cat docker-compose.yml
hpedockerplugin:
  image: hub.docker.hpecorp.net/hpe-storage/hpedockerplugin:0.5.4.4-1
  container_name: hpeplugin
  net: host
  privileged: true
  volumes:
    - /dev:/dev
    - /run/docker/plugins:/run/docker/plugins
    - /etc/iscsi/initiatorname.iscsi:/etc/iscsi/initiatorname.iscsi
    - /lib/modules:/lib/modules
    - /var/lib/docker/:/var/lib/docker
    - /etc/hpedockerplugin/data:/etc/hpedockerplugin/data:shared
    - /etc/hpedockerplugin:/etc/hpedockerplugin
    - /var/run/docker.sock:/var/run/docker.sock

See attached log:
hpeplugin.log.txt

No error is displayed when a volume (with a child snapshot) is deleted more than once.

Plugin: dockerciuser/hpedockerplugin:test

Steps to Reproduce:

  1. Create a volume.
    $ docker volume create -d dockerciuser/hpedockerplugin:test --name test_volume -o size=5
    test_volume

  2. Create a snapshot of the above created volume.
    $ docker volume create -d dockerciuser/hpedockerplugin:test -o snapshotOf=test_volume test_snapshot
    test_snapshot

  3. Delete the volume and observe the proper error.
    $ docker volume rm test_volume
    Error response from daemon: unable to remove volume: remove test_volume: Err: Volume test_volume has one or more child snapshots - volume cannot be deleted!

  4. Delete the volume again. No error was displayed.
    $ docker volume rm test_volume
    test_volume

  5. Delete the volume again (third time). No error was displayed.
    $ docker volume rm test_volume
    test_volume

When a second volume is created whose name shares a prefix with the first volume's name, mounting and inspecting the first volume intermittently points to the second volume.

1. Listing all created volumes.
stack@cld13b14: ~ $ docker volume ls
DRIVER VOLUME NAME
hpe volume
hpe volume-1

2. Inspecting the first volume. Here, it pointed to the correct volume.
stack@cld13b14:~$ docker volume inspect volume
[
    {
        "Driver": "hpe",
        "Labels": {},
        "Mountpoint": "/etc/hpedockerplugin/data/hpedocker-ip-10.50.3.59:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac019d52-lun-0",
        "Name": "volume",
        "Options": {
            "provisioning": "full",
            "size": "5"
        },
        "Scope": "local"
    }
]

3. Inspecting the first volume again. Here, it pointed to the incorrect volume with the name volume-1.
stack@cld13b14:~$ docker volume inspect volume
[
    {
        "Driver": "hpe",
        "Labels": {},
        "Mountpoint": "/etc/hpedockerplugin/data/hpedocker-ip-10.50.3.59:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac019d52-lun-0",
        "Name": "volume-1",
        "Options": {
            "provisioning": "full",
            "size": "5"
        },
        "Scope": "local"
    }
]

stack@cld13b14:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
45046ea0c5cc hub.docker.hpecorp.net/hpe-storage/hpedockerplugin:v1.1.1.rc3 "/bin/sh -c ./plug..." 18 hours ago Up 18 hours sumit1
586fadad7311 quay.io/coreos/etcd:v2.2.0 "/etcd -name etcd0..." 5 days ago Up 5 days 0.0.0.0:2379-2380->2379-2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp etcd

4. Logs of plugin container.
stack@cld13b14:~$ docker logs 45046ea0c5cc

2017-05-10 06:17:44.454 20 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-10 06:17:44.456 20 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is volume
2017-05-10 06:17:44.456 20 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-10 06:17:44.459 20 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is volume
2017-05-10T06:17:44+0000 [twisted.python.log#info] "-" - - [10/May/2017:06:17:44 +0000] "POST /VolumeDriver.Get HTTP/1.1" 200 218 "-" "Go-http-client/1.1"
2017-05-10T06:17:44+0000 [twisted.python.log#info] "-" - - [10/May/2017:06:17:44 +0000] "POST /VolumeDriver.Capabilities HTTP/1.1" 404 233 "-" "Go-http-client/1.1"
2017-05-10 06:17:51.286 20 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-10 06:17:51.290 20 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is volume
2017-05-10 06:17:51.290 20 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-10 06:17:51.292 20 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is volume
2017-05-10T06:17:51+0000 [twisted.python.log#info] "-" - - [10/May/2017:06:17:51 +0000] "POST /VolumeDriver.Get HTTP/1.1" 200 221 "-" "Go-http-client/1.1"
2017-05-10T06:17:51+0000 [twisted.python.log#info] "-" - - [10/May/2017:06:17:51 +0000] "POST /VolumeDriver.Capabilities HTTP/1.1" 404 233 "-" "Go-http-client/1.1"

5. Mounting the second volume and writing some data to a file on this volume.
stack@cld13b14:~$ sudo docker run -it -v volume-1:/data1/ --volume-driver hpe busybox sh
/ # cd data1/
/data1 # ls
sumit
/data1 # vi sumit
/data1 # cat sumit
this is the data in volume-1
/data # exit

6. Mounting the first volume and reading the data from the file on this volume. Here, it displayed the data from the second volume.
stack@cld13b14:~$ sudo docker run -it -v volume:/data/ --volume-driver hpe busybox sh
/ # cd data/
/data # ls
sumit
/data # cat sumit
this is the data in volume-1
/data # exit

stack@cld13b14:~$ docker logs 45046ea0c5cc > logs.txt

Please refer to the attached file "logs.txt" for the container logs.
logs.txt

Two Docker environments (e.g. two swarms) using the same 3PAR, each targeting its own 3PAR domain, cause conflicts with volume names (multipath is enabled).

swarm1: plugin configured with CPG test_r5 (in domain TEST)
swarm2: plugin configured with CPG ucpa_r5 (in domain ucpa)

A volume named mpath1 is created in swarm1: OK
A volume named mpath1 is created in swarm2: OK

[root@linux etcda]# ssh 3paradm@3par1 showvv -showcols Name,Comment
Name Comment
.srdata --
admin --
dcv-6w2baSpdRkKS7T6onSjZDQ {"display_name": "plugin_vol_test2", "type": "Docker", "name": "eb0d9b69-2a5d-4642-92ed-3ea89d28d90d", "volume_id": "eb0d9b69-2a5d-4642-92ed-3ea89d28d90d"}
dcv-7isiIkXgQdWibsDMAT-Ssw {"display_name": "ucpa-mpath1", "type": "Docker", "name": "ee2b2222-45e0-41d5-a26e-c0cc013fd2b3", "volume_id": "ee2b2222-45e0-41d5-a26e-c0cc013fd2b3"}
dcv-FCe4VFcXQnGKHmsZmuj3dQ {"display_name": "clh-test06", "type": "Docker", "name": "1427b854-5717-4271-8a1e-6b199ae8f775", "volume_id": "1427b854-5717-4271-8a1e-6b199ae8f775"}
dcv-fHqwsI-DTFGvm5oGNlHMrA {"display_name": "mpath1", "type": "Docker", "name": "7c7ab0b0-8fc3-4c51-af9b-9a063651ccac", "volume_id": "7c7ab0b0-8fc3-4c51-af9b-9a063651ccac"}
dcv-LG-h.aq6TvmGgyHgCmjmVQ {"display_name": "clh-test07", "type": "Docker", "name": "2c6fe1f9-aaba-4ef9-8683-21e00a68e655", "volume_id": "2c6fe1f9-aaba-4ef9-8683-21e00a68e655"}
dcv-P7q3Df.MSDuW1vh1cmwwiQ {"display_name": "mpath2", "type": "Docker", "name": "3fbab70d-ff8c-483b-96d6-f875726c3089", "volume_id": "3fbab70d-ff8c-483b-96d6-f875726c3089"}
dcv-Rhd9W5QvSu65M6lUMyV6BQ {"display_name": "mpath1", "type": "Docker", "name": "46177d5b-942f-4aee-b933-a95433257a05", "volume_id": "46177d5b-942f-4aee-b933-a95433257a05"}
shared1 --
shared01 --
shared02 --
shared03 --

Attempting to mount mpath1 in swarm2:
docker run -it -v mpath1:/data1 --rm busybox sh

hpedockerplugin.exception.HPEPluginMountException: HPE Docker Volume Plugin Mount Failed: (u'connection info retrieval failed, error is: %s', u'Forbidden (HTTP 403) 118 - Volume not in same domain')

The error also occurs if I create a uniquely named volume (e.g. swarm2-mpath1) in swarm2 and try to mount it. This produces the same error as above.

If I delete mpath1 on swarm1 (the volume with the duplicate name), I can then mount swarm2-mpath1 as expected.
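
For reference, this is roughly how the two swarms are configured; a sketch of the relevant hpe.conf lines, assuming the CPG is selected via the hpe3par_cpg key (key name follows the 3PAR driver conventions and should be checked against the shipped sample config):

# swarm1 hpe.conf (domain TEST)
hpe3par_cpg = test_r5
# swarm2 hpe.conf (domain ucpa)
hpe3par_cpg = ucpa_r5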

Volume mount fails for old/new volumes after performing an upgrade of the managed V2 plugin.

A few volumes were mounted to containers and data was written to them.
Performed an upgrade of the plugin from the Store 1.1 plugin to the Store 2.0 plugin.
Unmounted the volumes; the containers were left in a DEAD state and could not be removed.
Created new volumes and observed that none of the volumes (old as well as new) could be mounted.
Please refer to the comment for the detailed error and logs.

Root cause of this issue: moby/moby#27381

The workaround is to restart the docker service or reboot the system and start etcd daemon again.

The current working steps to upgrade the plugin are described in the following document:
https://github.com/hpe-storage/python-hpedockerplugin/blob/master/docs/migrate-volume-from-legacy-to-managed-plugin.md
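
A minimal sketch of that workaround on a systemd-based host, assuming the etcd daemon runs in a container named etcd as in the quick-start setup:

sudo systemctl restart docker
docker start etcd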

Idle VLUN templates intermittently remain in the 3PAR array if unmount is performed with multipath.

Steps to Reproduce:

  1. Create 4 volumes.
  2. Mount all 4 volumes in parallel via 4 different sessions.
  3. Write data on all 4 volumes.
  4. Unmount all 4 volumes.

Observation 1: One of the volumes (volume-1) was unmounted successfully on the Docker client, but VLUN templates were still present in 3PAR for volume-1 even though the VLUNs themselves were removed.

  5. Delete volume-1. It failed.

Observation 2: HPE Docker Volume plugin Remove volume failed: (u'Err: Failed to remove volume %s, error is %s', u'clh-test07', u'resource in use')
This error is similar to what was observed by Chris.

  6. Mount volume-1 again. It failed.

Observation 3: hpedockerplugin.exception.HPEPluginMountException: HPE Docker Volume Plugin Mount Failed: (u'OS Brick connect volume failed, error is: %s', u'Could not login to any iSCSI portal.')

  7. Unexport volume-1 manually from the 3PAR array. Also, remove the host entry for volume-1. This removed the VLUN templates present in 3PAR.
  8. Mount all 4 volumes again. It was successful.
  9. Read data from all 4 volumes.
  10. Unmount all 4 volumes.

Observation 4: The same behavior as Observation 1 was seen for a different volume, volume-2. It was unmounted successfully on the Docker client, but VLUN templates were still present in 3PAR for volume-2 even though the VLUNs themselves were removed.

  11. Unexport volume-2 manually from the 3PAR array. Also, remove the host entry for volume-2. This removed the VLUN templates present in 3PAR.
  12. Delete all 4 volumes.
  13. All the volumes were deleted successfully.

Volume and its snapshot do not get deleted even after the retention period when snapshot deletion is attempted within the retention hours.

Plugin: dockerciuser/hpedockerplugin:test

Steps to Reproduce:

  1. Create a volume.
    $ docker volume create -d hpe --name svol2 -o size=5
    svol2

  2. Create a snapshot of the above created volume with expiration and retention hours.
    $ docker volume create -d hpe -o snapshotOf=svol2 ssnap2 -o expirationHours=1 -o retentionHours=1
    ssnap2

  3. Try deleting the volume within retention period and observe the proper error.
    $ docker volume rm svol2
    Error response from daemon: unable to remove volume: remove svol2: Err: Volume svol2 has one or more child snapshots - volume cannot be deleted!

    Delete the snapshot within retention period. No error was displayed.
    $ docker volume rm svol2/ssnap2
    svol2/ssnap2

  4. Wait for the retention period to complete and perform the delete operation after 1 hour.

$ dvol rm svol2/ssnap2
svol2/ssnap2

  5. Deletion of the snapshot was not successful.

docker@worker3:~$ dvol inspect svol2
[
    {
        "Driver": "hpe:latest",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/plugins/f83b0cf58e88c5d1f30ac1cb258a1cdacb028611f020d704786fa740df8b8173/rootfs",
        "Name": "svol2",
        "Options": {},
        "Scope": "local",
        "Status": {
            "Snapshots": [
                {
                    "Name": "ssnap2",
                    "ParentName": "svol2"
                }
            ]
        }
    }
]

Shared volume feature (not supported yet): VLUNs are still exported to the second host in 3PAR when the volume is unmounted from both Docker worker nodes.

Setup details: 3 manager and 3 worker nodes, multiple etcd host addresses with a secured connection
Docker version: 17.06.1-ee-1

  1. Create volume1 from one of the worker nodes.

  2. Mount volume1 on worker1 node.

stack@worker1:~$ docker run -it -d -v test2:/data1 --volume-driver hpe --rm --name mounter2 busybox /bin/sh
4903bebf83369771d7cd9a79c2772e68ed54fca79333ab079e55688eda05124a

  3. Mount volume2 on worker2 node.

stack@worker2:~$ docker run -it -d -v test2:/data1 --volume-driver hpe --rm --name mounter2 busybox /bin/sh
bace3b6503ff617e6ee57fdac6bc812652f42b4620cf78322dfe0c1576011e47

  4. Unmount volume1 from both the worker nodes.

stack@worker1:~$ docker attach mounter2
/ # exit

stack@worker2:~$ docker attach mounter2
/ # exit

  5. LUNs were still present on the worker2 node.

stack@worker1:~$ ls -lrt /dev/disk/by-path
total 0
lrwxrwxrwx 1 root root 9 Aug 17 04:52 pci-0000:03:00.0-scsi-0:1:0:0 -> ../../sda
lrwxrwxrwx 1 root root 9 Aug 17 04:52 pci-0000:03:00.0-scsi-0:1:0:2 -> ../../sdc
lrwxrwxrwx 1 root root 10 Aug 17 04:52 pci-0000:03:00.0-scsi-0:1:0:0-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Aug 17 04:52 pci-0000:03:00.0-scsi-0:1:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 9 Aug 17 04:52 pci-0000:03:00.0-scsi-0:1:0:1 -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug 17 04:52 pci-0000:03:00.0-scsi-0:1:0:1-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Aug 17 04:52 pci-0000:03:00.0-scsi-0:1:0:2-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Aug 17 04:52 pci-0000:03:00.0-scsi-0:1:0:0-part1 -> ../../sda1

stack@worker2:~$ ls -lrt /dev/disk/by-path
total 0
lrwxrwxrwx 1 root root 9 Aug 17 04:36 pci-0000:03:00.0-scsi-0:1:0:0 -> ../../sda
lrwxrwxrwx 1 root root 9 Aug 17 04:36 pci-0000:03:00.0-scsi-0:1:0:1 -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug 17 04:36 pci-0000:03:00.0-scsi-0:1:0:0-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Aug 17 04:36 pci-0000:03:00.0-scsi-0:1:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Aug 17 04:36 pci-0000:03:00.0-scsi-0:1:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 9 Aug 17 05:05 ip-10.50.3.59:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac019d52-lun-0 -> ../../sdc
lrwxrwxrwx 1 root root 9 Aug 17 05:05 ip-10.50.3.60:3260-iscsi-iqn.2000-05.com.3pardata:23210002ac019d52-lun-0 -> ../../sdd

  6. Observe VLUNs in the 3PAR array. VLUNs were no longer exported to the worker1 host but were still exported to worker2.

CSIM-8K02_MXN6072AC7 cli% showvlun -host worker1
Active VLUNs
Invalid host name: worker1

CSIM-8K02_MXN6072AC7 cli% showvlun -host worker2
Active VLUNs
Lun VVName HostName ---------Host_WWN/iSCSI_Name--------- Port Type Status ID
0 dcv-E92yRK.rQNCVZHbtYTGVnQ worker2 iqn.1993-08.org.debian:01:a436c68292a 2:2:1 matched set active 1
0 dcv-E92yRK.rQNCVZHbtYTGVnQ worker2 iqn.1993-08.org.debian:01:a436c68292a 3:2:1 matched set active 1

2 total

VLUN Templates
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
0 dcv-E92yRK.rQNCVZHbtYTGVnQ worker2 ---------------- 2:2:1 matched set
0 dcv-E92yRK.rQNCVZHbtYTGVnQ worker2 ---------------- 3:2:1 matched set

2 total

Please find the logs from worker1 and worker2 nodes attached here.

issue_worker1.txt
issue_worker2.txt
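
A quick way to confirm the leftover state on worker2 from the host side, using standard iscsiadm and ls commands (nothing plugin-specific):

sudo iscsiadm -m session
ls -l /dev/disk/by-path/ | grep iscsi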

Dangling snapshot entry is displayed in volume inspect even after its expiration period.

Plugin: dockerciuser/hpedockerplugin:test

Steps to Reproduce:

  1. Create a volume.
    $ docker volume create -d hpe --name svol2 -o size=5
    svol2

  2. Create a snapshot of the above created volume with expiration hours.
    $ docker volume create -d hpe -o snapshotOf=svol2 ssnap2 -o expirationHours=1
    ssnap2

  3. Wait for the expiration period to complete and verify the snapshot is wiped out from 3par array.

  4. Inspect the volume and observe the dangling entry of the snapshot. (etcd is not updated when the snapshot expires on the 3PAR array.)
    docker@worker3:~$ dvol inspect svol2
    [
    {
    "Driver": "hpe:latest",
    "Labels": null,
    "Mountpoint": "/var/lib/docker/plugins/f83b0cf58e88c5d1f30ac1cb258a1cdacb028611f020d704786fa740df8b8173/rootfs",
    "Name": "svol2",
    "Options": {},
    "Scope": "local",
    "Status": {
    "Snapshots": [
    {
    "Name": "ssnap2",
    "ParentName": "svol2"
    }
    ]
    }
    }
    ]

tox failure in Jenkins build

The tox command in the Jenkins build is failing.
The detailed errors are given below:

10:07:00 pep8 runtests: PYTHONHASHSEED='1625958770'
10:07:00 pep8 runtests: commands[0] | flake8 hpedockerplugin test
10:07:01 hpedockerplugin/hpe_storage_api.py:41:1: F401 'time' imported but unused
10:07:01 import time
10:07:01 ^
10:07:01 hpedockerplugin/hpe_plugin_service.py:122:17: E999 SyntaxError: invalid syntax
10:07:01 print cfg
10:07:01 ^
10:07:01 hpedockerplugin/hpe/san_driver.py:82:50: E999 SyntaxError: invalid syntax
10:07:01 print "Error from iscsiadm -m discovery: ", targetip
10:07:01 ^
10:07:01 test/test_hpe_plugin.py:73:37: E999 SyntaxError: invalid syntax
10:07:01 print 'self.twistd_pid: ', self.twistd_pid
10:07:01 ^
10:07:01 test/test_hpe_plugin.py:146:9: E722 do not use bare 'except'
10:07:01 except:
10:07:01 ^
10:07:01 ERROR: InvocationError: '/home/jenkins/jenkins_slave/workspace/docker-3par-iscsi-plugin-build/.tox/pep8/bin/flake8 hpedockerplugin test'
10:07:01 ___________________________________ summary ____________________________________
10:07:01 ERROR: pep8: commands failed

Regards,
Tushar

Only one snapshot gets deleted when multiple snapshots are deleted simultaneously.

Plugin name: dgavand/hpedockerplugin:snap_v03

Steps to Reproduce:

  1. Create a volume (volume-1) with optional parameters.
    docker@worker3: $ docker volume create -d hpe --name volume-1 -o size=10 -o provisioning=full
    volume-1

  2. Create a snapshot from volume-1.
    docker@worker3:~$ docker volume create -d hpe --name snap-volume-1 -o snapshotOf=volume-1
    snap-volume-1

  3. Create another snapshot from volume-1.
    docker@worker3:~$ docker volume create -d hpe --name snap-volume-2 -o snapshotOf=volume-1
    snap-volume-2

docker@worker3:~$ dvol inspect volume-1
[
    {
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/plugins/9b280bd79fa40a7928a31c4ee971436487248e7612926ee8ab013e8e8dca0950/rootfs",
        "Name": "volume-1",
        "Options": {
            "provisioning": "full",
            "size": "10"
        },
        "Scope": "local",
        "Status": {
            "Snapshots": [
                {
                    "Name": "snap-volume-1",
                    "ParentName": "volume-1"
                },
                {
                    "Name": "snap-volume-2",
                    "ParentName": "volume-1"
                }
            ]
        }
    }
]

CSIM-8K02_MXN6072AC7 cli% showvv -cpg cpg_test2
-Rsvd(MiB)- -(MiB)-
Id Name Prov Compr Dedup Type CopyOf BsId Rd -Detailed_State- Snp Usr VSize
71634 dcv-tqL9yQKZSq.h8Dlas00i4g cpvv NA NA base --- 71634 RW normal 512 10240 10240
71635 dcs-igxE0mJ6QFO2jzPSWR4z5Q snp NA NA vcopy dcv-tqL9yQKZSq.h8Dlas00i4g 71634 RO normal -- -- 10240
71639 dcs-FRoYIO10RwqCTesjvLODsw snp NA NA vcopy dcv-tqL9yQKZSq.h8Dlas00i4g 71634 RO normal -- -- 10240

  4. Delete both the snapshots together.

docker@worker3:~$ docker volume rm volume-1/snap-volume-1 volume-1/snap-volume-2
volume-1/snap-volume-1
volume-1/snap-volume-2

  5. Only one snapshot (snap-volume-2) got deleted.

docker@worker3: $ dvol inspect volume-1
[
    {
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/plugins/9b280bd79fa40a7928a31c4ee971436487248e7612926ee8ab013e8e8dca0950/rootfs",
        "Name": "volume-1",
        "Options": {
            "provisioning": "full",
            "size": "10"
        },
        "Scope": "local",
        "Status": {
            "Snapshots": [
                {
                    "Name": "snap-volume-1",
                    "ParentName": "volume-1"
                }
            ]
        }
    }
]
docker@worker3:~$

CSIM-8K02_MXN6072AC7 cli% showvv -cpg cpg_test2
-Rsvd(MiB)- -(MiB)-
Id Name Prov Compr Dedup Type CopyOf BsId Rd -Detailed_State- Snp Usr VSize
71634 dcv-tqL9yQKZSq.h8Dlas00i4g cpvv NA NA base --- 71634 RW normal 512 10240 10240
71635 dcs-igxE0mJ6QFO2jzPSWR4z5Q snp NA NA vcopy dcv-tqL9yQKZSq.h8Dlas00i4g 71634 RO normal -- -- 10240

Expected Result:
Both the snapshots must be deleted.
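
Until this is fixed, removing the snapshots one per command may avoid the problem, since the failure is only seen when both names are passed to a single rm (sketch using the same names as above):

docker volume rm volume-1/snap-volume-1
docker volume rm volume-1/snap-volume-2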

Multipath doesn't work in HPE Docker plugin V2

Installed multipathd, open-iscsi and xfsprogs.

Disabled the plugin, turned on multipath, and enabled the plugin.

stack@csimbe13-b13:~$ sudo docker volume create -d hpestorage/hpedockervolumeplugin:1.0 --name svol2 -o size=30 -o flash-cache=true
svol2

stack@csimbe13-b13:~$ docker volume ls
DRIVER VOLUME NAME
hpestorage/hpedockervolumeplugin:1.0 svol2

stack@csimbe13-b13:~$ sudo docker run -it -v svol2:/data1/ --rm busybox sh
docker: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/3880d433b4abc371a04467ef0b749328a0159cb19e2aaccbb5e9201896e3b0ed/rootfs': VolumeDriver.Mount
oslo_concurrency.processutils.ProcessExecutionError: [Errno 2] No such file or directory
Command: multipathd show status
Exit code: -
Stdout: None
Stderr: None


Uninstalled both multipathd and open-iscsi.

Found the below error while enabling the plugin:

stack@csimbe13-b13:~$ docker plugin enable hpestorage/hpedockervolumeplugin:1.0
Error response from daemon: rpc error: code = 2 desc = oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:359: container init caused "rootfs_linux.go:54: mounting \"/sbin/iscsiadm\" to rootfs \"/var/lib/docker/plugins/3880d433b4abc371a04467ef0b749328a0159cb19e2aaccbb5e9201896e3b0ed/rootfs\" at \"/sbin/ia\" caused \"stat /sbin/iscsiadm: no such file or directory\"""


Installed open-iscsi again.

Enabled the plugin.

stack@csimbe13-b13:~$ sudo docker volume create -d hpestorage/hpedockervolumeplugin:1.0 --name svol3 -o size=30
svol3

stack@csimbe13-b13:~$ docker volume ls
DRIVER VOLUME NAME
hpestorage/hpedockervolumeplugin:1.0 svol3

stack@csimbe13-b13:~$ sudo docker run -it -v svol3:/data1/ --rm busybox sh
docker: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/3880d433b4abc371a04467ef0b749328a0159cb19e2aaccbb5e9201896e3b0ed/rootfs': VolumeDriver.Mount:
oslo_concurrency.processutils.ProcessExecutionError: [Errno 2] No such file or directory
Command: multipathd show status
Exit code: -
Stdout: None
Stderr: None
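
For reference, the host prerequisites these errors point at can be installed on Ubuntu roughly as follows (package and service names are the usual Ubuntu ones and may differ on other distributions):

sudo apt-get install -y multipath-tools open-iscsi xfsprogs
sudo systemctl enable --now multipathd
sudo systemctl enable --now iscsid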

Sample configs should provide driver entry

With the move to delivering the plugin in a container, the driver path should be directly specified in both the 3PAR and VSA sample config files.

For example, instead of the blank entry in the 3PAR sample config:

hpedockerplugin_driver =

The actual standard path should be provided for 3par:

hpedockerplugin_driver = hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver

And the standard path should be provided for VSA/lefthand:

hpedockerplugin_driver = hpedockerplugin.hpe.hpe_lefthand_iscsi.HPELeftHandISCSIDriver

With the move to containers, I would expect this setting will not need to be modified 99.9% of the time and should be kept in sync with the plugin container. Normal users won't be snooping inside the container to specify an alternate driver path.

Docker copy operation fails when files are copied from container to Docker host and vice-versa

Attempts to mount a volume multiple times will result in an error from the plugin.

One example of where this occurs is when a 'docker cp ...' is used to copy a file to a volume that is already attached to a container.

The plugin throws an error stating that the volume is already in use, and the volume is then unmounted by the cleanup stage (triggered by the exception being thrown).
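
A minimal repro sketch based on the description above (the volume and container names are illustrative):

docker volume create -d hpe repro_vol
docker run -d --name holder -v repro_vol:/data --volume-driver hpe busybox sleep 3600
echo hello > local.txt
docker cp local.txt holder:/data/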

Can't create a volume using the size attribute with the Mesos/Marathon UI

When I add the size parameter to the dvdi options, I get a Marathon error:
There was a problem with your configuration
general: must be undefined for Docker containers

I am able to create a volume of specific size with dvdcli:
dvdcli create --volumedriver=hpe --volumename=testvol10 --volumeopts=size=5

This appears to be a potential issue with the Mesosphere/Marathon UI, but I'm opening it here so that we can track and resolve it appropriately.
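
For comparison, the equivalent request issued directly through the Docker CLI, which the plugin accepts as shown in other reports above:

docker volume create -d hpe --name testvol10 -o size=5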

Bad hpe3par_api_url takes ~11 minutes to timeout. Error message not user friendly.

If a bad entry is provided for hpe3par_api_url (perhaps good IP but wrong port) the plugin should quickly report a clear error message.

I tried an entry like the following, which is otherwise correct except that the port was missing:
hpe3par_api_url = https://172.28.6.80/api/v1

Issue details

That resulted in several symptoms that need to be improved:

  1. Plugin initialization just sat there for 11 minutes without any clear progress
  2. The error message wasn't provided as a clean "error connecting to xxx" message

See below for the output from docker logs in this case, where the true root cause (a connection error) is somewhat obscured by the way the original exception is caught and handled. Instead of errors like "'ConnectionError' object has no attribute 'get_description'", it should just be a simple ConnectionError message:

full log below:
2016-09-09 17:40:29.051 11 INFO hpedockerplugin.hpe_storage_api [-] Initialize Volume Plugin
2016-09-09 17:51:31.443 11 ERROR hpedockerplugin.hpe_storage_api [-] (u'hpeplugin_driver do_setup failed, error is: %s', u"'ConnectionError' object has no attribute 'get_description'")
2016-09-09 17:51:31.444 11 CRITICAL hpe_storage_api [-] HPEPluginNotInitializedException: HPE Docker Volume plugin not ready.
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api Traceback (most recent call last):
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api File "/usr/bin/twistd", line 18, in <module>
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api run()
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api File "/usr/lib/python2.7/site-packages/twisted/scripts/twistd.py", line 29, in run
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api app.run(runApp, ServerOptions)
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api File "/usr/lib/python2.7/site-packages/twisted/application/app.py", line 643, in run
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api runApp(config)
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api File "/usr/lib/python2.7/site-packages/twisted/scripts/twistd.py", line 25, in runApp
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api _SomeApplicationRunner(config).run()
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api File "/usr/lib/python2.7/site-packages/twisted/application/app.py", line 374, in run
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api self.application = self.createOrGetApplication()
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api File "/usr/lib/python2.7/site-packages/twisted/application/app.py", line 434, in createOrGetApplication
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api ser = plg.makeService(self.config.subOptions)
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api File "/python-hpedockerplugin/twisted/plugins/hpedockerplugin_plugin.py", line 25, in makeService
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api return HpeFactory(options["cfg"]).start_service()
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api File "/python-hpedockerplugin/hpedockerplugin/hpe_plugin_service.py", line 130, in start_service
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api hpepluginservice = hpedockerplugin.setupservice()
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api File "/python-hpedockerplugin/hpedockerplugin/hpe_plugin_service.py", line 114, in setupservice
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api VolumePlugin(self._reactor, hpedefaultconfig).app.resource()))
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api File "/python-hpedockerplugin/hpedockerplugin/hpe_storage_api.py", line 83, in __init__
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api raise exception.HPEPluginNotInitializedException(reason=msg)
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api HPEPluginNotInitializedException: HPE Docker Volume plugin not ready.
2016-09-09 17:51:31.444 11 ERROR hpe_storage_api

Recommended solution
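
One possible quick check is to probe the configured WSAPI URL with a short timeout before (re)starting the plugin, so a bad IP or missing port fails fast instead of after ~11 minutes (IP taken from the example above, port 8080 from the documented 3PAR URL format; sketch only):

curl -k --max-time 10 https://172.28.6.80:8080/api/v1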

Discrepancy in the output of inspect between source and cloned docker volume

The cloned volume does not inherit the source volume's options or properties.

docker volume create -d hpe --name vol -o size=10 -o flash-cache=True

vol

docker volume inspect vol

[
    {
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/plugins/ac0298e054e4769ad622557dac226ba246079c187a05f26317d12a52ffa72150/rootfs",
        "Name": "vol",
        "Options": {
            "flash-cache": "True",
            "size": "10"
        },
        "Scope": "local"
    }
]

After the clone operation, docker inspect shows only the "cloneOf" option instead of all the options/properties of the source volume:

docker volume create -d hpe --name vol-clone -o cloneOf=vol

vol-clone

docker volume inspect vol-clone

[
    {
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/plugins/ac0298e054e4769ad622557dac226ba246079c187a05f26317d12a52ffa72150/rootfs",
        "Name": "vol-clone",
        "Options": {
            "cloneOf": "vol"
        },
        "Scope": "local"
    }
]

Error installing plugin: "/lib/x86_64-linux-gnu: no such file or directory"

Installing the new plugin from the Docker Store for the first time, I get the following error:

[root@XXX ~]# docker plugin install store/hpestorage/hpedockervolumeplugin:1.0
Plugin "store/hpestorage/hpedockervolumeplugin:1.0" is requesting the following privileges:
 - network: [host]
 - mount: [/dev]
 - mount: [/run/lock]
 - mount: [/var/lib]
 - mount: [/etc]
 - mount: [/var/run/docker.sock]
 - mount: [/root/.ssh]
 - mount: [/sys]
 - mount: [/sbin/iscsiadm]
 - mount: [/lib/modules]
 - mount: [/lib/x86_64-linux-gnu]
 - allow-all-devices: [true]
 - capabilities: [CAP_SYS_ADMIN CAP_SYS_RESOURCE CAP_MKNOD CAP_SYS_MODULE]
Do you grant the above permissions? [y/N] y
1.0: Pulling from store/hpestorage/hpedockervolumeplugin
ca38e679a5a5: Download complete
Digest: sha256:b117dce7cc3463bc20f37d09bf3beb51b6bd5a48a15318701c5514aef883134b
Status: Downloaded newer image for store/hpestorage/hpedockervolumeplugin:1.0
Error response from daemon: rpc error: code = 2 desc = oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/lib/x86_64-linux-gnu\\\" to rootfs \\\"/var/lib/docker/plugins/59b740867863352afe743746e8ca261f44bc40d95680c549c22fc85ab7bfe04b/rootfs\\\" at \\\"/lib64\\\" caused \\\"**stat /lib/x86_64-linux-gnu: no such file or directory**\\\"\""

When a second volume is created with the same name as the first volume, it does not throw any error/exception.

1. Created a volume with name "volume".
stack@csimbe13-b14:~$ sudo docker volume create -d hpe --name volume
volume

stack@csimbe13-b14:~$ sudo docker volume ls
DRIVER VOLUME NAME
hpe volume

2. Created a second volume with the same name "volume" but with a different size. It didn't throw any error; only the volume name was displayed. It should throw an error or exception such as "volume with name 'volume' already exists".
stack@csimbe13-b14:~$ sudo docker volume create -d hpe --name volume -o size=10
volume

stack@csimbe13-b14:~$ sudo docker volume ls
DRIVER VOLUME NAME
hpe volume

3. Observed that the size and volume properties were not changed. The volume name was still mapped to the first volume, which was expected.
stack@csimbe13-b14:~$ sudo docker volume inspect volume
[
    {
        "Driver": "hpe",
        "Labels": {},
        "Mountpoint": "/",
        "Name": "volume",
        "Options": {},
        "Scope": "local"
    }
]

stack@csimbe13-b14:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cce6a21dc8b9 sandanar/hpe-storage:v1.1.1.rc4 "/bin/sh -c ./plug..." 9 minutes ago Up 9 minutes sumit1
4aca101ac63b quay.io/coreos/etcd:v2.2.0 "/etcd -name etcd0..." 10 minutes ago Up 10 minutes 0.0.0.0:2379-2380->2379-2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp etcd

4. Please refer to the container logs below.
stack@csimbe13-b14:~$ sudo docker logs cce6a21dc8b9

2017-05-24 11:28:49.086 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-24 11:28:49.089 19 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is volume
2017-05-24 11:28:49.089 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-24 11:28:49.091 19 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is volume
2017-05-24T11:28:49+0000 [twisted.python.log#info] "-" - - [24/May/2017:11:28:48 +0000] "POST /VolumeDriver.Get HTTP/1.1" 200 104 "-" "Go-http-client/1.1"
2017-05-24 11:29:03.347 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-24T11:29:03+0000 [twisted.python.log#info] "-" - - [24/May/2017:11:29:03 +0000] "POST /VolumeDriver.List HTTP/1.1" 200 107 "-" "Go-http-client/1.1"
2017-05-24T11:29:03+0000 [twisted.python.log#info] "-" - - [24/May/2017:11:29:03 +0000] "POST /VolumeDriver.Capabilities HTTP/1.1" 404 233 "-" "Go-http-client/1.1"
2017-05-24 11:29:14.146 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-24 11:29:14.149 19 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is volume
2017-05-24 11:29:14.149 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-24 11:29:14.151 19 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is volume
2017-05-24T11:29:14+0000 [twisted.python.log#info] "-" - - [24/May/2017:11:29:13 +0000] "POST /VolumeDriver.Get HTTP/1.1" 200 104 "-" "Go-http-client/1.1"
2017-05-24T11:29:14+0000 [twisted.python.log#info] "-" - - [24/May/2017:11:29:13 +0000] "POST /VolumeDriver.Capabilities HTTP/1.1" 404 233 "-" "Go-http-client/1.1"
2017-05-24 11:29:14.156 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-24 11:29:14.157 19 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is volume
2017-05-24 11:29:14.158 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-24 11:29:14.159 19 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is volume
2017-05-24T11:29:14+0000 [twisted.python.log#info] "-" - - [24/May/2017:11:29:13 +0000] "POST /VolumeDriver.Path HTTP/1.1" 200 29 "-" "Go-http-client/1.1"
2017-05-24T11:30:14+0000 [-] Timing out client: UNIXAddress(None)

LUN ID is different for every VLUN in 3PAR if mounting is performed on a volume with multipath.

Enabled multipath in hpe.conf

  1. Create a volume.
    stack@csimbe13-b13:~$ sudo docker volume create -d hpe --name svol1
    svol1

  2. Mount this volume.
    stack@csimbe13-b13:~$ sudo docker run -it -v svol1:/data1/ --rm busybox sh
    svol1 is mounted successfully.

  3. Observe the VLUNs created in 3PAR.

CSIM-EOS07_1611168 cli% showvlun
Active VLUNs
Lun VVName HostName --------Host_WWN/iSCSI_Name-------- Port Type Status ID
0 dcv-a..tqo32SYynMhU84A-4zw csimbe13-b05 iqn.1994-05.com.redhat:be258d546d5d 0:2:1 matched set active 1
1 dcv-a..tqo32SYynMhU84A-4zw csimbe13-b05 iqn.1994-05.com.redhat:be258d546d5d 0:2:2 matched set active 1

2 total
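
On the Docker host, the two paths reported above should collapse into a single multipath device; a quick way to check, assuming the standard multipath-tools and lsscsi utilities are installed:

sudo multipath -ll
lsscsi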

Mounting a second volume fails when the first volume is still mounted with multipath.

multiple_mounts.txt

1. Create volume1 and mount it.

[docker@csimbe13b05 ~]$ docker run -it -v mpath1:/data1 --rm busybox sh
/ #

2. Create volume2 from a different session. Don't unmount volume1.

3. Mount volume2 also.

[docker@csimbe13b05 ~]$ docker run -it -v mpath2:/data1 --rm busybox sh

hpedockerplugin.exception.HPEPluginFileSystemException: HPE Docker Volume Plugin File System error: (u'create file system failed exception is : %s', u'\n\n RAN: /sbin/mkfs -F /dev/sde\n\n STDOUT:\n\n\n STDERR:\nmke2fs 1.43 (17-May-2016)\n/dev/sde is apparently in use by the system; will not make a filesystem here!\n')

Notes:

  1. Logs are attached with filename multiple_mounts.txt
  2. Please look for volume1 as mpath1 and volume2 as mpath2.
  3. This issue was reproduced on RHEL 7.3 with managed plugin V2.

quick-start: containerize.sh exits with an error code (image not created)

436 git clone https://github.com/hpe-storage/python-hpedockerplugin.git
437 cd python-hpedockerplugin/
440 ./containerize.sh 2>&1 | tee ~/containerize.log

Last lines:
Processing requests-2.12.5.tar.gz
Writing /tmp/easy_install-pbAoxz/requests-2.12.5/setup.cfg
Running requests-2.12.5/setup.py -q bdist_egg --dist-dir /tmp/easy_install-pbAoxz/requests-2.12.5/egg-dist-tmp-g_FTII
warning: no files found matching 'test_requests.py'
creating /usr/lib/python2.7/site-packages/requests-2.12.5-py2.7.egg
Extracting requests-2.12.5-py2.7.egg to /usr/lib/python2.7/site-packages
Adding requests 2.12.5 to easy-install.pth file

Installed /usr/lib/python2.7/site-packages/requests-2.12.5-py2.7.egg
error: amqp 1.4.9 is installed but amqp!=2.1.4,>=2.1.0 is required by set(['oslo.messaging'])
The command '/bin/sh -c apk add --virtual /tmp/.temp --no-cache --update build-base g++ gcc libffi-dev linux-headers make openssl openssh-client openssl-dev python-dev && wget https://pypi.python.org/packages/69/17/eec927b7604d2663fef82204578a0056e11e0fc08d485fdb3b6199d9b590/pyasn1-0.2.3.tar.gz#md5=79f98135071c8dd5c37b6c923c51be45 && tar xvzf pyasn1-0.2.3.tar.gz && cd pyasn1-0.2.3 && python setup.py install && rm -rf pyasn1-0.2.3 && cd /python-hpedockerplugin && pip install --upgrade . && python setup.py install && apk del /tmp/.temp && rm -rf /var/cache/apk/*' returned a non-zero code: 1

this is RHEL-7.3 (Red Hat Enterprise Linux Server release 7.3 (Maipo))

containerize.zip

iSCSI plugin: Volume mount request fails when host entry is already present in 3PAR array.

Install the iSCSI plugin with the hpestorage/hpedockervolumeplugin:2.0 tag and perform a volume mount after volume creation.

This issue is only reproducible when there is a host entry already present in the 3PAR array.

stack@worker2:~$ docker volume create -d hpe --name sum2
sum2

[docker@csimbe13-b03 ~]$ docker run -it -v sum2:/data1 --volume-driver hpe --rm busybox sh
hpedockerplugin.exception.HPEPluginMountException: HPE Docker Volume Plugin Mount Failed: (u'OS Brick connect volume failed, error is: %s', u'Could not login to any iSCSI portal.')

Please find the logs attached here.
recreate.txt

QoS: Cloned volume is not added as a member of the VVSet of the source volume.

OS platform: Ubuntu 16.04 with FC Backend
Plugin tag: dockerciuser/hpedockerplugin:test (Date : 12/12/2017)
Docker Version : 17.06.1-ee-2 / API version: 1.30

The steps performed when the issue was observed are listed below:

  1. Created a docker volume with QoS option:
    docker volume create -d hpe --name qos_vol1 -o qos-name=vvk_vvset

  2. Verified the Docker volume is a member of the respective VVset in 3PAR.

  3. Created clone of the same volume:
    docker volume create -d hpe --name clone_qos_vol1 -o cloneOf=qos_vol1

  4. Verified the volume was created, but it was not added as a member of the respective VVset (in 3PAR).

Usage guide has no information about driver option 'mount-volume'

The driver option 'mount-volume' is displayed as a supported option by the HPE 3PAR plugin.

docker@worker3:~$ dvol create -d hpe -o help
Error response from daemon: create 1ac12c608434ab026c3136d5adb27b2f4419575131ae994e0a9da84e680ae40d: create volume failed, error is: help is not a valid option. Valid options are: ['mount-volume', 'size', 'provisioning', 'flash-cache', 'cloneOf', 'snapshotOf', 'snapCloneOf', 'expirationHours', 'retentionHours']

However, there is no instruction about 'mount-volume' in the usage guide, and its use case is still not known.

Workflow that mounts a volume, writes/reads data, unmounts it, and removes it fails intermittently

Docker Plugin Details -
[root@csimbe13-b03 shashi]# docker plugin inspect hpe | grep 2.0
"sha256:afc35e8f4cd31fa1613afe49fc46957defcc553ed9d40219bd1b2f2c2b0dc966"
"Id": "c2e280f215cf7aa3b60963d6b52982963b9eba7008e3a103aa067ec202613b77",
"PluginReference": "docker.io/hpestorage/hpedockervolumeplugin:2.0.1",
[root@csimbe13-b03 shashi]#

Array Details -
Component Name Version
CLI Server 3.3.1.215
CLI Client 3.3.1.215
System Manager 3.3.1.215
Kernel 3.3.1.215
TPD Kernel Code 3.3.1.215
CSIM-EOS07_1611168 cli%

Steps to reproduce

  1. "Create a shell script ""test_script.sh"" with below commands:

volume=<volume_name>
docker volume create -d hpe $volume
docker run -it -v $volume:/data1 --rm --volume-driver hpe ubuntu bash -c 'echo ""data"" > /data1/test.txt '
docker run -it -v $volume:/data1 --rm --volume-driver hpe ubuntu bash -c 'cat /data1/test.txt'
docker volume remove $volume

  1. On Docker 17.06 or later, run below command:

volume=<volume_name>
docker volume create -d hpe $volume
docker run -it --mount src=$volume,dst=/data1,volume-driver=hpe --rm ubuntu bash -c 'echo ""data"" > /data1/test.txt '
docker run -it --mount src=$volume,dst=/data1,volume-driver=hpe --rm ubuntu bash -c 'cat /data1/test.txt'
docker volume remove $volume"

  1. "Run test_script.sh:

$ sh -x ./test_script.sh" Script run must be successful.
4. "Run test_script.sh script for more than 10 times.

Expected Results - Script run must be successful.
Actual Result - Script failed once out of 10 RUNs.

Error Details -
hpedockerplugin.exception.HPEPluginMountException: HPE Docker Volume Plugin Mount Failed: (u'OS Brick connect volume failed, error is: %s', u"Unexpected error while running command.\nCommand: /lib/udev/scsi_id --page 0x83 --whitelisted /dev/disk/by-path/pci-0000:08:00.2-fc-0x20110002ac002ba0-lun-1\nExit code: 1\nStdout: u''\nStderr: u''")

Note: With the 3.2.2.MU4 array version, the below error is seen.
hpedockerplugin.exception.HPEPluginMountException: HPE Docker Volume Plugin Mount Failed: (u'connection info retrieval failed, error is: %s', u'Not found (HTTP 404) 17 - host does not exist')

TC_Docker_3PAR_FC_017.log

Volume mount operation fails intermittently with iSCSI with a 'Creating filesystem' error.

OS platform: Ubuntu 16.04 with iSCSI Backend
Plugin tag: sandanar/hpedockerplugin:2.0.2
Docker Version : 17.06.1-ee-2 / API version: 1.30

The steps performed when the issue was observed are listed below:

  1. Ran the CHO script on the Ubuntu system with the iSCSI multipath backend for 12 hours.

Observed multiple failures with below error:

hpedockerplugin.exception.HPEPluginFileSystemException: HPE Docker Volume Plugin File System error: (u'create file system failed exception is : %s', u'\n\n RAN: /sbin/mkfs -F /dev/sdb\n\n STDOUT:\nDiscarding device blocks: 4096/2359296\x08\x08\x08\x08\x08\x08\x08\x08\x08\x08\x08\x08\x08\x08\x08 \x08\x08\x08\x08\x08\x08\x08\x08\x08\x08\x08\x08\x08\x08\x08failed - Remote I/O error\nCreating filesystem with 2359296 4k blocks and 589824 inodes\nFilesystem UUID: 2ea67058-366d-4f09-a25d-2bdab089c222\nSuperblock backups stored on blocks: \n\t32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632\n\nAllocating group tables: 0/72\x08\x08\x08\x08\x08 \x08\x08\x08\x08\x08done \nWriting inode tables: 0/72\x08\x08\x08\x08\x08 \x08\x08\x08\x08\x08done \nWriting superblocks and filesystem accounting information: 0/72\x08\x08\x08\x08\x08\n\n STDERR:\nmke2fs 1.43.6 (29-Aug-2017)\n\nWarning, had trouble writing out superblocks.\n')

Observed stale entries on the Docker host and the same number of VLUN entries in 3PAR as well.

ls -lrt /dev/disk/by-path/

total 0
lrwxrwxrwx 1 root root 9 Nov 10 04:02 pci-0000:03:00.0-scsi-0:1:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Nov 10 04:02 pci-0000:03:00.0-scsi-0:1:0:0-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Nov 10 04:02 pci-0000:03:00.0-scsi-0:1:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Nov 10 04:02 pci-0000:03:00.0-scsi-0:1:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 9 Nov 13 23:57 ip-10.50.1.20:3260-iscsi-iqn.2000-05.com.3pardata:20220002ac002b9f-lun-0 -> ../../sdc
lrwxrwxrwx 1 root root 9 Nov 13 23:57 ip-10.50.1.19:3260-iscsi-iqn.2000-05.com.3pardata:20210002ac002b9f-lun-0 -> ../../sdb
lrwxrwxrwx 1 root root 9 Nov 13 23:59 ip-10.50.1.20:3260-iscsi-iqn.2000-05.com.3pardata:20220002ac002b9f-lun-1 -> ../../sde
lrwxrwxrwx 1 root root 9 Nov 14 00:00 ip-10.50.1.19:3260-iscsi-iqn.2000-05.com.3pardata:20210002ac002b9f-lun-1 -> ../../sdd
lrwxrwxrwx 1 root root 9 Nov 14 11:50 ip-10.50.1.19:3260-iscsi-iqn.2000-05.com.3pardata:20210002ac002b9f-lun-3 -> ../../sdh
lrwxrwxrwx 1 root root 9 Nov 14 11:51 ip-10.50.1.20:3260-iscsi-iqn.2000-05.com.3pardata:20220002ac002b9f-lun-3 -> ../../sdi
lrwxrwxrwx 1 root root 9 Nov 14 11:51 ip-10.50.1.20:3260-iscsi-iqn.2000-05.com.3pardata:20220002ac002b9f-lun-2 -> ../../sdg
lrwxrwxrwx 1 root root 9 Nov 14 11:51 ip-10.50.1.19:3260-iscsi-iqn.2000-05.com.3pardata:20210002ac002b9f-lun-2 -> ../../sdf

lsscsi

[0:0:0:0] storage HP P220i 7.02 -
[0:0:0:4194240]storage HP P220i 7.02 -
[0:1:0:0] disk HP LOGICAL VOLUME 7.02 /dev/sda
[1:0:0:254] enclosu 3PARdata SES 3310 -
[1:0:1:254] enclosu 3PARdata SES 3310 -
[1:0:2:254] enclosu 3PARdata SES 3310 -
[1:0:3:254] enclosu 3PARdata SES 3310 -
[2:0:0:254] enclosu 3PARdata SES 3310 -
[2:0:1:254] enclosu 3PARdata SES 3310 -
[2:0:2:254] enclosu 3PARdata SES 3310 -
[2:0:3:254] enclosu 3PARdata SES 3310 -
[156:0:0:0] disk 3PARdata VV 3310 /dev/sdb
[156:0:0:1] disk 3PARdata VV 3310 /dev/sdd
[156:0:0:2] disk 3PARdata VV 3310 /dev/sdf
[156:0:0:3] disk 3PARdata VV 3310 /dev/sdh
[156:0:0:254]enclosu 3PARdata SES 3310 -
[157:0:0:0] disk 3PARdata VV 3310 /dev/sdc
[157:0:0:1] disk 3PARdata VV 3310 /dev/sde
[157:0:0:2] disk 3PARdata VV 3310 /dev/sdg
[157:0:0:3] disk 3PARdata VV 3310 /dev/sdi
[157:0:0:254]enclosu 3PARdata SES 3310 -

3PAR cli% showvlun -host cld6b3

Active VLUNs

Lun VVName HostName ---------Host_WWN/iSCSI_Name---------- Port Type Status ID

0 dcv-BTnuRp-TTrSjfeGgbfoT1w cld6b3 iqn.1993-08.org.debian:01:417efac88ed9 0:2:1 matched set active 1
1 dcv-Z3swchnVQq6gQbeiTgFZjg cld6b3 iqn.1993-08.org.debian:01:417efac88ed9 0:2:1 matched set active 1
2 dcv-xUvntlgpQ8ajGgsI6H3HyA cld6b3 iqn.1993-08.org.debian:01:417efac88ed9 0:2:1 matched set active 1
3 dcv-YN7g6oCCSi.38.8hVN-V7A cld6b3 iqn.1993-08.org.debian:01:417efac88ed9 0:2:1 matched set active 1
0 dcv-BTnuRp-TTrSjfeGgbfoT1w cld6b3 iqn.1993-08.org.debian:01:417efac88ed9 0:2:2 matched set active 1
1 dcv-Z3swchnVQq6gQbeiTgFZjg cld6b3 iqn.1993-08.org.debian:01:417efac88ed9 0:2:2 matched set active 1
2 dcv-xUvntlgpQ8ajGgsI6H3HyA cld6b3 iqn.1993-08.org.debian:01:417efac88ed9 0:2:2 matched set active 1
3 dcv-YN7g6oCCSi.38.8hVN-V7A cld6b3 iqn.1993-08.org.debian:01:417efac88ed9 0:2:2 matched set active 1

8 total

Even when the above LUN entries remain after an unmount or a failed delete operation, the next mount operation for new volume(s) works as expected.

  1. Performed CHO script on an Ubuntu system with a single-path iSCSI backend for 12 hours.

Observed the same error as above, but there were 100+ dangling LUN entries on the Docker host and the same number of VLUN entries on the 3PAR. Also observed the Docker service in an unresponsive state on the Docker host after a few hours.

Volume created in StoreVirtual array has a name without prefix when it is created from Docker host.

Volume name is displayed with the prefix “dcv-” followed by a UUID in the 3PAR array when a volume is created from a Docker host, but this is not the case for the LeftHand driver.
There should be some consistency in volume naming when a volume is created from an HPE Docker host.
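
For illustration, the 3PAR-style name can be thought of as a fixed prefix plus an encoded form of the volume's UUID. The sketch below is only an assumption about the general scheme (prefix plus a base64-style encoding of the UUID bytes), not necessarily the driver's exact algorithm, but it produces names of the same shape as the "dcv-..." examples above:

import base64
import uuid

def backend_volume_name(vol_id, prefix="dcv-"):
    # Encode the raw UUID bytes and swap characters that are awkward in
    # storage object names; the result resembles names such as
    # "dcv-BTnuRp-TTrSjfeGgbfoT1w" seen on the 3PAR array.
    encoded = base64.b64encode(uuid.UUID(vol_id).bytes).decode()
    return prefix + encoded.replace('+', '.').replace('/', '-').rstrip('=')

print(backend_volume_name("d7167f6e-f180-4979-8d6e-30247bac93b7"))

Whichever scheme is chosen, applying the same one in both drivers would give the consistency requested here.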

CLI snippet from Storevirtual array for a volume which is created from Docker host:

CLIQ>getVolumeInfo

HPE StoreVirtual LeftHand OS Command Line Interface, v12.6.00.0155
(C) Copyright 2016 Hewlett Packard Enterprise Development LP

RESPONSE
result 0
processingTime 93
name CliqSuccess
description Operation succeeded.

VOLUME
transport iSCSI
thinProvision true
stridePages 32
size 10737418240
serialNumber a28eafb99674dd972a7e830132afc42c0000000000000162
scratchQuota 4194304
reserveQuota 536870912
replication 1
provisionedSpace 553648128
parity 0
name d7167f6e-f180-4979-8d6e-30247bac93b7
minReplication 1
md5 a28eafb99674dd972a7e830132afc42c
maxSize 145661100032
iscsiIqn iqn.2003-10.com.lefthandnetworks:ci-mg-12.5:354:d7167f6e-f180-4979-8d6e-30247bac93b7
isPrimary true
initialQuota 536870912
id 354
groupName ci-mg-12.5
description
deleting false
created 2017-05-09T10:40:21Z
consumedSpace 553648128
clusterName CI-125
clusterAdaptiveOptimizationCapable false
checkSum false
bytesWritten 0
blockSize 1024
availability online
autogrowPages 1
adaptiveOptimizationPermitted true

STATUS
 value       2
 description OK

Error message is not displayed when a snapshot deletion is attempted within its retention period.

Plugin: dockerciuser/hpedockerplugin:test

Steps to Reproduce:

  1. Create a volume.
    $ docker volume create -d hpe sumit -o size=15
    sumit

  2. Create a snapshot of the above created volume with expiration and retention hours.
    $ docker volume create -d hpe snap -o snapshotOf=sumit -o retentionHours=1 -o expirationHours=2
    snap

  3. Delete the snapshot within the retention period (within 1 hour). No error is displayed, so the user is left under the impression that the snapshot was deleted.
    $ docker volume rm sumit/snap
    sumit/snap

Expected result: A proper error message must be displayed when a snapshot deletion is attempted within its retention period (a sketch of such a retention check follows the inspect output below).

  4. The snapshot is still present, which is expected.
    $ docker volume inspect sumit/snap
    [
    {
    "Driver": "hpe:latest",
    "Labels": null,
    "Mountpoint": "/var/lib/docker/plugins/2e10a3e5d057a44690fccad4e9d4675a22d6e3e909aa221533ddb80e8c392cd1/rootfs",
    "Name": "sumit/snap",
    "Options": {},
    "Scope": "local",
    "Status": {
    "Settings": {
    "expirationHours": 2,
    "retentionHours": 1
    }
    }
    }
    ]
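
For reference, the retention guard requested above could look roughly like the sketch below. This is only an illustration of the expected behaviour, not the plugin's actual code; it assumes the snapshot's creation time and its recorded retentionHours option are available to the delete path:

from datetime import datetime, timedelta

def check_retention(created_at, retention_hours):
    # Refuse deletion while the snapshot is still inside its retention window.
    retained_until = created_at + timedelta(hours=retention_hours)
    if datetime.utcnow() < retained_until:
        raise Exception("snapshot is under retention until %s UTC; "
                        "deletion is not allowed" % retained_until)

# Example: a snapshot taken 10 minutes ago with retentionHours=1 raises here.
check_retention(datetime.utcnow() - timedelta(minutes=10), 1)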

Volume mount fails with single path when multipath parameters are not configured in hpe.conf

  1. Single Zone was created and enabled in FC switch.
  2. use_multipath and enforce_multipath parameters were omitted from hpe.conf.
  3. FC plugin was installed with tag: hpestorage/hpedockervolumeplugin:2.0
  4. Volume was created successfully with default properties (thin provisioning, 100 GB size).
  5. Mounting volume failed with below error:

hpedockerplugin.exception.HPEPluginFileSystemException: HPE Docker Volume Plugin File System error: (u'create file system failed exception is : %s', u'\n\n RAN: /sbin/mkfs -F /dev/sdb\n\n STDOUT:\n\n\n STDERR:\nmke2fs 1.43.4 (31-Jan-2017)\n/dev/sdb is apparently in use by the system; will not make a filesystem here!\n')

  6. LUN was present on the Docker host:

docker@csimbe06-b07:~$ ls -lrt /dev/disk/by-path
total 0
lrwxrwxrwx 1 root root 9 Aug 2 03:48 pci-0000:03:00.0-scsi-0:1:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 2 03:48 pci-0000:03:00.0-scsi-0:1:0:0-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Aug 2 03:48 pci-0000:03:00.0-scsi-0:1:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Aug 2 03:48 pci-0000:03:00.0-scsi-0:1:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 9 Aug 2 04:03 pci-0000:21:00.2-fc-0x20020002ac019d52-lun-0 -> ../../sdb

  7. VLUN was present in the 3PAR array:

CSIM-8K02_MXN6072AC7 cli% showvlun -host csimbe06-b07
Active VLUNs
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type Status ID
0 dcv-mO4WBGTvTievrurT6n3kXw csimbe06-b07 100038EAA730D4E9 0:0:2 matched set active 1

1 total

VLUN Templates
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
0 dcv-mO4WBGTvTievrurT6n3kXw csimbe06-b07 ---------------- 0:0:2 matched set

1 total

  8. Deleting the volume failed as the VLUN was present in the 3PAR array.
  9. Removed the VLUN manually from the 3PAR array.
  10. Deletion of the volume was successful after that.
  11. Disabled the plugin.
  12. use_multipath and enforce_multipath parameters were set to true in hpe.conf.
  13. Enabled the plugin and a new volume was created successfully after that.
  14. Mounting the new volume failed with the below error:

hpedockerplugin.exception.HPEPluginMakeDirException: HPE Docker Volume Plugin Makedir Failed: (u'Make directory failed exception is : %s', u'list index out of range')

  15. Further observations were similar to steps 6-10.
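
As a quick sanity check related to steps 2 and 12 above, the multipath flags can be read back from hpe.conf before enabling the plugin. This is only a diagnostic sketch run on the host (the path and option names are the ones used in this report), not part of the plugin:

try:
    from configparser import ConfigParser      # Python 3
except ImportError:
    from ConfigParser import ConfigParser      # Python 2

cfg = ConfigParser()
cfg.read('/etc/hpedockerplugin/hpe.conf')
for opt in ('use_multipath', 'enforce_multipath'):
    if cfg.has_option('DEFAULT', opt):
        print('%s = %s' % (opt, cfg.get('DEFAULT', opt)))
    else:
        print('%s is not set in hpe.conf' % opt)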

Unable to initialize storage via - docker plugin enable

We are trying to connect to a 3PAR storage device via iSCSI. I can see the disk via fdisk, but when trying to enable the plugin I encounter a dial unix error.

The correct IQNs are assigned on the 3PAR side of things.

[root@sldv-grp-swarm5 bhood]# docker plugin enable 2c
Error response from daemon: dial unix /run/docker/plugins/2cbefc8846cc2a51a1e496fed43a03be5a730b8ca18f77664a9c7edb6058c6d1/hpe.sock: connect: no such file or directory
[root@sldv-grp-swarm5 bhood]# docker plugin list
ID NAME DESCRIPTION ENABLED
2cbefc8846cc hpe:latest HPE Docker Volume Plugin false

The config file is as follows.

[root@sldv-grp-swarm5 bhood]# cat /etc/hpedockerplugin/hpe.conf
[DEFAULT]

logging = DEBUG
suppress_requests_ssl_warnings = False

hpedockerplugin_driver = hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver

hpe3par_api_url = https://10.226.8.10:8080/api/v1
iscsi_ip_address = 10.226.8.10
hpe3par_iscsi_ips = 10.226.8.10,10.226.8.11,10.226.8.12,10.226.8.13

use_multipath = False
enforce_multipath = False

Please advise.

Brian Hood
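
A "dial unix ... hpe.sock: connect: no such file or directory" error on enable generally means the plugin process exited during startup before it created its socket, so the Docker daemon logs (filtered by the plugin ID shown above) are the first place to look. It may also be worth confirming that the etcd settings (host_etcd_ip_address / host_etcd_port_number, as shown in other reports here) are present in hpe.conf, since they do not appear in the file as pasted. As an additional check from the host, the WSAPI URL from the config can be exercised directly; the sketch below assumes the standard 3PAR WSAPI credentials endpoint and uses placeholder credentials:

import requests

# Placeholders: substitute real 3PAR credentials before running.
url = 'https://10.226.8.10:8080/api/v1/credentials'
resp = requests.post(url,
                     json={'user': '<3par_username>', 'password': '<3par_password>'},
                     verify=False)   # skip cert verification, as with self-signed array certs
print('%s %s' % (resp.status_code, resp.text))

A 2xx response with a session key means the WSAPI is reachable and the credentials work; a connection error points at networking rather than the plugin itself.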

Inconsistency is observed in naming convention of driver options.

I am observing an inconsistency in the naming convention of the driver options.

Naming format should be: [“cloneOf”, “snapshotOf”, “qosName”, "flashCache"]
OR it should be: [“clone-of”, “snapshot-of”, “qos-name”, "flash-cache"]

Most probably, the format of the first list will be preferred.

Older releases already use flash-cache, so it can be kept and treated as an exception to the first list.
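
To make the intent concrete, a thin normalization layer could accept the legacy spelling while exposing only the camelCase names. This is an illustrative sketch, not existing plugin code, and the alias table only covers the options named in this issue:

LEGACY_ALIASES = {'flash-cache': 'flashCache'}

def normalize_opts(opts):
    # Map any legacy option key to its camelCase equivalent, leaving the
    # already-consistent keys untouched.
    return {LEGACY_ALIASES.get(key, key): value for key, value in opts.items()}

print(normalize_opts({'snapshotOf': 'vol1', 'flash-cache': 'True'}))
# {'snapshotOf': 'vol1', 'flashCache': 'True'}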

Data not flushed to disk in RHEL 7.2

We're investigating an issue where it appears the data written in a container to a mounted volume is not being flushed to the underlying disk. We are theorizing that it may have to do with the file system caching within ext4/RHEL 7.2, but we're still investigating. It's possible that different options may be necessary when mounting and unmounting the file system to ensure the data is flushed (a minimal flush sketch follows the scenario below). No activity is seen on the array when copying large files to the mounted volume within the container.

Below is the scenario:

  1. Create a volume ("docker volume create -d hpe --name vol")
  2. Mount the volume to an Ubuntu container ("docker run -it -v vol:/testvol/ --volume-driver hpe ubuntu:v2 bash")
  3. Create a file on the volume ("echo "test" > /testvol/file1")
    • The file is viewable within both the container at /testvol/ and the docker host at /etc/hpedockerplugin/data/hpedocker-ip-192.1.2.2:3260-iscsi-iqn.2000-05.com.3pardata:21220002ac012141-lun-0
  4. Exit the container
  5. Remove the container ("docker rm 123b6adab1d6")
  6. Mount the same volume to a new Ubuntu container ("docker run -it -v vol:/testvol/ --volume-driver hpe ubuntu:v2 bash")
  7. Expecting to see the previously created file at /testvol/, but the file is not there...
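
For completeness, the kind of explicit flush referred to above looks roughly like the sketch below. It is a generic illustration using standard os calls only, not a statement about what the plugin currently does; the path matches the example in step 3:

import os

def write_and_flush(path, data):
    with open(path, 'w') as f:
        f.write(data)
        f.flush()                 # flush Python's buffer to the OS
        os.fsync(f.fileno())      # push the file's dirty pages down to the block device
    if hasattr(os, 'sync'):       # Python 3.3+; older interpreters can shell out to `sync`
        os.sync()

write_and_flush('/testvol/file1', 'test\n')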

FC plugin: Volume mount fails when host entry is already present in 3PAR array.

Install FC plugin with hpestorage/hpedockervolumeplugin:2.0 tag and perform volume mount after volume creation.

[docker@csimbe13-b03 ~]$ docker volume create -d hpe --name vol1
vol1

[docker@csimbe13-b03 ~]$ docker volume ls
DRIVER VOLUME NAME
hpe:latest vol1

[docker@csimbe13-b03 ~]$ docker run -it -v vol1:/data1 --volume-driver hpe --rm --name mounter1 busybox sh
hpedockerplugin.exception.HPEPluginMountException: HPE Docker Volume Plugin Mount Failed: (u'connection info retrieval failed, error is: %s', u"'HPE3PARFCDriver' object has no attribute '_modify_3par_fibrechan_host'")
Please find the logs attached here: mount_failed_logs.txt

Volume properties are not displayed, or are displayed incorrectly, as “Options” parameter values when “volume inspect” is performed from a different node or Docker client.

A Swarm cluster with 3 nodes (1 manager and 2 worker nodes) was deployed. The HPE 3PAR Docker plugin was started as a container on all three nodes with a common host etcd IP address. (A read-only etcd check for the stored volume options is sketched after the logs below.)

1. Created volume with size and provisioning parameters from master node.
stack@manager1:~$ sudo docker volume create -d hpe --name 3par-volume -o size=50 -o provisioning=full -o flash-cache=True
3par-volume

stack@manager1:~$ docker volume ls
DRIVER VOLUME NAME
local 0b33e50413e77f3050445ac8974b8541a15d5398b1db65188e69d1071db5d765
hpe 3par-volume

2. Inspecting volume from master node. Volume properties are displayed correctly in Options section.
stack@manager1:~$ docker volume inspect 3par-volume
[
{
"Driver": "hpe",
"Labels": {},
"Mountpoint": "/",
"Name": "3par-volume",
"Options": {
"flash-cache": "True",
"provisioning": "full",
"size": "50"
},
"Scope": "local"
}
]

3. Inspecting volume from worker node. Volume properties are not displayed in Options section.
stack@worker1:~$ docker volume ls
DRIVER VOLUME NAME
hpe 3par-volume

stack@worker1:~$ docker volume inspect 3par-volume
[
{
"Driver": "hpe",
"Labels": null,
"Mountpoint": "/",
"Name": "3par-volume",
"Options": {},
"Scope": "local"
}
]

4. Container logs of worker node.
stack@worker1:~$ docker container logs 6f63048b0c38

2017-05-16 06:47:49.947 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-16T06:47:49+0000 [twisted.python.log#info] "-" - - [16/May/2017:06:47:49 +0000] "POST /VolumeDriver.List HTTP/1.1" 200 111 "-" "Go-http-client/1.1"
2017-05-16T06:47:49+0000 [twisted.python.log#info] "-" - - [16/May/2017:06:47:49 +0000] "POST /VolumeDriver.Capabilities HTTP/1.1" 404 233 "-" "Go-http-client/1.1"
2017-05-16T06:48:49+0000 [-] Timing out client: UNIXAddress(None)
2017-05-16 06:49:18.150 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-16 06:49:18.154 19 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is 3par-volume
2017-05-16 06:49:18.155 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-16 06:49:18.157 19 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is 3par-volume
2017-05-16T06:49:18+0000 [twisted.python.log#info] "-" - - [16/May/2017:06:49:17 +0000] "POST /VolumeDriver.Get HTTP/1.1" 200 108 "-" "Go-http-client/1.1"
2017-05-16T06:49:18+0000 [twisted.python.log#info] "-" - - [16/May/2017:06:49:17 +0000] "POST /VolumeDriver.Capabilities HTTP/1.1" 404 233 "-" "Go-http-client/1.1"
2017-05-16 06:49:18.164 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-16 06:49:18.166 19 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is 3par-volume
2017-05-16 06:49:18.167 19 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python2.7/site-packages/etcd/client.py:582
2017-05-16 06:49:18.168 19 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is 3par-volume
2017-05-16T06:49:18+0000 [twisted.python.log#info] "-" - - [16/May/2017:06:49:17 +0000] "POST /VolumeDriver.Path HTTP/1.1" 200 29 "-" "Go-http-client/1.1"
2017-05-16T06:50:18+0000 [-] Timing out client: UNIXAddress(None)
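
Since every node reads volume metadata from the same etcd (the logs above show reads of the /volumes key), a read-only check directly against etcd helps distinguish missing data from a display problem on the worker node. This is only a diagnostic sketch; the endpoint below is a placeholder for the host_etcd_ip_address configured for the plugin:

import etcd

client = etcd.Client(host='<etcd-host-ip>', port=2379)
for node in client.read('/volumes', recursive=True).children:
    print(node.key)
    print(node.value)   # the raw JSON record the plugin stored for this volume

If the size, provisioning and flash-cache settings appear here, the data is stored correctly and the problem is in how the worker node reports "Options".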

HPE plugin is not getting enabled with multiple etcd host addresses (unsecured connection) in hpe.conf file.

  1. Created a six-node setup: 3 manager and 3 worker nodes, where worker1 points to manager1, worker2 to manager2, and worker3 to manager3.

stack@manager1:~$ sudo docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
manager1 - generic Running tcp://10.50.0.158:2376 v17.06.1-ee-1-rc2
manager2 - generic Running tcp://10.50.0.160:2376 v17.06.1-ee-1-rc2
manager3 - generic Running tcp://10.50.0.161:2376 v17.06.1-ee-1-rc2
worker1 - generic Running tcp://10.50.0.152:2376 v17.06.1-ee-1-rc2
worker2 - generic Running tcp://10.50.0.153:2376 v17.06.1-ee-1-rc2
worker3 - generic Running tcp://10.50.0.151:2376 v17.06.1-ee-1-rc2

stack@manager1:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
0m1nfl0hrvdqopacrpqwy1o6x manager3 Ready Active Reachable
8r3u0q14ewukyi5hn36373aec worker3 Ready Active
9up7fkn5a7ttadmuo9pjbz3hf worker2 Ready Active
mu0oahv39fstr095cv3e2fvsr worker1 Ready Active
pqm3kmytshyigjeyajsckf3qt manager2 Ready Active Leader
uzs1ojqfnv9oessstff2zpxa9 * manager1 Ready Active Reachable

  2. Created etcd cluster of manager nodes. etcd containers are running on all manager nodes. Ref link: https://coreos.com/etcd/docs/latest/v2/docker_guide.html

  3. All the worker nodes (etcd clients) are configured with a list of etcd members:

docker@worker3:~$ sudo etcdctl -C http://10.50.0.158:2379,http://10.50.0.160:2379,http://10.50.0.161:2379 member list
47a90b0067012ba2: name=etcd0 peerURLs=http://10.50.0.158:2380 clientURLs=http://10.50.0.158:2379,http://10.50.0.158:4001
818db80633603eff: name=etcd1 peerURLs=http://10.50.0.160:2380 clientURLs=http://10.50.0.160:2379,http://10.50.0.160:4001
d7de76820fdffe95: name=etcd2 peerURLs=http://10.50.0.161:2380 clientURLs=http://10.50.0.161:2379,http://10.50.0.161:4001

  4. The HPE volume plugin is installed on all worker nodes in a disabled state.

stack@worker1:~$ docker plugin ls
ID NAME DESCRIPTION ENABLED
cd5d714d44cb hpe:latest HPE Docker Volume Plugin false

  5. Configured multiple etcd host IP addresses in hpe.conf on all the worker nodes.

host_etcd_ip_address = 10.50.0.158:2379,10.50.0.160:2379,10.50.0.161:2379
host_etcd_port_number = 2379

  6. Enabling the plugin fails with the below error.

stack@worker1:~$ docker plugin enable hpe
Error response from daemon: dial unix /run/docker/plugins/cd5d714d44cb87f39926b1ff20d0f96e9c5ac53e846c93f12f83ab179bb34b4f/hpe.sock: connect: no such file or directory

Aug 8 05:22:13 worker1 dockerd[11151]: time="2017-08-08T05:22:13-07:00" level=info msg="2017-08-08 12:22:13.497 22 ERROR etcd.client [-] List of hosts incompatible with allow_reconnect." plugin=cd5d714d44cb87f39926b1ff20d0f96e9c5ac53e846c93f12f83ab179bb34b4f
Aug 8 05:22:13 worker1 dockerd[11151]: time="2017-08-08T05:22:13-07:00" level=info msg="2017-08-08 12:22:13.498 22 ERROR hpe_storage_api File "/usr/lib/python2.7/site-packages/etcd/client.py", line 139, in init" plugin=cd5d714d44cb87f39926b1ff20d0f96e9c5ac53e846c93f12f83ab179bb34b4f
Aug 8 05:22:13 worker1 dockerd[11151]: time="2017-08-08T05:22:13-07:00" level=info msg="2017-08-08 12:22:13.498 22 ERROR hpe_storage_api raise etcd.EtcdException("A list of hosts to connect to was given, but reconnection not allowed?")" plugin=cd5d714d44cb87f39926b1ff20d0f96e9c5ac53e846c93f12f83ab179bb34b4f
Aug 8 05:22:13 worker1 dockerd[11151]: time="2017-08-08T05:22:13-07:00" level=info msg="2017-08-08 12:22:13.498 22 ERROR hpe_storage_api EtcdException: A list of hosts to connect to was given, but reconnection not allowed?" plugin=cd5d714d44cb87f39926b1ff20d0f96e9c5ac53e846c93f12f83ab179bb34b4f
Aug 8 05:22:13 worker1 dockerd[11151]: time="2017-08-08T05:22:13-07:00" level=info msg="2017-08-08 12:22:13.498 22 ERROR hpe_storage_api " plugin=cd5d714d44cb87f39926b1ff20d0f96e9c5ac53e846c93f12f83ab179bb34b4f
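
The traceback above is raised by python-etcd itself: when a list of hosts is given, the client requires allow_reconnect=True. A minimal client-level sketch (using the member IPs from this report) is shown below; whether and how the plugin should expose this flag is a separate question:

import etcd

client = etcd.Client(
    host=(('10.50.0.158', 2379), ('10.50.0.160', 2379), ('10.50.0.161', 2379)),
    allow_reconnect=True)   # required when `host` is a tuple of endpoints
print(client.machines)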

Volume mount operation fails intermittently when iSCSI plugin is enabled.

OS platform: Ubuntu 16.04 / CentOS 7.3
Plugin tag: hpestorage/hpedockervolumeplugin:2.0.1

All the steps performed when the issue was observed are listed below:

  1. Perform mount operations (writing data) on 30 different volumes on Ubuntu/CentOS using below command:

$ for i in seq 1 30; do echo "Writing data to volume: $i"; docker run -it -v ubuntu$i:/data1 --volume-driver hpe --rm busybox /bin/sh -c "echo 'data' > /data1/hi$i.txt" ; done

Observed a couple of failures with the below error:

hpedockerplugin.exception.HPEPluginMountException: HPE Docker Volume Plugin Mount Failed: (u'exception is : %s', u'\n\n RAN: /bin/mount -t ext4 /dev/sdc /opt/hpe/data/hpedocker-scsi-360002ac000000000040124ba00002ba0\n\n STDOUT:\n\n\n STDERR:\nmount: /opt/hpe/data/hpedocker-scsi-360002ac000000000040124ba00002ba0: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error.\n')

  2. Perform mount operations (reading data) on 30 different volumes on Ubuntu/CentOS using below command:

$ for i in seq 1 30; do echo "Reading data from volume: $i"; docker run -it -v ubuntu$i:/data1 --volume-driver hpe --rm busybox cat /data1/hi$i.txt ; done

Observed 5-6 failures with the same error.

An orphaned LUN entry remains after unmounting the volume, which was NOT expected:

[docker@csimbe13-b03 ~]$ lsscsi
[0:0:0:0] storage HP P244br 3.00 -
[0:0:0:4194240] storage HP P244br 3.00 -
[0:0:0:4194496] storage HP P244br 3.00 -
[0:1:0:0] disk HP LOGICAL VOLUME 3.00 /dev/sda
[0:1:0:1] disk HP LOGICAL VOLUME 3.00 /dev/sdb
[1:0:0:254] enclosu 3PARdata SES 3310 -
[2:0:0:254] enclosu 3PARdata SES 3310 -
[11:0:0:254] enclosu 3PARdata SES 3224 -
[193:0:0:254] enclosu 3PARdata SES 3310 -
[198:0:0:0] disk 3PARdata VV 3310 /dev/sdc
[198:0:0:254] enclosu 3PARdata SES 3310 -

[docker@csimbe13-b03 ~]$ ls -lrt /dev/disk/by-path
total 0
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 pci-0000:07:00.0-scsi-0:1:0:0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 pci-0000:07:00.0-scsi-0:1:0:0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 9 Sep 22 00:53 pci-0000:07:00.0-scsi-0:1:0:0 -> ../../sda
lrwxrwxrwx. 1 root root 9 Sep 22 00:53 pci-0000:07:00.0-scsi-0:1:0:1 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Sep 22 03:26 ip-10.50.17.220:3260-iscsi-iqn.2000-05.com.3pardata:20210002ac002ba0-lun-0 -> ../../sdc

[docker@csimbe13-b03 ~]$ ls -lrt /dev/disk/by-id
total 0
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 wwn-0x600508b1001c00753453523c856f650b-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 scsi-3600508b1001c00753453523c856f650b-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 lvm-pv-uuid-RMJGcT-80Fa-x9HD-Q4jk-UFmr-NcZQ-mXjZM6 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 wwn-0x600508b1001c00753453523c856f650b-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 scsi-3600508b1001c00753453523c856f650b-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 9 Sep 22 00:53 wwn-0x600508b1001c00753453523c856f650b -> ../../sda
lrwxrwxrwx. 1 root root 9 Sep 22 00:53 scsi-3600508b1001c00753453523c856f650b -> ../../sda
lrwxrwxrwx. 1 root root 9 Sep 22 00:53 wwn-0x600508b1001c84476eafd7bf7b837471 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Sep 22 00:53 scsi-3600508b1001c84476eafd7bf7b837471 -> ../../sdb
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 dm-name-docker-253:0-16786440-eb861cc588796b5c1859904d8afeccf48aaa4c39198f296e6631938d3b5d2e06 -> ../../dm-5
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 dm-uuid-LVM-0K3PtinMg6bLZFeBAsM5ryNXPvrkQa3RxMZENZTxoADmykRX150JJ0r921SEdw5m -> ../../dm-3
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 dm-name-cl-home -> ../../dm-3
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 dm-name-cl-root -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 dm-uuid-LVM-0K3PtinMg6bLZFeBAsM5ryNXPvrkQa3RNuYqnphtbtE3G6TGZa4KeZCeruUrEfo7 -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 dm-uuid-LVM-0K3PtinMg6bLZFeBAsM5ryNXPvrkQa3Rkkk0J0Bb7iiwiJ8dVU1XXC1gqJyuArYQ -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Sep 22 00:53 dm-name-cl-swap -> ../../dm-1
lrwxrwxrwx. 1 root root 9 Sep 22 03:26 wwn-0x60002ac000000000040124bf00002ba0 -> ../../sdc
lrwxrwxrwx. 1 root root 9 Sep 22 03:26 scsi-360002ac000000000040124bf00002ba0 -> ../../sdc

If the above LUN entries remain after unmounting the volume, the next mount operation for the same volume works as expected, but the next mount operation for a different volume FAILS.
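
The by-path check used above can be scripted. The small sketch below simply lists any remaining 3PAR iSCSI entries under /dev/disk/by-path; after a clean unmount it should no longer include the just-unmounted volume:

import os

BY_PATH = '/dev/disk/by-path'
for entry in sorted(os.listdir(BY_PATH)):
    if 'iscsi' in entry and '3pardata' in entry:
        target = os.path.realpath(os.path.join(BY_PATH, entry))
        print('%s -> %s' % (entry, target))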

amqp version issue

Hi,
While installing on Ubuntu 14.04 LTS we get the following error about the amqp version:
error: amqp 1.4.9 is installed but amqp<3.0,>=2.0.2 is required by set(['kombu'])

pip freeze shows that 2.0.3 is installed:
pip freeze |grep amqp
amqp==2.0.3

Installation log:
Installed /usr/local/lib/python2.7/dist-packages/python_hpedockerplugin-1.0.0-py2.7.egg
Processing dependencies for python-hpedockerplugin==1.0.0
Searching for amqp<2.0,>=1.4.0
Reading https://pypi.python.org/simple/amqp/
Best match: amqp 1.4.9
Downloading https://pypi.python.org/packages/cc/a4/f265c6f9a7eb1dd45d36d9ab775520e07ff575b11ad21156f9866da047b2/amqp-1.4.9.tar.gz#md5=df57dde763ba2dea25b3fa92dfe43c19
Processing amqp-1.4.9.tar.gz
Writing /tmp/easy_install-6dVbHt/amqp-1.4.9/setup.cfg
Running amqp-1.4.9/setup.py -q bdist_egg --dist-dir /tmp/easy_install-6dVbHt/amqp-1.4.9/egg-dist-tmp-q7oCUp
creating /usr/local/lib/python2.7/dist-packages/amqp-1.4.9-py2.7.egg
Extracting amqp-1.4.9-py2.7.egg to /usr/local/lib/python2.7/dist-packages
Adding amqp 1.4.9 to easy-install.pth file

Installed /usr/local/lib/python2.7/dist-packages/amqp-1.4.9-py2.7.egg
error: amqp 1.4.9 is installed but amqp<3.0,>=2.0.2 is required by set(['kombu'])

Thanks.
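
For what it's worth, the two pins seen in the log above (amqp<2.0,>=1.4.0 pulled in during setup, and amqp<3.0,>=2.0.2 required by kombu) cannot both be satisfied, which appears to be why easy_install replaces the pip-installed 2.0.3 with 1.4.9 and then errors out. The conflict can be reproduced from the interpreter with pkg_resources:

import pkg_resources

for req in ('amqp<2.0,>=1.4.0', 'amqp<3.0,>=2.0.2'):
    try:
        dist = pkg_resources.require(req)[0]
        print('%s -> satisfied by amqp %s' % (req, dist.version))
    except (pkg_resources.VersionConflict,
            pkg_resources.DistributionNotFound) as exc:
        print('%s -> conflict: %s' % (req, exc))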
