ibm / ibm-spectrum-scale-csi

The IBM Spectrum Scale Container Storage Interface (CSI) project enables container orchestrators, such as Kubernetes and OpenShift, to manage the life-cycle of persistent storage.

License: Apache License 2.0

Makefile 0.58% Shell 2.12% Dockerfile 0.18% Go 20.42% Python 76.69%
spectrum-scale openshift kubernetes docker container operator csi operatorhub storage cloud

ibm-spectrum-scale-csi's Introduction

IBM Storage Scale CSI (Container Storage Interface)

Official IBM Documentation

The IBM Storage Scale Container Storage Interface (CSI) project enables container orchestrators, such as Kubernetes and OpenShift, to manage the life-cycle of persistent storage.

This project contains a Go-based operator that runs and manages the deployment of the IBM Storage Scale CSI driver.

Support

The IBM Storage Scale CSI driver is part of the IBM Storage Scale offering. Please follow the IBM support procedure for any issues with the driver.

Report Bugs

For help with urgent situations, please use the IBM PMR process. All IBM Storage Scale customers using CSI who have ongoing support contracts are entitled to the PMR process. Feature requests through the official RFE channels are also encouraged.

For non-urgent issues, suggestions, or recommendations, feel free to open an issue on GitHub. Issues will be addressed as team availability permits.

Contributing

We welcome contributions to this project; see Contributing for more details.

Note: This repository includes contributions from ubiquity.

Licensing

Copyright 2022, 2024 IBM Corp.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

ibm-spectrum-scale-csi's People

Contributors

amdabhad, aspalazz, badri-pathak, deeghuge, dependabot[bot], drolsonibm, dunnevan, ghanshyam-11, hemalathagajendran, jainbrt, kulkarnicr, madhuthorat, mavin6618, mew2057, nitishkumar4, sam6258, saurabhwani5, sectorsize512, shrutinipane, smitaraut, stevemar, tydanny, vrushch, whowutwut, yadaven


ibm-spectrum-scale-csi's Issues

Lint scan yields major failures in running environment

Ran cv lint on the output of cv scan, and we got a long list of issues to address for certification in the CloudPak.

[root scan]# ~/cv/cv lint resources .
Linter version: v2.0.11

------------------------------------------------------------------------------------------------------------
==> Linting results for resources
[REVIEW] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[0].readinessProbe not defined (ContainerWithNoMatchingServiceHasReadinessProbe)
[REVIEW] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[1].readinessProbe not defined (ContainerWithNoMatchingServiceHasReadinessProbe)
[REVIEW] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: spec.template.spec.containers[0].readinessProbe not defined (ContainerWithNoMatchingServiceHasReadinessProbe)
[REVIEW] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: spec.template.spec.containers[0].readinessProbe not defined (ContainerWithNoMatchingServiceHasReadinessProbe)
[WARNING] scanned-serviceaccount-ibm-spectrum-scale-csi-driver-default.yaml: no imagePullSecrets defined, pods will not be able to pull namespace-scoped images from the local registry (ServiceAccountHasPullSecret)
[WARNING] scanned-serviceaccount-ibm-spectrum-scale-csi-driver-ibm-spectrum-scale-csi-attacher.yaml: no imagePullSecrets defined, pods will not be able to pull namespace-scoped images from the local registry (ServiceAccountHasPullSecret)
[WARNING] scanned-serviceaccount-ibm-spectrum-scale-csi-driver-ibm-spectrum-scale-csi-node.yaml: no imagePullSecrets defined, pods will not be able to pull namespace-scoped images from the local registry (ServiceAccountHasPullSecret)
[WARNING] scanned-serviceaccount-ibm-spectrum-scale-csi-driver-ibm-spectrum-scale-csi-operator.yaml: no imagePullSecrets defined, pods will not be able to pull namespace-scoped images from the local registry (ServiceAccountHasPullSecret)
[WARNING] scanned-serviceaccount-ibm-spectrum-scale-csi-driver-ibm-spectrum-scale-csi-provisioner.yaml: no imagePullSecrets defined, pods will not be able to pull namespace-scoped images from the local registry (ServiceAccountHasPullSecret)
[ERROR] scanned-configmap-ibm-spectrum-scale-csi-driver-ibm-spectrum-scale-csi-operator-lock.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-configmap-ibm-spectrum-scale-csi-driver-spectrum-scale-config.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-csiscaleoperator-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: "ALL" not found in spec.template.spec.containers[0].securityContext.capabilities.drop (ContainerHasDropAll)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: "ALL" not found in spec.template.spec.containers[1].securityContext.capabilities.drop (ContainerHasDropAll)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under spec.template.metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: metering annotations ["productID" "productName" "productVersion"] not found under spec.template.metadata.annotations (MeteringAnnotationsDefined)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: neither spec.template.spec.containers[0].resources.limits.cpu nor spec.template.spec.containers[0].resources.requests.cpu is defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: neither spec.template.spec.containers[1].resources.limits.cpu nor spec.template.spec.containers[1].resources.requests.cpu is defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[0].livenessProbe not defined (ContainerHasLivenessProbe)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[0].resources.limits.memory not defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[0].resources.requests.memory not defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[1].livenessProbe not defined (ContainerHasLivenessProbe)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[1].resources.limits.memory not defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[1].resources.requests.memory not defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostNetwork at spec.template.spec.hostNetwork not allowed (NoHostNetwork)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostPath at spec.template.spec.volumes[0].hostPath not allowed (NoHostPath)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostPath at spec.template.spec.volumes[1].hostPath not allowed (NoHostPath)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostPath at spec.template.spec.volumes[2].hostPath not allowed (NoHostPath)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostPath at spec.template.spec.volumes[3].hostPath not allowed (NoHostPath)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostPath at spec.template.spec.volumes[5].hostPath not allowed (NoHostPath)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: value "beta.kubernetes.io/arch" at some spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[i].matchExpressions[j].key not defined for architecture-based node affinity (PodHasArchBasedNodeAffinity)
[ERROR] scanned-deployment-ibm-spectrum-scale-csi-driver-ibm-spectrum-scale-csi-operator.yaml: "latest" tag not allowed on image at spec.template.spec.containers[0].image (NoLatestImageTags)
[ERROR] scanned-deployment-ibm-spectrum-scale-csi-driver-ibm-spectrum-scale-csi-operator.yaml: "latest" tag not allowed on image at spec.template.spec.containers[1].image (NoLatestImageTags)
[ERROR] scanned-secret-ibm-spectrum-scale-csi-driver-secret1.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-serviceaccount-ibm-spectrum-scale-csi-driver-default.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: "ALL" not found in spec.template.spec.containers[0].securityContext.capabilities.drop (ContainerHasDropAll)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under spec.template.metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: metering annotations ["productID" "productName" "productVersion"] not found under spec.template.metadata.annotations (MeteringAnnotationsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: neither spec.template.spec.containers[0].resources.limits.cpu nor spec.template.spec.containers[0].resources.requests.cpu is defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: spec.template.spec.containers[0].livenessProbe not defined (ContainerHasLivenessProbe)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: spec.template.spec.containers[0].resources.limits.memory not defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: spec.template.spec.containers[0].resources.requests.memory not defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: use of hostPath at spec.template.spec.volumes[0].hostPath not allowed (NoHostPath)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: value "beta.kubernetes.io/arch" at some spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[i].matchExpressions[j].key not defined for architecture-based node affinity (PodHasArchBasedNodeAffinity)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: "ALL" not found in spec.template.spec.containers[0].securityContext.capabilities.drop (ContainerHasDropAll)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under spec.template.metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: metering annotations ["productID" "productName" "productVersion"] not found under spec.template.metadata.annotations (MeteringAnnotationsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: neither spec.template.spec.containers[0].resources.limits.cpu nor spec.template.spec.containers[0].resources.requests.cpu is defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: spec.template.spec.containers[0].livenessProbe not defined (ContainerHasLivenessProbe)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: spec.template.spec.containers[0].resources.limits.memory not defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: spec.template.spec.containers[0].resources.requests.memory not defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: use of hostPath at spec.template.spec.volumes[0].hostPath not allowed (NoHostPath)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: value "beta.kubernetes.io/arch" at some spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[i].matchExpressions[j].key not defined for architecture-based node affinity (PodHasArchBasedNodeAffinity)

------------------------------------------------------------------------------------------------------------
==> Lint Summary

Rule                                             Severity  Total   Reduced  Ignored
ContainerDefinesResources                        ERROR     12      0        0
RequiredMetadataLabelsDefined                    ERROR     11      0        0
NoHostPath                                       ERROR     7       0        0
ContainerHasDropAll                              ERROR     4       0        0
ContainerHasLivenessProbe                        ERROR     4       0        0
MeteringAnnotationsDefined                       ERROR     3       0        0
PodHasArchBasedNodeAffinity                      ERROR     3       0        0
NoLatestImageTags                                ERROR     2       0        0
NoHostNetwork                                    ERROR     1       0        0
ServiceAccountHasPullSecret                      WARNING   5       0        0
ContainerWithNoMatchingServiceHasReadinessProbe  REVIEW    4       0        0


See the logs in /tmp/cv/ for more details
Error: 1 products failed linting
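
For reference, the RequiredMetadataLabelsDefined and MeteringAnnotationsDefined findings amount to adding the standard app.kubernetes.io labels and the IBM metering annotations to each workload's metadata and pod template. A minimal fragment is sketched below; the label values and the productID/productName/productVersion strings are placeholders (assumptions), not the certified values.

metadata:
  labels:
    app.kubernetes.io/name: ibm-spectrum-scale-csi-operator        # placeholder values
    app.kubernetes.io/instance: ibm-spectrum-scale-csi
    app.kubernetes.io/managed-by: ibm-spectrum-scale-csi-operator
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ibm-spectrum-scale-csi-operator    # must match the object-level labels
        app.kubernetes.io/instance: ibm-spectrum-scale-csi
        app.kubernetes.io/managed-by: ibm-spectrum-scale-csi-operator
      annotations:
        productID: "<product id>"                # assumption: real values come from IBM metering guidance
        productName: "IBM Spectrum Scale CSI driver"
        productVersion: "<version>"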

Make clusterID optional for Storage classes of Fileset Dynamic provisioning

As of now, if you create a storage class for dynamic provisioning of GPFS filesets, you need to provide a cluster ID. This is redundant; the filesystem name alone should be sufficient, since filesystem names are unique within a GPFS cluster. Please make the cluster ID optional for storage classes. Sample storage class (a variant without clusterId is sketched after it):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: ibm-spectrum-scale-csi-fileset
provisioner: spectrumscale.csi.ibm.com
parameters:
    volBackendFs: "gpfs0"
    clusterId: "17399599334479190260"  <--- MAKE IT OPTIONAL
reclaimPolicy: Delete
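
For comparison, the requested storage class would simply drop the clusterId parameter; a sketch, assuming the driver could default to the cluster owning the given filesystem (which is exactly the behavior being requested, not what the driver does today):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: ibm-spectrum-scale-csi-fileset
provisioner: spectrumscale.csi.ibm.com
parameters:
    volBackendFs: "gpfs0"   # clusterId omitted; assumed to default to the cluster owning gpfs0
reclaimPolicy: Delete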

csi driver not terminating when requested to delete

After running the operator and driver for some time, I attempted to remove the driver... but mistakenly removed the operator first, as described here: https://github.com/IBM/ibm-spectrum-scale-csi-operator/issues/68

However, it seems I'm unable to successfully remove the driver using the same YAML file that created it...

[root@c943f4n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS    RESTARTS   AGE
ibm-spectrum-scale-csi-attacher-0                  1/1     Running   0          42h
ibm-spectrum-scale-csi-operator-6ff9cf6979-k2gpd   2/2     Running   0          42h
ibm-spectrum-scale-csi-provisioner-0               1/1     Running   0          42h
ibm-spectrum-scale-csi-vk7kr                       2/2     Running   0          41m
ibm-spectrum-scale-csi-vpz6v                       2/2     Running   0          41m
ibm-spectrum-scale-csi-ww78g                       2/2     Running   0          41m
+ set +x
[root@c943f4n01-pvt csi-operator-ansible]#

Since then, I've tried removing the driver multiple times:

[root@c943f4n01-pvt csi-operator-ansible]# ./operator-helper.sh removeDR
csiscaleoperator.scale.ibm.com "ibm-spectrum-scale-csi" deleted

But pods continue to run:

[root@c943f4n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                   READY   STATUS    RESTARTS   AGE
ibm-spectrum-scale-csi-attacher-0      1/1     Running   0          42h
ibm-spectrum-scale-csi-provisioner-0   1/1     Running   0          42h
ibm-spectrum-scale-csi-vk7kr           2/2     Running   0          55m
ibm-spectrum-scale-csi-vpz6v           2/2     Running   0          55m
ibm-spectrum-scale-csi-ww78g           2/2     Running   0          55m
+ set +x

It's been around 14 minutes with no changes... the pods are still "Running".

Here are the versions of the images being deployed:

[root@c943f4n01-pvt csi-operator-ansible]# ./operator-helper.sh version
--- POD: ibm-spectrum-scale-csi-attacher-0 ---
  ibm-spectrum-scale-csi-attacher:
    Container ID:  cri-o://6798f2aad5ddc2968368080b52e6d173e4ab6155e4a82b9da133d23c3774dee6
    Image:         quay.io/k8scsi/csi-attacher:v1.0.0
    Image ID:      quay.io/k8scsi/csi-attacher@sha256:e57bb6abf0d78e638f70d38bdb07ee30ffe42d423a14fb2f910c11afab3a5e01
--- POD: ibm-spectrum-scale-csi-provisioner-0 ---
  csi-provisioner:
    Container ID:  cri-o://5334023271ac883db03e032ecf46439c9f98c992272a31a41df4c7edd6caef60
    Image:         quay.io/k8scsi/csi-provisioner:v1.0.0
    Image ID:      quay.io/k8scsi/csi-provisioner@sha256:cd0df00950e7b50154e83e29010eb2f2bfe0b661fb7fdd65c6621e8a49cd2bc0
--- POD: ibm-spectrum-scale-csi-vk7kr ---
  driver-registrar:
    Container ID:  cri-o://39e209f1237d2a3f6344905c3eb79b35b7674349d40872d992ca90f8d23770d7
    Image:         quay.io/k8scsi/csi-node-driver-registrar:v1.0.1
    Image ID:      quay.io/k8scsi/csi-node-driver-registrar@sha256:5ad51d40e6de6762ae9bf2acbcf7117d46c87bc50048a337ce5a5cd6697498b4
--
  ibm-spectrum-scale-csi:
    Container ID:  cri-o://5773bf605f616207c5f8e5674d191ffb1b2fa50ced709dc1e2bd3424adab1893
    Image:         quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver:v0.9.1
    Image ID:      quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:e5926978bd4f4a553df18fc79ac532037c4e31e3a2cdaa6a5c06014fba02c808
--- POD: ibm-spectrum-scale-csi-vpz6v ---
  driver-registrar:
    Container ID:  cri-o://d71f2b8b96edd5e48cb56ec60843471675ac9efcc7b7615c9fa20baa34810e0e
    Image:         quay.io/k8scsi/csi-node-driver-registrar:v1.0.1
    Image ID:      quay.io/k8scsi/csi-node-driver-registrar@sha256:5ad51d40e6de6762ae9bf2acbcf7117d46c87bc50048a337ce5a5cd6697498b4
--
  ibm-spectrum-scale-csi:
    Container ID:  cri-o://bded886e5b2e3aedc0b008bda5499c6572ee6e51681659c9ceb05d3cd2579432
    Image:         quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver:v0.9.1
    Image ID:      quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:e5926978bd4f4a553df18fc79ac532037c4e31e3a2cdaa6a5c06014fba02c808
--- POD: ibm-spectrum-scale-csi-ww78g ---
  driver-registrar:
    Container ID:  cri-o://e59f0ad8029ae46b4023e28e76226446335fefaccfa8a4cbf2b3265a7409f3a3
    Image:         quay.io/k8scsi/csi-node-driver-registrar:v1.0.1
    Image ID:      quay.io/k8scsi/csi-node-driver-registrar@sha256:5ad51d40e6de6762ae9bf2acbcf7117d46c87bc50048a337ce5a5cd6697498b4
--
  ibm-spectrum-scale-csi:
    Container ID:  cri-o://bce2077dbcca30c17c782b1bb991b2bdef1d6cec645dc91c849eb6554008c689
    Image:         quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver:v0.9.1
    Image ID:      quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:e5926978bd4f4a553df18fc79ac532037c4e31e3a2cdaa6a5c06014fba02c808

Placeholder, discuss the image Pull policy and the build process

Just wanted to create this issue as a placeholder to discuss with @mew2057 after the US holiday this week. Currently the pull policy is imagePullPolicy: IfNotPresent, so we will not pull down later builds of a container image if we have already pulled that tag at some point.


I think we have to decide:

  • Is this the right pull policy?
  • Should we not be tagging over itself?

I don't think we want to check out a tag but then also have to modify the YAMLs to point at the right image tag. I'd like to be able to check out a tag of this project and relaunch the containers with exactly what is in the source code. If we keep building over existing tags without incrementing them, we will not be able to check out a tag and re-create the environment. One alternative pull policy is sketched at the end of this issue.

Example:

git checkout 1.0.0 -> then `apply -f` files to deploy 
git checkout 1.0.1 -> then `apply -f` files to deploy  
git checkout 0.9.2 -> then `apply -f` files to deploy  

Or we could even have scripts that curl down the deployment files based on the tag version, for example the v0.9.0 tag: https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi-operator/v0.9.0/stable/ibm-spectrum-scale-csi-operator-bundle/operators/ibm-spectrum-scale-csi-operator/deploy/olm-scripts/operator-source.yaml
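
If the team decides that re-applying the YAMLs should always pick up a rebuilt image, one option is changing the pull policy on the pod specs; a sketch, where the container name and image tag are illustrative assumptions rather than values taken from the actual templates:

spec:
  template:
    spec:
      containers:
        - name: operator                                                              # container name is an assumption
          image: quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-operator:v0.9.1    # illustrative tag
          imagePullPolicy: Always    # re-pull on every pod (re)creation instead of IfNotPresent

Note that imagePullPolicy: Always only takes effect when a pod is re-created, and it makes deployments less reproducible; never rebuilding over an existing tag (the other option above) avoids the ambiguity entirely.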

Removing the operator first, should it automatically remove the driver?

Describe the bug

Not really a bug, but just wanted to log this for completeness... will also open an issue in the driver repo.

After running for some time, I was planning to remove the driver first, then the operator (the proper flow), but accidentally removed the operator first.

[root@c943f4n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS    RESTARTS   AGE
ibm-spectrum-scale-csi-attacher-0                  1/1     Running   0          42h
ibm-spectrum-scale-csi-operator-6ff9cf6979-k2gpd   2/2     Running   0          42h
ibm-spectrum-scale-csi-provisioner-0               1/1     Running   0          42h
ibm-spectrum-scale-csi-vk7kr                       2/2     Running   0          41m
ibm-spectrum-scale-csi-vpz6v                       2/2     Running   0          41m
ibm-spectrum-scale-csi-ww78g                       2/2     Running   0          41m
+ set +x
[root@c943f4n01-pvt csi-operator-ansible]# ./operator-helper.sh removeOP
Removing secret before namespace
secret "spectrum-scale-gui-secret" deleted
NAME                                               READY   STATUS    RESTARTS   AGE
ibm-spectrum-scale-csi-attacher-0                  1/1     Running   0          42h
ibm-spectrum-scale-csi-operator-6ff9cf6979-k2gpd   2/2     Running   0          42h
ibm-spectrum-scale-csi-provisioner-0               1/1     Running   0          42h
ibm-spectrum-scale-csi-vk7kr                       2/2     Running   0          43m
ibm-spectrum-scale-csi-vpz6v                       2/2     Running   0          43m
ibm-spectrum-scale-csi-ww78g                       2/2     Running   0          43m
deployment.apps "ibm-spectrum-scale-csi-operator" deleted
serviceaccount "ibm-spectrum-scale-csi-operator" deleted
serviceaccount "ibm-spectrum-scale-csi-attacher" deleted
serviceaccount "ibm-spectrum-scale-csi-node" deleted
serviceaccount "ibm-spectrum-scale-csi-provisioner" deleted
role.rbac.authorization.k8s.io "ibm-spectrum-scale-csi-operator" deleted
clusterrole.rbac.authorization.k8s.io "ibm-spectrum-scale-csi-operator" deleted
clusterrole.rbac.authorization.k8s.io "ibm-spectrum-scale-csi-node" deleted
clusterrole.rbac.authorization.k8s.io "ibm-spectrum-scale-csi-attacher" deleted
clusterrole.rbac.authorization.k8s.io "ibm-spectrum-scale-csi-provisioner" deleted
rolebinding.rbac.authorization.k8s.io "ibm-spectrum-scale-csi-operator" deleted
clusterrolebinding.rbac.authorization.k8s.io "ibm-spectrum-scale-csi-operator" deleted
clusterrolebinding.rbac.authorization.k8s.io "ibm-spectrum-scale-csi-node" deleted
clusterrolebinding.rbac.authorization.k8s.io "ibm-spectrum-scale-csi-provisioner" deleted
clusterrolebinding.rbac.authorization.k8s.io "ibm-spectrum-scale-csi-attacher" deleted
customresourcedefinition.apiextensions.k8s.io "csiscaleoperators.scale.ibm.com" deleted

Whoops, noticed this when I saw only the operator terminating...

[root@c943f4n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS        RESTARTS   AGE
ibm-spectrum-scale-csi-attacher-0                  1/1     Running       0          42h
ibm-spectrum-scale-csi-operator-6ff9cf6979-k2gpd   2/2     Terminating   0          42h
ibm-spectrum-scale-csi-provisioner-0               1/1     Running       0          42h
ibm-spectrum-scale-csi-vk7kr                       2/2     Running       0          43m
ibm-spectrum-scale-csi-vpz6v                       2/2     Running       0          43m
ibm-spectrum-scale-csi-ww78g                       2/2     Running       0          43m
+ set +x

Then decided oh crap, let me remove the driver...

[root@c943f4n01-pvt csi-operator-ansible]# ./operator-helper.sh removeDR
csiscaleoperator.scale.ibm.com "ibm-spectrum-scale-csi" deleted

The operator then removes fine, but the driver is left running...

[root@c943f4n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                   READY   STATUS    RESTARTS   AGE
ibm-spectrum-scale-csi-attacher-0      1/1     Running   0          42h
ibm-spectrum-scale-csi-provisioner-0   1/1     Running   0          42h
ibm-spectrum-scale-csi-vk7kr           2/2     Running   0          46m
ibm-spectrum-scale-csi-vpz6v           2/2     Running   0          46m
ibm-spectrum-scale-csi-ww78g           2/2     Running   0          46m
+ set +x

This is more of a question about whether removing the operator should also remove the driver.

To Reproduce

Not sure... will see if I can...

Expected behavior

Unknown; looking for some answers with this issue.

Environment

--- POD: ibm-spectrum-scale-csi-attacher-0 ---
  ibm-spectrum-scale-csi-attacher:
    Container ID:  cri-o://6798f2aad5ddc2968368080b52e6d173e4ab6155e4a82b9da133d23c3774dee6
    Image:         quay.io/k8scsi/csi-attacher:v1.0.0
    Image ID:      quay.io/k8scsi/csi-attacher@sha256:e57bb6abf0d78e638f70d38bdb07ee30ffe42d423a14fb2f910c11afab3a5e01
--- POD: ibm-spectrum-scale-csi-provisioner-0 ---
  csi-provisioner:
    Container ID:  cri-o://5334023271ac883db03e032ecf46439c9f98c992272a31a41df4c7edd6caef60
    Image:         quay.io/k8scsi/csi-provisioner:v1.0.0
    Image ID:      quay.io/k8scsi/csi-provisioner@sha256:cd0df00950e7b50154e83e29010eb2f2bfe0b661fb7fdd65c6621e8a49cd2bc0
--- POD: ibm-spectrum-scale-csi-vk7kr ---
  driver-registrar:
    Container ID:  cri-o://39e209f1237d2a3f6344905c3eb79b35b7674349d40872d992ca90f8d23770d7
    Image:         quay.io/k8scsi/csi-node-driver-registrar:v1.0.1
    Image ID:      quay.io/k8scsi/csi-node-driver-registrar@sha256:5ad51d40e6de6762ae9bf2acbcf7117d46c87bc50048a337ce5a5cd6697498b4
--
  ibm-spectrum-scale-csi:
    Container ID:  cri-o://5773bf605f616207c5f8e5674d191ffb1b2fa50ced709dc1e2bd3424adab1893
    Image:         quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver:v0.9.1
    Image ID:      quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:e5926978bd4f4a553df18fc79ac532037c4e31e3a2cdaa6a5c06014fba02c808
--- POD: ibm-spectrum-scale-csi-vpz6v ---
  driver-registrar:
    Container ID:  cri-o://d71f2b8b96edd5e48cb56ec60843471675ac9efcc7b7615c9fa20baa34810e0e
    Image:         quay.io/k8scsi/csi-node-driver-registrar:v1.0.1
    Image ID:      quay.io/k8scsi/csi-node-driver-registrar@sha256:5ad51d40e6de6762ae9bf2acbcf7117d46c87bc50048a337ce5a5cd6697498b4
--
  ibm-spectrum-scale-csi:
    Container ID:  cri-o://bded886e5b2e3aedc0b008bda5499c6572ee6e51681659c9ceb05d3cd2579432
    Image:         quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver:v0.9.1
    Image ID:      quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:e5926978bd4f4a553df18fc79ac532037c4e31e3a2cdaa6a5c06014fba02c808
--- POD: ibm-spectrum-scale-csi-ww78g ---
  driver-registrar:
    Container ID:  cri-o://e59f0ad8029ae46b4023e28e76226446335fefaccfa8a4cbf2b3265a7409f3a3
    Image:         quay.io/k8scsi/csi-node-driver-registrar:v1.0.1
    Image ID:      quay.io/k8scsi/csi-node-driver-registrar@sha256:5ad51d40e6de6762ae9bf2acbcf7117d46c87bc50048a337ce5a5cd6697498b4
--
  ibm-spectrum-scale-csi:
    Container ID:  cri-o://bce2077dbcca30c17c782b1bb991b2bdef1d6cec645dc91c849eb6554008c689
    Image:         quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver:v0.9.1
    Image ID:      quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:e5926978bd4f4a553df18fc79ac532037c4e31e3a2cdaa6a5c06014fba02c808


Operator serviceaccount needs cluster-admin role on Docker EE

Describe the bug
If the Spectrum Scale CSI operator is used to deploy the Scale CSI driver on a Kubernetes cluster with Docker Enterprise, the statefulsets and daemonset do not come up, and the errors below are seen in the operator logs:

TASK [csi-scale : Ensure csi-scale objects are present] ************************

task path: /opt/ansible/roles/csi-scale/tasks/main.yml:66

changed: [localhost] => (item={'name': 'spectrum_scale.yaml.j2'}) => {"ansible_loop_var": "item", "changed": true, "item": {"name": "spectrum_scale.yaml.j2"}, "method": "create", "result": {"apiVersion": "v1", "data": {"spectrum-scale-config.json": "{ \"clusters\":  [{\"id\": \"< Primary Cluster ID - WARNING: THIS IS A STRING NEEDS YAML QUOTES!>\", \"primary\": {\"primaryFs\": \"< Primary Filesystem >\", \"primaryFset\": \"< Fileset in Primary Filesystem >\"}, \"restApi\": [{\"guiHost\": \"< Primary cluster GUI IP/Hostname >\"}], \"secrets\": \"secret1\", \"secureSslMode\": false}] }"}, "kind": "ConfigMap", "metadata": {"creationTimestamp": "2020-01-07T02:07:44Z", "name": "spectrum-scale-config", "namespace": "ibm-spectrum-scale-csi-driver", "resourceVersion": "51753691", "selfLink": "/api/v1/namespaces/ibm-spectrum-scale-csi-driver/configmaps/spectrum-scale-config", "uid": "7e5a226c-30f2-11ea-b2d7-0242ac110009"}}}

failed: [localhost] (item={'name': 'csi-plugin-attacher.yaml.j2'}) => {"ansible_loop_var": "item", "changed": false, "error": 403, "item": {"name": "csi-plugin-attacher.yaml.j2"}, "msg": "Failed to create object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"**Failure\",\"message\":\"statefulsets.apps \\\\\"ibm-spectrum-scale-csi-attacher\\\\\" is forbidden: user \\\\\"system:serviceaccount:ibm-spectrum-scale-csi-driver:ibm-spectrum-scale-csi-operator\\\\\" is not an admin and does not have permissions to use host bind mounts for resource** \",\"reason\":\"Forbidden\",\"details\":{\"name\":\"ibm-spectrum-scale-csi-attacher\",\"group\":\"apps\",\"kind\":\"statefulsets\"},\"code\":403}\\n'", "reason": "Forbidden", "status": 403}

failed: [localhost] (item={'name': 'csi-plugin-provisioner.yaml.j2'}) => {"ansible_loop_var": "item", "changed": false, "error": 403, "item": {"name": "csi-plugin-provisioner.yaml.j2"}, "msg": "Failed to create object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"statefulsets.apps \\\\\"ibm-spectrum-scale-csi-provisioner\\\\\" is forbidden: user \\\\\"system:serviceaccount:ibm-spectrum-scale-csi-driver:ibm-spectrum-scale-csi-operator\\\\\" is not an admin and does not have permissions to use host bind mounts for resource \",\"reason\":\"Forbidden\",\"details\":{\"name\":\"ibm-spectrum-scale-csi-provisioner\",\"group\":\"apps\",\"kind\":\"statefulsets\"},\"code\":403}\\n'", "reason": "Forbidden", "status": 403}

failed: [localhost] (item={'name': 'csi-plugin.yaml.j2'}) => {"ansible_loop_var": "item", "changed": false, "error": 403, "item": {"name": "csi-plugin.yaml.j2"}, "msg": "Failed to create object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"daemonsets.apps \\\\\"ibm-spectrum-scale-csi\\\\\" is forbidden: user \\\\\"system:serviceaccount:ibm-spectrum-scale-csi-driver:ibm-spectrum-scale-csi-operator\\\\\" is not an admin and does not have permissions to use host bind mounts, host networking for resource \",\"reason\":\"Forbidden\",\"details\":{\"name\":\"ibm-spectrum-scale-csi\",\"group\":\"apps\",\"kind\":\"daemonsets\"},\"code\":403}\\n'", "reason": "Forbidden", "status": 403}

To Reproduce
Easy to reproduce on Docker EE. A normal deployment using the CSI operator will reproduce it.

Expected behavior
Driver deployment should be successful, and the statefulsets, daemonset, and driver pods should come up in a Running/Ready state.

Environment
Docker EE

Additional context
Adding the cluster-admin role to the ibm-spectrum-scale-csi-operator service account made it work; a ClusterRoleBinding sketch is included at the end of this issue. Reference: rook/rook#3356

Have a look at this section; for users running Docker Enterprise, this may be the culprit:

For cluster security, only UCP admin users and service accounts that are granted the cluster-admin ClusterRole for all Kubernetes namespaces via a ClusterRoleBinding can deploy pods with privileged options.

From: https://docs.docker.com/ee/ucp/authorization/

Part of the error text from the description includes "... is not an admin ...", which I don't recall seeing in my PSP work, but this text does show up in the Docker EE docs.
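
The workaround from the Additional context above can be expressed as a ClusterRoleBinding; a minimal sketch, assuming the operator's service account lives in the ibm-spectrum-scale-csi-driver namespace shown in the error messages (the binding name is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ibm-spectrum-scale-csi-operator-cluster-admin   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: ibm-spectrum-scale-csi-operator
    namespace: ibm-spectrum-scale-csi-driver

Granting cluster-admin is a broad workaround; if UCP offers a narrower grant that still permits host bind mounts and host networking, that would be preferable.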

Connection Test

In the cluster_check playbook we should run a connection test to the GUI server specified in the cluster object.

  • 200 - 👍
  • Anything else - 👎
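
A minimal sketch of such a check as an Ansible task using the uri module; the variable names, port, and REST endpoint below are assumptions, not the playbook's actual values:

- name: Verify connectivity to the Spectrum Scale GUI
  uri:
    url: "https://{{ gui_host }}:443/scalemgmt/v2/cluster"   # guiHost from the cluster object; endpoint and port are assumptions
    user: "{{ gui_user }}"
    password: "{{ gui_password }}"
    force_basic_auth: true
    validate_certs: false        # matches secureSslMode: false in the sample config
    status_code: 200             # anything other than 200 fails the task
  register: gui_check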

Scan Remediation: csi-plugin-attacher.yaml.j2

Template

roles/csi-scale/templates/csi-plugin-attacher.yaml.j2

Raw Scan

[WARNING] scanned-serviceaccount-ibm-spectrum-scale-csi-driver-ibm-spectrum-scale-csi-attacher.yaml: no imagePullSecrets defined, pods will not be able to pull namespace-scoped images from the local registry (ServiceAccountHasPullSecret)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: "ALL" not found in spec.template.spec.containers[0].securityContext.capabilities.drop (ContainerHasDropAll)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under spec.template.metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: metering annotations ["productID" "productName" "productVersion"] not found under spec.template.metadata.annotations (MeteringAnnotationsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: neither spec.template.spec.containers[0].resources.limits.cpu nor spec.template.spec.containers[0].resources.requests.cpu is defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: spec.template.spec.containers[0].livenessProbe not defined (ContainerHasLivenessProbe)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: spec.template.spec.containers[0].resources.limits.memory not defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: spec.template.spec.containers[0].resources.requests.memory not defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: use of hostPath at spec.template.spec.volumes[0].hostPath not allowed (NoHostPath)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: value "beta.kubernetes.io/arch" at some spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[i].matchExpressions[j].key not defined for architecture-based node affinity (PodHasArchBasedNodeAffinity)

Action Items

  • "ALL" not found in spec.template.spec.containers[0].securityContext.capabilities.drop (ContainerHasDropAll)
  • ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
  • ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under spec.template.metadata.labels
  • metering annotations ["productID" "productName" "productVersion"] not found under spec.template.metadata.annotations (MeteringAnnotationsDefined)
  • neither spec.template.spec.containers[0].resources.limits.cpu nor spec.template.spec.containers[0].resources.requests.cpu is defined
  • spec.template.spec.containers[0].livenessProbe not defined (ContainerHasLivenessProbe)
  • spec.template.spec.containers[0].resources.limits.memory not defined (ContainerDefinesResources)
  • spec.template.spec.containers[0].resources.requests.memory not defined (ContainerDefinesResources)
  • use of hostPath at spec.template.spec.volumes[0].hostPath not allowed (NoHostPath)
  • value "beta.kubernetes.io/arch" at some spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[i].matchExpressions[j].key not defined for architecture-based node affinity (PodHasArchBasedNodeAffinity)
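
One way the container-level items above might be addressed in roles/csi-scale/templates/csi-plugin-attacher.yaml.j2 is sketched below; the probe command, resource figures, and architecture list are illustrative assumptions, not tested or certified values. The hostPath finding likely needs to be handled as a documented exception rather than a template change, since the attacher mounts the driver's socket directory from the host.

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/arch    # the key the linter expects; newer clusters use kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
                      - ppc64le                     # supported architectures are an assumption
      containers:
        - name: ibm-spectrum-scale-csi-attacher
          securityContext:
            capabilities:
              drop:
                - ALL
          resources:
            requests:
              cpu: 20m              # illustrative values, not measured
              memory: 40Mi
            limits:
              cpu: 100m
              memory: 100Mi
          livenessProbe:
            exec:
              command: ["/csi-attacher", "--version"]   # assumption: any cheap command proving the binary still runs
            initialDelaySeconds: 10
            periodSeconds: 60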

ansible-playbook fails

[root@worker01 IBM]# ansible-playbook $GOPATH/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/dev-env-playbook.yaml
 [WARNING]: Could not match supplied host pattern, ignoring: all

 [WARNING]: provided hosts list is empty, only localhost is available


PLAY [Prepare environment for development.] **********************************************************************************************************

TASK [Ensure common tasks are run] *******************************************************************************************************************
included: /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/dev-env.yaml for localhost

TASK [Set environment facts] *************************************************************************************************************************
ok: [localhost]

TASK [Ensure 'python3' requirements are installed] ***************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "/usr/bin/pip3 install -r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt", "msg": "stdout: Requirement already satisfied: sphinx in /usr/local/lib64/python3.6/site-packages (from -r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: sphinx_rtd_theme in /usr/local/lib/python3.6/site-packages (from -r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 2))\nRequirement already satisfied: recommonmark in /usr/local/lib/python3.6/site-packages (from -r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 3))\nRequirement already satisfied: operator-courier==2.0.1 in /usr/local/lib/python3.6/site-packages (from -r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 4))\nRequirement already satisfied: pyyaml in /usr/local/lib64/python3.6/site-packages (from -r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 5))\nCollecting molecule (from -r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\n  Using cached https://files.pythonhosted.org/packages/af/be/3084cbedc051e179062cf887fd5933c1c5f1031200f62f718048f36dc604/molecule-2.22-py2.py3-none-any.whl\nCollecting jmespath (from -r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 7))\n  Using cached https://files.pythonhosted.org/packages/83/94/7179c3832a6d45b266ddb2aac329e101367fbdb11f425f13771d27f225bb/jmespath-0.9.4-py2.py3-none-any.whl\nRequirement already satisfied: imagesize in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: sphinxcontrib-htmlhelp in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: alabaster<0.8,>=0.7 in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: Pygments>=2.0 in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: babel!=2.0,>=1.3 in /usr/local/lib64/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: sphinxcontrib-serializinghtml in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: snowballstemmer>=1.1 in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: sphinxcontrib-qthelp in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement 
already satisfied: requests>=2.5.0 in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: docutils>=0.12 in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: setuptools in /usr/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: sphinxcontrib-jsmath in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: sphinxcontrib-devhelp in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: packaging in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: Jinja2>=2.3 in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: sphinxcontrib-applehelp in /usr/local/lib/python3.6/site-packages (from sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: commonmark>=0.8.1 in /usr/local/lib/python3.6/site-packages (from recommonmark->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 3))\nRequirement already satisfied: validators in /usr/local/lib/python3.6/site-packages (from operator-courier==2.0.1->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 4))\nRequirement already satisfied: semver in /usr/local/lib/python3.6/site-packages (from operator-courier==2.0.1->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 4))\nRequirement already satisfied: testinfra<4,>=3.0.6 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: yamllint<2,>=1.15.0 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: cerberus>=1.3.1 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: click-completion>=0.3.1 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: ansible-lint<5,>=4.0.2 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: flake8>=3.6.0 in /usr/local/lib/python3.6/site-packages (from 
molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: six>=1.11.0 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: click>=6.7 in /usr/local/lib64/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: sh>=1.12.14 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: tabulate>=0.8.3 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: paramiko<3,>=2.5.0 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: pexpect<5,>=4.6.0 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: cookiecutter>=1.6.0 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nCollecting psutil<6,>=5.4.6; sys_platform != \"win32\" and sys_platform != \"cygwin\" (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\n  Using cached https://files.pythonhosted.org/packages/73/93/4f8213fbe66fc20cb904f35e6e04e20b47b85bee39845cc66a0bcf5ccdcb/psutil-5.6.7.tar.gz\nRequirement already satisfied: tree-format>=0.1.2 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: colorama>=0.3.9 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: ansible>=2.5 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: anyconfig==0.9.7 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: python-gilt<2,>=1.2.1 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: pre-commit<2,>=1.17.0 in /usr/local/lib/python3.6/site-packages (from molecule->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 6))\nRequirement already satisfied: pytz>=2015.7 in /usr/local/lib/python3.6/site-packages (from babel!=2.0,>=1.3->sphinx->-r /root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/common/requirements-dev.txt (line 1))\nRequirement already satisfied: 
certifi, urllib3, chardet, idna, and the rest of the requirements-dev.txt dependency chain (the long run of "Requirement already satisfied: ..." lines is omitted here for readability)
Installing collected packages: psutil, molecule, jmespath
  Running setup.py install for psutil: started
    Running setup.py install for psutil: finished with status 'error'
    (build_py copy steps omitted)
    running build_ext
    building 'psutil._psutil_linux' extension
    gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_VERSION=567 -DPSUTIL_LINUX=1 -I/usr/include/python3.6m -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_common.o
    psutil/_psutil_common.c:9:20: fatal error: Python.h: No such file or directory
     #include <Python.h>
    compilation terminated.
    error: command 'gcc' failed with exit status 1

:stderr: WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-eveoju1b/psutil/setup.py';..." install --record /tmp/pip-zppg6_jv-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-eveoju1b/psutil/
	to retry, use: --limit @/root/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/ansible/dev-env-playbook.retry

PLAY RECAP *******************************************************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1   

I have python3 and pip3 installed. Why do I get these two warnings and this error?

Also, you should mention in the readme that the ansible package needs to be preinstalled. I had to install it manually beforehand with yum install ansible.
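The fatal error: Python.h: No such file or directory in the output above usually means the Python development headers are missing on the build host, so psutil's C extension cannot compile. A minimal sketch of the prerequisites, assuming a RHEL/CentOS-style node (package names may differ on other distributions):

yum install -y gcc python3-devel ansible   # python3-devel provides Python.h; gcc builds psutil's C extension; ansible per the note above
pip3 install --user psutil                 # retry the dependency that failed, now that the headers are available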

Molecule test fails due to SSL error on local machine

While running molecule test -s test-local in my personal development environment I hit the following molecule error in the Build Operator Image step:

{
	"changed": true,
	"cmd": ["docker", "build", "-f", "/build/build/Dockerfile", "-t", "ibm.scale.com/csi-scale-operator:testing", "/build"],
	"delta": "0:00:01.123022",
	"end": "2019-11-12 16:20:37.560418",
	"msg": "non-zero return code",
	"rc": 1,
	"start": "2019-11-12 16:20:36.437396",
	"stderr": "Get https://quay.io/v2/: x509: certificate is valid for jdunham-rh72-k8s-scale-master.com, not quay.io",
	"stderr_lines": ["Get https://quay.io/v2/: x509: certificate is valid for jdunham-rh72-k8s-scale-master.ibm.com, not quay.io"],
	"stdout": "Sending build context to Docker daemon  44.16MB\r\r\nStep 1/7 : FROM quay.io/operator-framework/ansible-operator:v0.11.0",
	"stdout_lines": ["Sending build context to Docker daemon  44.16MB", "", "Step 1/7 : FROM quay.io/operator-framework/ansible-operator:v0.11.0"]
}

This error is non-blocking and not indicative of an issue with the operator; however, I wanted to track it here to aid problem resolution for future developers and reduce the risk of others opening it as a new issue.

For search: x509: certificate is valid for . not .
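One way to narrow this down (a hedged diagnostic sketch, not specific to this operator) is to check which certificate is actually served when the host reaches quay.io:

openssl s_client -connect quay.io:443 -servername quay.io </dev/null 2>/dev/null | openssl x509 -noout -subject
# If the subject names a local machine instead of quay.io, look at /etc/hosts, local DNS overrides, or a transparent proxy on the network.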

Installing docker on openshift worker node makes the drivers unusable

On worker01, an OpenShift 4 node running podman, I installed docker for testing purposes. Now I cannot run the CSI driver on this node:

[root@worker01 ibm-spectrum-scale-csi-driver]# oc get pods
NAME                               READY   STATUS             RESTARTS   AGE
csi-spectrum-scale-attacher-0      1/1     Running            0          8m5s
csi-spectrum-scale-m4rfr           2/2     Running            0          8m4s
csi-spectrum-scale-nfhrm           0/2     ImagePullBackOff   4          8m4s
csi-spectrum-scale-provisioner-0   1/1     Running            0          8m4s
jenkins-1-6xrvl                    1/1     Running            2          9d
jupyter-f7779d9-8m2pf              1/1     Running            2          9d
Failed to pull image "localhost/csi-spectrum-scale:v0.9.0": rpc error: code = Unknown desc = pinging docker registry returned: error pinging registry localhost, response code 503 (Service Unavailable)

I think this is because docker is now on the node. You should ignore docker, as it is not the default container utility on OpenShift 4 (that is podman).
Anyway, I removed docker.

yum remove docker

But the pod is not being updated anymore.

[root@worker01 ibm-spectrum-scale-csi-driver]# oc get pods
NAME                               READY   STATUS             RESTARTS   AGE
csi-spectrum-scale-attacher-0      1/1     Running            0          11m
csi-spectrum-scale-m4rfr           2/2     Running            0          11m
csi-spectrum-scale-nfhrm           0/2     ImagePullBackOff   5          11m
csi-spectrum-scale-provisioner-0   1/1     Running            0          11m
jenkins-1-6xrvl                    1/1     Running            2          9d
jupyter-f7779d9-8m2pf              1/1     Running            2          9d

I destroyed the driver (and waited a couple of seconds until the pods disappeared) and deployed it again.

[root@worker01 ibm-spectrum-scale-csi-driver]# deploy/destroy.sh 
[root@worker01 ibm-spectrum-scale-csi-driver]# deploy/create.sh 
[root@worker01 ibm-spectrum-scale-csi-driver]# oc get pods -owide
NAME                               READY   STATUS             RESTARTS   AGE     IP             NODE                      NOMINATED NODE   READINESS GATES
csi-spectrum-scale-attacher-0      1/1     Running            1          7m22s   10.131.1.175   worker01.ocp4.scale.com   <none>           <none>
csi-spectrum-scale-d9w8p           0/2     ImagePullBackOff   4          7m21s   192.168.1.15   worker01.ocp4.scale.com   <none>           <none>
csi-spectrum-scale-mgplh           2/2     Running            0          7m21s   192.168.1.16   worker02.ocp4.scale.com   <none>           <none>
csi-spectrum-scale-provisioner-0   1/1     Running            0          7m21s   10.131.1.176   worker01.ocp4.scale.com   <none>           <none>
jenkins-1-6xrvl                    1/1     Running            2          9d      10.131.1.152   worker01.ocp4.scale.com   <none>           <none>
jupyter-f7779d9-8m2pf              1/1     Running            2          9d      10.131.1.148   worker01.ocp4.scale.com   <none>           <none>

Still, no change. I cannot use the driver anymore on my worker01. How to fix it?
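For the record, one hedged recovery sketch for this kind of state (it assumes the node runs CRI-O under OpenShift 4 and that you have root on it; the pod name is taken from the output above):

systemctl restart crio kubelet           # on worker01, restart the container runtime and kubelet after removing docker
oc delete pod csi-spectrum-scale-d9w8p   # let the DaemonSet reschedule the pod and retry the image pull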

Make primaryFset, clusterId and scaleHostpath params optional in driver configuration yaml file

When I create an instance of a CSI driver, I need to edit a yaml file. There, I need to specify primaryFset and scaleHostpath.

I would expect that if I do not specify primaryFset, the driver/operator would automatically create a generic fileset in my filesystem.

For scaleHostpath, the operator/driver could obtain it from the Spectrum Scale GUI via the REST API, so I think you should not require the user to remember and enter the path.

Can you make both of the parameters optional ? This would simplify the configuration.
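For illustration only: the mount information is already exposed by the Scale management REST API, so the operator could in principle discover it (host, credentials, and filesystem name are placeholders; the exact response fields depend on the Scale release):

curl -k -u <gui-user>:<gui-password> "https://<gui-host>:443/scalemgmt/v2/filesystems/<filesystem-name>"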

Cert Checklist

Checklist

{Check #} [{Catalog Min}, {Certified}] : {Condition}

Key Action
R Required
A If Applicable
P Preferred
'' Optional

1. Production Grade Config and Topology

  • 1.1 [,R] : Does your workload follow best practices for running the respective product in a real-life production environment ?
  • 1.2 [,R] : Does your workload maintain docker images on a frequent timeline consistent with other non-container product deliveries ?
  • 1.3 [,R/A] : Does your workload manage persistent volumes correctly, to avoid loss of data when pods are re-scheduled as disruptions and recovery occur ?
  • 1.4 [,R/A] : Does your workload provide clients with backup/recovery and DR data procedures ?
  • 1.5 [P,P] : Does your workload support Multi-Architectures (intel, power, z) ?
  • 1.6 [P,P] : Does your workload support Portability for IKS and ICP ?
  • 1.7 [,P] : Does your workload provide a way to list, provision and bind your workload to other consumers/services without needing detailed knowledge of how your workload is created or managed ?
  • 1.8 [,R/A] : Does your workload provide RedHat OpenShift Certified Operators to deploy and / or manage instances of your workload ?
  • 1.9 [R/A,R/A] : Does your Operator embed your existing helm chart ?
  • 1.10 [R/A,R/A] : Do all of the resources deployed via your operator meet our existing certification guidelines ?
  • 1.11 [R/A,R/A] : Do all of the resources that deploy your operator and CRDs meet our existing certification guidelines ?
  • 1.12 [R/A,R/A] : If you have an ansible based operator, do all of the k8s jinja templates meet our existing certification guidelines?

2. Self Healing / Automatic Failover

  • 2.1 [,R/A] : Does your workload support multiple active replicas (same image individually addressable) ?
  • 2.2 [,R/A] : Does your workload manage networking topologies that intelligently route requests when recovery/disruption occurs ?
  • 2.3 [R ,R] : Does your workload monitor its health and understand how it reacts to these events ?
  • 2.4 [,P/A] : Does your workload support an active/standby failover model, and have you made it work in a native Kubernetes way ?

3. Self Healing / Automatic Failover

  • 3.1 [,R/A] : Does your workload consider controlling scheduling of the pods to ensure maximum resiliency ?
  • 3.2 [,R] : Does your workload test for resiliency ?
  • 3.3 [,R] : Does your workload test for performance ?
  • 3.4 [,P] : Does your workload run in multiple failure zones in a single cluster ?

4. Ability to scale up/down

  • 4.1 [P,R] : Does your workload support being manually scaled to add instances transparently to the end user ?
  • 4.2 [,P] : Does your workload support horizontal pod autoscaling ?
  • 4.3 [,R/A] : Does your workload support scaling a StatefulSet (if using StatefulSets) ? Can you add members into the StatefulSet for a stateful workload without service interruption ?
  • 4.4 [,R/A] : Does your workload support controlled scale down ?
  • 4.5 [,R/A] : Does your workload support being deployed more than once in the same cluster ?

5. Image Vulnerability Scanning / Mgmt

  • 5.1 [R,R] : Does your workload manage all image vulnerabilities on an ongoing basis on the timelines established with the Product Security Incident Response Team (PSIRT) ?
  • 5.2 [R,R] : Does your workload ensure integration with the content CICD continuous scanning for vulnerabilities for ongoing visibility of image vulnerabilities ?
  • 5.3 [R,P] : Does your workload ensure that if you are shipping images on Docker Hub (or any image repository) that the repository's vulnerability detector does not show critical exposures for your shipped images ?
  • 5.4 [P,R] : Does your workload deliver RedHat certified UBI based images ?

6. Limits Privileges/Context

  • 6.1 [P,P] : Does your workload run with privileges that allow it to meet CIS controls ?
  • 6.2 [R,R] : Does your workload require elevated (non-CIS compliant) privileges and deliver security policies to run the workload with the least privilege required ?

7. Secure Access Considerations

  • 7.1 [R,R] : Does your workload require special user privileges to install (team admin or cluster admin) ?
  • 7.2 [P,R] : Does your workload ensure fine grained separation of roles for admin activities vs operator activities ?
  • 7.3 [P/R,R] : Does your workload avoid exposing sensitive information in the Helm Release ?
  • 7.4 [,R/A] : Does your workload use TLS for encryption in motion ?
  • 7.5 [,P] : Does your workload have the ability to operate using customer certificates ?
  • 7.6 [,R/A] : Does your workload support the ability to have encryption at rest if required by a customer ?
  • 7.7 [P,R] : Does your workload provide authentication / authorization ?
  • 7.8 [,P] : Does your workload provide audit capabilities for data access ?
  • 7.9 [R/A,R/A] : Does your Operator support being configured with different scopes ?

8. BISO Compliance

  • 8.1 [P/R,R] : Does your workload follow the Secure Release process ?

9. Upgrade/Rollback Using Platform Experience

  • 9.1 [P,R] : Does your workload use the platform functions in the UI to upgrade and rollback all charts ?
  • 9.2 [P,R] : Does your workload test Upgrades prior to shipment and validate that your release notes match the upgrade support from version to version ?
  • 9.3 [,R] : Does your workload support non-disruptive rolling upgrades within patch/minor release levels ?

10. Standard Version Management

  • 10.1 [R,R] : Does your workload honor and follow the helm chart sem-version standard ?
  • 10.2 [R,R] : Does your workload maintain your docker versions / tags separate from the Helm Chart version ?

11. Maintain Consistency/Consumability Guidelines

  • 11.1 [R,R] : Does your workload deliver all content to the content CICD and compatibility before a chart is shipped ?
  • 11.2 [R,R] : Does your workload adhere to the Linter recommendations for both errors and warnings and address them for each product shipment ?
  • 11.3 [R,R] : Does your workload make the updates as recommended by the content squad focal based on the code reviews or PRs to merge the code ?
  • 11.4 [R/A,R/A] : Does your workload provide Operators packaged as part of a CASE bundle ?

12. Maintain Content Currency with Versions (kube/helm/platform)

  • 12.1 [R,R] : Does your workload stay current with the Platform and ensure it supports each release within 30 days of the platform GA ?
  • 12.2 [R,R] : Does your workload deliver regular releases of helm content every 120 days or less ?

13. Integration with Catalog Experience

  • 13.1 [R,R] : Does your workload use the platform catalog features such as metadata, keywords, release notes, launch links, readme conventions, etc., visible in the platform UI ?
  • 13.2 [R,R] : Does your workload have a "consumable" experience in the platform catalog UI deployment process ?
  • 13.3 [P,R/A] : Does your workload ensure all resources that your workload deploys (for embedded helm charts/objects) are associated together via helm-releases or tied information ?

14. Integration with Logging Service

  • 14.1 [,R] : Does your workload output logs to standard out for consumption by the platform common service ?
  • 14.2 [,P/A] : Does your workload deliver custom Kibana dashboards that are personalized for custom logging of your workload ?

15. Integration with Monitoring Service

  • 15.1 [,P] : Does your workload support workload specific monitoring enabled in the platform common service ?
  • 15.2 [,P/A] : Does your workload deliver custom Grafana dashboards that are personalized for custom monitoring of your workload ?

16. Integration with Metering Services

  • 16.1 [R,R] : Does your workload use the mandatory metering annotations ?
  • 16.2 [,] : Does your workload use custom metering metrics ?

17. Use production/supported Helm/Kube features

  • 17.1 [P,R] : Does your workload avoid use of alpha functions ?
  • 17.2 [R,R] : Does your workload avoid using hard-coded kubernetes capabilities by "version", e.g. kubernetes 1.8 ?

18. Full Stack Compatibility Verification

  • 18.1 [R,R] : Does your workload maintain current code in the compatibility Testing in sync with PPA and FixCentral shipments ?
  • 18.2 [R,R] : Does your workload provide Content Verification Tests (cv-tests) to ensure your Chart installs, runs, and cleans up the workload in the compatibility test environments ?

19. Multiple Configuration Testing

  • 19.1 [P,R] : Does your workload have multiple CV Tests that are built to exercise your chart and workload for multiple configurations and use cases ?
  • 19.2 [R,R] : Does your workload deploy both from the CLI and from the Platform Catalog User Experience ?
  • 19.3 [R,R] : Does your workload test on ALL platforms identified in the Chart.yaml keywords file ?

20. Ship End User License Displayable in Catalog

  • 20.1 [R,R] : Does your workload have the Product License/s available in the helm chart and viewable in the Catalog ?
  • 20.2 [R,R] : Does the product license in your docker image match your helm chart product license ?

21. On-boarded to the Integrated Support Processes

  • 21.1 [R/A,R/A] : Does your workload follow the ICP support processes ?
  • 21.2 [R/A,R/A] : Does your workload follow the IBM Public Cloud support processes ?

Problem with configuring driver with local and remote clusters

@przem123 commented on Wed Dec 11 2019

I created a configuration for the CSI driver with a local and a remote cluster. 17399599334479190260 and gpfs0 are the local cluster ID and filesystem; 4473793006880872527 is my remote cluster.

apiVersion: csi.ibm.com/v1
kind: CSIScaleOperator
metadata:
  labels:
    app.kubernetes.io/instance: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/managed-by: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/name: ibm-spectrum-scale-csi-operator
  name: ibm-spectrum-scale-csi
  release: ibm-spectrum-scale-csi-operator
  namespace: ibm-spectrum-scale-csi-driver
spec:
  clusters:
    - id: '17399599334479190260'
      primary:
        primaryFs: gpfs0
        primaryFset: csifset
      restApi:
        - guiHost: 10.10.1.21
      secrets: csisecret-local
      secureSslMode: false
    - id: '4473793006880872527'
      restApi:
        - guiHost: 10.10.1.52
      secrets: csisecret-remote
      secureSslMode: false
  scaleHostpath: /mnt/gpfs0
status: {}

The driver and other items are in the ibm-spectrum-scale-csi-driver namespace.

[root@worker01 operator]# oc get all -n ibm-spectrum-scale-csi-driver
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/ibm-spectrum-scale-csi-attacher-0                  1/1     Running   0          5h12m
pod/ibm-spectrum-scale-csi-kjcs6                       2/2     Running   0          4h46m
pod/ibm-spectrum-scale-csi-nxmkg                       2/2     Running   0          4h46m
pod/ibm-spectrum-scale-csi-operator-75f65c5999-9wnk2   2/2     Running   0          5h58m
pod/ibm-spectrum-scale-csi-provisioner-0               1/1     Running   0          5h12m

NAME                                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/ibm-spectrum-scale-csi-operator-metrics   ClusterIP   172.30.179.144   <none>        8383/TCP   5h58m

NAME                                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/ibm-spectrum-scale-csi   2         2         2       2            2           <none>          5h12m

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ibm-spectrum-scale-csi-operator   1/1     1            1           5h58m

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/ibm-spectrum-scale-csi-operator-75f65c5999   1         1         1       5h58m

NAME                                                  READY   AGE
statefulset.apps/ibm-spectrum-scale-csi-attacher      1/1     5h12m
statefulset.apps/ibm-spectrum-scale-csi-provisioner   1/1     5h12m

Unfortunately, I cannot use dynamic provisioning for the local and remote fs:

[root@worker01 operator]# oc describe pvc scale-remotefset-pvc
Name:          scale-remotefset-pvc
Namespace:     default
StorageClass:  ibm-spectrum-scale-csi-remotefs
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: ibm-spectrum-scale-csi
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Events:
  Type       Reason                Age                      From                         Message
  ----       ------                ----                     ----                         -------
  Normal     ExternalProvisioning  109s (x1191 over 4h51m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "ibm-spectrum-scale-csi" or manually created by system administrator
Mounted By:  <none>

Any idea how to solve it ?


@yadaven commented on Thu Dec 12 2019

I think the storage class is not using the correct provisioner name. It was recently changed to spectrumscale.csi.ibm.com instead of ibm-spectrum-scale-csi.
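For illustration, a minimal sketch of a storage class using the new provisioner name (the storage class name is taken from the PVC output above; real deployments need driver-specific parameters, which are omitted here):

cat <<EOF | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibm-spectrum-scale-csi-remotefs
provisioner: spectrumscale.csi.ibm.com
reclaimPolicy: Delete
EOF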


@mew2057 commented on Thu Dec 12 2019

@yadaven @smitaraut Has the documentation in the GitHub Repo and KC been verified to match the provisioner name change? Moving this issue to driver repo.

When deleting Custom Resource, driver pods increase first before deleted

Describe the bug

I have noticed on my test system that when I issue oc delete -f <cr.yaml> to remove the driver pods and monitor the number of pods, I see them increase first (but in the Terminating state) before going away completely. I just wanted to log this so it's not lost. It's not high priority.

It could be that I'm in this bad state, but I'll also monitor a working state next... With driver image v0.9.1, it's in a non-running state:

[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS             RESTARTS   AGE
ibm-spectrum-scale-csi-8vnh7                       0/2     CrashLoopBackOff   22         41m
ibm-spectrum-scale-csi-attacher-0                  0/1     CrashLoopBackOff   52         7h36m
ibm-spectrum-scale-csi-hv7tg                       0/2     CrashLoopBackOff   22         41m
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running            0          8h
ibm-spectrum-scale-csi-provisioner-0               1/1     Running            0          7h36m
ibm-spectrum-scale-csi-x9q78                       0/2     CrashLoopBackOff   22         41m
+ set +x

Deleting the CR

[root@c943f5n01-pvt csi-operator-ansible]# oc delete -f ibm-spectrum-scale-csi-operator-cr.yaml
csiscaleoperator.scale.ibm.com "ibm-spectrum-scale-csi" deleted
[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS        RESTARTS   AGE
ibm-spectrum-scale-csi-8vnh7                       0/2     Terminating   30         61m
ibm-spectrum-scale-csi-attacher-0                  0/1     Terminating   55         7h57m
ibm-spectrum-scale-csi-hv7tg                       0/2     Terminating   30         61m
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running       0          9h
ibm-spectrum-scale-csi-provisioner-0               0/1     Terminating   0          7h56m
ibm-spectrum-scale-csi-x9q78                       0/2     Terminating   30         61m
+ set +x

Checking pods a few seconds later, we see

[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS        RESTARTS   AGE
ibm-spectrum-scale-csi-8vnh7                       0/2     Terminating   30         61m
ibm-spectrum-scale-csi-attacher-0                  0/1     Terminating   55         7h57m
ibm-spectrum-scale-csi-hv7tg                       0/2     Terminating   30         61m
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running       0          9h
ibm-spectrum-scale-csi-provisioner-0               0/1     Terminating   0          7h56m
ibm-spectrum-scale-csi-qb6nh                       0/2     Pending       0          0s
ibm-spectrum-scale-csi-shm85                       0/2     Pending       0          0s
ibm-spectrum-scale-csi-x9q78                       0/2     Terminating   30         61m
ibm-spectrum-scale-csi-zn5t6                       0/2     Terminating   0          0s
+ set +x

Then eventually it goes down and away

[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS        RESTARTS   AGE
ibm-spectrum-scale-csi-8vnh7                       0/2     Terminating   30         61m
ibm-spectrum-scale-csi-attacher-0                  0/1     Terminating   55         7h57m
ibm-spectrum-scale-csi-hv7tg                       0/2     Terminating   30         61m
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running       0          9h
ibm-spectrum-scale-csi-qb6nh                       0/2     Terminating   0          6s
ibm-spectrum-scale-csi-shm85                       0/2     Terminating   0          6s
ibm-spectrum-scale-csi-x9q78                       0/2     Terminating   30         61m
ibm-spectrum-scale-csi-zn5t6                       0/2     Terminating   0          6s
+ set +x
[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS        RESTARTS   AGE
ibm-spectrum-scale-csi-hv7tg                       0/2     Terminating   30         62m
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running       0          9h
ibm-spectrum-scale-csi-qb6nh                       0/2     Terminating   0          9s
ibm-spectrum-scale-csi-x9q78                       0/2     Terminating   30         62m
+ set +x
[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS        RESTARTS   AGE
ibm-spectrum-scale-csi-hv7tg                       0/2     Terminating   30         62m
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running       0          9h
ibm-spectrum-scale-csi-qb6nh                       0/2     Terminating   0          10s
+ set +x
[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS        RESTARTS   AGE
ibm-spectrum-scale-csi-hv7tg                       0/2     Terminating   30         62m
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running       0          9h
+ set +x
[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS        RESTARTS   AGE
ibm-spectrum-scale-csi-hv7tg                       0/2     Terminating   30         62m
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running       0          9h
+ set +x
[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS        RESTARTS   AGE
ibm-spectrum-scale-csi-hv7tg                       0/2     Terminating   30         62m
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running       0          9h
+ set +x
[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS        RESTARTS   AGE
ibm-spectrum-scale-csi-hv7tg                       0/2     Terminating   30         62m
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running       0          9h
+ set +x
[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS        RESTARTS   AGE
ibm-spectrum-scale-csi-hv7tg                       0/2     Terminating   30         62m
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running       0          9h
+ set +x
[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS    RESTARTS   AGE
ibm-spectrum-scale-csi-operator-6ff9cf6979-h7bss   2/2     Running   0          9h
+ set +x
[root@c943f5n01-pvt csi-operator-ansible]#

To Reproduce

Deploy v0.9.2 of the operator and v0.9.1 of the driver... the operator comes up fine, the driver comes up with the CrashLoopBackOff error... Then delete the driver and watch the number of pods closely.
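One convenient way to observe the transition in real time (a small sketch using the CR file name from above):

oc delete -f ibm-spectrum-scale-csi-operator-cr.yaml && oc get pods -n ibm-spectrum-scale-csi-driver -w   # -w streams pod status changes as they happen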

Expected behavior

The pods should not increase (??)

Environment

Openshift 4.2 GA code, not building the operator but using quay images v.0.9.2 for operator and v0.9.1 for driver.


CSI Driver documentation needs to be fixed

I installed the CSI. This is the list of points I have:

  1. You need to mention the installation of docker as it is needed for image creation (make build-image)
    yum install docker
    systemctl start docker

Maybe you can provide a full image instead?

  2. 3x the same cd
    cd $GOPATH/src/github.com/IBM/ibm-spectrum-scale-csi-driver

  3. You don't write the full path to spectrum-scale-driver.conf
    Set the variable CSI_SCALE_PATH before mentioning spectrum-scale-driver.conf

  4. You don't write on which node the CSI should be deployed. As it needs kubectl, it would be a worker node.

  5. Fix the relative path in your create.sh script
    [root@worker01 deploy]# ./create.sh
    error: the path "deploy/csi-attacher-rbac.yaml" does not exist

  6. Why do I need to encode the admin user and password in the spectrum-scale-driver.conf ? Can you encode them automatically in your python script?
    Also, you should add information on how to encode them with base64 (for example: echo guipass | base64) if you cannot encode them with the python script (see the sketch after this list).

  7. In spectrum-scale-driver.conf I need to set the mount path for the filesystem. Can't you determine it ? I specified the fs in the conf file already.

  • Mount path of the primary filesystem.
    scalehostpath = /mnt/gpfs0

  8. Add information on how to determine spectrumscaleplugin in spectrum-scale-driver.conf.
    It can be obtained from the command below, but this information is missing in the github documentation.
    podman images | grep spectrum-scale

Maybe you can discover it in your python script ?

  9. Use newer versions for these images, as the versions used in the conf file are already over 1 year old:
    provisioner = quay.io/k8scsi/csi-provisioner:v1.0.0
    attacher = quay.io/k8scsi/csi-attacher:v1.0.0
    driverregistrar = quay.io/k8scsi/csi-node-driver-registrar:v1.0.1

  10. Dynamic Provisioning Example
    The files are called
    podfileset.yaml
    pvcfileset.yaml
    NOT podfset.yaml pvcfset.yaml as in the documentation

  11. In spectrum-scale-driver.conf I had to set whether I have Openshift. Can't you determine it ?

  • Specify true if this is an openshift deployment
    openshiftdeployment = true

  12. For the remote cluster, provide a way to install it automatically, without editing yaml files. Also, setting remoteFS/remoteCluster in the primary section does not seem to be required, as the primary FS must be a local one.
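On the credentials point (item 6), one hedged alternative is to skip hand-encoding entirely and let kubectl build the secret; the secret and namespace names below are only examples:

kubectl create secret generic spectrum-scale-gui-secret \
  --from-literal=username=<gui-user> \
  --from-literal=password='<gui-password>' \
  -n ibm-spectrum-scale-csi-driver     # kubectl base64-encodes the values itself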

CSI driver does not work with Reclaim Policy DELETE on static PV

I have reclaim policy DELETE on my directory from a remote fs:

[root@worker01 tools]# oc describe pv pv-images
Name:            pv-images
Labels:          id=test1
Annotations:     kubectl.kubernetes.io/last-applied-configuration:
                   {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"id":"test1"},"name":"pv-images"},"spec":{"accessModes...
                 pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    
Status:          Bound
Claim:           default/test3
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:         
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            ibm-spectrum-scale-csi
    VolumeHandle:      17399599334479190260;099B6A42:5DC52700;path=/mnt/remote_fs0_1m/static/images
    ReadOnly:          false
    VolumeAttributes:  <none>
Events:                <none>

When I remove the corresponding PVC, then the PV goes into a failed state. Nothing will be removed.

status:
  phase: Failed
  message: >-
    Error getting deleter volume plugin for volume "pv-images": no volume plugin
    matched

Please fix it so that I can use reclaim policy DELETE on static PVs so that the data is then removed automatically.
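For reference, a Retain policy on the same static PV avoids the deleter code path entirely, at the cost of leaving the directory in place (a sketch assembled from the values reported above):

cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-images
  labels:
    id: test1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ibm-spectrum-scale-csi
    volumeHandle: "17399599334479190260;099B6A42:5DC52700;path=/mnt/remote_fs0_1m/static/images"
EOF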

Removed image cannot be recreated

I created the image from the devel branch.

[root@worker01 ibm-spectrum-scale-csi-driver]# podman build -t ibm-spectrum-scale-csi:v0.9.2 -f Dockerfile.msb .
STEP 1: FROM golang:1.13.1 AS builder
STEP 2: RUN wget -q -O $GOPATH/bin/dep https://github.com/golang/dep/releases/download/v0.5.1/dep-linux-amd64
--> Using cache 3c41a5abe1cae7d1b529dd9eb3186d7bda820122dc8c8bd9fc76a7977d6a1b5d
STEP 3: RUN chmod +x $GOPATH/bin/dep && export PATH=$PATH:$GOPATH/bin
--> Using cache 4a1b178d985760041cc670390f0997b832cb28526de2aca4768807992438f75f
STEP 4: WORKDIR /go/src/github.com/IBM/ibm-spectrum-scale-csi-driver/
--> Using cache fd67b356d737639644c4c7c89e7287ae7b26f8e959531d4854b95d9dba038883
STEP 5: COPY . .
4181fda2388c8058dba2192f0fe171277ee74e0f6a89e673522aa10c255b6e59
STEP 6: RUN [ -d /go/src/github.com/IBM/ibm-spectrum-scale-csi-driver/vendor ] || dep ensure
53b70a8e2e4b27335f28ab277d9e1c8c6f4a064a4918c365bc06dacc755fcf8d
STEP 7: RUN CGO_ENABLED=0 GOOS=linux go build -a -ldflags '-extldflags "-static"' -o  _output/ibm-spectrum-scale-csi ./cmd/ibm-spectrum-scale-csi
49e77e32d8fcbc5f68bc4c06a8954afc914695a3487bf7f1ec85849b882af0d5
STEP 8: FROM registry.access.redhat.com/ubi7-minimal:latest
STEP 9: LABEL name="IBM Spectrum Scale CSI driver"       vendor="ibm"       version="0.9.2"       release="1"       run='docker run ibm-spectrum-scale-csi-driver'       summary="An implementation of CSI Plugin for the IBM Spectrum Scale product."      description="CSI Plugin for IBM Spectrum Scale"      maintainers="IBM Spectrum Scale"
--> Using cache 3d4f2a4f287600fbe1266ab5fc4763727e2105aa1617ddda0f88afe53e6c599e
STEP 10: COPY licenses /licenses
626ba5080a45a52a48c8c4f41f5a915945ccf095568897772b19b8f049c03e9c
STEP 11: COPY --from=builder /go/src/github.com/IBM/ibm-spectrum-scale-csi-driver/_output/ibm-spectrum-scale-csi /ibm-spectrum-scale-csi
0b84c659e5857437bd637a67cfa4d2d3fd3f963655a0d76c382c46936cb68231
STEP 12: RUN chmod +x /ibm-spectrum-scale-csi
89fa199f165a589730ae8bfb5e72bb7812a60ab97a2a2e8515748a756b71b950
STEP 13: ENTRYPOINT ["/ibm-spectrum-scale-csi"]
STEP 14: COMMIT ibm-spectrum-scale-csi:v0.9.2
9f183c017f256bb49d2f2859c9fd3f1d3ff9a9299a51f13fac2ebe93c3b8331d

I can see the image created in podman registry. Everything looks fine.

[root@worker01 ibm-spectrum-scale-csi-driver]# podman images|grep ibm-spectrum-scale-csi
localhost/ibm-spectrum-scale-csi                                     v0.9.2                                                                    9f183c017f25   About a minute ago   113 MB

Then I remove the image

[root@worker01 ibm-spectrum-scale-csi-driver]# podman rmi 9f183c017f25
89fa199f165a589730ae8bfb5e72bb7812a60ab97a2a2e8515748a756b71b950
0b84c659e5857437bd637a67cfa4d2d3fd3f963655a0d76c382c46936cb68231
626ba5080a45a52a48c8c4f41f5a915945ccf095568897772b19b8f049c03e9c
9f183c017f256bb49d2f2859c9fd3f1d3ff9a9299a51f13fac2ebe93c3b8331d

I can't recreate the image again:

[root@worker01 ibm-spectrum-scale-csi-driver]# podman build -t ibm-spectrum-scale-csi:v0.9.2 -f Dockerfile.msb .
STEP 1: FROM golang:1.13.1 AS builder
STEP 2: RUN wget -q -O $GOPATH/bin/dep https://github.com/golang/dep/releases/download/v0.5.1/dep-linux-amd64
--> Using cache 3c41a5abe1cae7d1b529dd9eb3186d7bda820122dc8c8bd9fc76a7977d6a1b5d
STEP 3: RUN chmod +x $GOPATH/bin/dep && export PATH=$PATH:$GOPATH/bin
--> Using cache 4a1b178d985760041cc670390f0997b832cb28526de2aca4768807992438f75f
STEP 4: WORKDIR /go/src/github.com/IBM/ibm-spectrum-scale-csi-driver/
--> Using cache fd67b356d737639644c4c7c89e7287ae7b26f8e959531d4854b95d9dba038883
STEP 5: COPY . .
--> Using cache 4181fda2388c8058dba2192f0fe171277ee74e0f6a89e673522aa10c255b6e59
STEP 6: RUN [ -d /go/src/github.com/IBM/ibm-spectrum-scale-csi-driver/vendor ] || dep ensure
--> Using cache 53b70a8e2e4b27335f28ab277d9e1c8c6f4a064a4918c365bc06dacc755fcf8d
STEP 7: RUN CGO_ENABLED=0 GOOS=linux go build -a -ldflags '-extldflags "-static"' -o  _output/ibm-spectrum-scale-csi ./cmd/ibm-spectrum-scale-csi
--> Using cache 49e77e32d8fcbc5f68bc4c06a8954afc914695a3487bf7f1ec85849b882af0d5
STEP 8: FROM registry.access.redhat.com/ubi7-minimal:latest
STEP 9: LABEL name="IBM Spectrum Scale CSI driver"       vendor="ibm"       version="0.9.2"       release="1"       run='docker run ibm-spectrum-scale-csi-driver'       summary="An implementation of CSI Plugin for the IBM Spectrum Scale product."      description="CSI Plugin for IBM Spectrum Scale"      maintainers="IBM Spectrum Scale"
--> Using cache 3d4f2a4f287600fbe1266ab5fc4763727e2105aa1617ddda0f88afe53e6c599e
STEP 10: COPY licenses /licenses
47e37a1e6774d765eea05e04635186fa1cfb843b75126d5203e9319b3477c8a5
STEP 11: COPY --from=builder /go/src/github.com/IBM/ibm-spectrum-scale-csi-driver/_output/ibm-spectrum-scale-csi /ibm-spectrum-scale-csi
Error: error building at STEP "COPY --from=builder /go/src/github.com/IBM/ibm-spectrum-scale-csi-driver/_output/ibm-spectrum-scale-csi /ibm-spectrum-scale-csi": no files found matching "/var/lib/containers/storage/overlay/9847673abdf6c7c8d4925ad3d5dcc72135644724f23b425d3de33dcfd7614b79/merged/go/src/github.com/IBM/ibm-spectrum-scale-csi-driver/_output/ibm-spectrum-scale-csi": no such file or directory

Any idea how to fix it ?
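One hedged workaround: the failing COPY --from=builder step is reusing a cached builder layer whose backing files were removed by the podman rmi above, so forcing a cache-free rebuild may get past it (the --no-cache flag is standard podman; whether it fully resolves this case is untested here):

podman build --no-cache -t ibm-spectrum-scale-csi:v0.9.2 -f Dockerfile.msb .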

Do we need to maintain the namespace.yaml file, or just document the command line?

It seems like a namespace can be created simply by running:

# k8s
kubectl create ns ibm-spectrum-scale-csi-driver
# ocp
oc new-project ibm-spectrum-scale-csi-driver

Do we really need to have the namespace.yaml file? I've verified that this does in fact work....
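For comparison, the declarative equivalent is tiny, which is part of why the file feels redundant (a minimal sketch):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: ibm-spectrum-scale-csi-driver
EOF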


Remove namespace

[root@c943f5n01-pvt csi-operator-ansible]# oc delete -f curl.tmp/namespace.yaml
namespace "ibm-spectrum-scale-csi-driver" deleted

Create with command:

[root@c943f5n01-pvt csi-operator-ansible]# oc new-project ibm-spectrum-scale-csi-driver
Now using project "ibm-spectrum-scale-csi-driver" on server "https://api.ocp4-c2.pok.stglabs.ibm.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app django-psql-example

to build a new example application in Python. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

Deploy single operator file:

[root@c943f5n01-pvt csi-operator-ansible]# oc apply -f curl.tmp/ibm-spectrum-scale-csi-operator.yaml
deployment.apps/ibm-spectrum-scale-csi-operator created
role.rbac.authorization.k8s.io/ibm-spectrum-scale-csi-operator created
clusterrole.rbac.authorization.k8s.io/ibm-spectrum-scale-csi-operator created
clusterrole.rbac.authorization.k8s.io/ibm-spectrum-scale-csi-node created
clusterrole.rbac.authorization.k8s.io/ibm-spectrum-scale-csi-attacher created
clusterrole.rbac.authorization.k8s.io/ibm-spectrum-scale-csi-provisioner created
rolebinding.rbac.authorization.k8s.io/ibm-spectrum-scale-csi-operator created
clusterrolebinding.rbac.authorization.k8s.io/ibm-spectrum-scale-csi-operator created
clusterrolebinding.rbac.authorization.k8s.io/ibm-spectrum-scale-csi-node created
clusterrolebinding.rbac.authorization.k8s.io/ibm-spectrum-scale-csi-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/ibm-spectrum-scale-csi-attacher created
serviceaccount/ibm-spectrum-scale-csi-operator created
serviceaccount/ibm-spectrum-scale-csi-attacher created
serviceaccount/ibm-spectrum-scale-csi-node created
serviceaccount/ibm-spectrum-scale-csi-provisioner created
customresourcedefinition.apiextensions.k8s.io/csiscaleoperators.scale.ibm.com created
[root@c943f5n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS              RESTARTS   AGE
ibm-spectrum-scale-csi-operator-6ff9cf6979-qjfxb   0/2     ContainerCreating   0          2s
+ set +x
[root@c943f5n01-pvt csi-operator-ansible]#

Scan Remediation: csi-plugin-provisioner.yaml.j2

Template

roles/csi-scale/templates/csi-plugin-provisioner.yaml.j2

Raw Scan

[REVIEW] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-attacher.yaml: spec.template.spec.containers[0].readinessProbe not defined (ContainerWithNoMatchingServiceHasReadinessProbe)
[REVIEW] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: spec.template.spec.containers[0].readinessProbe not defined (ContainerWithNoMatchingServiceHasReadinessProbe)

[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: "ALL" not found in spec.template.spec.containers[0].securityContext.capabilities.drop (ContainerHasDropAll)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under spec.template.metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: metering annotations ["productID" "productName" "productVersion"] not found under spec.template.metadata.annotations (MeteringAnnotationsDefined)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: neither spec.template.spec.containers[0].resources.limits.cpu nor spec.template.spec.containers[0].resources.requests.cpu is defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: spec.template.spec.containers[0].livenessProbe not defined (ContainerHasLivenessProbe)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: spec.template.spec.containers[0].resources.limits.memory not defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: spec.template.spec.containers[0].resources.requests.memory not defined (ContainerDefinesResources)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: use of hostPath at spec.template.spec.volumes[0].hostPath not allowed (NoHostPath)
[ERROR] scanned-statefulset-ibm-spectrum-scale-csi-driver-csi-scale-operator-provisioner.yaml: value "beta.kubernetes.io/arch" at some spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[i].matchExpressions[j].key not defined for architecture-based node affinity (PodHasArchBasedNodeAffinity)

Action Items

  • "ALL" not found in spec.template.spec.containers[0].securityContext.capabilities.drop (ContainerHasDropAll)
  • ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels
  • ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under spec.template.metadata.labels
  • metering annotations ["productID" "productName" "productVersion"] not found under spec.template.metadata.annotations (MeteringAnnotationsDefined)
  • neither spec.template.spec.containers[0].resources.limits.cpu nor spec.template.spec.containers[0].resources.requests.cpu is defined
  • spec.template.spec.containers[0].livenessProbe not defined (ContainerHasLivenessProbe)
  • spec.template.spec.containers[0].resources.limits.memory not defined (ContainerDefinesResources)
  • spec.template.spec.containers[0].resources.requests.memory not defined (ContainerDefinesResources)
  • use of hostPath at spec.template.spec.volumes[0].hostPath not allowed (NoHostPath)
  • value "beta.kubernetes.io/arch" at some spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[i].matchExpressions[j].key not defined for architecture-based node affinity (PodHasArchBasedNodeAffinity)

operator namespace and driver should be allowed to be separate namespaces

Describe the bug
operator namespace and driver should be allowed to be separate namespaces

To Reproduce
Steps to reproduce the behavior:

  1. Create the operator under namespace = ibm-spectrum-scale-csi-driver
[root@oc-w3 ibm-spectrum-scale-csi-operator]# oc get all
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/ibm-spectrum-scale-csi-operator-6d4bd865f6-bn7zf   2/2     Running   0          13s

NAME                                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/ibm-spectrum-scale-csi-operator-metrics   ClusterIP   172.30.233.186   <none>        8383/TCP   4s

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ibm-spectrum-scale-csi-operator   1/1     1            1           13s

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/ibm-spectrum-scale-csi-operator-6d4bd865f6   1         1         1       13s
  2. Create the secret and configmap for GUI certificate in namespace=default
[root@oc-w3 ibm-spectrum-scale-csi-operator]# oc get secrets guisecret -n default
NAME        TYPE     DATA   AGE
guisecret   Opaque   2      3h15m
[root@oc-w3 ibm-spectrum-scale-csi-operator]# oc get configmap -n default
NAME             DATA   AGE
guicertificate   1      3h17m
  3. Make the changes in the deploy/crds/ibm-spectrum-scale-csi-operator-cr.yaml file
apiVersion: csi.ibm.com/v1
kind: 'CSIScaleOperator'
metadata:
    name: 'ibm-spectrum-scale-csi'
    namespace: 'default'
    labels:
      app.kubernetes.io/name: ibm-spectrum-scale-csi-operator
      app.kubernetes.io/instance: ibm-spectrum-scale-csi-operator
      app.kubernetes.io/managed-by: ibm-spectrum-scale-csi-operator
    release: ibm-spectrum-scale-csi-operator

  4. Starting the CSI Driver
[root@oc-w3 ibm-spectrum-scale-csi-operator]# oc apply -f  deploy/crds/ibm-spectrum-scale-csi-operator-cr.yaml
csiscaleoperator.csi.ibm.com/ibm-spectrum-scale-csi created

[root@oc-w3 ibm-spectrum-scale-csi-operator]# oc get CSIScaleOperator -n default
NAME                     AGE
ibm-spectrum-scale-csi   58s


[root@oc-w3 ibm-spectrum-scale-csi-operator]# oc get pods -n default
No resources found.

Expected behavior
Using the ibm-spectrum-scale-csi-driver namespace for the operator and a different namespace for the driver should work.

Environment
Please run the following and paste your output here:

[root@oc-w3 ibm-spectrum-scale-csi-operator]# oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0     True        False         46d     Cluster version is 4.2.0


Add a proper form for CSI driver for Openshift 4

When I create CSI driver instances with the Operator, I must adjust the yaml file. OpenShift 4 offers the possibility to edit a form view instead. Maybe you could adjust the yaml file so that it would match the form? This would be much easier and more user friendly than manually editing the yaml file, as the form shows the required fields in a nice UI.

Refactor landing README for this project

I'm running through the main README file and making some changes to prepare for submitting a Pull Request. But as I work through this, the refactoring may be a little more than what is expected, so I wanted to pitch a skeleton outline and get feedback from @mew2057 so we are on the same page...

I think there's two User Roles for the csi-operator and my thoughts on each..

  • Developer that wants to help - The README should really be written for this user
  • Admin/Customer - They should go through OLM (Operator Lifecycle Management) and not see this repo

Consumer

This one is easier, they should be going through this process to get the operator...

  • From OpenShift
    • Operator -> OperatorHub
    • Search scale-csi-operator
    • Install and follow instructions
  • From https://operatorhub.io/
    • search for scale-csi-operator
    • Install and follow instructions

Developer

So the real audience for this GitHub project's README should be developers... Here's my proposed skeleton outline for the landing page README.md doc:

  • Build from Source

    • Clone
    • Build
  • Deploy

    • spectrum scale config file
    • Create operator
      • Repository (quay) - recommended
        • kubectl apply - single file
      • Manual (kubectl apply multiple files )
    • Destroy operator
      • Repository
        • kubectl delete - single file
      • Manual (kubectl delete multiple files )

I don't have the whole picture yet, so I might have gotten something wrong, but let's discuss.

I haven't refactored yet, but working on smaller changes here: https://github.com/whowutwut/ibm-spectrum-scale-csi-operator/blob/readme_fix/README.md

[ocp4] Unable to bring up the csi-driver from the csi-operator

Here's what my cluster looks like:

[root@c943f4n01-pvt csi-operator-ansible]# oc get nodes
NAME                                  STATUS   ROLES    AGE   VERSION
master0.ocp4-c4.pok.stglabs.ibm.com   Ready    master   12d   v1.14.6+c07e432da
master1.ocp4-c4.pok.stglabs.ibm.com   Ready    master   12d   v1.14.6+c07e432da
master2.ocp4-c4.pok.stglabs.ibm.com   Ready    master   12d   v1.14.6+c07e432da
worker0.ocp4-c4.pok.stglabs.ibm.com   Ready    worker   12d   v1.14.6+c07e432da
worker1.ocp4-c4.pok.stglabs.ibm.com   Ready    worker   12d   v1.14.6+c07e432da
worker2.ocp4-c4.pok.stglabs.ibm.com   Ready    worker   12d   v1.14.6+c07e432da

The csi-operator is running:

[root@c943f4n01-pvt csi-operator-ansible]# oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                              READY   STATUS    RESTARTS   AGE
ibm-spectrum-scale-csi-operator-cd74b89f4-28lxl   2/2     Running   0          3h55m

I have a secrets for GUI defined in the namespace:

[root@c943f4n01-pvt csi-operator-ansible]# oc get secrets -n ibm-spectrum-scale-csi-driver | grep gui
spectrum-scale-gui-secret                            Opaque                                2      4h37m
[root@c943f4n01-pvt csi-operator-ansible]#

But when I apply the yaml for the operator's custom resource, nothing happens on the driver side to bring it up. I'm deploying this image of the driver:

 # Image name for the csi spectrum scale plugin container.
  spectrumScale: "quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver:v0.9.1"
  # ----

The secret counter is high...

[root@c943f4n01-pvt csi-operator-ansible]# oc -n ibm-spectrum-scale-csi-driver  get csiscaleoperators -o yaml | grep -i secretCounter
    secretCounter: 5

I see these kinds of entries in the operator log

{"level":"info","ts":1574129726.9677927,"logger":"logging_event_handler","msg":"[playbook task]","name":"ibm-spectrum-scale-csi","namespace":"ibm-spectrum-scale-csi-driver","gvk":"scale.ibm.com/v1alpha1, Kind=CSIScaleOperator","event_type":"playbook_on_task_start","job":"4562621698571384881","EventData.Name":"Gathering Facts"}
{"level":"info","ts":1574129728.7495406,"logger":"logging_event_handler","msg":"[playbook task]","name":"ibm-spectrum-scale-csi","namespace":"ibm-spectrum-scale-csi-driver","gvk":"scale.ibm.com/v1alpha1, Kind=CSIScaleOperator","event_type":"playbook_on_task_start","job":"4562621698571384881","EventData.Name":"csi-scale : Ensure the clusters are valid"}
{"level":"info","ts":1574129728.9058044,"logger":"logging_event_handler","msg":"[playbook task]","name":"ibm-spectrum-scale-csi","namespace":"ibm-spectrum-scale-csi-driver","gvk":"scale.ibm.com/v1alpha1, Kind=CSIScaleOperator","event_type":"playbook_on_task_start","job":"4562621698571384881","EventData.Name":"csi-scale : Ensure secret spectrum-scale-gui-secret defined in ibm-spectrum-scale-csi-driver"}
{"level":"info","ts":1574129730.4197047,"logger":"logging_event_handler","msg":"[playbook task]","name":"ibm-spectrum-scale-csi","namespace":"ibm-spectrum-scale-csi-driver","gvk":"scale.ibm.com/v1alpha1, Kind=CSIScaleOperator","event_type":"playbook_on_task_start","job":"4562621698571384881","EventData.Name":"csi-scale : Label unlabled secrets"}
{"level":"info","ts":1574129730.533673,"logger":"logging_event_handler","msg":"[playbook task]","name":"ibm-spectrum-scale-csi","namespace":"ibm-spectrum-scale-csi-driver","gvk":"scale.ibm.com/v1alpha1, Kind=CSIScaleOperator","event_type":"playbook_on_task_start","job":"4562621698571384881","EventData.Name":"csi-scale : Remove old version of secret"}
{"level":"info","ts":1574129731.8541794,"logger":"logging_event_handler","msg":"[playbook task]","name":"ibm-spectrum-scale-csi","namespace":"ibm-spectrum-scale-csi-driver","gvk":"scale.ibm.com/v1alpha1, Kind=CSIScaleOperator","event_type":"playbook_on_task_start","job":"4562621698571384881","EventData.Name":"csi-scale : Ensure the secret has been created with the correct label"}
{"level":"info","ts":1574129732.9854968,"logger":"logging_event_handler","msg":"[playbook task]","name":"ibm-spectrum-scale-csi","namespace":"ibm-spectrum-scale-csi-driver","gvk":"scale.ibm.com/v1alpha1, Kind=CSIScaleOperator","event_type":"playbook_on_task_start","job":"4562621698571384881","EventData.Name":"csi-scale : Ensure csi-scale objects are present"}
{"level":"info","ts":1574129737.651785,"logger":"runner","msg":"Ansible-runner exited successfully","job":"4562621698571384881","name":"ibm-spectrum-scale-csi","namespace":"ibm-spectrum-scale-csi-driver"}
[root@c943f4n01-pvt csi-operator-ansible]# oc get daemonsets  -n ibm-spectrum-scale-csi-driver
NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ibm-spectrum-scale-csi   0         0         0       0            0           <none>          29m
[root@c943f4n01-pvt csi-operator-ansible]# oc get statefulsets  -n ibm-spectrum-scale-csi-driver
NAME                                 READY   AGE
ibm-spectrum-scale-csi-attacher      0/1     28m
ibm-spectrum-scale-csi-provisioner   0/1     28m

Talking to @mew2057, he thinks the statefulsets and daemonsets are not coming up, so I am opening this issue here. Let me know what additional info is needed.

Can the driver detect and bubble up an error message when the GUI hasn't been logged into?

If we don't think this would happen in real life, then just close this issue.

What I'm looking for is an error message bubbled up to the user saying that the REST service is down. I think I hit a problem where the driver doesn't come up cleanly because I never logged into the Scale GUI for the first time.

Is there any message or command that would help me realize this mistake (describe pods or something else that surfaces this user error)? I don't think I saw it in describe pods, but maybe I missed it.
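
For what it's worth, two manual checks that would have surfaced this (a sketch, not an official procedure; the REST path and service name are the usual Spectrum Scale GUI defaults and may differ on your system):

# From any node that can reach the GUI host, using the credentials from the
# spectrum-scale-gui-secret; a timeout or 401 here usually means the GUI/REST
# service is down or has never been initialized.
curl -k -u "$GUI_USER:$GUI_PASS" "https://<guiHost>:443/scalemgmt/v2/cluster"

# On the GUI node itself, confirm the GUI service is actually running.
systemctl status gpfsgui

Having the driver run an equivalent probe at startup and record the result as a pod event would make this mistake visible from oc describe pods.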

Scan Remediation: csi-plugin.yaml.j2

Template

roles/csi-scale/templates/csi-plugin.yaml.j2

Raw Scan

[REVIEW] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[0].readinessProbe not defined (ContainerWithNoMatchingServiceHasReadinessProbe)
[REVIEW] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[1].readinessProbe not defined (ContainerWithNoMatchingServiceHasReadinessProbe)

[WARNING] scanned-serviceaccount-ibm-spectrum-scale-csi-driver-ibm-spectrum-scale-csi-operator.yaml: no imagePullSecrets defined, pods will not be able to pull namespace-scoped images from the local registry (ServiceAccountHasPullSecret)

[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: "ALL" not found in spec.template.spec.containers[0].securityContext.capabilities.drop (ContainerHasDropAll)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: "ALL" not found in spec.template.spec.containers[1].securityContext.capabilities.drop (ContainerHasDropAll)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under spec.template.metadata.labels (RequiredMetadataLabelsDefined)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: metering annotations ["productID" "productName" "productVersion"] not found under spec.template.metadata.annotations (MeteringAnnotationsDefined)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: neither spec.template.spec.containers[0].resources.limits.cpu nor spec.template.spec.containers[0].resources.requests.cpu is defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: neither spec.template.spec.containers[1].resources.limits.cpu nor spec.template.spec.containers[1].resources.requests.cpu is defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[0].livenessProbe not defined (ContainerHasLivenessProbe)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[0].resources.limits.memory not defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[0].resources.requests.memory not defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[1].livenessProbe not defined (ContainerHasLivenessProbe)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[1].resources.limits.memory not defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: spec.template.spec.containers[1].resources.requests.memory not defined (ContainerDefinesResources)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostNetwork at spec.template.spec.hostNetwork not allowed (NoHostNetwork)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostPath at spec.template.spec.volumes[0].hostPath not allowed (NoHostPath)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostPath at spec.template.spec.volumes[1].hostPath not allowed (NoHostPath)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostPath at spec.template.spec.volumes[2].hostPath not allowed (NoHostPath)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostPath at spec.template.spec.volumes[3].hostPath not allowed (NoHostPath)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: use of hostPath at spec.template.spec.volumes[5].hostPath not allowed (NoHostPath)
[ERROR] scanned-daemonset-ibm-spectrum-scale-csi-driver-csi-scale-operator.yaml: value "beta.kubernetes.io/arch" at some spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[i].matchExpressions[j].key not defined for architecture-based node affinity (PodHasArchBasedNodeAffinity)

Action Items

  • "ALL" not found in spec.template.spec.containers[0].securityContext.capabilities.drop (ContainerHasDropAll)
  • "ALL" not found in spec.template.spec.containers[1].securityContext.capabilities.drop (ContainerHasDropAll)
  • ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under metadata.labels (RequiredMetadataLabelsDefined)
  • ["app.kubernetes.io/instance" "app.kubernetes.io/managed-by" "app.kubernetes.io/name"] not defined under spec.template.metadata.labels
  • metering annotations ["productID" "productName" "productVersion"] not found under spec.template.metadata.annotations (MeteringAnnotationsDefined)
  • neither spec.template.spec.containers[0].resources.limits.cpu nor spec.template.spec.containers[0].resources.requests.cpu is defined (ContainerDefinesResources)
  • neither spec.template.spec.containers[1].resources.limits.cpu nor spec.template.spec.containers[1].resources.requests.cpu is defined (ContainerDefinesResources)
  • spec.template.spec.containers[0].livenessProbe not defined (ContainerHasLivenessProbe)
  • spec.template.spec.containers[0].resources.limits.memory not defined (ContainerDefinesResources)
  • spec.template.spec.containers[0].resources.requests.memory not defined (ContainerDefinesResources)
  • spec.template.spec.containers[1].livenessProbe not defined (ContainerHasLivenessProbe)
  • spec.template.spec.containers[1].resources.limits.memory not defined (ContainerDefinesResources)
  • spec.template.spec.containers[1].resources.requests.memory not defined (ContainerDefinesResources)
  • use of hostNetwork at spec.template.spec.hostNetwork not allowed (NoHostNetwork)
  • use of hostPath at spec.template.spec.volumes[0].hostPath not allowed (NoHostPath)
  • use of hostPath at spec.template.spec.volumes[1].hostPath not allowed (NoHostPath)
  • use of hostPath at spec.template.spec.volumes[2].hostPath not allowed (NoHostPath)
  • use of hostPath at spec.template.spec.volumes[3].hostPath not allowed (NoHostPath)
  • use of hostPath at spec.template.spec.volumes[5].hostPath not allowed (NoHostPath)
  • value "beta.kubernetes.io/arch" at some spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[i].matchExpressions[j].key not defined for architecture-based node affinity (PodHasArchBasedNodeAffinity)
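
A partial sketch of how several of these items could look in roles/csi-scale/templates/csi-plugin.yaml.j2, covering labels, metering annotations, capability drop, and resources only; probes, hostPath, hostNetwork, and the arch-based affinity need their own discussion. Values and the {{ resource_name }} variable are placeholders, not the template's actual contents:

metadata:
  labels:
    app.kubernetes.io/name: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/instance: "{{ resource_name }}"
    app.kubernetes.io/managed-by: ibm-spectrum-scale-csi-operator
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ibm-spectrum-scale-csi-operator
        app.kubernetes.io/instance: "{{ resource_name }}"
        app.kubernetes.io/managed-by: ibm-spectrum-scale-csi-operator
      annotations:
        productID: ibm-spectrum-scale-csi
        productName: "IBM Spectrum Scale CSI"
        productVersion: "0.9.1"
    spec:
      containers:
        - name: ibm-spectrum-scale-csi
          securityContext:
            capabilities:
              drop:
                - ALL
          resources:
            requests:
              cpu: 20m
              memory: 20Mi
            limits:
              cpu: 200m
              memory: 200Mi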

After some time, daemon sets are being re-deployed for no apparent reason

Describe the bug

After the operator brings up the driver and the cluster is left alone for a few hours, the AGE of the statefulsets and the daemonsets falls out of sync, despite no changes being made to the cluster.

Here's the pods showing the age difference

[root@c943f4n01-pvt csi-operator-ansible]# ./operator-helper.sh get_pods
+ oc get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS    RESTARTS   AGE
ibm-spectrum-scale-csi-8t9k2                       2/2     Running   0          32m
ibm-spectrum-scale-csi-attacher-0                  1/1     Running   0          4h56m
ibm-spectrum-scale-csi-operator-6ff9cf6979-k2gpd   2/2     Running   0          4h56m
ibm-spectrum-scale-csi-provisioner-0               1/1     Running   0          4h56m
ibm-spectrum-scale-csi-tcml4                       2/2     Running   0          32m
ibm-spectrum-scale-csi-x8nf6                       2/2     Running   0          32m
+ set +x

To Reproduce
Steps to reproduce the behavior:

  1. Deploy the operator
  2. use the operator to deploy driver..
  3. Wait... and check pods.

Expected behavior

The age should not change when nothing is triggering changes to the yaml files.

Environment

Running the following container images

    Image:          quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-operator:v0.9.2
    Image ID:       quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-operator@sha256:cbb83684dbf172bba95e7914a0685808880dedd033daa43e736778b59cd946b8
    Image:         quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver:v0.9.1
    Image ID:      quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:e5926978bd4f4a553df18fc79ac532037c4e31e3a2cdaa6a5c06014fba02c808


React to GRPC response failures

I1022 16:09:27.285217       1 connection.go:235] GRPC call: /csi.v1.Identity/Probe
I1022 16:09:27.285251       1 connection.go:236] GRPC request:
I1022 16:09:27.285330       1 connection.go:238] GRPC response:
I1022 16:09:27.285357       1 connection.go:239] GRPC error: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
I1022 16:09:27.285847       1 main.go:214] Probe failed with rpc error: code = Unavailable desc = all SubConns are in TransientFailure

I was testing an olm deployment by hand and I noticed the above error in the attacher. I'm still doing an RCA, but it looks like the secret wasn't present at container creation.

To resolve this better in the future we need to do several things:

  1. Detect connection issues in containers.
  2. Trigger events on certain error types (this might be event correlation in the controller).
  3. Repair any errors possible.
  4. If errors persist, stop the container and call an admin.
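
For item 1, a minimal external check of the sort the operator (or a livenessProbe) could run is sketched below in Go; it dials the plugin socket and issues Identity.Probe, the same call that fails above. The socket path is an assumption based on the plugin directory that appears elsewhere in these logs (/var/lib/kubelet/plugins/csi-spectrum-scale/).

package main

import (
	"context"
	"log"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

func main() {
	// Assumed socket path; adjust to the deployment's kubelet plugin directory.
	const endpoint = "unix:///var/lib/kubelet/plugins/csi-spectrum-scale/csi.sock"

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// WithBlock makes dial failures (e.g. a missing socket) show up here
	// instead of as "all SubConns are in TransientFailure" on the first RPC.
	conn, err := grpc.DialContext(ctx, endpoint, grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatalf("cannot reach CSI plugin socket: %v", err)
	}
	defer conn.Close()

	resp, err := csi.NewIdentityClient(conn).Probe(ctx, &csi.ProbeRequest{})
	if err != nil {
		log.Fatalf("Identity.Probe failed: %v", err)
	}
	log.Printf("Identity.Probe succeeded, ready=%v", resp.GetReady().GetValue())
}

Surfacing a failure from a check like this as a Kubernetes event would cover item 2.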

[MDD] Power and Z support for the operator

Power and Z support for the operator

Dependencies

Description

The Spectrum Scale CSI Operator has a requirement to support both the s390x and ppc64le architectures.

Problem

Testing on an s390x system revealed a gap in the operator-sdk: today s390x is not in the support matrix for the operator-sdk tool or the ansible operator. I don't think this applies to ppc64le, as binaries exist.

For now I have a fix that I rolled by hand in PR #7, using a custom operator-sdk build for s390x. I'm currently hosting the s390x build of the image on quay.

Resolution

The path to resolution will likely require a contribution to the upstream operator-sdk repository, as I don't currently see a support timeline.

Add the exact cause for error instead of a generic 400 error

I encountered this error message during my experience with the CSI driver:

Error [Remote call completed with error [400 Bad Request]]

I noticed this can be related to the GPFS filesystem not being mounted on the worker nodes, or to the filesystem not having enough inodes.

Can the message state the exact cause of the error instead of showing a generic 400?
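
Until then, two manual checks that map to the causes above (standard Spectrum Scale CLI; the filesystem name gpfs0 is a placeholder):

# Where is the filesystem mounted? The worker nodes must appear here.
mmlsmount gpfs0 -L

# Inode limits and current allocation for each fileset in the filesystem.
mmlsfileset gpfs0 -L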

[Operator] Delete on invalid secret name hangs CR deletion

Describe the bug
When deleting the operator's CR, an invalid secret name prevents the user from deleting their custom resource.

To Reproduce
Steps to reproduce the behavior:

  1. Create a new Custom Resource.
     • This resource should have a bad secret name.
  2. Attempt to delete the Custom Resource.

Expected behavior
The Custom Resource should be deleted; instead, the driver gets stuck in ContainerCreating and the Custom Resource removal hangs.

Environment

# Development
operator-sdk version: "v0.11.0", commit: "39c65c36159a9c249e5f3c178205cc6e86c16f8d", go version: "go1.12.7 linux/amd64"
go version go1.13.1 linux/amd64

# Deployment
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
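
As a manual unblock while this is open (a generic Kubernetes technique, not a fix): clear the finalizers on the stuck CR so the delete can complete, then remove the half-created driver objects by hand. The resource name and namespace below are taken from the examples in this repo and may differ:

kubectl -n ibm-spectrum-scale-csi-driver patch csiscaleoperator ibm-spectrum-scale-csi \
  --type=merge -p '{"metadata":{"finalizers":[]}}'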

Describe how to use remote fs as primary fs only

In the configuration yaml you describe how to use a remote filesystem. I need to use one as my primary filesystem; I don't want to use any local filesystems. What should the configuration look like? Should the primary cluster ID be my remote cluster ID? Do I need to specify credentials for my local GUI if I only intend to use remote filesystems, or only for the remote GUI?

  clusters:
    - id: "< Primary Cluster ID - WARNING: THIS IS A STRING NEEDS YAML QUOTES!>"
      secrets: "secret1"
      secureSslMode: false
      primary:
        primaryFs: "< Primary Filesystem >"
        primaryFset: "< Fileset in Primary Filesystem >"
#        inodeLimit: "< inode limit for Primary Fileset >" # Optional
#        remoteCluster: "< Remote Cluster ID >"            # Optional - This ID should have a separate entry in the clusters map.
#        remoteFs: "< Remote Filesystem >"                # Optional
#      cacert: "< CA cert configmap for GUI >"            # Optional
      restApi: 
      - guiHost: "< Primary cluster GUI IP/Hostname >" 

Can you provide a basic sample for a single remote fs only?
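
As a starting point for discussion, here is one way the layout could look for a remote-only setup, based on the fields in the template above. This is a sketch, not verified against the driver, and whether the local GUI credentials are still required is exactly the open question:

  clusters:
    - id: "<local/primary cluster ID>"
      secrets: "local-gui-secret"
      secureSslMode: false
      primary:
        primaryFs: "<local mount name of the remote filesystem>"
        primaryFset: "<fileset to be used/created in it>"
        remoteCluster: "<remote cluster ID>"
        remoteFs: "<filesystem name on the owning remote cluster>"
      restApi:
      - guiHost: "<local cluster GUI host>"
    - id: "<remote cluster ID>"
      secrets: "remote-gui-secret"
      secureSslMode: false
      restApi:
      - guiHost: "<remote cluster GUI host>"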

Use newer versions of attacher, provisioner and driverregistrar sidecars

Is your feature request related to a problem? Please describe.
No. However, we'll need this to eventually support CSI v1.2.0

This was mentioned in issue #94

Describe the solution you'd like
It has been suggested to bump to the following versions:

Driver registrar: v1.2.0
CSI Attacher: v2.1.1 (Min k8s 1.14), else v1.2.1
CSI Provisioner: v1.5.0

Describe alternatives you've considered
As we move forward we will need to consider how we track sidecar versions and how that relates to our own support matrix. For example, newer sidecar versions have minimum supported Kubernetes versions.

Additional context
CSI Provisioner v1.0.0 -> v1.1.0 deprecated two command-line options that we use (without bumping the major version...):

  • --connection-timeout
  • --provisioner

This will require minor tweaks to the deployment that the operator does.
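
For illustration, the provisioner container in the operator's statefulset template would end up looking roughly like this. The image tag, flags, and socket path are assumptions to be checked against the chosen sidecar release notes; --timeout replaces the removed --connection-timeout, and the driver name now comes from the plugin itself rather than --provisioner:

      - name: csi-provisioner
        image: quay.io/k8scsi/csi-provisioner:v1.5.0
        args:
          - "--csi-address=$(ADDRESS)"
          - "--timeout=5m"
        env:
          - name: ADDRESS
            value: /var/lib/kubelet/plugins/csi-spectrum-scale/csi.sock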

Condition messages can be misleading

I installed the Operator from OperatorHub in Openshift 4. Everything looks fine, as seen in the console:

[root@worker01 ~]# oc get all -n ibm-spectrum-scale-csi-driver
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/ibm-spectrum-scale-csi-attacher-0                  1/1     Running   0          2d2h
pod/ibm-spectrum-scale-csi-kjcs6                       2/2     Running   0          2d1h
pod/ibm-spectrum-scale-csi-nxmkg                       2/2     Running   0          2d1h
pod/ibm-spectrum-scale-csi-operator-75f65c5999-9wnk2   2/2     Running   0          2d3h
pod/ibm-spectrum-scale-csi-provisioner-0               1/1     Running   0          2d2h

NAME                                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/ibm-spectrum-scale-csi-operator-metrics   ClusterIP   172.30.179.144   <none>        8383/TCP   2d3h

NAME                                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/ibm-spectrum-scale-csi   2         2         2       2            2           <none>          2d2h

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ibm-spectrum-scale-csi-operator   1/1     1            1           2d3h

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/ibm-spectrum-scale-csi-operator-75f65c5999   1         1         1       2d3h

NAME                                                  READY   AGE
statefulset.apps/ibm-spectrum-scale-csi-attacher      1/1     2d2h
statefulset.apps/ibm-spectrum-scale-csi-provisioner   1/1     2d2h

Also, I can create PV and PVC as expected.

However, when I look at the Operator details in the OpenShift 4 console, I see these messages in the Conditions view:

Conditions

Pending True Dec 11, 5:57 pm RequirementsUnknown requirements not yet checked 
Pending True Dec 11, 5:57 pm RequirementsNotMet one or more requirements couldn't be found 
InstallReady True Dec 11, 5:57 pm AllRequirementsMet all requirements found, attempting install 
Installing True Dec 11, 5:57 pm InstallSucceeded waiting for install components to report healthy 
Installing True Dec 11, 5:57 pm InstallWaiting installing: Waiting: waiting for deployment ibm-spectrum-scale-csi-operator to become ready: Waiting for rollout to finish: 0 of 1 updated replicas are available...
Succeeded True Dec 11, 5:57 pm InstallSucceeded install strategy completed with no errors

Maybe you can make the condition messages more user friendly? My installation is fine, and the last message confirms it, but the output is quite misleading. Not sure whether something should be done or not.

Vague Error Message

While modifying some of the operator's functionality, I hit an error in the csi plugin where a parameter was misconfigured. The current error message is too vague and makes it hard to debug:

[root csi-scale]# kubectl logs csi-spectrum-scale-w6f4s csi-spectrum-scale
ERROR: logging before flag.Parse: I1030 19:13:41.532922       1 gpfs.go:108] scale: Loaded 0 volumes from /var/lib/kubelet/plugins/csi-spectrum-scale/controller
I1030 19:13:41.533544       1 gpfs.go:112] gpfs GetScaleDriver
I1030 19:13:41.534648       1 gpfs.go:189] gpfs SetupScaleDriver. name: csi-spectrum-scale, version: 1.0.0, nodeID: worker1
I1030 19:13:41.534699       1 gpfs.go:228] gpfs PluginInitialize
I1030 19:13:41.534961       1 scale_config.go:74] scale_config LoadScaleConfigSettings
I1030 19:13:41.536603       1 scale_config.go:97] scale_config HandleSecrets
I1030 19:13:41.536701       1 gpfs.go:430] gpfs ValidateScaleConfigParameters.
E1030 19:13:41.536715       1 gpfs.go:233] Parameter validation failure
E1030 19:13:41.536720       1 gpfs.go:196] Error in plugin initialization: Mandatory parameters not specified for cluster 11832033572270148010
F1030 19:13:41.536733       1 main.go:65] Failed to initialize Scale CSI Driver: Mandatory parameters not specified for cluster 11832033572270148010

The missing parameter should be called out in the log.
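
A small sketch of what that could look like in the validation code; the struct and field names here are illustrative, not the driver's actual types:

package main

import (
	"fmt"
	"strings"
)

// clusterConfig stands in for the driver's real cluster configuration type.
type clusterConfig struct {
	ID      string
	Secrets string
	RestAPI []string
}

func validateClusterConfig(c clusterConfig) error {
	var missing []string
	if c.Secrets == "" {
		missing = append(missing, "secrets")
	}
	if len(c.RestAPI) == 0 {
		missing = append(missing, "restApi")
	}
	if len(missing) > 0 {
		// Name the offending parameters instead of a blanket failure message.
		return fmt.Errorf("mandatory parameters [%s] not specified for cluster %s",
			strings.Join(missing, ", "), c.ID)
	}
	return nil
}

func main() {
	err := validateClusterConfig(clusterConfig{ID: "11832033572270148010"})
	fmt.Println(err) // mandatory parameters [secrets, restApi] not specified for cluster 11832033572270148010
}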

Mount filesystem automatically, enable quota automatically

Please mount the filesystem and enable quota automatically, especially for remote filesystems, as they cannot be set to automount locally with mmchfs. Since mounting and quota enablement can be done through the GUI REST API, please add this functionality to the driver too.

deploy/spectrum-scale-secret.json missing in devel branch

According to the documentation in the devel branch, for a remote cluster I need to adjust deploy/spectrum-scale-secret.json. I don't have that file on this branch; I only have deploy/spectrum-scale-secret.yaml.

Can you describe how to adjust the yaml file?

Remove clusterId param from storage class

When I use a storage class for dynamically provisioning a fileset, I need to specify clusterId. Without it, I get this error on my PVC:

failed to provision volume with StorageClass "ibm-spectrum-scale-csi-remotefs-wicid": rpc error: code = InvalidArgument desc = clusterId not specified in request parameters

Requiring this parameter does not make sense: filesystem names are unique within a cluster, and locally mounted remote filesystems must have unique names too. I think you should remove this parameter from storage classes or make it optional.
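
For reference, this is roughly what a storage class has to carry today; clusterId is the parameter this issue proposes to drop or make optional. The parameter names are illustrative and the provisioner name is taken from the plugin log shown earlier in this document, so both may differ by release:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibm-spectrum-scale-csi-fileset
provisioner: csi-spectrum-scale
parameters:
  volBackendFs: "gpfs0"              # filesystem names are already unique per cluster
  clusterId: "<cluster ID string>"   # currently mandatory; the proposal is to make it optional
reclaimPolicy: Delete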

[OCP] Using the olm script manually to apply does not work in OpenShift environment (no marketplace namespace)

Describe the bug

Trying to follow this instruction:
(screenshot of the README instruction: apply deploy/olm-scripts/operator-source.yaml)

Results in this error:

clusterrolebinding.rbac.authorization.k8s.io/olm-crb created
Error from server (NotFound): error when creating "deploy/olm-scripts/operator-source.yaml": namespaces "marketplace" not found
Error from server (NotFound): error when creating "deploy/olm-scripts/operator-source.yaml": namespaces "marketplace" not found
Error from server (NotFound): error when creating "deploy/olm-scripts/operator-source.yaml": namespaces "marketplace" not found
No resources found.

To Reproduce

In Openshift environment, run oc apply -f deploy/olm-scripts/operator-source.yaml

Expected behavior

Operator comes up?

Environment

Running this code:

[root@c943f4n01-pvt csi-operator-ansible]# cd ~/go/src/github.com/IBM/ibm-spectrum-scale-csi-operator/
[root@c943f4n01-pvt ibm-spectrum-scale-csi-operator]# git remote -v
origin	https://github.com/IBM/ibm-spectrum-scale-csi-operator.git (fetch)
origin	https://github.com/IBM/ibm-spectrum-scale-csi-operator.git (push)
[root@c943f4n01-pvt ibm-spectrum-scale-csi-operator]# git branch
* dev
[root@c943f4n01-pvt ibm-spectrum-scale-csi-operator]# git  log -1
commit be2ca36225d6fc5760b69a0595c4a709be7eaf7e
Merge: 3303227 9b1057c
Author: John Dunham <[email protected]>
Date:   Thu Nov 21 17:22:08 2019 -0500

    Merge pull request IBM/ibm-spectrum-scale-csi-operator#60 from mew2057/csv-vet

    Cluster Service Version Cleanup
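
A possible workaround sketch, assuming the only problem is that the upstream yaml hard-codes the marketplace namespace while OpenShift 4 uses openshift-marketplace; this has not been verified against this repo's file:

sed 's/namespace: marketplace/namespace: openshift-marketplace/' \
    deploy/olm-scripts/operator-source.yaml | oc apply -f -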

