ibm-mas / cli

Maximo Application Suite command line interface

Home Page: https://ibm-mas.github.io/cli/

License: Eclipse Public License 2.0

Languages: Shell 35.78%, Dockerfile 0.05%, Makefile 0.11%, Python 14.65%, Jinja 49.40%, PowerShell 0.01%
Topics: cli, maximo

cli's Introduction

IBM Maximo Application Suite CLI Utility


Introduction

There are minimal dependencies to meet on your own computer to use the CLI:

  • Bash (v4)
  • OpenShift client
  • IBMCloud client with container plugin enabled
  • Ansible
  • Python
  • Network access to the OpenShift cluster

The best way to use the CLI is via the container image: docker run -ti -v ~:/mnt/home --pull always quay.io/ibmmas/cli.

All settings can be controlled via environment variables to avoid typing them out manually. For example, if you export IBM_ENTITLEMENT_KEY=xxxx, then when you run the install that input will be prefilled with the value from the environment variable, allowing you to press Enter to accept it or modify it if needed. Most commands support both an interactive and a non-interactive mode.
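For example, a minimal sketch of that flow (the -e flag simply forwards the exported variable from your shell into the container):

export IBM_ENTITLEMENT_KEY=xxxx
docker run -ti -v ~:/mnt/home -e IBM_ENTITLEMENT_KEY --pull always quay.io/ibmmas/cli
# inside the container, "mas install" will prefill the entitlement key prompt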

The engine that performs all tasks is written in Ansible; you can use the same automation directly outside of this CLI if you wish. The code is open source and available in ibm-mas/ansible-devops, and the collection is also available to install directly from Ansible Galaxy.

Documentation

https://ibm-mas.github.io/cli/

Want to contribute to MAS Command Line Interface?

We welcome all Maximo Application Suite users, developers, and enthusiasts to contribute to the MAS Command Line Interface, whether by fixing code issues or implementing new automated functionality.

You can contribute to this collection by raising a new issue with suggestions on how to make our MAS automation engine even better. If you want to become a code contributor, please refer to the Contributing section to learn how to get started.

cli's People

Contributors

alequint, alicenahas, andrercm, anilprajapatiibm, caroazad, creyesibm, durera, gabrielbonamicoibm, harsh42774, ianboden, joaopauloksn, jonahluckett, jonatash, leo-miran, liyongc, lokesh-sreedhara, mattlrx, padmankosalaram, racree, rawa-resul, sanjayprab, sbudhira, sekharcvalluri, stonepd, terenceq, tomklapiscak, unnati-solanki-git, whitfiea, xwgao, yuvraj-vansure


cli's Issues

CLI Observations and Usage Recommendations.

Through repeated use of the CLI tool, I have come to believe that this is a nice tool. It would be better if:

  1. I had a list of the tool's contents and possible paths up front, instead of having to discover them by making every possible mistake.
  2. There were instructions somewhere on the possible routes through the CLI.
  3. The fact that I cannot install Predict WITHOUT doing Manage is silly. In other words, YES, Manage must be installed FIRST, but if I have already done so, please provide the ability to create a new pipeline run without forcing me back through the Manage install pipeline run... ugh...
  4. I could install CPD on a separate track PRIOR to installing IoT or Manage. Without CPD the DBs end up on islands, and I have to do a lot of work to gain visibility within the CPD UI in my cloud account. In earlier versions it was part and parcel of the UI and I had visibility as part of the install. Please bring this back.
  5. Is this team aware of the work being done over here: https://github.com/cloud-native-toolkit/operator-masauto? It occurs to me that the work is to a degree complementary and overlapping. The idea of an operator on the front end is very attractive for those who prefer a point-and-click experience. Further, the choices and options in the CLI would be nice to see in the front end of the operator during its setup. Dunno if that can be done.
  6. Finally, if the CLI had the ability to incorporate choices into the execution of the pipeline runs (and moreover a recommendation of the order of the pipeline runs), then I could choose to install the various applications through either the CLI, the pipeline runs in the OCP UI, or the Automation Operator, in a manner of my choosing, all with the same user experience vis-a-vis the order and choices.

In other words when I install Maximo application suite I really should be presented with the same set of options in the same order whether it is through a CLI tool or an operator tool.

IBM has enough knowledge on HOW the install should take place... don't make the user guess.

The task "appconnect" failed very quickly

See the logs:

TASK [ibm.mas_devops.appconnect : Load default storage class information] ******
Thursday 09 February 2023  03:06:09 +0000 (0:00:00.082)       0:00:02.500 *****
ok: [localhost] => {"ansible_facts": {"default_storage_classes_rwo": ["ibmc-block-gold", "ocs-storagecluster-ceph-rbd", "gp2", "managed-premium"], "default_storage_classes_rwx": ["ibmc-file-gold-gid", "ocs-storagecluster-cephfs", "efs", "azurefiles-premium"], "default_storage_classes_rwx_nogid": ["ibmc-file-gold", "ocs-storagecluster-cephfs", "efs", "azurefiles-premium"]}, "ansible_included_var_files": ["/opt/app-root/lib64/python3.9/site-packages/ansible_collections/ibm/mas_devops/common_vars/default_storage_classes.yml"], "changed": false}

TASK [ibm.mas_devops.appconnect : Lookup storage classes] **********************
Thursday 09 February 2023  03:06:09 +0000 (0:00:00.076)       0:00:02.576 *****
ok: [localhost] => {"api_found": true, "changed": false, "resources": [{"apiVersion": "storage.k8s.io/v1", "kind": "StorageClass", "metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}, "creationTimestamp": "2023-02-08T05:20:42Z", "managedFields": [{"apiVersion": "storage.k8s.io/v1", "fieldsType": "FieldsV1", "fieldsV1": {"f:parameters": {".": {}, "f:archiveOnDelete": {}}, "f:provisioner": {}, "f:reclaimPolicy": {}, "f:volumeBindingMode": {}}, "manager": "kubectl-create", "operation": "Update", "time": "2023-02-08T05:20:42Z"}, {"apiVersion": "storage.k8s.io/v1", "fieldsType": "FieldsV1", "fieldsV1": {"f:metadata": {"f:annotations": {".": {}, "f:storageclass.kubernetes.io/is-default-class": {}}}}, "manager": "kubectl-patch", "operation": "Update", "time": "2023-02-08T09:41:13Z"}], "name": "nfs-client", "resourceVersion": "337222", "uid": "687920a4-0d29-4b32-bead-47196e2475e9"}, "parameters": {"archiveOnDelete": "false"}, "provisioner": "icpd-nfs.io/nfs", "reclaimPolicy": "Delete", "volumeBindingMode": "Immediate"}]}

TASK [ibm.mas_devops.appconnect : Debug available storage classes] *************
Thursday 09 February 2023  03:06:11 +0000 (0:00:01.804)       0:00:04.380 *****
ok: [localhost] => {
    "msg": [
        "nfs-client"
    ]
}

TASK [ibm.mas_devops.appconnect : Default Storage if not set by user] **********
Thursday 09 February 2023  03:06:11 +0000 (0:00:00.119)       0:00:04.500 *****
ok: [localhost] => {"ansible_facts": {"appconnect_storage_class": ""}, "changed": false}

TASK [ibm.mas_devops.appconnect : Assert that storage class has been defined] ***
Thursday 09 February 2023  03:06:11 +0000 (0:00:00.113)       0:00:04.614 *****
fatal: [localhost]: FAILED! => {
    "assertion": "appconnect_storage_class is defined and appconnect_storage_class != \"\"",
    "changed": false,
    "evaluated_to": false,
    "msg": "appconnect_storage_class must be defined"
}

Is that caused by NFS storage class?

mas install is failing

Using the MAS CLI to install the following components

MAS Core
MAS IOT
MAS Manage
MAS Monitor
MAS Predict

The pipeline failed at the app-cfg-manage task, as below:


ibm.mas_devops.suite_app_config : Configure namespace ------------------- 0.04s
ibm.mas_devops.suite_app_config : Replace xml file with jms_internal_endpoint: {{ jms_internal_endpoint }} --- 0.04s
ibm.mas_devops.suite_app_config : Create Manage app JMS additional server config secrets --- 0.04s
ibm.mas_devops.suite_app_config : Load application variables ------------ 0.04s
Pipeline results not recorded as DEVOPS_MONGO_URI is not defined

Improve build system support for master pre-release build

Today, we have fairly simple logic that says:

  • if there is a tgz in the ansible-install directory use that
  • otherwise, use the latest release from galaxy

We want the logic to be like this instead:

  • if there is a tgz in the ansible-install directory use that
  • otherwise, if we are building a branch (rather than a tag) use the latest master pre-release of ibm.mas_devops
  • otherwise, use the latest release from galaxy

To achieve this, we will want to publish the pre-release and release builds of ibm.mas_devops to Artifactory using exactly the same logic we use for the tekton definitions in this repository. We will then be able to have logic in the build that works like this:

if this is a branch build and there isn't a tgz in the ansible-install directory

  • download the latest master pre-release tgz from artifactory and save it to the normal location

This should be fairly simple; it will require updates in ansible-devops first so that the collection is published to Artifactory.
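A sketch of that selection logic in shell (the directory path, Artifactory URL, and tag-detection variable below are placeholders, not the actual build code):

ANSIBLE_INSTALL_DIR="ansible-install"              # placeholder path
if ls "$ANSIBLE_INSTALL_DIR"/*.tar.gz >/dev/null 2>&1; then
  echo "Using the local collection tgz"            # case 1: a tgz is checked in
elif [[ "$GITHUB_REF" != refs/tags/* ]]; then      # case 2: branch (not tag) build
  curl -fsSL -o "$ANSIBLE_INSTALL_DIR/ibm-mas_devops.tar.gz" \
    "$ARTIFACTORY_URL/ibm-mas_devops-master-prerelease.tar.gz"   # placeholder URL
else                                               # case 3: tag build, use Galaxy
  ansible-galaxy collection download ibm.mas_devops -p "$ANSIBLE_INSTALL_DIR"
fi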

ocp release and operator mirror only support direct mode

In the ibm-mas/ansible-airgap repository, the OCP release and operator mirror can only support direct mirror mode
and ignore the MIRROR_MODE environment variable. The functions/mirror_to_registry script should add a check for this situation.
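A minimal guard sketch for functions/mirror_to_registry, assuming MIRROR_MODE is already in the environment as described above:

# Fail fast when an unsupported mirror mode is requested
if [[ -n "$MIRROR_MODE" && "$MIRROR_MODE" != "direct" ]]; then
  echo "Error: OCP release and operator mirroring currently only support MIRROR_MODE=direct (got '$MIRROR_MODE')" >&2
  exit 1
fi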

Add option to include demo data when installing Manage

When installing MAS using the ibmmas/cli container image by executing the "mas provision-xxxx" and "mas install" commands, there is no way to force the installation of demo data. As this is a very attractive and smooth way to install MAS, I and my business partners would really appreciate a way to include demo data (as opposed to running a number of Ansible playbooks in sequence).

Mikael Vester, IBM Technical Sales, Sustainability Software

Add option to select name of common namespaces

We would like to be able to select a name for the following namespaces, since we think that "grafana" or "mongoce" is too generic and would be difficult to associate with MAS in an OpenShift/Kubernetes cluster full of applications:

{{ lookup('env', 'DB2_NAMESPACE') | default('db2u', true) }}
{{ lookup('env', 'GRAFANA_NAMESPACE') | default('grafana', true) }}
{{ lookup('env', 'MONGODB_NAMESPACE') | default('mongoce', true) }}
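Based on those lookups, the roles already honour these environment variables, so the requested flow might look like this (the namespace values below are just examples):

export DB2_NAMESPACE=mas-db2          # example values, not defaults
export GRAFANA_NAMESPACE=mas-grafana
export MONGODB_NAMESPACE=mas-mongoce
mas install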

Thanks!

Add support to enable OpenTelemetry in monitoring stack

The CLUSTER_MONITORING_INCLUDE_OPENTELEMETRY environment variable must be able to be passed into the cluster_monitoring role (boolean); it defaults to False inside the role. Users only need to set this if they want to use OpenTelemetry, and right now we don't actively make use of this operator anyway.
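A usage sketch, assuming the variable is wired through to the role as proposed:

export CLUSTER_MONITORING_INCLUDE_OPENTELEMETRY=true   # role default is False
mas install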

Error when setting up db2 for Manage

In the suite_db2_setup_for_manage role, a db2 script is generated, copied to the db2u pod, and executed.
Looking at the logs, it appears the script is interrupted and some of the queries are not executed.
This causes maxinst to fail because the tablespaces and the maximo schema don't exist.

setupdb.log

 Database Connection Information

 Database server        = DB2/LINUXX8664 11.5.7.0
 SQL authorization ID   = DB2INST1
 Local database alias   = BLUDB

DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
SQL1363W  One or more of the parameters submitted for immediate modification 
were not changed dynamically. For these configuration parameters, the database 
must be shutdown and reactivated before the configuration parameter changes 
become effective.

setupdb.sh

#!/bin/bash

# Note that we have removed the standard GRANT command, as in the context of this playbook the
# db2inst1 user already owns the schema:
#
# db2 create schema maximo authorization db2inst1
#
# Also, the following line was removed:
# db2 GRANT DBADM,CREATETAB,BINDADD,CONNECT,CREATE_NOT_FENCED_ROUTINE,IMPLICIT_SCHEMA,
#   LOAD,CREATE_EXTERNAL_ROUTINE,QUIESCE_CONNECT,SECADM ON DATABASE TO USER db2inst1
#
# We also removed these statements due to similar error because we are running as the user that owns these anyway:
# SQL0554N  An authorization ID cannot grant a privilege or authority to itself
# db2 GRANT USE OF TABLESPACE MAXDATA TO USER db2inst1
# db2 GRANT CREATEIN,DROPIN,ALTERIN ON SCHEMA maximo TO USER db2inst1

if [ -f "/tmp/setupdb_complete" ]; then
  echo "Aborting run.  /tmp/setupdb_complete exists; this script has already been executed"
  exit 0
fi

set -e
db2 connect to BLUDB
db2 update db cfg for BLUDB using dft_table_org ROW
db2 update db cfg for BLUDB using AUTO_MAINT ON DEFERRED
db2 update db cfg for BLUDB using AUTO_TBL_MAINT ON DEFERRED
db2 update db cfg for BLUDB using AUTO_RUNSTATS ON DEFERRED

db2 update db cfg for BLUDB using AUTO_REORG OFF DEFERRED


db2 update db cfg for BLUDB using AUTO_DB_BACKUP OFF DEFERRED

db2 update db cfg for BLUDB using CATALOGCACHE_SZ 800 DEFERRED
db2 update db cfg for BLUDB using CHNGPGS_THRESH 40 DEFERRED
db2 update db cfg for BLUDB using DBHEAP AUTOMATIC
db2 update db cfg for BLUDB using LOCKLIST AUTOMATIC DEFERRED
db2 update db cfg for BLUDB using LOGBUFSZ 1024 DEFERRED
db2 update db cfg for BLUDB using LOCKTIMEOUT 300 DEFERRED

db2 update db cfg for BLUDB using LOGPRIMARY 100 DEFERRED

db2 update db cfg for BLUDB using LOGSECOND 156 DEFERRED


db2 update db cfg for BLUDB using LOGFILSIZ 50000 DEFERRED

db2 update db cfg for BLUDB using MIRRORLOGPATH /mnt/backup/MIRRORLOGPATH   
db2 update db cfg for BLUDB using NUM_DB_BACKUPS 60                         
db2 update db cfg for BLUDB using LOGARCHCOMPR1 ON                          
db2 update db cfg for BLUDB using REC_HIS_RETENTN 60                        


db2 update db cfg for BLUDB using SOFTMAX 1000 DEFERRED
db2 update db cfg for BLUDB using MAXFILOP 61440 DEFERRED
db2 update db cfg for BLUDB using PCKCACHESZ AUTOMATIC DEFERRED
db2 update db cfg for BLUDB using STAT_HEAP_SZ AUTOMATIC DEFERRED
db2 update db cfg for BLUDB using STMTHEAP AUTOMATIC DEFERRED
db2 update db cfg for BLUDB using UTIL_HEAP_SZ 10000 DEFERRED
db2 update db cfg for BLUDB using DATABASE_MEMORY AUTOMATIC DEFERRED
db2 update db cfg for BLUDB using AUTO_STMT_STATS OFF DEFERRED

### The STMT_CONC parameter may need to be set to OFF for some customers
### For customers that need this parameter set to OFF, we will need to update the configmap settings for this parameter to OFF
### so the database will retain its value (Currently, there is only one customer that needs it set to OFF).
### However, if in the future we support Text Search, this parameter will need to be OFF for all databases with Text Search enabled.
db2 update db cfg for BLUDB using STMT_CONC LITERALS DEFERRED
db2 update alert cfg for database on BLUDB using db.db_backup_req SET THRESHOLDSCHECKED YES
db2 update alert cfg for database on BLUDB using db.tb_reorg_req SET THRESHOLDSCHECKED YES
db2 update alert cfg for database on BLUDB using db.tb_runstats_req SET THRESHOLDSCHECKED YES
db2 update dbm cfg using PRIV_MEM_THRESH 32767 DEFERRED
db2 update dbm cfg using KEEPFENCED NO DEFERRED
db2 update dbm cfg using NUMDB 2 DEFERRED
db2 update dbm cfg using RQRIOBLK 65535 DEFERRED
db2 update dbm cfg using HEALTH_MON OFF DEFERRED
db2 update dbm cfg using AGENT_STACK_SZ 1000 DEFERRED
db2 update dbm cfg using MON_HEAP_SZ AUTOMATIC DEFERRED
db2 update db cfg using DDL_CONSTRAINT_DEF Yes
db2set DB2_SKIPINSERTED=ON
db2set DB2_INLIST_TO_NLJN=YES
db2set DB2_MINIMIZE_LISTPREFETCH=Y
db2set DB2_EVALUNCOMMITTED=YES
db2set DB2_FMP_COMM_HEAPSZ=65536
db2set DB2_SKIPDELETED=ON
db2set DB2_USE_ALTERNATE_PAGE_CLEANING=ON

echo "###  set the tmp backup location   ###"
db2set DB2_OBJECT_STORAGE_LOCAL_STAGING_PATH=/mnt/backup/staging       
echo "###  Enable PIT recovery  ###"
db2set DB2_CDE_REDUCED_LOGGING=REDUCED_REDO:NO                         

# These will fail if re-running the script, but I don't know enough about Db2 to make them idempotent
# so for now we will ignore failures in them
set +e
db2 CREATE BUFFERPOOL MAXBUFPOOL IMMEDIATE SIZE 4096 AUTOMATIC PAGESIZE 32 K

### db2 CREATE REGULAR TABLESPACE MAXDATA PAGESIZE 32 K MANAGED BY AUTOMATIC STORAGE INITIALSIZE 5000 M BUFFERPOOL MAXBUFPOOL
db2 CREATE LARGE TABLESPACE MAXDATA PAGESIZE 32 K MANAGED BY AUTOMATIC STORAGE INITIALSIZE 5000 M BUFFERPOOL MAXBUFPOOL

db2 CREATE TEMPORARY TABLESPACE MAXTEMP PAGESIZE 32 K MANAGED BY AUTOMATIC STORAGE BUFFERPOOL MAXBUFPOOL

### db2 CREATE REGULAR TABLESPACE MAXINDEX PAGESIZE 32 K MANAGED BY AUTOMATIC STORAGE INITIALSIZE 5000 M BUFFERPOOL MAXBUFPOOL
db2 CREATE LARGE TABLESPACE MAXINDEX PAGESIZE 32 K MANAGED BY AUTOMATIC STORAGE INITIALSIZE 5000 M BUFFERPOOL MAXBUFPOOL

db2 create schema maximo authorization db2inst1

set -e
db2stop force
sleep 10
db2start

echo "COMPLETE" > /tmp/setupdb_complete
chmod a+rw /tmp/setupdb_complete

# If we get this far, then we can consider the setup a success
exit 0

CLI hang up at the very beginning

See the below logs:

IBM Maximo Application Suite CLI Container v3.11.0

https://github.com/ibm-mas/ansible-devops
https://github.com/ibm-mas/ansible-airgap
https://github.com/ibm-mas/cli

Available commands:
  - mas install to launch a MAS install pipeline
  - mas provision-fyre to provision an OCP cluster on IBM DevIT Fyre (internal)
  - mas provision-roks to provision an OCP cluster on IBMCloud Red Hat OpenShift Service (ROKS)
  - mas provision-aws to provision an OCP cluster on AWS
  - mas provision-rosa to provision an OCP cluster on AWS Red Hat OpenShift Service (ROSA)
  - mas setup-registry to setup a private container registry on an OCP cluster
  - mas mirror-images to mirror container images required by mas to a private registry
  - mas configure-ocp-for-mirror to configure a cluster to use a private registry as a mirror

[ibmmas/cli:3.11.0]mascli$ mas install
IBM Maximo Application Suite Installer
Powered by https://github.com/ibm-mas/ansible-devops/ and https://tekton.dev/

Current Limitations
1. Support for airgap installation is limited to Core with IoT, Manage, Optimizer at present


1. Set Target OpenShift Cluster
  Server URL ..... > https://api.ocp410.cluster.local.com:6443
  Login Token  ... > sha256~51YsgDpzDogFDmu6Art8uOfHkWLbWXkTx0QM8rPsMzo

The process hangs at the very beginning: no error, no further logs.
How can I figure out what exactly is going on?

I have a CentOS 8 bastion node and OCP version 4.10.37,
with NFS as the storage.

wait on TektonConfig loops forever on OpenShift 4.6

This is not an AirGap issue...

When I use mascli to install MAS 8.7 on OpenShift 4.6, the function pipeline_install_operator hangs on "oc get TektonConfig config". Apparently there is no TektonConfig CRD when installed on OpenShift 4.6.

Therefore, lines 25 through 38 of bin/functions/pipeline_install_operator should not execute when installing on OpenShift 4.6:

 25  echo "Wait for Tekton config to be available" &>> $LOGFILE
 26  oc get TektonConfig config &>> $LOGFILE
 27  LOOKUP_RESULT=$?
 28  while [ "$LOOKUP_RESULT" == "1" ]; do
 29    echo "Waiting 5s for TektonConfig to be created before checking again ..." &>> $LOGFILE
 30    sleep 5
 31    oc get TektonConfig config &>> $LOGFILE
 32    LOOKUP_RESULT=$?
 33  done
 34 
 35  # This will have no effect on OCP 4.8 ... OpenShift Pipelines doesn't support configuration properly at this level
 36  # Note: We need to use --type merge because it's a custom resource
 37  echo "Patch Tekton config to enable alpha API fields and scope when expressions to tasks" &>> $LOGFILE
 38  oc patch TektonConfig config --type merge -p '{"spec":{"pipeline":{"enable-api-fields": "alpha", "scope-when-expressions-to-task": true}}}'  &>> $LOGFILE
 
Another possible solution is to have the while loop time out, as sketched below.
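A sketch of the timeout variant (the retry count and sleep interval are arbitrary choices, giving roughly a five-minute wait):

RETRIES=60
oc get TektonConfig config &>> $LOGFILE
LOOKUP_RESULT=$?
while [[ "$LOOKUP_RESULT" == "1" && $RETRIES -gt 0 ]]; do
  echo "Waiting 5s for TektonConfig to be created before checking again ..." &>> $LOGFILE
  sleep 5
  RETRIES=$((RETRIES - 1))
  oc get TektonConfig config &>> $LOGFILE
  LOOKUP_RESULT=$?
done
if [[ "$LOOKUP_RESULT" == "1" ]]; then
  echo "Timed out waiting for TektonConfig; skipping Tekton config patch" &>> $LOGFILE
fi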

When you choose ocs while installing assist, ibm cloud questions are prompted

At some point when configuring Assist to be installed, you are prompted to pick whether you want OCS or COS. If you pick OCS, it asks questions that seem to relate to IBM COS, like IBM Cloud API key and Resource Group.

We need to check whether there's a condition in the CLI that validates the provider and goes down the correct path.

Also, there seems to be no option available to provision OCS via the CLI.

Misleading info in catalog description

Each catalog description states that "IBM Maximo Application Suite will run anywhere that you can run a supported OpenShift release", yet it does not work on POWER and ARM architectures.

Error during roks provision with cli 2.6.1 since plugin is missing (ibmcloud plugin install ks)

TASK [ibm.mas_devops.ocp_provision : roks : Check if cluster already exists] ******************************************************************************************************************************************
Wednesday 26 October 2022 08:18:01 +0000 (0:00:02.202) 0:00:11.402 *****
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "ibmcloud oc cluster get -c CLUSTERNAME --output json\n", "delta": "0:00:01.459113", "end": "2022-10-26 08:18:03.579984", "failed_when_result": true, "msg": "non-zero return code", "rc": 2, "start": "2022-10-26 08:18:02.120871", "stderr": "FAILED\n'oc' is not a registered command. Check your list of installed plug-ins. See 'ibmcloud help'.", "stderr_lines": ["FAILED", "'oc' is not a registered command. Check your list of installed plug-ins. See 'ibmcloud help'."], "stdout": "", "stdout_lines": []}

NO MORE HOSTS LEFT ****************************************************************************************************************************************************************************************************

[ibmmas/cli:2.6.1]~$ ibmcloud plugin repo-plugins
Getting plug-ins from all repositories...

Repository: IBM Cloud
Status Name Versions Description
Not Installed container-registry[cr] 1.0.2, 1.0.1, 1.0.0... Manage IBM Cloud Container Registry content and configuration.
Not Installed container-service[kubernetes-service/ks] 1.0.459, 1.0.454, 1.0.452... Manage IBM Cloud Kubernetes Service clusters
Not Installed analytics-engine 1.0.194, 1.0.186, 1.0.181... Manage Analytics Engine service
Not Installed cloud-functions[wsk/functions/fn] 1.0.61, 1.0.60, 1.0.59... Manage Cloud Functions
Not Installed cloud-internet-services[cis] 1.14.8, 1.14.5, 1.14.4... Manage Cloud Internet Service
Not Installed dbaas-cli[dbaas] 2.2.0, 2.1.7, 2.1.6... Manage Hyper Protect DBaaS clusters
Not Installed cloud-databases[cdb] 0.12.0, 0.11.0, 0.10.10... Manage Cloud databases
Not Installed key-protect[kp] 0.6.12, 0.6.11, 0.6.10... Manage encryption keys on IBM Cloud
Not Installed doi[doi] 0.4.0, 0.3.9, 0.3.8... Integrate with DevOps Insights service
Not Installed tke 1.3.1, 1.3.0, 1.2.3... Manage the master key of Cloud HSMs from Hyper Protect Crypto service
Not Installed cloud-object-storage 1.6.0, 1.5.0, 1.4.0... Manage Cloud Object Storage service
Not Installed event-streams 2.3.2, 2.3.1, 2.3.0... Manage Event Streams service
Not Installed power-iaas[pi] 0.4.3, 0.4.2, 0.4.1... Manage IBM Cloud Power Virtual Server service
Not Installed vpc-infrastructure[infrastructure-service/is] 5.4.0, 5.3.0, 5.2.0... Manage Virtual Private Cloud infrastructure service
Not Installed schematics[sch] 1.12.2, 1.12.1, 1.12.0... Managing IBM Cloud resources with Terraform
Not Installed cloud-dns-services[dns] 0.7.2, 0.7.1, 0.7.0... Manage IBM Cloud Dns Service
Not Installed dl-cli[dl] 0.5.2, 0.4.11, 0.4.10... Manage Direct Link
Not Installed watson 0.0.11, 0.0.10, 0.0.9... Manage Watson services
Not Installed catalogs-management 2.2.2, 2.2.1, 2.2.0... Manage personal catalogs and offerings
Not Installed tg-cli[tg] 0.8.2, 0.7.1, 0.6.1... Manage Transit Gateway service
Not Installed observe-service 1.0.82, 1.0.61, 1.0.54... Manage logging and monitoring configurations for IBM Cloud Kubernetes Service clusters
Not Installed code-engine 1.39.6, 1.39.5, 1.39.3... Manage Code Engine components
Not Installed hpvs 1.4.21, 1.4.19, 1.4.17... Manage Hyper Protect Virtual Server service
Not Installed secrets-manager[sm] 0.1.23, 0.1.22, 0.1.21... Manage IBM Cloud Secrets Manager secrets and secret groups.
Not Installed app-configuration[ac] 1.0.11, 1.0.10, 1.0.9... Interact with IBM Cloud App Configuration service instance
Not Installed monitoring 0.2.12, 0.2.9, 0.2.8... Manage IBM Cloud monitoring
Not Installed logging 0.0.8, 0.0.7, 0.0.6... Manage IBM Cloud logging
Not Installed cloudant[cl] 0.0.6, 0.0.5, 0.0.4... Manage Cloudant service
Not Installed hpcs-cert-mgr[hpcs-cert-mgr] 1.0.0 Manage the client certificates for IBM Cloud Hyper Protect Crypto Services
Not Installed atracker[at] 0.2.17, 0.2.12, 0.1.13... Manage IBM Cloud Activity Tracker
Not Installed analytics-engine-v3[ae-v3] 2.0.0, 1.0.9, 1.0.5... Manage serverless Spark instances and run applications
Not Installed cra[cra] 1.0.4, 1.0.3, 1.0.2... Integrate with Code Risk Analyzer
Not Installed event-notifications[en/event-notifications] 0.1.2, 0.1.1, 0.1.0... IBM Cloud Event Notifications.
Not Installed dvaas[watson-query] 1.0.2 Manage data virtualization as a service
Not Installed hpnet 1.0.2, 1.0.1 Manage Hyper Protect Secure Network
Not Installed qiskit-runtime[qr] 0.1.5 [Beta] Manage Qiskit Runtime
Not Installed hpcs 0.0.1 Manage Hyper Protect Crypto Services Instances
Not Installed cbr 1.1.0, 1.0.0 Manage Context Based Restrictions
Not Installed privileged-access-gateway[pag] 1.2.2, 1.2.1 [Experimental] Manage Privilege Access Gateway

Use ibmcloud plugin update PLUGIN_NAME -r REPO_NAME to update a plugin from a repo.
Use ibmcloud plugin update --all -r REPO_NAME to upgrade all available plugins from a repo.

[ibmmas/cli:2.6.1]~$

Solution:
Add this plugin to the next build:

ibmcloud plugin install ks

Thanks Olof

DB2 issue using MAS-cli

When trying to set up a MAS instance using the pipeline approach, we are getting the below error in the db2-manage step. This is a connected environment and we are trying to set up the DB within the cluster.

JDBC configuration was unable to be verified: [IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected: "172.30.145.80". Communication function detecting the error: "connect". Protocol specific error code(s): "111", "", "". SQLSTATE=08001 SQLCODE=-30081

DB2 sizing is not getting set on DB2uCluster instance using CLI

The DB2 sizing is not getting set on the DB2uCluster instance when using the CLI:

export DB2_META_STORAGE_SIZE=10Gi
export DB2_BACKUP_STORAGE_SIZE=10Gi
export DB2_LOGS_STORAGE_SIZE=10Gi
export DB2_TEMP_STORAGE_SIZE=10Gi
export DB2_DATA_STORAGE_SIZE=20Gi

The DB2U cluster instance shows the default values set in the playbook:

storage:
  - name: meta
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: odf-lvm-vg1
    type: create
  - name: data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: odf-lvm-vg1
    type: template
  - name: backup
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: odf-lvm-vg1
    type: create
  - name: activelogs
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: odf-lvm-vg1
    type: template
  - name: tempts
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: odf-lvm-vg1
    type: template
version: 11.5.7.0-cn4

Dev Branch: db2test

Add missing prompt for mirror mode

When running the mas mirror-images command interactively there is no prompt for the mirror mode ... all recent testing has been focused on the non-interactive mode ... we need to verify the interactive command and ensure all options are supported.

E.g. make sure we can still do the interactive equivalent of this command; mirror mode might be only one of the problems we have, so ensure we fix anything else at the same time:

mas mirror-images \
  -m to-filesystem -d /pvc/mirror/v8-230217-amd64/8.9.x \
  -H $MIRROR_HOST -P $MIRROR_PORT -u $MIRROR_USERNAME -p $MIRROR_PASSWORD \
  -c v8-230217-amd64 -C 8.9.x --mirror-core --mirror-iot --mirror-optimizer --mirror-manage \
  --ibm-entitlement $IBM_ENTITLEMENT_KEY \
  --redhat-username $REDHAT_USERNAME --redhat-password $REDHAT_PASSWORD \
  --no-confirm

Mas Mirror images Breaking

When trying to mirror the images for MAS Core and MAS Manage 8.9.1 and 8.9.2, mirroring fails with:

fatal: [localhost]: FAILED! => {"changed": true, "cmd": "oc ibm-pak get ibm-mas --version 8.9.1", "delta": "0:00:12.815272", "end": "2023-02-24 07:06:07.015168", "msg": "non-zero return code", "rc": 1, "start": "2023-02-24 07:05:54.199896", "stderr": "Error: Unable to validate a signature object for the CASE ibm-mas-assist->=8.1.0 <9.0.0", "stderr_lines": ["Error: Unable to validate a signature object for the CASE ibm-mas-assist->=8.1.0 <9.0.0"], "stdout": "Downloading and extracting the CASE ...\n- Success\nRetrieving CASE version ...\n- Success\nValidating the CASE ...\nValidating the signature for the ibm-mas CASE...\n- Success\nCreating inventory ...\n- Success\nFinding inventory items\n- Success\nResolving inventory items ...\nParsing inventory items\nValidating the signature for the ibm-mas-monitor CASE...\nValidating the signature for the ibm-mas-iot CASE...\nValidating the signature for the ibm-mas-manage CASE...\nValidating the signature for the ibm-mas-predict CASE...\nValidating the signature for the ibm-mas-visualinspection CASE...\nValidating the signature for the ibm-mas-assist CASE...", "stdout_lines": ["Downloading and extracting the CASE ...", "- Success", "Retrieving CASE version ...", "- Success", "Validating the CASE ...", "Validating the signature for the ibm-mas CASE...", "- Success", "Creating inventory ...", "- Success", "Finding inventory items", "- Success", "Resolving inventory items ...", "Parsing inventory items", "Validating the signature for the ibm-mas-monitor CASE...", "Validating the signature for the ibm-mas-iot CASE...", "Validating the signature for the ibm-mas-manage CASE...", "Validating the signature for the ibm-mas-predict CASE...", "Validating the signature for the ibm-mas-visualinspection CASE...", "Validating the signature for the ibm-mas-assist CASE..."]}

This seems to be a problem with the ibm-pak code. Trying to get the CASE using:

oc ibm-pak get ibm-mas --version 8.9.1 -v 2

returns the following error

Error: Unable to validate a signature object for the CASE ibm-mas-assist->=8.1.0 <9.0.0

Pod error when installing MAS 8.8 on an existing OCP in Azure

Failed to pull image "us.icr.io/techzone/kubeturbo:8.8.0": rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: The login credentials are not valid, or your IBM Cloud account is not active.

Support SKIP_X for dependency steps in tekton pipeline

See the TODO here: https://github.com/ibm-mas/cli/blob/master/image/cli/bin/functions/pipeline_prepare#L50-L53

When a customer wants to "BYO" JdbcCfg (or any other config) they should take advantage of the pipeline-additional-configs secret.

The CLI should:

  • Prompt user for a local directory containing additional configs
  • For each yaml file in the directory, build it into the pipeline-additional-configs secret

This allows the customer to configure anything in MAS core: SMTP, LDAP, SAML, Kafka, and so on. The code to process additional-configs already exists in the underlying Ansible code; we just don't expose the ability to use it in the CLI.
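A sketch of how the CLI might build that secret from a local directory (the secret name is from this issue; the pipelines namespace pattern and variable names are assumptions):

CONFIG_DIR=$1   # local directory of additional config yaml files (prompted for)
FROM_FILE_ARGS=""
for f in "$CONFIG_DIR"/*.yaml; do
  FROM_FILE_ARGS="$FROM_FILE_ARGS --from-file=$f"   # assumes no spaces in file names
done
# Build or refresh the secret; "mas-$MAS_INSTANCE_ID-pipelines" is an assumed namespace
oc create secret generic pipeline-additional-configs \
  -n "mas-$MAS_INSTANCE_ID-pipelines" $FROM_FILE_ARGS \
  --dry-run=client -o yaml | oc apply -f -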

Secondly, the pipeline itself needs to support "skip dependency" ... so even though the pipeline logic says "if Manage is being installed I need to install Db2", there needs to be an override built into the pipeline that says "nah, skip this, I've handled it elsewhere". To achieve this, the pipeline should have a simple additional flag SKIP_DB2 which, when set, will cause the db2 task to be skipped. This can be used in conjunction with the ability to bring your own configs - of any type - to the install.

This work should be done for the following 4 steps in the pipeline initially:

  • mongodb (mongocfg) ... SKIP_MONGO
  • db2 (jdbccfg) ... SKIP_DB2
  • sls (slscfg) ... SKIP_SLS
  • kafka (kafkacfg) ... SKIP_KAFKA

Extra care would need to be taken with SKIP_MONGO due to its use within the SLS role.

Uninstall doesn't clean resources of failed installation

If the installation process fails partway through and some resources have been created, uninstall does not clean up the resources of the failed instance.

[ibmmas/cli:3.12.0]mascli$ mas uninstall
IBM Maximo Application Suite Uninstall Manager
Powered by https://github.com/ibm-mas/ansible-devops/ and https://tekton.dev/

 
1. Set Target OpenShift Cluster
Connected to OCP cluster: 
   https://console-openshift-console.aaaaaaaaaaa-9ca4d14d48413d18ce61b80811ba4308-0000.us-south.containers.appdomain.cloud
Proceed with this cluster? [Y/n] 
MAS Instance ID > mas-instance-1
Error: The project "mas-mas-instance-1-core" does not exist.


Fatal Error[1]  See /opt/app-root/src/.ibm-mas/logs/mas.log for details

Update catalog support for MAS 8.10 release

Publish a new curated catalog containing updates from the MAS 8.10 release cycle.

March patch content will be released on or around March 28th and will be available in the official IBM Operator Catalog. This issue will track the updates and release of the IBM Maximo Operator Catalog.

Curated content from the IBM Operator Catalog was refreshed on February 28th.

The release tag for the catalog will be 230328

mas install failing

I am using CLI version 3.10.5 in an air-gapped configuration. When trying to run mas install I get an error in the ibm-common-services pipeline:

TASK [ibm.mas_devops.ibm_catalogs : Load CASE bundle versions for the offline catalog] ***
Thursday 26 January 2023 11:59:24 +0000 (0:00:00.053) 0:00:02.243 ******
fatal: [localhost]: FAILED! => {"ansible_facts": {}, "ansible_included_var_files": [], "changed": false, "message": "Could not find or access '/opt/app-root/lib64/python3.9/site-packages/ansible_collections/ibm/mas_devops/roles/ibm_catalogs/../../common_vars/casebundles/v8-20230111-amd64.yml' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}

Looking in the directory above, I do not see the file mentioned. I do see the following files:

[ibmmas/cli:3.10.5]~$ ls -al /opt/app-root/lib64/python3.9/site-packages/ansible_collections/ibm/mas_devops/roles/ibm_catalogs/../../common_vars/casebundles/
total 32
drwxrwxr-x 1 default root 216 Jan 19 17:21 .
drwxrwxr-x 1 default root 135 Jan 19 17:21 ..
-rw-rw-r-- 1 default root 1716 Jan 19 17:21 v8-220717-amd64.yml
-rw-rw-r-- 1 default root 1716 Jan 19 17:21 v8-220717.yml
-rw-rw-r-- 1 default root 1716 Jan 19 17:21 v8-220805-amd64.yml
-rw-rw-r-- 1 default root 2015 Jan 19 17:21 v8-220927-amd64.yml
-rw-rw-r-- 1 default root 2369 Jan 19 17:21 v8-221025-amd64.yml
-rw-rw-r-- 1 default root 2533 Jan 19 17:21 v8-221129-amd64.yml
-rw-rw-r-- 1 default root 2533 Jan 19 17:21 v8-221228-amd64.yml
-rw-rw-r-- 1 default root 2533 Jan 19 17:21 v8-230111-amd64.yml

Add documentation for mas provision-aws

There are no instructions for running mas provision-aws, specifically noting that the Red Hat pull secret needs to be stored in /opt/app-root/src/masconfig/

db2 operator version is old

The DB2 version hardcoded as the default in the db2 role is old. The Maximo catalog doesn't have the operator version needed to bring it to 11.5.7.0-cn7. Is there any way this can be changed? In addition, this doesn't seem logical, given that everything in MAS 8.9 supports OCP 4.10 (except for DB2):

DB2, which is installed with ibm-db2u-operator, is supported on OpenShift Container Platform versions 4.6-4.9 only.

CLI does not detect when the pipelines namespace fails to be created

The customer was using an invalid instance ID containing characters not permitted in a namespace name. We already have a requirement to validate the instance ID (#244), but additionally we should fail with a clear error message when the pipelines namespace fails to be created for any reason.

Custom storage class support for KAFKA storage

https://github.com/ibm-mas/ansible-devops/blob/master/ibm/mas_devops/roles/kafka/tasks/determine-storage-classes.yml
Going by this, the supported_storage_classes are [ibmc-block-gold, ocs-storagecluster-ceph-rbd, managed-premium].

The customer has NFS-based block storage, which is not picked up by the one-click install. Using the one-click install, we provided the relevant storage class and can see the KAFKA_STORAGE_CLASS=trunas-iscsi-nfs env variable populated. However, zookeeper_storage_class is not assigned the storage class.

Had a discussion with David Parker; the fix to support custom classes has to be implemented here: https://github.com/ibm-mas/cli/blob/master/tekton/pipelines/install.yaml#L458-L481 -- broken link - replacement: https://github.com/ibm-mas/cli/blob/master/tekton/src/pipelines/install.yml.j2

Enhanced must-gather for SLS

We need to detect instances of SLS in the cluster and perform a must-gather similar to how we do for the Suite itself.

Default storage class should be used when the storage class environment variable is not defined

The playbooks can be fixed to use the default storage class when no storage class override environment variable is set; this will prevent failures like the ones below.

TASK [ibm.mas_devops.kafka : Assert that zookeeper storage class has been defined] ***
Wednesday 18 January 2023  16:09:45 +0000 (0:00:00.051)       0:00:01.986 ***** 
fatal: [localhost]: FAILED! => {
    "assertion": "zookeeper_storage_class is defined and zookeeper_storage_class != \"\"",
    "changed": false,
    "evaluated_to": false,
    "msg": "zookeeper_storage_class must be defined"
}
TASK [ibm.mas_devops.appconnect : Assert that storage class has been defined] ***
Wednesday 18 January 2023  17:17:47 +0000 (0:00:00.054)       0:00:02.260 ***** 
fatal: [localhost]: FAILED! => {
    "assertion": "appconnect_storage_class is defined and appconnect_storage_class != \"\"",
    "changed": false,
    "evaluated_to": false,
    "msg": "appconnect_storage_class must be defined"
}
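A sketch of the fallback in shell, assuming jq is available (the uppercase variable names are assumed equivalents of the role variables in the logs above):

# Fall back to the cluster's default StorageClass when no override is set
default_sc=$(oc get storageclass -o json \
  | jq -r '.items[] | select(.metadata.annotations["storageclass.kubernetes.io/is-default-class"] == "true") | .metadata.name' \
  | head -n 1)
export APPCONNECT_STORAGE_CLASS="${APPCONNECT_STORAGE_CLASS:-$default_sc}"
export ZOOKEEPER_STORAGE_CLASS="${ZOOKEEPER_STORAGE_CLASS:-$default_sc}"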

Add option to select the name of the ingress secret

By default, mas install assumes that the OpenShift cluster has the default ingress certificates, with a secret name of router-certs-default.

But we have replaced the default ingress certificates following this documentation:
https://docs.openshift.com/container-platform/4.10/security/certificates/replacing-default-ingress-certificate.html

We would need to be able to set the name of the secret, since the documentation allows you to use any name you want.

Thanks!
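For reference, when the default ingress certificate has been replaced per that documentation, the custom secret name can be read from the default IngressController, so a detection sketch might be:

# Discover the ingress certificate secret instead of assuming router-certs-default
secret_name=$(oc get ingresscontroller default -n openshift-ingress-operator \
  -o jsonpath='{.spec.defaultCertificate.name}')
echo "${secret_name:-router-certs-default}"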

Updates for 230314 ibm-maximo-operator-catalog

Publish a new curated catalog containing patches released in March
March patch content will be released on or around March 14th and will be available in the official IBM Operator Catalog. This issue will track the updates and release of the IBM Maximo Operator Catalog.

The release tag for the catalog will be 230314

modifications for pending hostname changes to artifactory

Action Required

Sometime in early March 2023 we will need to switch Artifactory references from hostnames to path-based repository access.
All Docker repositories can now be accessed using a path-based mechanism.

for example the Docker URL reference:

wiotp-docker-local.artifactory.swg-devops.com

should become:

docker-na-public.artifactory.swg-devops.com/wiotp-docker-local

mas install should enforce rules for MAS "Instance ID" and "workspace ID"

While installing MAS and Manage, entering a MAS instance ID such as 810instance will cause the ManageDeployment to fail with the message: Service "810instance-mas-maxinst" is invalid: metadata.name: Invalid value...

a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character (e.g. 'my-name', or 'abc-123', regex used for validation is '[a-z]([-a-z0-9]*[a-z0-9])?')

mas install should reject invalid values for the instance ID rather than proceeding and having the ManageDeployment fail.

The MAS CLI should also validate the workspace ID.
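A minimal validation sketch using the DNS-1035 regex quoted above (the variable names are illustrative, and MAS may impose additional constraints such as length limits):

validate_id() {
  # DNS-1035 label: lowercase letters/digits/'-', must start with a letter
  if [[ ! "$1" =~ ^[a-z]([-a-z0-9]*[a-z0-9])?$ ]]; then
    echo "Error: '$1' is not a valid DNS-1035 label" >&2
    exit 1
  fi
}
validate_id "$MAS_INSTANCE_ID"    # hypothetical variable names
validate_id "$MAS_WORKSPACE_ID"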

Review/fix container directory permissions

Reported internally: we need to tweak directory permissions to make the container more user-friendly when running inside a cluster as a deployment:

So back to my attempt to make the MAS CLI work in a pod... as I mentioned, the mascli directory does not seem to be writable by the user I rsh in as: drwxr-xr-x. 1 root root 17 Sep 21 13:44 mascli

sh-4.4$ whoami
1000670000
sh-4.4$ cat /etc/passwd | grep 1000670000
1000670000:x:1000670000:0:1000670000 user:/opt/app-root/src:/sbin/nologin

For this reason:

sh-4.4$ mas
mkdir: cannot create directory '/mascli/logs': Permission denied
mkdir: cannot create directory '/mascli/config': Permission denied
unknown parameter

Fatal Error[1]  See /mascli/bin/mas.log for details
/mascli/bin/functions/save_config: line 10: /mascli/bin/mas.env: Permission denied

would it be possible to allow writing in that directory from the rsh user?
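One possible fix, following the common OpenShift convention that arbitrary UIDs run with GID 0, would be to make the directory group-writable during the image build (a sketch, not the actual Dockerfile):

# Run at image build time so any UID in the root group can write here
chgrp -R 0 /mascli
chmod -R g=u /mascli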

Support mixed use of Db2u operator and external databases

During the CLI install there are two options today:

  • Use Db2u for Manage & IoT
  • Don't use Db2u for Manage or IoT

We want to make these two things individual choices. I should be able to choose to use Db2u for either the shared system JDBC datasource and/or the dedicated Manage datasource, rather than having to choose once and have that choice apply to both.

So, instead of:

  • Do you want to use db2u?

We should have:

  • <config options for IoT database, including "do you want to use Db2u?">
  • <config options for Manage database, including "do you want to use Db2u?">
