ibm / storagescalevagrant

Example scripts and configuration files to install and configure IBM Storage Scale in a Vagrant environment

License: Apache License 2.0

Languages: Ruby 26.80%, Shell 73.20%
Topics: spectrum-scale, vagrant, gpfs

storagescalevagrant's Introduction

Storage Scale Vagrant

Example scripts and configuration files to install and configure IBM Storage Scale in a Vagrant environment.

Installation

The scripts and configuration files provision a single node IBM Storage Scale cluster using Vagrant.

Get the scripts and configuration files from GitHub

Open a Command Prompt and clone the GitHub repository:

  1. git clone https://github.com/IBM/StorageScaleVagrant.git
  2. cd StorageScaleVagrant

Get the Storage Scale self-extracting installation package

The creation of the Storage Scale cluster requires the Storage Scale self-extracting installation package. The developer edition can be downloaded from the Storage Scale home page.

Download the Storage_Scale_Developer-5.2.0.0-x86_64-Linux-install package and save it to directory StorageScaleVagrant/software on the host.

Please note that if the Storage Scale Developer version you downloaded is newer than the one listed here, you can still use it. Before continuing, update the $SpectrumScale_version variable in Vagrantfile.common to match the version you downloaded.
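For example (a minimal sketch; check Vagrantfile.common for the exact assignment syntax before editing):

grep -n 'SpectrumScale_version' Vagrantfile.common
vi Vagrantfile.common   # set $SpectrumScale_version to the version you downloaded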

During provisioning, Vagrant copies this file from the host to the directory /software on the management node m1.

Install Vagrant

Follow the Vagrant Getting Started Guide to install Vagrant and to get familiar with it.
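As one hedged example, on a Debian/Ubuntu host the HashiCorp-documented steps look roughly like this (see the Getting Started Guide for other platforms and for up-to-date instructions):

wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install vagrant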

Provisioning

Storage Scale Vagrant supports the creation of a single node Storage Scale cluster on VirtualBox, libvirt, and AWS. There is a subdirectory for each supported provider. Follow the instructions in the subdirectory of your preferred provider to install and configure a virtual machine.

Directory Provider
aws Amazon Web Services
virtualbox VirtualBox
libvirt libvirt (KVM/QEMU)

Please note that for AWS you may prefer the new "Cloudkit" capability of Storage Scale, which is also available with the Storage Scale Developer Edition. For more details about Cloudkit, please refer to the documentation.

Once the virtual environment is provisioned, Storage Scale Vagrant uses the same scripts for all providers to install and configure Storage Scale. Storage Scale Vagrant executes those scripts automatically during the provisioning process (vagrant up) for your preferred provider.

Directory Description
setup/install Perform all steps to provision a Storage Scale cluster
setup/demo Perform all steps to configure Storage Scale for demo purposes
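For example, with the VirtualBox provider the provisioning could be started like this (a sketch; the provider subdirectory's README describes the exact sequence, which may include building a base box first):

cd virtualbox
vagrant up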

Storage Scale Management Interfaces

Storage Scale Vagrant uses the Storage Scale CLI and the Storage Scale REST API to install and configure Storage Scale. In addition it configures the Storage Scale GUI to allow interested users to explore its capabilities.

Storage Scale CLI

Storage Scale Vagrant configures the shell $PATH variable and the sudo secure_path to include the location of the Storage Scale executables.
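A minimal sketch of what the $PATH part of this configuration could look like (illustrative only; the scripts in the repository may implement it differently):

# /etc/profile.d/spectrumscale.sh (illustrative)
export PATH=$PATH:/usr/lpp/mmfs/bin

The sudo secure_path would be extended analogously, for example by appending :/usr/lpp/mmfs/bin to the secure_path entry in the sudoers configuration.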

[vagrant@m1 ~]$ sudo mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         demo.example.com
  GPFS cluster id:           4200744107440960413
  GPFS UID domain:           demo.example.com
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address  Admin node name  Designation
------------------------------------------------------------------
   1   m1.example.com    10.1.2.11   m1m.example.com  quorum-manager-perfmon

[vagrant@m1 ~]$

Storage Scale REST API

To explore the Storage Scale REST API, enter https://localhost:8888/ibm/api/explorer (for AWS please use https://<AWS Public IP>/ibm/api/explorer) in a browser. The Storage Scale REST API uses the same accounts as the Storage Scale GUI. There's also a blog post available which contains more details on how to explore the REST API using the IBM API Explorer URL:

Trying out and exploring the Storage Scale REST API using "curl" and/or the IBM API Explorer website

Configuration of Storage Scale Cluster:

[vagrant@m1 ~]$ curl -k -X GET --header 'Accept: application/json' -u admin:admin001 'https://localhost:8888/scalemgmt/v2/cluster'
{
  "cluster" : {
    "clusterSummary" : {
      "clusterId" : 4200744107441232322,
      "clusterName" : "demo.example.com",
      "primaryServer" : "m1.example.com",
      "rcpPath" : "/usr/bin/scp",
      "rcpSudoWrapper" : false,
      "repositoryType" : "CCR",
      "rshPath" : "/usr/bin/ssh",
      "rshSudoWrapper" : false,
      "uidDomain" : "demo.example.com"
    }
  },
  .....
  "status" : {
    "code" : 200,
    "message" : "The request finished successfully."
  }
}

[vagrant@m1 ~]$

Cluster nodes:

[vagrant@m1 ~]$ curl -k -X GET --header 'Accept: application/json' -u admin:admin001 'https://localhost:8888/scalemgmt/v2/nodes'
{
  "nodes" : [ {
    "adminNodeName" : "m1.example.com"
  } ],
  "status" : {
    "code" : 200,
    "message" : "The request finished successfully."
  }
}

[vagrant@m1 ~]$

Storage Scale GUI

To connect to the Storage Scale GUI, enter https://localhost:8888 (AWS: https://<AWS Public IP>) in a browser. The GUI is configured with a self-signed certificate. After accepting the certificate, the login screen appears. The user admin has the default password admin001.

Cluster overview in Storage Scale GUI:

Storage Scale Filesystem

Storage Scale Vagrant configures the filesystem fs1 and adds some example data to illustrate selected Storage Scale features.

Filesystems

The filesystem fs1 mounts on all cluster nodes at /ibm/fs1:

[vagrant@m1 ~]$ mmlsmount all
File system fs1 is mounted on 1 nodes.

[vagrant@m1 ~]$ mmlsfs fs1 -T
flag                value                    description
------------------- ------------------------ -----------------------------------
 -T                 /ibm/fs1                 Default mount point

[vagrant@m1 ~]$

On Linux, a Storage Scale filesystem can be used like any other filesystem:

[vagrant@m1 ~]$ mount | grep /ibm/
fs1 on /ibm/fs1 type gpfs (rw,relatime,seclabel)

[vagrant@m1 ~]$ find /ibm/
/ibm/
/ibm/fs1
/ibm/fs1/.snapshots

[vagrant@m1 ~]$

REST API call to show all filesystems:

[vagrant@m1 ~]$ curl -k -s -S -X GET --header 'Accept: application/json' -u admin:admin001 'https://localhost/scalemgmt/v2/filesystems/'
{
  "filesystems" : [ {
    "name" : "fs1"
  } ],
  "status" : {
    "code" : 200,
    "message" : "The request finished successfully."
  }
}[vagrant@m1 ~]$

Storage Pools

Storage pools make it possible to integrate different media types such as NVMe, SSD and NL-SAS into a single filesystem. Each Storage Scale filesystem has at least the system pool, which stores metadata (inodes) and optionally data (the content of files).

[vagrant@m1 ~]$ mmlspool fs1
Storage pools in file system at '/ibm/fs1':
Name                    Id   BlkSize Data Meta Total Data in (KB)   Free Data in (KB)   Total Meta in (KB)    Free Meta in (KB)
system                   0      4 MB  yes  yes        5242880        1114112 ( 21%)        5242880        1167360 ( 22%)

[vagrant@m1 ~]$ mmdf fs1
disk                disk size  failure holds    holds           free in KB          free in KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 15.87 GB)
nsd3                  1048576        1 Yes      Yes          229376 ( 22%)         11384 ( 1%)
nsd4                  1048576        1 Yes      Yes          204800 ( 20%)         11128 ( 1%)
nsd5                  1048576        1 Yes      Yes          217088 ( 21%)         11128 ( 1%)
nsd2                  1048576        1 Yes      Yes          225280 ( 21%)         11640 ( 1%)
nsd1                  1048576        1 Yes      Yes          237568 ( 23%)         11640 ( 1%)
                -------------                         -------------------- -------------------
(pool total)          5242880                               1114112 ( 21%)         56920 ( 1%)

                =============                         ==================== ===================
(total)               5242880                               1114112 ( 21%)         56920 ( 1%)

Inode Information
-----------------
Number of used inodes:            4108
Number of free inodes:          103412
Number of allocated inodes:     107520
Maximum number of inodes:       107520

[vagrant@m1 ~]$

A typical configuration is to use NVMe or SSD for the system pool for metadata and hot files, and to add a second storage pool with NL-SAS for colder data.

[vagrant@m1 ~]$ cat /vagrant/files/spectrumscale/stanza-fs1-capacity
%nsd: device=/dev/sdg
nsd=nsd6
servers=m1
usage=dataOnly
failureGroup=1
pool=capacity

%nsd: device=/dev/sdh
nsd=nsd7
servers=m1
usage=dataOnly
failureGroup=1
pool=capacity

[vagrant@m1 ~]$ sudo mmadddisk fs1 -F /vagrant/files/spectrumscale/stanza-fs1-capacity

The following disks of fs1 will be formatted on node m1:
    nsd6: size 10240 MB
    nsd7: size 10240 MB
Extending Allocation Map
Creating Allocation Map for storage pool capacity
Flushing Allocation Map for storage pool capacity
Disks up to size 322.37 GB can be added to storage pool capacity.
Checking Allocation Map for storage pool capacity
Completed adding disks to file system fs1.
mmadddisk: mmsdrfs propagation completed.

[vagrant@m1 ~]$

Now the filesystem has two storage pools.

[vagrant@m1 ~]$ mmlspool fs1
Storage pools in file system at '/ibm/fs1':
Name                    Id   BlkSize Data Meta Total Data in (KB)   Free Data in (KB)   Total Meta in (KB)    Free Meta in (KB)
system                   0      4 MB  yes  yes        5242880        1101824 ( 21%)        5242880        1155072 ( 22%)
capacity             65537      4 MB  yes   no       20971520       20824064 ( 99%)              0              0 (  0%)

[vagrant@m1 ~]$ mmdf fs1
disk                disk size  failure holds    holds           free in KB          free in KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 15.87 GB)
nsd1                  1048576        1 Yes      Yes          233472 ( 22%)         11640 ( 1%)
nsd2                  1048576        1 Yes      Yes          221184 ( 21%)         11640 ( 1%)
nsd3                  1048576        1 Yes      Yes          229376 ( 22%)         11384 ( 1%)
nsd4                  1048576        1 Yes      Yes          204800 ( 20%)         11128 ( 1%)
nsd5                  1048576        1 Yes      Yes          212992 ( 20%)         11128 ( 1%)
                -------------                         -------------------- -------------------
(pool total)          5242880                               1101824 ( 21%)         56920 ( 1%)

Disks in storage pool: capacity (Maximum disk size allowed is 322.37 GB)
nsd6                 10485760        1 No       Yes        10412032 ( 99%)          8056 ( 0%)
nsd7                 10485760        1 No       Yes        10412032 ( 99%)          8056 ( 0%)
                -------------                         -------------------- -------------------
(pool total)         20971520                              20824064 ( 99%)         16112 ( 0%)

                =============                         ==================== ===================
(data)               26214400                              21925888 ( 84%)         73032 ( 0%)
(metadata)            5242880                               1101824 ( 21%)         56920 ( 1%)
                =============                         ==================== ===================
(total)              26214400                              21925888 ( 84%)         73032 ( 0%)

Inode Information
-----------------
Number of used inodes:            4108
Number of free inodes:          103412
Number of allocated inodes:     107520
Maximum number of inodes:       107520

[vagrant@m1 ~]$

Disclaimer

Please note: This project is released for use "AS IS" without any warranties of any kind, including, but not limited to installation, use, or performance of the resources in this repository. We are not responsible for any damage, data loss or charges incurred with their use. This project is outside the scope of the IBM PMR process. If you have any issues, questions or suggestions you can create a new issue here. Issues will be addressed as team availability permits.

storagescalevagrant's People

Contributors

akoeninger89, hseipp, imgbotapp, kant, neikei, stevemart, troppens


storagescalevagrant's Issues

Vagrant AWS provisioning cannot find gpfs_bda_integration_ansible_cli module

Hello,

I am using vagrant aws together with Spectrum Scale Developer edition (Spectrum_Scale_Developer-5.1.1.0-x86_64-Linux-install). During provisioning it fails with the following exception:

   SpectrumScale_single: =========================================================================================
    SpectrumScale_single: ===>
    SpectrumScale_single: ===> Running /vagrant/install/script-03.sh
    SpectrumScale_single: ===> Setup management node (m1) as Spectrum Scale Install Node
    SpectrumScale_single: ===>
    SpectrumScale_single: =========================================================================================
    SpectrumScale_single: + set -e
    SpectrumScale_single: + '[' 1 -ne 1 ']'
    SpectrumScale_single: + case $1 in
    SpectrumScale_single: + PROVIDER=AWS
    SpectrumScale_single: + '[' AWS = AWS ']'
    SpectrumScale_single: ++ hostname -I
    SpectrumScale_single: ===> Setup management node (m1) as Spectrum Scale Install Node
    SpectrumScale_single: + INSTALL_NODE='172.31.18.2 '
    SpectrumScale_single: + '[' AWS = VirtualBox -o AWS = libvirt ']'
    SpectrumScale_single: + echo '===> Setup management node (m1) as Spectrum Scale Install Node'
    SpectrumScale_single: + sudo /usr/lpp/mmfs/5.1.1.0/ansible-toolkit/spectrumscale setup -s 172.31.18.2 --storesecret
    SpectrumScale_single: Unable to find bda_integration in this environment or HDFS is not supported on this OS/Arch
    SpectrumScale_single: Traceback (most recent call last):
    SpectrumScale_single:   File "/usr/lpp/mmfs/5.1.1.0/ansible-toolkit/cli/config.py", line 1369, in hdfs
    SpectrumScale_single:     import gpfs_bda_integration_ansible_cli as bda_cli
    SpectrumScale_single: ModuleNotFoundError: No module named 'gpfs_bda_integration_ansible_cli'
    SpectrumScale_single: 
    SpectrumScale_single: During handling of the above exception, another exception occurred:
    SpectrumScale_single: 
    SpectrumScale_single: Traceback (most recent call last):
    SpectrumScale_single:   File "/usr/lpp/mmfs/5.1.1.0/ansible-toolkit/spectrumscale", line 97, in <module>
    SpectrumScale_single:     main()
    SpectrumScale_single:   File "/usr/lpp/mmfs/5.1.1.0/ansible-toolkit/spectrumscale", line 56, in main
    SpectrumScale_single:     commands(parser.add_subparsers())  # Connect up all the commands
    SpectrumScale_single:   File "/usr/lpp/mmfs/5.1.1.0/ansible-toolkit/cli/utils.py", line 170, in __call__
    SpectrumScale_single:     configurer(subparser_action)
    SpectrumScale_single:   File "/usr/lpp/mmfs/5.1.1.0/ansible-toolkit/cli/utils.py", line 157, in _configurer
    SpectrumScale_single:     command_group(command_parser.add_subparsers())  # configure subcommands
    SpectrumScale_single:   File "/usr/lpp/mmfs/5.1.1.0/ansible-toolkit/cli/utils.py", line 170, in __call__
    SpectrumScale_single:     configurer(subparser_action)
    SpectrumScale_single:   File "/usr/lpp/mmfs/5.1.1.0/ansible-toolkit/cli/utils.py", line 156, in _configurer
    SpectrumScale_single:     command_group = command()  # The command should return the group
    SpectrumScale_single:   File "/usr/lpp/mmfs/5.1.1.0/ansible-toolkit/cli/config.py", line 1373, in hdfs
    SpectrumScale_single:     raise ImportError(error.__class__.__name__ + ": " + error.message)
    SpectrumScale_single: AttributeError: 'ModuleNotFoundError' object has no attribute 'message'
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

Could you please help with this issue?

I think it is related to the HDFS setup, but I am not sure whether we could set up the required module or disable it.

Vagrant package fails

Currently, running the vagrant package step for the AWS setup fails with the following exception:

$ vagrant package SpectrumScale_base --output SpectrumScaleBase.box

==> SpectrumScale_base: Burning instance i-049ac1cd75ae1f46b into an ami
There was an error talking to AWS. The error message is shown
below:

Malformed => AMI names must be between 3 and 128 characters long, and may contain letters, numbers, '(', ')', '.', '-', '/' and '_'

`ansible-playbook` not found as /usr/local/bin is not on the default path for root

/vagrant/install/script-05.sh VirtualBox 5.1.4.0 does the actual install and so calls ansible-playbook.
This is no longer on the default path, as pip3 installs it into /usr/local/bin.

2022-10-11 15:05:37,031 [ ERROR ] The following error was encountered:
Traceback (most recent call last):
  File "/usr/lpp/mmfs/5.1.4.0/ansible-toolkit/espylib/install.py", line 332, in _run_install_playbooks
    extra_args_env, True, 'install')
  File "/usr/lpp/mmfs/5.1.4.0/ansible-toolkit/espylib/connectionmanager.py", line 75, in execute_playbook
    log_prefix
  File "/usr/lpp/mmfs/5.1.4.0/ansible-toolkit/espylib/deploy.py", line 300, in run_playbook_local
    close_fds=True, env=facts_env)
  File "/usr/lib64/python3.6/subprocess.py", line 729, in __init__
    restore_signals, start_new_session)
  File "/usr/lib64/python3.6/subprocess.py", line 1364, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'ansible-playbook': 'ansible-playbook'
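A possible workaround (an assumption, not an official fix) is to make the pip3-installed binary visible on root's default path before running the installer:

sudo ln -s /usr/local/bin/ansible-playbook /usr/bin/ansible-playbook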

Add port forwarding for AWS setup

Currently the AWS Vagrantfile is missing port forwarding for the HTTPS (443) port.

config.vm.network "forwarded_port", guest: 443, host: 8888

If directory disk/ already exists then Vagrant Up fails with "no SATA controller" error

In SpectrumScaleVagrant/virtualbox, if an install with vagrant up somehow fails, or if the user simply deletes the VM with vagrant destroy, then the subdirectory disk/ is left behind.
It is now empty (it used to contain the 7 LUNs).
However, a subsequent vagrant up will always fail with the error:

A customization command failed:

["storageattach", :id, "--storagectl", "SATA", "--port", "0", "--device", 0, "--type", "hdd", "--medium", "disk/disk-m1-djk-001.vdi"]

The following error was experienced:

#<Vagrant::Errors::VBoxManageError: There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["storageattach", "8f5f4e0d-0771-430d-8dcd-6eee8bc46d02", "--storagectl", "SATA", "--port", "0", "--device", "0", "--type", "hdd", "--medium", "disk/disk-m1-djk-001.vdi"]

Stderr: VBoxManage.exe: error: Could not find a controller named 'SATA'
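A possible workaround (an assumption based on the description above) is to remove the stale disk/ directory before re-provisioning:

rm -rf disk/
vagrant up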

install/script-05.sh waits forever for a keyboard keystroke if run from an interactive shell

install/script-05.sh contains lines like

service gpfsgui.service status

This waits for a keystroke like 'q' if run from an interactive shell, thus preventing unattended (but interactive) installation (as needed when debugging ansible-playbook issues).

1. Suggest replacing service gpfsgui.service status with systemctl status gpfsgui.service, as CentOS, Ubuntu, etc. are all systemd-based now.
2. Suggest adding --no-pager to prevent waiting for a keystroke, i.e. systemctl status --no-pager gpfsgui.service.
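Applied to the script, the suggested change would look roughly like this:

# before: pages output and waits for a keystroke in interactive shells
service gpfsgui.service status

# after (suggested): print the status without invoking a pager
systemctl status --no-pager gpfsgui.service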

Using latest vagrant version throws an exception

Running vagrant up throws the following exception:

vagrant up                                                                                   
/home/mtoraz/.vagrant.d/gems/2.6.7/gems/vagrant-aws-0.7.2/lib/vagrant-aws/action/connect_aws.rb:41:in `call': undefined method `except' for #<Hash:0x0000000001c17860> (NoMethodError)
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/action/warden.rb:48:in `call'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/action/warden.rb:48:in `call'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/action/builder.rb:149:in `call'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/action/runner.rb:89:in `block in run'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/util/busy.rb:19:in `busy'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/action/runner.rb:89:in `run'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/machine.rb:246:in `action_raw'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/machine.rb:215:in `block in action'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/machine.rb:197:in `block in action'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/machine.rb:201:in `action'
        from /home/mtoraz/.vagrant.d/gems/2.6.7/gems/vagrant-aws-0.7.2/lib/vagrant-aws/provider.rb:32:in `state'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/machine.rb:541:in `state'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/machine.rb:154:in `initialize'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/vagrantfile.rb:81:in `new'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/vagrantfile.rb:81:in `machine'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/environment.rb:716:in `machine'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/plugin/v2/command.rb:180:in `block in with_target_vms'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/plugin/v2/command.rb:204:in `block in with_target_vms'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/plugin/v2/command.rb:186:in `each'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/plugin/v2/command.rb:186:in `with_target_vms'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/plugins/commands/up/command.rb:87:in `execute'                                                                 
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/cli.rb:67:in `execute'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/lib/vagrant/environment.rb:290:in `cli'
        from /opt/vagrant/embedded/gems/2.2.16/gems/vagrant-2.2.16/bin/vagrant:231:in `<main>'      

It is related to this issue: mitchellh/vagrant-aws#566

I solved the issue using the workaround suggested on this comment: mitchellh/vagrant-aws#566 (comment)

I think vagrant-aws is not very actively maintained; a general solution would be to use a different AWS Vagrant plugin.

Installing the VirtualBox prep-box fails with "Peer certificate cannot be authenticated with given CA certificates"

The install gets a long way. This is the last bit that works:

/usr/bin/dnf install -y\
        boost-regex\
        cyrus-sasl\

But after that it errors with:

StorageScale_base: Complete!
==> StorageScale_base: Running provisioner: Fix Ansible installation after the June 2022 name change (shell)...
    StorageScale_base: Running: script: Fix Ansible installation after the June 2022 name change
==> StorageScale_base: Running provisioner: Install Ansible as required by Storage Scale 5.1.1+ installer (shell)...
    StorageScale_base: Running: script: Install Ansible as required by Storage Scale 5.1.1+ installer
    StorageScale_base: Extra Packages for Enterprise Linux 8 - x86_64  0.0  B/s |   0  B     00:00
    StorageScale_base: Errors during downloading metadata for repository 'epel':
    StorageScale_base:   - Curl error (60): Peer certificate cannot be authenticated with given CA certificates for https://mirror.init7.net/fedora/epel/8/Everything/x86_64/repodata/repomd.xml [SSL certificate problem: unable to get local issuer certificate]
    StorageScale_base: Error: Failed to download metadata for repo 'epel': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

Some users may have trouble with line endings

Failure description

Some users may have trouble with line endings, because after a git checkout the file line endings are set to CRLF instead of LF. Interestingly, this did not happen when I used the ZIP file download from GitHub.

Setup

Setup Version / Configuration
Host OS Windows 10
VirtualBox 6.0.4 r128413
Vagrant 2.2.3
Vagrant Plugins vagrant-hostmanager, vagrant-vbguest, vagrant-winnfsd

Error output

The base box creation worked fine, but the single node setup failed.

==> m1: Configuring and enabling network interfaces...
==> m1: Rsyncing folder: /cygdrive/c/Users/Niklas/Desktop/Projekte/SpectrumScaleVagrant_IBM/setup/ => /vagrant
==> m1: Rsyncing folder: /cygdrive/c/Users/Niklas/Desktop/Projekte/SpectrumScaleVagrant_IBM/software/ => /software
==> m1: Running provisioner: shell...
    m1: Running: script: Configure /ets/hosts
==> m1: Running provisioner: shell...
    m1: Running: script: Configure ssh keys for user root
==> m1: Running provisioner: shell...
    m1: Running: script: Configure ssh host keys
    m1: # m1:22 SSH-2.0-OpenSSH_7.4
    m1: # m1:22 SSH-2.0-OpenSSH_7.4
    m1: # m1:22 SSH-2.0-OpenSSH_7.4
==> m1: Running provisioner: shell...
    m1: Running: script: Get fingerprint for management IP address
==> m1: Running provisioner: shell...
    m1: Running: script: Add /usr/lpp/mmfs/bin to $PATH
==> m1: Running provisioner: shell...
    m1: Running: script: Add /usr/lpp/mmfs/bin to sudo secure_path
==> m1: Running provisioner: shell...
    m1: Running: script: Install and configure single node Spectrum Scale cluster
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
PS C:\Users\Niklas\Desktop\Projekte\SpectrumScaleVagrant_IBM\single> vagrant ssh
-bash: $'\r': command not found
[vagrant@m1 ~]$ file /etc/profile.d/spectrumscale.sh
/etc/profile.d/spectrumscale.sh: ASCII text, with CRLF line terminators

Workaround

Set the line endings manually to LF and try it again. Run "vagrant destroy" and "vagrant up".
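One way to avoid the problem up front (an assumption, not part of the original report) is to disable Git's automatic CRLF conversion before cloning:

git config --global core.autocrlf false
git clone https://github.com/IBM/StorageScaleVagrant.git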

Ansible version dependency problem stops the install

I am getting this error

SpectrumScale_base: Error:
  SpectrumScale_base:  Problem: conflicting requests
  SpectrumScale_base:   - nothing provides (ansible-core >= 2.12.2 with ansible-core < 2.13) needed by ansible-5.4.0-3.el8.noarch
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

I have tried with both
Spectrum_Scale_Developer-5.1.4.0-x86_64-Linux-install
and
Spectrum_Scale_Developer-5.1.5.0-x86_64-Linux-install

The problem occurs with both.

Mismatch between kernel and kernel-headers versions

Many thanks for the great project! 👍

Failure description

Unfortunately I could not reproduce the installation completely. There seems to be a mismatch with the kernel and kernel-headers. Maybe this is caused by the VirtualBox guest additions.

kernel.x86_64                        3.10.0-957.1.3.el7         @koji-override-1
kernel-devel.x86_64                  3.10.0-957.1.3.el7         @updates
kernel-devel.x86_64                  3.10.0-957.5.1.el7         @updates
kernel-headers.x86_64                3.10.0-957.5.1.el7         @updates

Setup

Setup Version / Configuration
Host OS Windows 10
VirtualBox 6.0.4 r128413
Vagrant 2.2.3
Vagrant Plugins vagrant-hostmanager, vagrant-vbguest, vagrant-winnfsd

Workaround

sudo yum downgrade kernel-headers-3.10.0-957.1.3.el7 
# Start scripts /vagrant/scripts/script-05.sh && /vagrant/scripts/script-06.sh

Error log

...
    m1: [ INFO  ] Checking pre-requisites for portability layer.
    m1: [ INFO  ] Detailed error log: /usr/lpp/mmfs/5.0.2.2/installer/logs/INSTALL-02-02-2019_13:09:27.log
    m1: [ FATAL ] m1.example.com: Pre requisite package not found on m1.example.com (Pre requisite: kernel-headers-3.10.0-957.1.3.el7.x86_64). Ensure this is resolved before proceeding with the install toolkit.
    m1: [ FATAL ] Pre requisite check failed on one or more nodes
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
PS C:\Users\Niklas\Desktop\Projekte\SpectrumScaleVagrant_IBM\single> vagrant ssh
[vagrant@m1 ~]$ uname -a
Linux m1 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@m1 ~]$ sudo yum install kernel-headers
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.wiuwiu.de
 * extras: ftp.rz.uni-frankfurt.de
 * updates: mirror.wiuwiu.de
Package kernel-headers-3.10.0-957.5.1.el7.x86_64 already installed and latest version
Nothing to do
[vagrant@m1 ~]$ tail -n 15 /usr/lpp/mmfs/5.0.2.2/installer/logs/INSTALL-02-02-2019_13:09:27.log
2019-02-02 13:09:41,008 [ TRACE ] Thread Thread-4 for m1.example.com has finished.
2019-02-02 13:09:41,008 [ FATAL ] m1.example.com: Pre requisite package not found on m1.example.com (Pre requisite: kernel-headers-3.10.0-957.1.3.el7.x86_64). Ensure this is resolved before proceeding with the install toolkit.
2019-02-02 13:09:41,009 [ TRACE ] Stopping chef zero
2019-02-02 13:09:41,009 [ ERROR ] The following error was encountered:
Traceback (most recent call last):
  File "/usr/lpp/mmfs/5.0.2.2/installer/espylib/reporting.py", line 206, in log_to_file
    yield handler
  File "/usr/lpp/mmfs/5.0.2.2/installer/espylib/install.py", line 208, in _install
    setup.pre_check(config)
  File "/usr/lpp/mmfs/5.0.2.2/installer/espylib/setup/gpfs.py", line 629, in pre_check
    config.nodes_not_in_cluster
  File "/usr/lpp/mmfs/5.0.2.2/installer/espylib/systemfunctions.py", line 798, in parallel_task
    raise FriendlyError("Pre requisite check failed on one or more nodes")
FriendlyError: Pre requisite check failed on one or more nodes
2019-02-02 13:09:41,011 [ INFO  ] Detailed error log: /usr/lpp/mmfs/5.0.2.2/installer/logs/INSTALL-02-02-2019_13:09:27.log
[vagrant@m1 ~]$ sudo yum list installed | grep kernel
kernel.x86_64                        3.10.0-957.1.3.el7         @koji-override-1
kernel-devel.x86_64                  3.10.0-957.1.3.el7         @updates
kernel-devel.x86_64                  3.10.0-957.5.1.el7         @updates
kernel-headers.x86_64                3.10.0-957.5.1.el7         @updates
kernel-tools.x86_64                  3.10.0-957.1.3.el7         @koji-override-1
kernel-tools-libs.x86_64             3.10.0-957.1.3.el7         @koji-override-1

Repeated warning messages about "Unable to find bda_integration" from ./spectrumscale

The installer /vagrant/install/setup*.sh is generating many error messages of the form

===> Specify cluster name
Unable to find bda_integration in this environment or HDFS is not supported on this OS/Arch
===> Specify to disable call home
Unable to find bda_integration in this environment or HDFS is not supported on this OS/Arch

This appears to stop ./spectrumscale from setting any parameters and from running 'install'.
Maybe this is because it is CentOS, not Red Hat?

Suggest switching off callhome warning

To reduce the false positive warnings in the GUI, can I suggest that you add

mmhealth event hide callhome_not_enabled

(Unless you want to leave it in so customers might talk about what it is?)

Minor typo in usage() in install/common-preamble.sh

install/common-preamble.sh
says

  echo "Usage: $0 <provider> <spectrumscale-version>"
  echo "Supported <provider>:"
  echo "  AWS"
  echo "  Virtualbox"
  echo "  libvirt"
  echo "<spectrumscale-version> is the full version number like 5.1.5.0"
}

However putting "Virtualbox" as an argument fails as the test is:

  'AWS'|'VirtualBox'|'libvirt' )
    PROVIDER=$1
    ;;

i.e. with a capital letter B.
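The usage text could simply be aligned with the accepted value, e.g.:

  echo "  VirtualBox"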
