
puppet-scaleio's Introduction

ScaleIO

Overview

A Puppet module that installs and configures the ScaleIO 2.0 block storage service components. The module currently supports Ubuntu 14.04/16.04 and CentOS 6 and 7.

Module Description

ScaleIO is software that takes local storage from operating systems and configures it as a virtual SAN to deliver block services to operating systems over IP. The module handles the configuration of ScaleIO components and the creation and mapping of volumes to hosts.

Most aspects of ScaleIO configuration have been brought into Puppet.

Setup

What Puppet-ScaleIO affects

  • Configures firewall (iptables) rules based on the ScaleIO components installed
  • Installs dependency packages such as numactl and libaio1
  • Installs oracle-java8 for the gateway

Tested with

  • Puppet 3.x, 4.x
  • ScaleIO 2.0+
  • Ubuntu 14.04/16.04, CentOS 6, CentOS 7

Setup Requirements

  • Requires the ScaleIO packages to be available in an apt or yum repository, depending on the specific components you want to install:

    emc-scaleio-mdm
    emc-scaleio-sds
    emc-scaleio-sdc
    emc-scaleio-gateway
    emc_scaleio_gui
    
  • Required modules to install

    puppet module install puppetlabs-stdlib
    puppet module install puppetlabs-firewall
    

Beginning with scaleio

puppet module install cloudscaling-scaleio

Structure and specifics

All module files reside in the root of the manifests directory.

They consist of:

  • NAME_server.pp files: install the service named NAME. They should be applied on the nodes where that service is to be installed.
  • All other .pp files: configure the ScaleIO cluster. They should be applied either on the current master MDM, or from any node if FACTER_mdm_ips="ip1,ip2,..." is set.

The main parameter for addressing components in the cluster is "name"; only SDC is addressed by "ip", for removal. All resource declarations are idempotent: they can be repeated as many times as required with the same result. Optional parameters can be specified later with the same resource declaration.
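
For example (an illustrative sketch reusing the protection-domain resource from the usage example below): re-applying the same declaration is a no-op, and re-applying it with an optional parameter added simply sets that parameter.

    puppet apply -e "scaleio::protection_domain { 'protection domain': sio_name=>'pd' }"
    # re-running with a storage pool added updates the same protection domain
    puppet apply -e "scaleio::protection_domain { 'protection domain': sio_name=>'pd', storage_pools=>['sp1'] }"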

Usage example

Below is an example deployment of a cluster with 3 MDM nodes and 3 SDS nodes.

It's possible to deploy from a local directory with the following command (replace <my_puppet_dir> with the directory where your Puppet modules live):

puppet apply --modulepath="/<my_puppet_dir>:/etc/puppet/modules" -e "command"
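
For instance, with the module checked out under a hypothetical /root/puppet and the GUI server class as the payload:

    puppet apply --modulepath="/root/puppet:/etc/puppet/modules" -e "class { 'scaleio::gui_server': }"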
  1. You might want to make sure that the kernel on the nodes where the ScaleIO SDC will be installed (compute and cinder nodes in the case of an OpenStack deployment) is suitable for the drivers present here: ftp://QNzgdxXix:[email protected]/. Look for something like Ubuntu/2.0.5014.0/4.2.0-30-generic. The local kernel version can be found with the uname -a command.
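
For example (a hypothetical check; the driver directory layout is taken from the FTP path above):

    uname -r
    # prints e.g. 4.2.0-30-generic; the node is suitable if a matching
    # directory such as Ubuntu/2.0.5014.0/4.2.0-30-generic exists in the driver tree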

  2. Deploy the servers. Each puppet apply should be run on the machine where the corresponding service is to reside (in any order or in parallel):

Deploy the master MDM and create a 1-node cluster (can be run without name and IPs to just install the package without creating a cluster)

host1> puppet apply -e "class { 'scaleio::mdm_server': master_mdm_name=>'master', mdm_ips=>'10.0.0.1', is_manager=>1 }"

Deploy the secondary MDM (can be re-run with is_manager=>0 to make it a TieBreaker)

host2> puppet apply -e "class { 'scaleio::mdm_server': is_manager=>1 }"

Deploy the TieBreaker (can be re-run with is_manager=>1 to make it a Manager)

host3> puppet apply -e "class { 'scaleio::mdm_server': is_manager=>0 }"

Deploy 3 SDS servers

host1> puppet apply -e "class { 'scaleio::sds_server': }"
host2> puppet apply -e "class { 'scaleio::sds_server': }"
host3> puppet apply -e "class { 'scaleio::sds_server': }"
  3. Configure the cluster (commands can be run from any node).

Set and export the FACTER_mdm_ips variable so that subsequent puppet apply commands can reach the cluster

export FACTER_mdm_ips='10.0.0.1,10.0.0.2'

Change default cluster password

puppet apply -e "scaleio::login {'login': password=>'admin'} -> scaleio::cluster { 'cluster': password=>'admin', new_password=>'password' }"

Log in to the cluster

puppet apply -e "scaleio::login {'login': password=>'password'}"

Add standby MDMs

puppet apply -e "scaleio::mdm { 'slave': sio_name=>'slave', ips=>'10.0.0.1', role=>'manager' }"
puppet apply -e "scaleio::mdm { 'tb': sio_name=>'tb', ips=>'10.0.0.2', role=>'tb' }"

Create a protection domain with a storage pool (fault_sets=>['fs1','fs2','fs3'] can also be specified here)

puppet apply -e "scaleio::protection_domain { 'protection domain':
  sio_name=>'pd', storage_pools=>['sp1'] }"

Add 3 SDSs to the cluster (storage pools and device paths in comma-separated lists must go in the same order)

puppet apply -e "scaleio::sds { 'sds 1':
  sio_name=>'sds1', ips=>'10.0.0.1', ip_roles=>'all', protection_domain=>'pd', storage_pools=>'sp1', device_paths=>'/dev/sdb' }"
puppet apply -e "scaleio::sds { 'sds 2':
  sio_name=>'sds2', ips=>'10.0.0.2', ip_roles=>'all', protection_domain=>'pd', storage_pools=>'sp1', device_paths=>'/dev/sdb' }"
puppet apply -e "scaleio::sds { 'sds 3':
  sio_name=>'sds3', ips=>'10.0.0.3', ip_roles=>'all', protection_domain=>'pd', storage_pools=>'sp1', device_paths=>'/dev/sdb' }"

Set the password for the user 'scaleio_client' (a non-admin user account)

puppet apply -e "scaleio::cluster { 'cluster': client_password=>'Client_Password' }"
  4. Deploy the clients (in any order or in parallel).

Deploy the SDC service (it should run on the same nodes that volumes will be mapped to)

host1> puppet apply -e "class { 'scaleio::sdc_server': mdm_ip=>'10.0.0.1,10.0.0.2' }"

Deploy the Gateway server (password and IPs are optional and can be set later with the same command)

host2> puppet apply -e "class { 'scaleio::gateway_server': mdm_ips=>'10.0.0.1,10.0.0.2', password=>'password' }"

Deploy GUI (optional)

host3> puppet apply -e "class { 'scaleio::gui_server': }"

Performance tuning

  • The manifest scaleio::sds_server sets the noop I/O scheduler for all SSD disks (a quick way to verify this is shown after this list).

  • The manifests scaleio::sdc and scaleio::sds apply the high_performance profile to SDCs and SDSs. To use the regular profile, set the performance_profile parameter, e.g.

    puppet apply -e "scaleio::sds { 'sds 1':
      sio_name=>'sds1', ips=>'10.0.0.1', protection_domain=>'pd', storage_pools=>'sp1',
      device_paths=>'/dev/sdb', performance_profile=>'default' }"
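
To verify which scheduler a disk uses (a generic Linux check, assuming /dev/sdb is one of the SDS SSD devices; the active scheduler appears in brackets):

    cat /sys/block/sdb/queue/scheduler
    # typical output on these distributions: [noop] deadline cfq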
    

Reference

  • puppetlabs-stdlib
  • puppetlabs-firewall

Limitations

This module currently supports only ScaleIO 2.0 and presumes that the Linux kernel of the OS hosting the SDC service is suitable for the scini driver shipped in the emc-scaleio-sdc package. Alternatively, after SDC deployment the scini driver can be updated on the system according to the ScaleIO 2.0 deployment guide.

No InstallationManager support is provided. Provisioning of LIA and CallHome is not available.

Contact information

puppet-scaleio's People

Contributors

alexandrelevine, alexey-mr, andrey-mp, brianvans, clintkitson, codenrhoden, jonasrosland, sushilrai, tikitavi


puppet-scaleio's Issues

Ubuntu 16.04

Hi, the readme indicates Ubuntu 14.04; has any testing or work been done on 16.04 yet?

Duplicate declaration: Package[libaio1]

Hi,

I'm trying to set up a 3-node cluster (with MDM and SDS on each of the three) and, if I understand the README correctly, for the first master node one needs something like this:

class { 'scaleio::mdm_server': master_mdm_name=>'master', mdm_ips=>'10.98.180.23', is_manager=>1 }
class { 'scaleio::sds_server': }

But that gives:
Duplicate declaration: Package[libaio1] is already declared in file modules/scaleio/manifests/common_server.pp:18

Either of the above lines can be used, but not both.
(testing on master with Ubuntu 16.04)

Support for RHEL7

Is there scope to expand this module to also cover Red Hat Enterprise Linux 7?

Just can't get REX-Ray working with this!

Hi,
I've been trying to get this script working the way it's supposed to for over two weeks now (don't laugh, please) with little success. Initially it was things like Vagrant missing the correct plugins and certificate errors blocking files from being downloaded, but now that I've jumped those hurdles, problems with REX-Ray are starting to pop up. When I ssh into mdm1 and try running REX-Ray CLIs, one of two distinct things happens:

  • Any rexray input will make mdm1 just hang until it complains about "fatal runtime error: out of memory"
  • The CLIs will work, but not as expected. "rexray help" works fine, but "rexray get-instance" returns nothing, and "rexray new-volume" shows all the inputs as null (i.e. size=0, name="", etc.) and doesn't actually create a volume.

I'm certain I've followed the instructions to a T, so I don't know what could be wrong. Have you experienced these issues before?

Duplicate declaration: Package[wget] is already declared

Hi,

I have wget already defined in a base common class that is applied to all servers, so when doing "class { 'scaleio::sds_server': }" one gets:

scaleio::sds_server: Duplicate declaration: Package[wget] is already declared

Should such a common base package really be declared as a dependency? I understand why one does it, but unfortunately Puppet does not allow the same package to be declared in several places.

(testing on Ubuntu 16.04 from master)
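
A common mitigation for this class of problem (a sketch only; the module does not currently do this) is to declare shared packages with ensure_packages from puppetlabs-stdlib, which the module already requires. ensure_packages only adds a package resource if it is not already in the catalog:

    # hypothetical change where the module declares shared packages:
    # instead of a plain package resource, use stdlib's ensure_packages
    # so that other classes may declare the same packages too
    ensure_packages(['wget', 'libaio1'])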
