This repo is deprecated; please use bootstrap-openstack-k8s instead.
The main objective is to create a small OpenStack infrastructure on top of an OVH public cloud project (which itself runs on OpenStack, so we are effectively building OpenStack on OpenStack).
         ssh    +----------+
 you +--------> | deployer |
                +----------+
                     |
               ansible (ssh)
                     |
 +----------+  +----------+  +----------+          +---+
 |  rabbit  |  |   nova   |  | neutron  | <------> | V |
 +----------+  +----------+  +----------+          | R |
                                                   | a |
 +----------+  +----------+  +----------+          | c |
 |  mysql   |  |  glance  |  | compute  | <------> | k |
 +----------+  +----------+  +----------+          +---+
                                                     |
 +----------+  +----------+  +----------+            |
 | horizon  |  | keystone |  | designate|            |
 +----------+  +----------+  +----------+            |
      |                                              |
      |                        Instances public access
 HTTP API access               with /28 network block
      |                                              |
      +-----------+----------------------------------+
                  |
               Internet
Every machine will have a public IP and be accessible from the Internet.
They will also be connected to each other through a management network.
The neutron and compute machines will additionally be connected to a dedicated network through the vRack.
In this vRack we will route a failover IP block (a /28 in this example) so that we can assign public IPs to instances and routers.
The deployer is used to configure the other machines (acting as an admin / jump host).
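To get a feel for what a /28 failover block gives you, here is a quick check with Python's standard ipaddress module (the block used below is a documentation placeholder, not a real OVH allocation):

```python
import ipaddress

# Placeholder /28 block; substitute the block OVH actually assigns you
block = ipaddress.ip_network("192.0.2.0/28")

print(block.num_addresses)        # 16 addresses in total
hosts = list(block.hosts())       # addresses usable for instances / routers
print(hosts[0], "->", hosts[-1])  # 192.0.2.1 -> 192.0.2.14
```

So a /28 gives 16 addresses, of which 14 are usable hosts; a few of those are typically consumed by gateway/router needs, leaving the rest for instances.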
To start working on this project, you must have:
- an account on OVH
- a cloud project
- a vRack
See here: https://www.ovh.com/fr/support/new_nic.xml
You must create two subnets:
- one named management, with a VLAN ID of your choice (the value does not matter) and DHCP enabled
- one named public, without any VLAN ID and DHCP disabled
Keep these exact names, as the bootstrap script refers to them.
Example of creating both networks and their subnets from the CLI:
$ openstack network create management
$ openstack network create public --provider-network-type=vrack --provider-segment=0
$ openstack subnet create --dhcp --gateway none --subnet-range 192.168.1.0/24 --network management --dns-nameserver 0.0.0.0 192.168.1.0/24
$ openstack subnet create --no-dhcp --gateway none --subnet-range 192.168.1.0/24 --network public --dns-nameserver 0.0.0.0 192.168.1.0/24
To order the failover IP block (used for the public network) through the OVH API, you can run the script data/order_ip_block.py:
$ pip3 install -r requirements.txt
$ python3 order_ip_block.py
Please pay the BC 12345678 --> https://www.ovh.com/cgi-bin/order/displayOrder.cgi?orderId=12345678&orderPassword=ABCD
Done
Once your BC (Bon de Commande / purchase order) is paid, you should receive a /28 block in your OVH manager. You can then move this pool of IPs into your vRack from the manager. Once that is done, bootstrap the infrastructure:
$ git clone https://github.com/arnaudmorin/bootstrap-openstack.git
$ cd bootstrap-openstack
$ pip install python-openstackclient
$ source openrc.sh
$ ./bootstrap.sh
This will create 10 instances, connected to an external network (Ext-Net) and two vRack networks (public and management).
Each instance is going to be dedicated to one of the core OpenStack services (see architecture).
You will also have a special instance named deployer which you will use as jump host / ansible executor.
Wait for the instances to be ACTIVE. You can check the status with:
$ openstack server list
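If you prefer to check readiness programmatically, openstackclient can emit JSON with -f json (and -c to select columns). A small sketch parsing such output; the sample data here is made up:

```python
import json

# Made-up sample of `openstack server list -f json -c Name -c Status` output
sample = '''[
  {"Name": "deployer", "Status": "ACTIVE"},
  {"Name": "nova",     "Status": "BUILD"}
]'''

servers = json.loads(sample)
not_ready = [s["Name"] for s in servers if s["Status"] != "ACTIVE"]
print(not_ready or "all ACTIVE")  # here: ['nova']
```

In a real check you would feed the actual CLI output into json.loads and loop until the not_ready list is empty.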
Now that your infrastructure is bootstrapped, you can start the configuration of OpenStack itself from the deployer machine.
To do so, you will need to connect to the deployer with the SSH key (named zob) that was created during bootstrapping:
$ chmod 600 data/zob.key # Because permissions of the key were too loose when you cloned this repo.
$ ssh -i data/zob.key debian@deployer_ip # Replace deployer_ip with the real IP.
Now that you are inside the deployer, become root:
$ sudo su -
Behind the scenes, the bootstrapping created the instances with custom cloud-init scripts (passed as user data). Those scripts run as a postinstall step inside each machine at boot time.
As this postinstall can take some time (a few minutes), if you SSH into your machine early you can follow the postinstall logs live:
$ tail -f /var/log/postinstall.log
The postinstall is finished when you can read done at the end of the log.
Ansible uses a dynamic inventory that asks OpenStack for all the instances you currently have in your infrastructure. A config file should already be present in /etc/ansible/openstack.yml; check its content and update it if necessary. Running:
$ /etc/ansible/dynhosts --list
should return something ending like:
...
"ovh": [
"designate",
"horizon",
"mysql",
"compute-1",
"neutron",
"glance",
"nova",
"keystone",
"rabbit",
"deployer"
]
}
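The inventory JSON can also be consumed from a script, for instance to sanity-check that all expected hosts are present. A minimal sketch, assuming output shaped like the listing above (the sample string below is hand-written, not live output):

```python
import json

# Hand-written sample mimicking `/etc/ansible/dynhosts --list` output
sample = ('{"ovh": ["designate", "horizon", "mysql", "compute-1", "neutron", '
          '"glance", "nova", "keystone", "rabbit", "deployer"]}')

inventory = json.loads(sample)
print(len(inventory["ovh"]))           # 10 hosts
print("deployer" in inventory["ovh"])  # True
```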
Run ansible on the deployer itself first, so it can learn the IP addresses of your infrastructure.
$ ansible-playbook /etc/ansible/deploy_openstack.yml --tags deployer
Continue with rabbit
$ ansible-playbook /etc/ansible/deploy_openstack.yml --tags rabbit
Then mysql
$ ansible-playbook /etc/ansible/deploy_openstack.yml --tags mysql
Then keystone
$ ansible-playbook /etc/ansible/deploy_openstack.yml --tags keystone
Glance is one of the easiest services to install, so try installing it on your own.
To do so, you can read the documentation here: https://docs.openstack.org/glance/stein/install/index.html
Or, if you are eager to get to the end of this bootstrapping, you can use:
$ ansible-playbook /etc/ansible/deploy_openstack.yml --tags glance
Then nova
$ ansible-playbook /etc/ansible/deploy_openstack.yml --tags nova
Then placement
$ ansible-playbook /etc/ansible/deploy_openstack.yml --tags placement
Then neutron
$ ansible-playbook /etc/ansible/deploy_openstack.yml --tags neutron
Then horizon
$ ansible-playbook /etc/ansible/deploy_openstack.yml --tags horizon
Then compute (this will also trigger nova again, to let it know about the new compute node)
$ ansible-playbook /etc/ansible/deploy_openstack.yml --tags compute
And finally, designate
$ ansible-playbook /etc/ansible/deploy_openstack.yml --tags designate
Or if you want to perform all in one shot:
$ ansible-playbook /etc/ansible/deploy_openstack.yml
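The per-service order above can also be captured as data, for instance to print (or later run) the commands in sequence. This sketch simply mirrors the tags used in this guide:

```python
# Deployment order, mirroring the tags used above
ORDER = ["deployer", "rabbit", "mysql", "keystone", "glance",
         "nova", "placement", "neutron", "horizon", "compute", "designate"]
PLAYBOOK = "/etc/ansible/deploy_openstack.yml"

commands = [f"ansible-playbook {PLAYBOOK} --tags {tag}" for tag in ORDER]
for command in commands:
    print(command)
```

You could pass each command to subprocess.run on the deployer instead of printing, but the ordering is the important part: each service depends on the ones before it.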
On the deployer server, you will find the openrc_admin and openrc_demo files, which can be used to access your brand new OpenStack infrastructure. You will also find a helper script containing basic functions to create images, networks, keypairs, security groups, etc.
From your deployer node, as root:
# Source helper functions
source helper
# Following actions are done as admin
source openrc_admin
create_flavors
create_image_cirros
create_image_debian
# Before running this one, adjust the parameters with your network settings
create_network_public 5.135.0.208/28 5.135.0.222
# Following actions are done as demo
source openrc_demo
create_network_private
create_rules
create_key
create_server_public
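For create_network_public, the gateway in the example above (5.135.0.222) is the last usable address of the 5.135.0.208/28 block, just before the broadcast address. You can double-check the right gateway for your own block with the ipaddress module:

```python
import ipaddress

# The /28 block from the example above; substitute your own
block = ipaddress.ip_network("5.135.0.208/28")
gateway = block.broadcast_address - 1  # last usable IP of the block
print(gateway)  # 5.135.0.222
```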
You can also browse the dashboard by opening a URL like: http://your_horizon_ip/horizon/