The easy way to create and manage a personal cloud environment. mHC was built with Shell, Proxmox-VE, Packer, Terraform, Ansible, and MAAS, and is not fully reliable for production environments.
Proxmox-VE is an open-source server virtualization platform. It includes two virtualization technologies: Kernel-based Virtual Machine (KVM) and container-based virtualization (LXC). Proxmox-VE can run on a single node or as a cluster of many nodes, so your virtual machines and containers can run with high availability.
- Download the installer ISO image from: Proxmox-VE ISO Image
- Create a USB flash drive and boot from USB
- balenaEtcher is an easy way to create a Proxmox-VE USB flash drive.
Installing Proxmox VE

The Proxmox VE menu will be displayed; select Install Proxmox VE to start the normal installation. Click for more detail about Options.
After selecting Install Proxmox VE and accepting the EULA, the prompt to select the target hard disk(s) will appear. The Options button opens a dialog to select the target file system. In this guide we select the default file system, ext4, or xfs (unlike the one shown in the screenshot). The installer creates a Volume Group (VG) called pve, and additional Logical Volumes (LVs) called root, data, and swap. To control the size of these volumes use:
Click for more detail about Advanced LVM Options.
After setting the disk options, the next page asks for basic configuration options such as the location, the time zone, and the keyboard layout. These only need to be changed in the rare case that auto-detection fails or a different keyboard layout should be used.
Next, the password of the superuser (root) and an email address need to be specified. The password must be at least 5 characters; however, it is highly recommended that you use a stronger password of at least 12 to 14 characters. The email address is used to send notifications to the system administrator.
The last step is the network configuration. Note that during installation you can use either an IPv4 or an IPv6 address, but not both; to configure a dual-stack node, add additional IP addresses after the installation. A Proxmox cluster consisting of 3 physical servers will be created, so the network information for each of the 3 nodes is given below.
The next step shows a summary of the previously selected options. Re-check every setting and use the Previous button if a setting needs to be changed. To accept, press Install. The installation starts formatting disks and copying packages to the target. Please wait until this step has finished, then remove the installation medium and restart your system. Point your browser to the IP address given during installation (https://youripaddress:8006) to reach the Proxmox web interface. The default login is "root", with the root password defined during the installation process (step 4).
- After the installation is complete, the files in which the APT repositories are defined should look as follows so that the APT package management tool works correctly.
- File /etc/apt/sources.list

```
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
deb http://security.debian.org/debian-security buster/updates main contrib
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
```
- Note: The pve-no-subscription repository is provided by proxmox.com but is NOT recommended for production use.
- File /etc/apt/sources.list.d/pve-enterprise.list

```
#deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
```
- Then check `locale`. If there is an error like "Cannot set LC_ALL (or others) to default locale: No such file or directory", run the following commands for each error:

```shell
echo "export LC_CTYPE=en_US.UTF-8" >> ~/.bashrc
echo "export LC_ALL=en_US.UTF-8" >> ~/.bashrc
source ~/.bashrc
```

- Then run the following commands once and choose en_US.UTF-8:

```shell
locale-gen en_US en_US.UTF-8
dpkg-reconfigure locales
```
- Get the latest updates:

```shell
apt update && apt upgrade -y && apt dist-upgrade
```

- Restart/reboot the system.
- For more information, see Create Proxmox-VE Cluster.
Ubuntu ISO images for popular architectures can be downloaded from releases of Ubuntu. Other Ubuntu images not found there, such as builds for less popular architectures, other non-standard or unsupported images, and daily builds, can be downloaded from the cdimage server. For old releases, see old-releases of Ubuntu.
As of the Ubuntu LTS release in 2020, the server documentation has moved to the Ubuntu Server Guide. However, the detailed installation guide for the latest Ubuntu LTS can be found here.
Fully automated installations are possible on Ubuntu using the Ubuntu Installer (debian-installer) or the Ubuntu Live Server Installer (autoinstall).
- The Ubuntu Installer (based on the Debian Installer, and so often called simply debian-installer or just d-i) consists of a number of special-purpose components that perform each installation task. The debian-installer (d-i) supports automating installs via preconfiguration (preseed.cfg) files. The preseeding method provides a way to set answers to questions asked during the installation process, without having to enter the answers manually while the installation is running. For more information visit Automating the Installation using Preseeding, Example Preseed File and Packer Preseed Ubuntu.
- However, Ubuntu announced that it would complete the transition to the Live Server Installer (autoinstall) with 20.04 LTS. It lets you answer all the configuration questions ahead of time with an autoinstall config and lets the installation process run without any interaction. The autoinstall config is provided via a cloud-init configuration, which is almost endlessly flexible. The live server installer is now the preferred medium for installing Ubuntu Server on all architectures. For more information visit Ubuntu Autoinstall Quick Start and Automated Server Installs Config File Reference.
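As a minimal illustration of the autoinstall approach, the following sketch generates a bare-bones user-data file of the kind the live server installer consumes. The hostname, username, and password hash are placeholder values; a real config would also carry storage, ssh, and package sections.

```shell
# Sketch: generate a minimal autoinstall user-data file (placeholder values).
cat > user-data <<'EOF'
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: ubuntu-template        # placeholder hostname
    username: ubuntu                 # placeholder user
    # crypted password; generate a real one with: mkpasswd -m sha-512
    password: "$6$examplehash$replace.me"
EOF

# The NoCloud seed also expects an (often empty) meta-data file alongside it.
touch meta-data
```

Both files are then served to the installer (e.g. on a small web server or a seed ISO), which is how the Packer builds below feed their configs in.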
Ubuntu also offers Cloud Images. Ubuntu Cloud Images are the official Ubuntu images: pre-installed disk images customized by Ubuntu engineering to run on public clouds that provide Ubuntu Certified Images, OpenStack, LXD, and more. They will be used in create-template-via-cloudinit.sh due to the fast and easy setup.
To create Ubuntu images via ISO without using cloud images, the following repositories and articles can be consulted:
- Automating Ubuntu 20.04 installs with Packer
- Automating Ubuntu Server 20.04 with Packer
- Packer build - Ubuntu Images(autoinstall & cloud-config)
- Packer Ubuntu 20.04 Image(autoinstall & cloud-config)
- Madalynn Packer - Ubuntu Image(autoinstall & cloud-config)
- Packer Proxmox Ubuntu Templates(ansible & preseed)
- Packer Boxes(ansible & preseed)
- Packer Proxmox Ubuntu Templates(preseed)
- Packer Ubuntu Templates(preseed)
- Packer Templates for Ubuntu(preseed)
- Automated image builds with Jenkins, Packer, and Kubernetes
Creating Ubuntu Image Documents
- Install Ubuntu ISO images
- Ubuntu Server Guide
- Ubuntu Installer(debian-installer)
- Ubuntu Live Server Installer(autoinstall)
- Automating the Installation using Preseeding
- Example Preseed File
- Packer Preseed Ubuntu
- Server installer plans for 20.04 LTS
- Ubuntu Autoinstall Quick Start
- Automated Server Installs Config File Reference
- Ubuntu Cloud Images
- Ubuntu Enterprise Cloud - Images
After installation, to create cloud-init template(s), create-template-via-cloudinit.sh should be executed on the Proxmox-VE server(s). The script is based on create-cloud-template.sh developed by chriswayg.
create-template-via-cloudinit.sh Execution Prerequisites

| # | Prerequisite |
|---|---|
| 1 | `create-template-via-cloudinit.sh` must be executed on a Proxmox VE 6.x server. |
| 2 | A DHCP server should be active on `vmbr0`. |
| 3 | Download the latest version of the script on the Proxmox VE server: `curl https://raw.githubusercontent.com/BarisGece/mHC/main/proxmox-ve/create-template-via-cloudinit.sh > /usr/local/bin/create-template-via-cloudinit.sh && chmod -v +x /usr/local/bin/create-template-via-cloudinit.sh` |
| 4 | Caution! MUST BE DONE to use cloud-init-config.yml. The cloud-init files need to be stored in a snippet. This is not documented in much detail in Proxmox-VE qm cloud_init, but Alex Williams kept us well informed. Use `qm set` with `--cicustom`; if cloud-init-config.yml is present, the following command runs automatically in create-template-via-cloudinit.sh: `qm set 100 --cicustom "user=snippets:snippets/user-data,network=snippets:snippets/network-config,meta=snippets:snippets/meta-data"` |
| 5 | Prepare a cloud-init user config cloud-init-config.yml in the working directory. sample-cloud-init-config.yml can be used as a sample. For more information see Cloud-Init-Config Sample. |
| 6 | For the migration to complete successfully, the Proxmox storage configuration should be set as follows: local (Type - Directory). |
| 7 | Run the script: `$ create-template-via-cloudinit.sh` |
| 8 | Clone the finished template from the Proxmox GUI and test. |
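The storage requirement in row 6 corresponds roughly to a directory storage entry like the following in /etc/pve/storage.cfg. This is a sketch: the exact set of content types is an assumption, but `snippets` must be included so the cloud-init files referenced by `--cicustom` can be stored there.

```
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,backup,snippets
```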
- Network Device
- The VirtIO paravirtualized NIC should be used if you aim for maximum performance. Like all VirtIO devices, the guest OS should have the proper driver installed.
- The VirtIO model provides the best performance with very low CPU overhead. If your guest does not support this driver, it is usually best to use e1000.
```shell
qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0
```
- Hard Disk -- Bus/Controller -- Cache
- If you aim at maximum performance, you can select a SCSI controller of type VirtIO SCSI single which will allow you to select the IO Thread option.
- cache=none seems to give the best performance and is the default since Proxmox 2.X. However, cache=unsafe doesn't flush data, so it is the fastest but least safe option. This information is based on using raw volumes; other volume formats may behave differently. For more information see Performance Tweaks.
- Use a raw disk image instead of qcow2 if possible:

```shell
qm importdisk 9000 /tmp/VMIMAGE local-lvm --format raw
qm set 9000 --scsihw virtio-scsi-single --scsi0 local-lvm:vm-9000-disk-0,iothread=1
```
- CPU Types
- If you have a homogeneous cluster where all nodes have the same CPU, set the CPU type to host, as in theory this will give your guests maximum performance.
```shell
qm set 9000 --cpu host
```
- NUMA(non-uniform memory access)
- With NUMA, memory can be evenly distributed among CPUs, which improves performance. Also, to enable CPU and memory hot-plugging in Proxmox-VE, the NUMA option must be enabled. To enable the NUMA option on a VM (here the example VM 9000), execute the following command:

```shell
qm set 9000 --kvm 1 --numa 1
```
- If the following command returns more than one node, your host system has a NUMA architecture (run `numactl --hardware` without the grep for full details):

```shell
numactl --hardware | grep available
```
- This command shows the host's NUMA-aware nodes and their memory statistics:

```shell
numastat
```
- HOT-PLUGGING
- The hotplugging feature provides the ability to add or remove devices or resources from a virtual machine without rebooting. To enable hotplug on a VM (here the example VM 9000), execute the following command:

```shell
qm set 9000 --hotplug disk,network,usb,memory,cpu
```
- NUMA option MUST be ENABLED.
- Preparing Linux Guests
- A kernel newer than 4.7 is recommended for Linux Guests for all hotplugging features to work.
- The following kernel modules should be installed on Linux guests: `acpiphp` and `pci_hotplug`. To load them automatically during boot, add them to `/etc/modules` (Caution! Lines beginning with "#" are ignored). The command automating this was added to sample-cloud-init-config.yml. To load them immediately:

```shell
modprobe acpiphp
modprobe pci_hotplug
```
- After kernel 4.7, only the following kernel parameter needs to be added to `/etc/default/grub` so that hotplugged resources come online automatically during boot. It is also added in sample-cloud-init-config.yml:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash memhp_default_state=online"
```
- Update the grub boot loader:

```shell
sudo update-grub
```
- Reboot the Linux guest.
- Sample command for hotplugging vCPUs:
  - In Proxmox VE the maximum number of plugged vCPUs is always `cores * sockets` (Total Cores = cores * sockets); the vCPUs value cannot exceed the total cores.

```shell
qm set 9000 -vcpus 4
```
| Device | Kernel | Hotplug | Unplug | OS |
|---|---|---|---|---|
| Disk | All | Linux/Windows | Linux/Windows | Linux/Windows |
| NIC | All | Linux/Windows | Linux/Windows | Linux/Windows |
| USB | All | Linux/Windows | Linux/Windows | Linux/Windows |
| CPU | 3.10+ | Linux/Windows | Linux (4.10+) | Linux/Windows Server 2008+ |
| Memory | 3.10+ | Linux/Windows | Linux (4.10+) | Linux/Windows Server 2008+ |
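The guest-side preparation steps above (modules file, GRUB parameter, update-grub) can be sketched as one script. The paths are parameterized here so the sketch can be tried outside a real guest; on an actual VM run it as root against the real paths noted in the comments.

```shell
# Sketch of the guest-side hotplug preparation described above.
# Demo paths; on a real Linux guest use /etc/modules and /etc/default/grub.
MODULES_FILE="./etc-modules"      # real guest: /etc/modules
GRUB_FILE="./etc-default-grub"    # real guest: /etc/default/grub
touch "$MODULES_FILE" "$GRUB_FILE"

# 1. Load the hotplug modules on every boot (lines starting with '#' are ignored).
for m in acpiphp pci_hotplug; do
    grep -qx "$m" "$MODULES_FILE" || echo "$m" >> "$MODULES_FILE"
done

# 2. Enable memory hotplug via a kernel parameter (kernels newer than 4.7).
grep -q 'memhp_default_state=online' "$GRUB_FILE" || \
    echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash memhp_default_state=online"' >> "$GRUB_FILE"

# 3. On the real guest, regenerate the GRUB config and reboot:
#    sudo update-grub && sudo reboot
```

The `grep ||` guards make the script idempotent, so it is safe to bake into a cloud-init run that may execute more than once.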
- Ballooning Device
- Amount of target RAM for the VM in MB; using zero disables the balloon driver. In general, you should leave ballooning enabled, but if you want to disable it (e.g. for debugging purposes), simply uncheck Ballooning Device or set `balloon: 0` in the configuration.
- Even when using a fixed memory size, the ballooning device is added to the VM, because it delivers useful information such as how much memory the guest really uses.
- All Linux distributions released after 2010 have the balloon kernel driver included. For Windows OSes, the balloon driver needs to be added manually and can incur a slowdown of the guest, so we don't recommend using it on critical systems. The passing of memory between host and guest is done via a special balloon kernel driver running inside the guest, which grabs or releases memory pages from the host. A good explanation of the inner workings of the balloon driver can be found here.
`create-proxmox-users.sh` will create Proxmox users for Packer, Terraform and Ansible. The password information of the users to be created is read from environment variables. Before running the script, define the variables with the following environment variable names. For more information see pveum User Management.

- `$PACKER_PVE_USER`, `$PACKER_PVE_PASSWORD`
- `$TERRAFORM_PVE_USER`, `$TERRAFORM_PVE_PASSWORD`
- `$ANSIBLE_PVE_USER`, `$ANSIBLE_PVE_PASSWORD`

`create-proxmox-users.sh` must be executed once on a Proxmox VE 6.x server:

```shell
curl https://raw.githubusercontent.com/BarisGece/mHC/main/proxmox-ve/create-proxmox-users.sh > /usr/local/bin/create-proxmox-users.sh && chmod -v +x /usr/local/bin/create-proxmox-users.sh
```
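For example, the variables can be exported in the shell session before the script is run. The user names and passwords below are placeholders, not values the script mandates.

```shell
# Placeholder credentials for the three automation users; replace before use.
export PACKER_PVE_USER='packer'
export PACKER_PVE_PASSWORD='change-me-packer'
export TERRAFORM_PVE_USER='terraform'
export TERRAFORM_PVE_PASSWORD='change-me-terraform'
export ANSIBLE_PVE_USER='ansible'
export ANSIBLE_PVE_PASSWORD='change-me-ansible'

# Then run the script once on the Proxmox VE node:
# create-proxmox-users.sh
```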
Proxmox-VE Documents
- Admin Guide - PDF
- Admin Guide - HTML
- Wiki Page
- Qemu/KVM(qm) Virtual Machines-Guide
- Qemu/KVM(qm) VM Templates-Wiki
- Proxmox-VE qm Commands
- Proxmox(qm) Cloud-Init Support-Guide
- Proxmox(qm) Cloud-Init Support-Wiki
- Proxmox(qm) Cloud-Init Support FAQ-Wiki
- Canonical cloud-init
- Cloud-Init-Config Sample
- Cloud-Init-Config Documentation
- Performance Tweaks
- Virtio Balloon
- NUMA
- Hotplug
- pveum User Management
- Ansible role to configure Proxmox server
- Provision Proxmox VMs with Ansible, quick and easy
Packer is an automated machine image creation tool; Proxmox-VE templates will be created with Packer to make them more standardized and automated.
- Add the HashiCorp GPG key:

```shell
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
```

- Add the official HashiCorp Linux repository:

```shell
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
```

- Update and install:

```shell
sudo apt-get update && sudo apt-get install packer
```
[Packer Proxmox Builder][Packer Proxmox Builder] will be used to create the Proxmox-VE template. It provisions and configures the VM and then converts it into a template. The Packer Proxmox builder performs its operations via the Proxmox Web API.
The Packer Proxmox builder can create new images using both an ISO (proxmox-iso) and existing cloud-init images (proxmox-clone). Creating a new image using proxmox-iso will be developed later.
For now, Proxmox-VE templates will be created with proxmox-clone using the existing cloud-init images created via `create-template-via-cloudinit.sh`.
Packer Execution Prerequisites

| # | Prerequisite |
|---|---|
| 1 | To skip validating the certificate, set `insecure_skip_tls_verify = true` in sources.pkr.hcl |
| 2 | For Packer to run successfully, qemu-guest-agent must be installed on the VMs, and the `qemu_agent` configuration option should be true in sources.pkr.hcl. For more detail see Error getting SSH address 500 QEMU guest agent is not running |
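The two prerequisites above map to source-block options roughly like the following sketch. The `proxmox-clone` source label and the block's shape are illustrative, not copied from this repository; only the two option names come from the prerequisites.

```hcl
# Sketch: the relevant options in sources.pkr.hcl (illustrative source label).
source "proxmox-clone" "ubuntu" {
  # Prerequisite 1: skip TLS certificate validation against the PVE API.
  insecure_skip_tls_verify = true
  # Prerequisite 2: requires qemu-guest-agent running inside the VM.
  qemu_agent = true
}
```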
In Packer, assigning values to build variables with HCL2 can be done in the following ways:
- Command-line flags
  - Variables can be defined directly on the command line with the `-var` flag. We will not use this method. `packer build -var 'weekday=Sunday' -var 'flavor=chocolate'`
- Variables file
  - To persist variable values, create a `*.pkrvars.hcl` file and assign variables within it. Packer will also automatically load any var file whose name matches `*.auto.pkrvars.hcl`, without the need to pass the file on the command line.
    - `*.pkrvars.hcl` => `packer build -var-file="*.pkrvars.hcl" .`
    - `*.auto.pkrvars.hcl` => `packer build .`
- Environment variables
  - Packer reads environment variables of the form `PKR_VAR_name` to find the value for a variable. `export PKR_VAR_access_key=Key1 && packer build .`
- Variable defaults
  - If no value is assigned to a variable via any of these methods and the variable has a `default` key in its declaration, that value is used. `packer build .`
- Notes about Packer variables
  - Don't save sensitive data to version control via variables files. You can create a local secret variables file or use environment variables.
  - Multiple `-var-file` flags can be provided. `packer build -var-file="secret.pkrvars.hcl" -var-file="production.pkrvars.hcl" .`
  - If a default value is set in variables.pkr.hcl, the variable is optional; otherwise it must be set. To force a variable to be set, omit the default value, as in `variable "vm_id" {...}` in variables.pkr.hcl.
  - The `variable` block, also called the `input-variable` block, defines variables within your Packer configuration.
  - Debug => `PACKER_LOG=1 packer build -debug -on-error=ask .` Release => `PACKER_LOG=1 packer build .`
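A small end-to-end sketch of the two methods this project prefers, the auto-loaded variables file and an environment variable for a sensitive value. The variable names `vm_id` and `pve_password` are illustrative.

```shell
# 1. An auto-loaded variables file (picked up by a plain `packer build .`):
cat > local.auto.pkrvars.hcl <<'EOF'
vm_id = 9001
EOF

# 2. An environment variable for a sensitive value (kept out of version control):
export PKR_VAR_pve_password='change-me'

# With both in place, a plain build picks them up:
# PACKER_LOG=1 packer build .
```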
An `input-variable` cannot be used in another input variable, so locals can be used instead. The `locals` block, also called the `local-variable` block, defines locals within your Packer configuration. Local values assign a name to an expression, which can then be used multiple times within a folder.
```hcl
# locals.pkr.hcl
locals {
    # locals can be bare values like:
    wee = local.baz
    # locals can also be set with other variables:
    baz = "Foo is '${var.foo}' but not '${local.wee}'"
}
```
Packer Documents
- [Packer Proxmox Builder][Packer Proxmox Builder]
- proxmox-clone & proxmox-iso
- Input Variables and `local` variables
- Creating Proxmox Templates with Packer - Aaron Berry
Terraform is an Infrastructure as Code tool to securely and efficiently provision, manage, and version infrastructure. With more than 1000 modules and more than 200 providers, it makes it easy to manage existing and popular infrastructure, cloud, or service providers as well as custom on-premises solutions.
The operations on Proxmox-VE are performed over Proxmox Web API as in the Packer. There is no officially supported Proxmox Provider on Terraform, but there are two Community-Supported Providers as below.
- Add the HashiCorp GPG key:

```shell
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
```

- Add the official HashiCorp Linux repository:

```shell
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
```

- Update and install:

```shell
sudo apt-get update && sudo apt-get install terraform
```
The Terraform Proxmox provider can create virtual machines (instances, guest OS) from an ISO or by CLONE (existing images), like the Packer Proxmox builder. Cloud-init enabled Proxmox-VE templates were created by `create-template-via-cloudinit.sh` & `packer_proxmox-clone`. New instances will be created using these templates. The Terraform code can be found here.
The same variable cannot be assigned multiple values within a single source, so variables are loaded in the following order, with later sources overriding earlier values:
- Environment variables
- The `terraform.tfvars` file
- The `terraform.tfvars.json` file
- Any `*.auto.tfvars` or `*.auto.tfvars.json` files, processed in lexical order of their file names
- Any `-var` and `-var-file` options on the command line, in the order they are provided
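The load order can be exercised with a sketch like this (the variable name `vm_count` is illustrative). The `-var` flag is loaded last, so it wins over both the tfvars file and the environment variable.

```shell
# A value in terraform.tfvars (loaded after environment variables):
cat > terraform.tfvars <<'EOF'
vm_count = 1
EOF

# Environment variables of the form TF_VAR_name are read first (lowest precedence):
export TF_VAR_vm_count=0

# The command line is loaded last, so this run would use vm_count = 3:
# terraform plan -var 'vm_count=3'
```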
- `pm_api_url` is required. If `var.api_url` is not set, `PM_API_URL` must be set as an environment variable.
- `pm_user` is required. If `var.user` is not set, `PM_USER` must be set as an environment variable.
- `pm_password` is required. If `var.password` is not set, `PM_PASS` must be set as an environment variable; this is one of the recommended ways to provide it.
- If a 2FA OTP code is to be used, `var.otp` must be defined; if it is not, `PM_OTP` must be set as an environment variable. Also, `PM_OTP_PROMPT` can be set as an environment variable to prompt for the OTP 2FA code.
can be set as environment variable to ask for OTP 2FA code. - Either
clone
oriso
must be set in resource block variables. If both are set, theclone
will be accepted. Therefore; only set one of them and the value of the other should benull
.- Sample for ISO: Using an iso file uploaded on the local storage =
local:iso/proxmox-mailgateway_2.1.iso
- Sample for CLONE: The name of the Proxmox-VE template or image to be used to provision the new VM =
ubuntu2004-cloud-template
.
- Sample for ISO: Using an iso file uploaded on the local storage =
- `full_clone`: the result of such a copy is an independent VM; the new VM does not share any storage resources with the original. The default value is `true`. However, a full clone needs to read and copy all VM image data, which is usually much slower than creating a linked clone.
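Putting the provider requirements together, the environment-variable route looks like this sketch. The URL and credentials are placeholders; the `terraform@pve` user matches the kind of account `create-proxmox-users.sh` sets up.

```shell
# Placeholder connection settings for the Proxmox provider.
export PM_API_URL='https://pve-node:8006/api2/json'   # placeholder node URL
export PM_USER='terraform@pve'                        # placeholder user
export PM_PASS='change-me'                            # placeholder password

# With these exported, no pm_* credentials need to appear in .tf files:
# terraform init && terraform plan
```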
Terraform Documents
MAAS (Metal as a Service) allows you to treat physical servers in the cloud like VM instances. It turns bare metal into a flexible cloud-like resource, so there is no need to manage servers individually. For more information see MAAS Docs & Proxmox - MAAS - JuJu by VectOps.