pspglb / mhc (forked from barisgece/mhc)

License: MIT License


mhc's Introduction

mini Home Cloud

The easy way to create and manage a personal cloud environment. mHC has been created using Shell, Proxmox-VE, Packer, Terraform, Ansible, and MAAS, and is not completely reliable for production environments.

Table of Contents

Proxmox-VE

Proxmox-VE is an open-source server virtualization platform. It includes two different virtualization technologies: Kernel-based Virtual Machine (KVM) and container-based virtualization (LXC). Proxmox-VE can run on a single node or assemble a cluster of many nodes, so your virtual machines and containers can run with high availability.

Proxmox-VE Architecture

Installation - Manual Step

  • Download the installer ISO image from: Proxmox-VE ISO Image
  • Create a bootable USB flash drive and boot from USB
    • balenaEtcher is an easy way to create a Proxmox-VE USB flash drive.
Installing Proxmox VE
The Proxmox VE menu will be displayed; select Install Proxmox VE to start the normal installation.
Click for more detail about Options
Proxmox-VE Menu
After selecting Install Proxmox VE and accepting the EULA, the prompt to select the target hard disk(s) will appear. The Options button opens the dialog to select the target file system. For this guide, keep the default file system ext4, or choose xfs (different from the one in the screenshot).
The installer creates a Volume Group (VG) called pve, and additional Logical Volumes (LVs) called root, data, and swap. To control the size of these volumes use:
  • hdsize: The total hard disk size to be used (Mine: 223)
  • swapsize: Defines the size of the swap volume. The default is the size of the installed memory, minimum 4 GB and maximum 8 GB. The resulting value cannot be greater than hdsize/8. If set to 0, no swap volume will be created (Mine: 4)
  • maxroot: Defines the maximum size of the root volume, which stores the operating system. The maximum limit of the root volume size is hdsize/4 (Mine: 23)
  • minfree: Defines the amount of free space left in the LVM volume group pve. With more than 128GB storage available the default is 16GB, else hdsize/8 will be used (Mine: 16)
  • maxvz: Defines the maximum size of the data volume. The actual size of the data volume is:
    datasize = hdsize - rootsize - swapsize - minfree
    Where datasize cannot be bigger than maxvz (Mine: 180)
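As a worked check of the formula above with the "Mine" values (taking rootsize = maxroot = 23, since hdsize/4 ≈ 55 exceeds it):

```shell
# Advanced LVM sizing check, using the values given above
hdsize=223; rootsize=23; swapsize=4; minfree=16
datasize=$((hdsize - rootsize - swapsize - minfree))
echo "datasize=${datasize}"   # 180 -> matches maxvz
```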

Click for more detail about Advanced LVM Options
Proxmox-VE Select Target Disk
After setting the disk options the next page asks for basic configuration options like the location, the time zone, and keyboard layout. They only need to be changed in the rare case that auto detection fails or a different keyboard layout should be used. Proxmox-VE Select Location
Next the password of the superuser (root) and an email address needs to be specified. The password must be at least 5 characters. However, it is highly recommended that you use a stronger password, so set a password that is at least 12 to 14 characters. The email address is used to send notifications to the system administrator. Proxmox-VE Set Password
The last step is the network configuration. Please note that during installation you can use either an IPv4 or an IPv6 address, but not both. To configure a dual-stack node, add additional IP addresses after the installation. A Proxmox cluster of three physical servers will be created, so three sets of network details are given below.
  • Management Interface: xx:xx:xx:xx:30:39 - xx:xx:xx:xx:30:31 - xx:xx:xx:xx:2f:04
  • Hostname(FQDN): one.mhc.pve - two.mhc.pve - three.mhc.pve
  • IP Address: 192.168.50.10 - 192.168.50.20 - 192.168.50.30
  • Netmask: 255.255.255.0
  • Gateway: 192.168.50.1
  • DNS Server: 192.168.50.1
Proxmox-VE Setup Network
The next step shows a summary of the previously selected options. Re-check every setting and use the Previous button if a setting needs to be changed. To accept, press Install. The installation starts to format disks and copies packages to the target. Please wait until this step has finished; then remove the installation medium and restart your system.
Then point your browser to the IP address given during installation, https://youripaddress:8006, to reach the Proxmox web interface.
The default login is "root", with the root password defined during the installation process (step 4).
Proxmox-VE Installation Summary
  • After the installation is completed, the files in which the APT repositories are defined should look as follows so that the APT package management tool works correctly.
    • File /etc/apt/sources.list
      • deb http://ftp.debian.org/debian buster main contrib
      • deb http://ftp.debian.org/debian buster-updates main contrib
      • deb http://security.debian.org/debian-security buster/updates main contrib
      • deb http://download.proxmox.com/debian/pve buster pve-no-subscription
        • Note: The pve-no-subscription repository is provided by proxmox.com but is NOT recommended for production use
  • File /etc/apt/sources.list.d/pve-enterprise.list
    • #deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
  • Then check the locale if there is an error like "Cannot set LC_ALL (or others) to default locale: No such file or directory"
    • Run the following commands for each error
      • echo "export LC_CTYPE=en_US.UTF-8" >> ~/.bashrc
      • echo "export LC_ALL=en_US.UTF-8" >> ~/.bashrc
      • source ~/.bashrc
      • Then run the following commands once
        • locale-gen en_US en_US.UTF-8
        • dpkg-reconfigure locales (choose en_US.UTF-8)
  • Get latest updates
    • apt update && apt upgrade -y && apt dist-upgrade
  • RESTART/REBOOT System
  • For more information, see Create Proxmox-VE Cluster
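The repository setup above can be scripted. A minimal sketch that writes to /tmp so it can be dry-run safely; on a real PVE 6.x node the targets are /etc/apt/sources.list and /etc/apt/sources.list.d/pve-enterprise.list:

```shell
# Debian buster + pve-no-subscription repositories (NOT for production use)
cat > /tmp/sources.list <<'EOF'
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib
deb http://security.debian.org/debian-security buster/updates main contrib
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
EOF

# Disable the enterprise repository (it requires a subscription key):
echo '#deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise' \
  > /tmp/pve-enterprise.list
```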

Creating Ubuntu Image

Ubuntu ISO images can be downloaded from the Ubuntu releases page; for popular architectures, please use it. Other Ubuntu images not found there, such as builds for less popular architectures, non-standard and unsupported images, and daily builds, can be downloaded from the cdimage server. For old releases, see Ubuntu old-releases.

As of the Ubuntu LTS release in 2020, the server documentation has moved to the Ubuntu Server Guide. However, a detailed installation guide for the latest Ubuntu LTS can be found here.

Fully automated installations are possible on Ubuntu using the Ubuntu Installer (debian-installer) or the Ubuntu Live Server Installer (autoinstall).

Ubuntu also offers Cloud Images. Ubuntu Cloud Images are the official pre-installed disk images, customized by Ubuntu engineering to run on public clouds that provide Ubuntu Certified Images, OpenStack, LXD, and more. These images will be used in create-template-via-cloudinit.sh because of their fast and easy setup.

To create Ubuntu images from an ISO without using cloud images, the following repositories and articles can be consulted

Creating Ubuntu Image Documents

Installation - Script Step - Creating cloud-init Template

After installation, execute create-template-via-cloudinit.sh on the Proxmox-VE server(s) to create the cloud-init template(s). The script is based on create-cloud-template.sh, developed by chriswayg.

create-template-via-cloudinit.sh Execution Prerequisites
1 create-template-via-cloudinit.sh must be executed on a Proxmox VE 6.x Server.
2 A DHCP Server should be active on vmbr0.
3 Download Latest Version of the Script on Proxmox VE Server:
curl https://raw.githubusercontent.com/BarisGece/mHC/main/proxmox-ve/create-template-via-cloudinit.sh > /usr/local/bin/create-template-via-cloudinit.sh && chmod -v +x /usr/local/bin/create-template-via-cloudinit.sh
4 -- Caution! This MUST BE DONE to use cloud-init-config.yml --
The cloud-init files need to be stored in a snippet. This is not well documented in the Proxmox-VE qm cloud-init documentation, but Alex Williams kept us well informed.
  1. Go to Storage View -> Storage -> Add -> Directory
  2. Give it an ID such as snippets, and specify any path on your host such as /snippets
  3. Under Content choose Snippets and de-select Disk image (optional)
  4. Upload (scp/rsync/whatever) your user-data, meta-data, network-config files to your proxmox server in /snippets/snippets/ (the directory should be there if you followed steps 1-3)
Finally, you just need to qm set with --cicustom, like this:(If cloud-init-config.yml is present, the following command will run automatically in create-template-via-cloudinit.sh)
qm set 100 --cicustom "user=snippets:snippets/user-data,network=snippets:snippets/network-config,meta=snippets:snippets/meta-data"
5 Prepare a cloudinit user-cloud-init-config.yml in the working directory. sample-cloud-init-config.yml can be used as a sample.
For more information Cloud-Init-Config Sample.
6 For the migration to complete successfully, the Proxmox storage configuration should be set as follows.
local(Type - Directory):
  • Content: VZDump backup file, Disk image, ISO image, Container template
  • Path/Target: /var/lib/vz
  • Shared: Yes
local-lvm(Type - LVM-Thin):
  • Content: Disk image, Container
  • Nodes: Select ALL nodes one by one
snippets(Type - Directory):
  • Content: Snippets
  • Path/Target: /snippets
  • Nodes: Select ALL nodes one by one
All of them should be ENABLED
DC_Storage_Settings
7 Run the Script:
$ create-template-via-cloudinit.sh
8 Clone the Finished Template from the Proxmox GUI and Test.
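Step 8 can also be done from the CLI. A dry-run sketch with hypothetical IDs (9000 for the template, 300 for the clone) and a placeholder IP on the 192.168.50.0/24 subnet used earlier; it only logs the qm commands, since qm exists only on a Proxmox node. Drop the run wrapper on a real server:

```shell
# Log commands instead of executing them (qm is only available on PVE)
: > /tmp/qm-dryrun.log
run() { echo "+ $*" | tee -a /tmp/qm-dryrun.log; }

run qm clone 9000 300 --name test-vm --full
run qm set 300 --ipconfig0 ip=192.168.50.50/24,gw=192.168.50.1
run qm start 300
```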

For Maximum Performance

  • Network Device
    • The VirtIO paravirtualized NIC should be used if you aim for maximum performance. Like all VirtIO devices, the guest OS should have the proper driver installed.
    • The VirtIO model provides the best performance with very low CPU overhead. If your guest does not support this driver, it is usually best to use e1000.
    • qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0
  • Hard Disk -- Bus/Controller -- Cache
    • If you aim at maximum performance, you can select a SCSI controller of type VirtIO SCSI single which will allow you to select the IO Thread option.
    • cache=none seems to give the best performance and is the default since Proxmox 2.x. However, cache=unsafe doesn't flush data, so it is the fastest but least safe option. This information is based on using raw volumes; other volume formats may behave differently. For more information see Performance Tweaks.
    • Use raw disk image instead of qcow2 if possible
    • qm importdisk 9000 /tmp/VMIMAGE local-lvm --format raw
    • qm set 9000 --scsihw virtio-scsi-single --scsi0 local-lvm:vm-9000-disk-0,iothread=1
  • CPU Types
    • If you have a homogeneous cluster where all nodes have the same CPU, set the CPU type to host, as in theory this will give your guests maximum performance.
    • qm set 9000 --cpu host
  • NUMA(non-uniform memory access)
    • With NUMA, memory can be evenly distributed among the CPUs, which improves performance. Also, to enable CPU and memory hot-plugging in Proxmox-VE, the NUMA option must be enabled. To enable it on a VM (using VM ID 9000 from the earlier examples), execute the following command.
      • qm set 9000 --numa 1
    • If the following command returns more than one node, then your host system has a NUMA architecture.
      • numactl --hardware | grep available
      • numactl --hardware
    • This command shows per-NUMA-node memory allocation statistics for the host.
      • numastat
  • HOT-PLUGGING
    • The hotplug feature makes it possible to add or remove devices or resources from a virtual machine without rebooting. To enable hotplug (again using VM ID 9000 as an example), execute the following command.
      • qm set 9000 --hotplug disk,network,usb,memory,cpu
    • NUMA option MUST be ENABLED.
    • Preparing Linux Guests
      • A kernel newer than 4.7 is recommended for Linux Guests for all hotplugging features to work.
      • The following kernel modules should be installed on Linux guests. To load them automatically during boot, add their names to /etc/modules (the automated command was added to sample-cloud-init-config.yml).
        Caution! In /etc/modules, lines beginning with "#" are ignored, so add the bare module names. To load them immediately:
        • modprobe acpiphp
          modprobe pci_hotplug
      • After kernel 4.7, only the following kernel parameter needs to be added to /etc/default/grub for CPU hotplug to work at boot. It was also added to sample-cloud-init-config.yml.
        • GRUB_CMDLINE_LINUX_DEFAULT="quiet splash memhp_default_state=online"
      • Update the grub boot loader
        • sudo update-grub
      • REBOOT Linux Guest
      • Sample command for hotplugging vCPUs
        • In Proxmox VE the maximum number of plugged vCPUs is always cores * sockets (Total Cores = cores * sockets); the vcpus value cannot exceed the total cores.
        • qm set 9000 -vcpus 4
      • Hotplug support matrix:
        Device | Kernel | Hotplug       | Unplug        | OS
        Disk   | All    | Linux/Windows | Linux/Windows | Linux/Windows
        NIC    | All    | Linux/Windows | Linux/Windows | Linux/Windows
        USB    | All    | Linux/Windows | Linux/Windows | Linux/Windows
        CPU    | 3.10+  | Linux/Windows | Linux (4.10+) | Linux/Windows Server 2008+
        Memory | 3.10+  | Linux/Windows | Linux (4.10+) | Linux/Windows Server 2008+
  • Ballooning Device
    • The balloon value sets the target amount of RAM for the VM in MB; zero disables the balloon driver. In general you should leave ballooning enabled, but if you want to disable it (e.g. for debugging purposes), simply uncheck Ballooning Device or set balloon: 0 in the configuration.
    • Even when using a fixed memory size, the ballooning device gets added to the VM, because it delivers useful information such as how much memory the guest really uses.
    • All Linux distributions released after 2010 have the balloon kernel driver included. For Windows OSes, the balloon driver needs to be added manually and can incur a slowdown of the guest, so we don’t recommend using it on critical systems. The passing around of memory between host and guest is done via a special balloon kernel driver running inside the guest, which will grab or release memory pages from the host. A good explanation of the inner workings of the balloon driver can be found here

Create PVE User for Terraform, Packer & Ansible

create-proxmox-users.sh will create Proxmox users for Packer, Terraform, and Ansible. The password information for the users to be created is read from environment variables, so define the variables with the following environment variable names before running the script. For more information, see pveum User Management.

  • $PACKER_PVE_USER, $PACKER_PVE_PASSWORD - $TERRAFORM_PVE_USER, $TERRAFORM_PVE_PASSWORD - $ANSIBLE_PVE_USER, $ANSIBLE_PVE_PASSWORD
  • create-proxmox-users.sh must be executed once on a Proxmox VE 6.x Server.
  • curl https://raw.githubusercontent.com/BarisGece/mHC/main/proxmox-ve/create-proxmox-users.sh > /usr/local/bin/create-proxmox-users.sh && chmod -v +x /usr/local/bin/create-proxmox-users.sh
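A sketch of the environment setup (the user names and passwords are placeholders; @pve denotes the Proxmox VE authentication realm):

```shell
# Placeholder credentials -- replace before use
export PACKER_PVE_USER='packer@pve'       PACKER_PVE_PASSWORD='changeme-1'
export TERRAFORM_PVE_USER='terraform@pve' TERRAFORM_PVE_PASSWORD='changeme-2'
export ANSIBLE_PVE_USER='ansible@pve'     ANSIBLE_PVE_PASSWORD='changeme-3'

# Run the script on the Proxmox node once it has been downloaded:
if command -v create-proxmox-users.sh >/dev/null 2>&1; then
  create-proxmox-users.sh
fi
```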

Proxmox-VE Documents

Packer

Packer is an automatic machine image generation tool and Proxmox-VE templates will be created with Packer to make it more standardized and automated.

Installing Packer on Ubuntu Jump Server

  • Add the HashiCorp GPG key.
    • curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
  • Add the official HashiCorp Linux repository.
    • sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
  • Update and install.
    • sudo apt-get update && sudo apt-get install packer

Preparing Proxmox-VE template via Packer

Packer Proxmox Builder will be used to create the Proxmox-VE template. It provisions and configures the VM and then converts it into a template. The Packer Proxmox Builder performs its operations via the Proxmox Web API.

Packer Proxmox Builder is able to create new images using both ISO(proxmox-iso) and existing Cloud-Init Images(proxmox-clone). Creating a new image using (proxmox-iso) will be developed later.

Now, Proxmox-VE templates will be created with proxmox-clone using existing Cloud-Init Images created via create-template-via-cloudinit.sh.

Packer Execution Prerequisites
1 To skip validating the certificate, set insecure_skip_tls_verify = true in sources.pkr.hcl
2 For Packer to run successfully, qemu-guest-agent must be installed on the VMs and the qemu_agent = ... configuration option should be true in sources.pkr.hcl
For more detail Error getting SSH address 500 QEMU guest agent is not running

Input Variables

In Packer, values can be assigned to build variables with HCL2 in the following ways

  • Command-line flags
    • Variables can be defined directly on the command line with the -var flag. This method will not be used here.
      • packer build -var 'weekday=Sunday' -var 'flavor=chocolate'
  • Variables file
    • To persist variable values, create a *.pkrvars.hcl file and assign variables within this file. Also, packer will automatically load any var file that matches the name *.auto.pkrvars.hcl, without the need to pass the file via the command line.
      • *.pkrvars.hcl => packer build -var-file="*.pkrvars.hcl" .
      • *.auto.pkrvars.hcl => packer build .
  • Environment Variables
    • Packer will read environment variables in the form of PKR_VAR_name to find the value for a variable.
      • export PKR_VAR_access_key=Key1 && packer build .
  • Variable Defaults
    • If no value is assigned to a variable via any of these methods and the variable has a default key in its declaration, that value will be used for the variable.
      • packer build .
  • Notes about Packer Variables
    • Don't save sensitive data to version control via variables files. You can create a local secret variables file or use environment variables
    • Multiple -var-file flags can be provided.
      packer build -var-file="secret.pkrvars.hcl" -var-file="production.pkrvars.hcl" .
    • If a default value is set in variables.pkr.hcl, the variable is optional; otherwise, it must be set. To force a variable to be set, don't give it a default value, as with variable "vm_id" {...} in variables.pkr.hcl
    • The variable block, also called the input-variable block, defines variables within your Packer configuration.
    • Debug => PACKER_LOG=1 packer build -debug -on-error=ask .
      Release => PACKER_LOG=1 packer build .
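For example, the environment-variable method looks like this ("vm_id" is a hypothetical variable name assumed to be declared in variables.pkr.hcl):

```shell
# Packer maps PKR_VAR_<name> to the input variable <name>
export PKR_VAR_vm_id='9000'

# Build only if packer is installed; fails harmlessly outside a template dir
if command -v packer >/dev/null 2>&1; then
  packer build -var-file="secret.pkrvars.hcl" . || true
fi
```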

Local Variables

An input variable cannot be used in another input variable, so locals can be used instead. The locals block, also called the local-variable block, defines locals within your Packer configuration. Local values assign a name to an expression, which can then be used multiple times within a folder.

# locals.pkr.hcl
locals {
    # locals can reference input variables:
    baz = "Foo is '${var.foo}'"
    # locals can also reference other locals (but not cyclically):
    wee = local.baz
}
Packer Documents

Terraform

Terraform is an Infrastructure as Code tool to securely and efficiently provision, manage, and version infrastructure. Having more than 1000 Modules and more than 200 Providers makes it easy to manage existing and popular infrastructure, cloud or service providers as well as custom on-premises solutions.

The operations on Proxmox-VE are performed over Proxmox Web API as in the Packer. There is no officially supported Proxmox Provider on Terraform, but there are two Community-Supported Providers as below.

Installing Terraform on Ubuntu Jump Server

  • Add the HashiCorp GPG key.
    • curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
  • Add the official HashiCorp Linux repository.
    • sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
  • Update and install.
    • sudo apt-get update && sudo apt-get install terraform

Provisioning Virtual Machine on Proxmox-VE via Terraform

The Terraform Proxmox provider can create virtual machines (instances, guest OSes) from an ISO or by CLONE of existing images, just as the Packer Proxmox builder does. Cloud-init-enabled Proxmox-VE templates were created by create-template-via-cloudinit.sh & packer_proxmox-clone; new instances will be created from these templates. The Terraform configuration can be found here.

Terraform Input Variables

A variable can receive a value from several sources. They are loaded in the following order, and later sources take precedence over earlier ones.

  • Environment variables
  • The terraform.tfvars file
  • The terraform.tfvars.json file
  • Any *.auto.tfvars or *.auto.tfvars.json files, execution order is by file names
  • Any -var and -var-file options on the command line, in the order they are provided
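A quick illustration of this precedence ("vm_count" is a hypothetical variable name): a command-line -var flag overrides an environment value, because it is loaded later:

```shell
# Terraform reads TF_VAR_<name> from the environment
export TF_VAR_vm_count=1

# -var is loaded after the environment, so vm_count would end up as 3
if command -v terraform >/dev/null 2>&1; then
  terraform plan -var 'vm_count=3' || true   # fails harmlessly with no config
fi
```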

Terraform Proxmox Provider Variables

  • pm_api_url is required. If var.api_url is not set, PM_API_URL must be set as the environment variable.
  • pm_user is required. If var.user is not set, PM_USER must be set as the environment variable.
  • pm_password is required. If var.password is not set, PM_PASS must be set as the environment variable; this is one of the recommended ways to supply the password.
  • If a 2FA OTP code is to be used, var.otp must be defined; otherwise PM_OTP must be set as the environment variable. Additionally, PM_OTP_PROMPT can be set as an environment variable to prompt for the OTP 2FA code.
  • Either clone or iso must be set in the resource block variables. If both are set, clone takes precedence; therefore, set only one of them and leave the other null.
    • Sample for ISO: Using an iso file uploaded on the local storage = local:iso/proxmox-mailgateway_2.1.iso
    • Sample for CLONE: The name of the Proxmox-VE template or image to be used to provision the new VM = ubuntu2004-cloud-template.
  • full_clone: The result of such a copy is an independent VM; the new VM does not share any storage resources with the original. The default value is true. However, a full clone needs to read and copy all VM image data, which is usually much slower than creating a linked clone.
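The provider credentials above are typically supplied via the environment; a sketch with placeholder values (the API URL uses node one from the installation section):

```shell
# Placeholder credentials -- point these at your own cluster
export PM_API_URL='https://192.168.50.10:8006/api2/json'
export PM_USER='terraform@pve'
export PM_PASS='changeme'   # preferred over storing the password in *.tfvars

if command -v terraform >/dev/null 2>&1; then
  terraform init && terraform plan || true   # fails harmlessly with no config
fi
```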

Terraform Documents

MAAS

MAAS (Metal as a Service) allows you to treat physical servers in the cloud like VM instances. It turns bare metal into a flexible, cloud-like resource, so there is no need to manage servers individually. For more information see the MAAS Docs & Proxmox - MAAS - JuJu by VectOps.

mhc's People

Contributors: barisgece

Watchers: James Cloos
