In the last installment of this series, I discussed setting up the Proxmox VE hosts in VirtualBox. At this stage in the exercise there should be three VirtualBox VMs (VBVMs) running in headless mode.
Before you can set up the cluster, storage replication, and high availability, you need to do a bit of housekeeping on your hosts. In this post, I will go over those steps: making sure that the hosts' operating systems are up to date, that the network interfaces are set up and communicating with each other, and that your storage is properly configured. Most of these steps can be accomplished via the web UI, but using SSH will be faster and more accurate, especially if you use an SSH client like SuperPuTTY or MobaXterm that lets you type into multiple terminals at the same time.
Log in as root@ip-address for each PVE node. In the previous post, the IPs I chose were 192.168.1.101, 192.168.1.102, and 192.168.1.103.
I don’t want to bog this post down with a bunch of Stupid SSH Tricks, so just spend a few minutes getting acquainted with MobaXterm and thank me later. The examples below will work in a single SSH session, but you will have to paste them into three different windows, instead of feeling like a superhacker.
Step 1 – Fix The Subscription Thing
No, not the nag screen that pops up when you log into the web UI, the errors that you get when you try to update a PVE host with the enterprise repos enabled.
All you have to do is modify a secondary sources.list file. Open it with your editor, comment out the first line and add the second line:
nano /etc/apt/sources.list.d/pve-enterprise.list

#deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
Save the file, and run your updates:
apt-get update; apt-get -y upgrade
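Incidentally, if you would rather script that repo change across all three nodes than edit the file by hand in each window, a sed/echo pair like the one below should do the same thing. This is a sketch, not from the original post; it assumes the same file path and the "stretch" (PVE 5.x) suite used above:

```shell
# Comment out the enterprise repo line, if present ("&" re-inserts the matched text)
sed -i 's|^deb https://enterprise.proxmox.com|#&|' /etc/apt/sources.list.d/pve-enterprise.list

# Append the no-subscription repo for PVE 5.x ("stretch")
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" >> /etc/apt/sources.list.d/pve-enterprise.list
```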
While you are logged in to all 3 hosts, you might as well update the list of available Linux Container templates:
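The tool for that is pveam, the Proxmox VE Appliance Manager. These subcommands are standard PVE tooling, though I am filling in the exact invocation here since it was not spelled out above:

```shell
# Refresh the index of downloadable container templates
pveam update

# Browse what's now available (e.g. the stock OS templates)
pveam available --section system
```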
Finally, if you set up your virtual disk files correctly according to the last post, you can set up your ZFS disk pool:
List your available disks; hopefully you see two 64GB volumes that aren’t in use, at /dev/sdb and /dev/sdc:
root@prox1:~# lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   32G  0 disk
├─sda1               8:1    0 1007K  0 part
├─sda2               8:2    0  512M  0 part
└─sda3               8:3    0 31.5G  0 part
  ├─pve-swap       253:0    0  3.9G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  7.8G  0 lvm  /
  ├─pve-data_tmeta 253:2    0    1G  0 lvm
  │ └─pve-data     253:4    0   14G  0 lvm
  └─pve-data_tdata 253:3    0   14G  0 lvm
    └─pve-data     253:4    0   14G  0 lvm
sdb                  8:16   0   64G  0 disk
sdc                  8:32   0   64G  0 disk
sr0                 11:0    1  642M  0 rom
root@prox1:~#
Assuming you see those two disks, and they are in fact ‘sdb’ and ‘sdc’, then you can create your zpool, which you can think of as a kind of software RAID array. There’s way more to it than that, but that’s another post for another day when I know more about ZFS. For this exercise, I wanted to make a simulated RAID1 array, for “redundancy.” Set up the drives in a pool like so:
zpool create -f -o ashift=12 z-store mirror sdb sdc
zfs set compression=lz4 z-store
zfs create z-store/vmdata
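To confirm the mirror came up healthy before moving on, the standard ZFS status commands are worth running on each node (output will vary slightly per host; this is a suggested check, not from the original post):

```shell
# Pool health: "state: ONLINE" with both sdb and sdc listed means the mirror is good
zpool status z-store

# List the pool and the vmdata dataset created above, with their usage
zfs list -r z-store
```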
In a later post we will use the zpool on each host for Storage Replication. The PVEVM files for each of your guest machines will be copied to the other hosts at regular intervals, so when you migrate a guest from one node to another it won’t take long. This feature pairs very well with High Availability, where your cluster can determine that a node is down and spin up the PVEVMs that went offline with it.
Now that your disks are configured, it’s time to move on to Part 3: Building A Cluster Network.