In the last installment of this series, I discussed setting up the Proxmox VE hosts. Until now, you could do most of this configuring in triplicate with MobaXTerm. You can still use its multi-exec feature to type into all three sessions at once; just be sure to disable it when you have to customize the configs for each host. This part of the process is a lethal combination of being really repetitive while also requiring close attention to detail. This is also the point where it gets a bit like virtualization inception: VirtualBox VMs acting as PVE hosts for PVE VMs.
Network Adapter Configuration
I did my best to simplify the network design:
- There are 3 PVE hosts with corresponding management IPs:
- prox1 – 192.168.1.101
- prox2 – 192.168.1.102
- prox3 – 192.168.1.103
- Each PVE host has 3 network adapters:
- Adapter 1: A Bridged Adapter that connects to the [physical] internal network.
- Adapter 2: Host only Adapter #2 that will serve as the [virtual] isolated cluster network.
- Adapter 3: Host only Adapter #3 that will serve as the [virtual] dedicated migration network.
- Each network adapter plugs into a different [virtual] network segment with a different IP range:
- Adapter 1 (enp0s3) – 192.168.1.0/24
- Adapter 2 (enp0s8) – 192.168.2.0/24
- Adapter 3 (enp0s9) – 192.168.3.0/24
- Each PVE host's IP on each network roughly corresponds to its hostname:
- prox1 – 192.168.1.101, 192.168.2.1, 192.168.3.1
- prox2 – 192.168.1.102, 192.168.2.2, 192.168.3.2
- prox3 – 192.168.1.103, 192.168.2.3, 192.168.3.3
I have built this cluster a few times and my Ethernet adapter names (enp0s3, enp0s8, and enp0s9) have always been the same. That may be a product of all the cloning, so YMMV. Pay close attention here because this can get very confusing.
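If you want to double-check the adapter names on your own hosts before editing anything, the iproute2 tools that ship with Proxmox should list them:

ip link show

You should see lo, the three enp0s* adapters, and the vmbr0 bridge. If your adapter names differ, substitute them in the configs below.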
Open the network interface config file for each PVE host:
nano /etc/network/interfaces
You should see the entry for your first Ethernet adapter (the bridged adapter in VirtualBox), followed by vmbr0, the Proxmox bridge interface that carries the static IP you set when you installed Proxmox. The last two entries should be your two host-only adapters, #2 and #3 in VirtualBox. These are the adapters that we need to modify. The file for prox1 probably looks like this:
auto lo
iface lo inet loopback

iface enp0s3 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.101
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports enp0s3
        bridge_stp off
        bridge_fd 0

iface enp0s8 inet manual

iface enp0s9 inet manual
Ignore the entries for lo, enp0s3, and vmbr0. Delete the last two entries (enp0s8 and enp0s9) and replace them with this:
#cluster network
auto enp0s8
iface enp0s8 inet static
        address 192.168.2.1
        netmask 255.255.255.0

#migration network
auto enp0s9
iface enp0s9 inet static
        address 192.168.3.1
        netmask 255.255.255.0
Now repeat the process for prox2 and prox3, changing the last octet of the cluster and migration network IPs to .2 and .3 respectively (see the prox2 example after the list below). When you are finished your nodes should be configured like so:
- prox1 - 192.168.1.101, 192.168.2.1, 192.168.3.1
- prox2 - 192.168.1.102, 192.168.2.2, 192.168.3.2
- prox3 - 192.168.1.103, 192.168.2.3, 192.168.3.3
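For reference, here is what those two new entries should look like on prox2; prox3 is the same except the addresses end in .3:

#cluster network
auto enp0s8
iface enp0s8 inet static
        address 192.168.2.2
        netmask 255.255.255.0

#migration network
auto enp0s9
iface enp0s9 inet static
        address 192.168.3.2
        netmask 255.255.255.0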
Save your changes on each node. Then reboot each one:
shutdown -r now
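If you would rather not reboot, bringing up just the two new adapters should also work (this assumes the stock Debian ifupdown that a default Proxmox install uses):

ifup enp0s8
ifup enp0s9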
Network Testing
When your PVE hosts are booted back up, SSH into them again. Have each host ping every other host IP to make sure everything is working:
ping -c 4 192.168.2.1;\
ping -c 4 192.168.2.2;\
ping -c 4 192.168.2.3;\
ping -c 4 192.168.3.1;\
ping -c 4 192.168.3.2;\
ping -c 4 192.168.3.3
The result should be 4 replies from each IP on each host with no packet loss. I am aware that each host is pinging itself twice, but you have to admit it looks pretty bad ass.
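If you prefer a loop to the chained one-liner, something like this does the same thing (it relies on bash brace expansion, which is the default root shell on a Proxmox host):

for ip in 192.168.2.{1..3} 192.168.3.{1..3}; do ping -c 4 "$ip"; done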
Once you can hit all of your IPs successfully, it's time to make sure that multicast is working properly. This isn't a big deal in VirtualBox because the virtual switches handle multicast correctly, but it's important to see the test run so you can do it on real hardware in the future.
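These tests use omping, which may or may not already be on your PVE hosts. If the command is missing, it should be available from the standard Debian/Proxmox repositories:

apt-get install omping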
First, send a burst of multicast traffic. Run this on all three hosts at the same time (omping waits until the other nodes listed are also running it):
omping -c 10000 -i 0.001 -F -q 192.168.2.1 192.168.2.2 192.168.2.3
You should see roughly 10000 packets sent to each node with 0% loss, like so:
192.168.2.1 : unicast, xmt/rcv/%loss = 9406/9395/0%, min/avg/max/std-dev = 0.085/0.980/15.200/1.940
192.168.2.1 : multicast, xmt/rcv/%loss = 9406/9395/0%, min/avg/max/std-dev = 0.172/1.100/15.975/1.975
192.168.2.2 : unicast, xmt/rcv/%loss = 10000/9991/0%, min/avg/max/std-dev = 0.091/1.669/40.480/3.777
192.168.2.2 : multicast, xmt/rcv/%loss = 10000/9991/0%, min/avg/max/std-dev = 0.173/1.802/40.590/3.794
Then send a sustained stream of multicast traffic for a few minutes:
omping -c 600 -i 1 -q 192.168.2.1 192.168.2.2 192.168.2.3
Let this test run for a few minutes. Then cancel it with CTRL+C.
The result should again be 0% loss, like so:
root@prox1:~# omping -c 600 -i 1 -q 192.168.2.1 192.168.2.2 192.168.2.3
192.168.2.2 : waiting for response msg
192.168.2.3 : waiting for response msg
192.168.2.3 : joined (S,G) = (*, 232.43.211.234), pinging
192.168.2.2 : joined (S,G) = (*, 232.43.211.234), pinging
^C
192.168.2.2 : unicast, xmt/rcv/%loss = 208/208/0%, min/avg/max/std-dev = 0.236/1.488/6.552/1.000
192.168.2.2 : multicast, xmt/rcv/%loss = 208/208/0%, min/avg/max/std-dev = 0.338/2.022/7.157/1.198
192.168.2.3 : unicast, xmt/rcv/%loss = 208/208/0%, min/avg/max/std-dev = 0.168/1.292/7.576/0.905
192.168.2.3 : multicast, xmt/rcv/%loss = 208/208/0%, min/avg/max/std-dev = 0.301/1.791/8.044/1.092
Building the Cluster
Now that your cluster network is up and running, you can finally build your cluster. Up to this point, you have been entering identical commands into all of your SSH sessions. From here on, the commands differ per node, so you can stop using the multi-exec feature of your SSH client.
First, create the initial cluster node on Prox1, like so:
root@prox1:~# pvecm create TestCluster --ring0_addr 192.168.2.1 --bindnet0_addr 192.168.2.0
Then join Prox2 to the new cluster:
root@prox2:~# pvecm add 192.168.2.1 --ring0_addr 192.168.2.2
Followed by Prox3:
root@prox3:~# pvecm add 192.168.2.1 --ring0_addr 192.168.2.3
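Before going any further, it's worth confirming from any node that all three members joined and the cluster is quorate; pvecm has built-in status commands for this:

pvecm status
pvecm nodes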
One final configuration on Prox1 is to set the third network interface as the dedicated migration network by updating the datacenter.cfg file, like so:
root@prox1:~# nano /etc/pve/datacenter.cfg

keyboard: en-us
migration: secure,network=192.168.3.0/24
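Because /etc/pve is the cluster-wide pmxcfs filesystem, this edit only needs to be made once. If you want to see that for yourself, read the file back from one of the other nodes:

root@prox2:~# cat /etc/pve/datacenter.cfg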
Now that the cluster is set up, you can log out of your SSH sessions and switch to the web GUI. When you open the web GUI for Prox1 (https://192.168.1.101:8006), you should see all 3 nodes in your TestCluster:
Now you can manage all of your PVE hosts from one graphical interface. You can also do cool shit like migrating VMs from one host to another, but before we can do that, there are a few more things to set up and a couple of PVEVMs to build, which I will cover in the next installment: Building PVE Containers.