Using the Raspberry Pi to Turn an iPad into a Real Computer, part 5: The Networky Bits

Now that I have the Pi set up as both a wireless client and a wireless access point, it's time to get the different network tools configured.

Frequently visited networks
The web GUI doesn't handle connecting to networks. The GUI looks like it will, but it doesn't actually accomplish anything. I am sure there is a way to configure around the problem, but I haven't dug into it. Instead, in typical Chris fashion, I just use a super ugly hack based on like 15 minutes of research into the problem. I'll figure out how to do it the right way in the future (yeah, right), but for now I just change the SSID and PSK entries in /etc/wpa_supplicant/wpa_supplicant.conf and reboot the Pi.

I used this command to put the SSID info and passphrase into a file:

wpa_passphrase "Totally A Starbucks" LOLnotreallySBUX | tee sbux.txt

Where "Totally A Starbucks" is the SSID for your wireless network (put the name in quotes), and LOLnotreallySBUX is its pre-shared key. I created a different file for each network I want to connect to (home, work, etc.) and then created copies of wpa_supplicant.conf for each network. I call them, creatively enough, home, work, hotspot, and phone. Each copy starts with the same header:

country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
ap_scan=1

update_config=1

Delete any/all network entries, and then use the CTRL+R command in nano to read in the contents of the appropriate passphrase file (sbux.txt, in the example above). Then save the file. The finished config built from the sbux.txt example above will look like this:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
ap_scan=1

update_config=1
network={
        ssid="Totally A Starbucks"
        #psk="LOLnotreallySBUX"
        psk=3f825ee60dff2f77fccfd2a74ac08023d69e1a66918687ec513afe438a2bd1fd
}

You will need one "wpa_supplicant.conf" source file for each network. You could call them wpa_supplicant.conf.home, or just home. Then I created a shell script for each network that copies its file into place. The home version looks like so:

#!/bin/sh
sudo cp /etc/wpa_supplicant/home /etc/wpa_supplicant/wpa_supplicant.conf
sudo reboot

I put the shell scripts in the /usr/local/bin directory, so that I could call them simply by typing home.sh or work.sh at the command prompt, and then wait for the Pi to boot back up. It ain’t pretty, but it works… every time.
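If the pile of copy scripts ever gets out of hand, they could all be collapsed into a single script that takes the network name as an argument. Here is a minimal sketch (untested, and assuming the per-network files live in /etc/wpa_supplicant as described above; the name netswitch is made up):

#!/bin/sh
# netswitch: copy the named wpa_supplicant config into place and reboot
# usage: netswitch home|work|hotspot|phone
if [ ! -f "/etc/wpa_supplicant/$1" ]; then
        echo "no such network config: $1" >&2
        exit 1
fi
sudo cp "/etc/wpa_supplicant/$1" /etc/wpa_supplicant/wpa_supplicant.conf
sudo reboot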

Stupid (mobile) SSH tricks

Speaking of ugly shell scripts that ignore modern practices in favor of dubious hacks from 20 years ago, using SSH on a tablet is super glitchy. To conserve battery power, tablets and smartphones don't like to run processes in the background. To conserve mobile data, they don't like to keep network connections open for any length of time. Most smartphone apps are front ends to websites or APIs, so you only need processing and a network connection while the app is open. This is fine for just about every mobile app, but it murders SSH. There is a lot of talk about mosh in the Blink documentation. It's literally a mobile shell, purpose-built to solve this problem.

So naturally, I am not going to use it yet. I’ll look into mosh at a later date (LOL). Instead I am going to create a Rube Goldberg contraption that is held together with awful shell scripts.

Because my SSH session to the Pi gets dropped a lot, I set up a host in Blink so that I can quickly connect to the Pi just by typing ssh raspad.

On the Pi, I installed screen so that I can keep a session running and not lose whatever I was doing when the connection drops. To save a few keystrokes, I created another shell script in /usr/local/bin called “scr”:

#!/bin/sh
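# -D -R: reattach the running session, detaching it from anywhere else first,
# or create a new session if none exists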
screen -DR 

I tried some different forms of alias, but this one actually works on the Pi.

Now, all I have to type is ssh raspad to connect to the Pi, and scr to connect to my existing screen session. And if no screen session is available, it creates a new one.

The virtual keyboard on the iPad is different in some ways from a "real" keyboard. There is no CTRL key, no arrow keys, and no F keys (F1, F2, etc.).

Most of the time, my primary workstation is a Windows PC, and I have a special .screenrc that I use with PuTTY. For the life of me, I cannot figure out how to press F10 in Blink, so I just changed my .screenrc on the Pi to use F3-F6:

startup_message off

# Window list at the bottom.
# I got the long line of vars from https://bbs.archlinux.org/viewtopic.php?pid=423481#p423481
hardstatus alwayslastline
hardstatus string "%{.kW}%-w%{.W}%n %t%{-}%{=b kw}%?%+w%? %=%c %d/%m/%Y" #B&W & date&time

# From Stephen Shirley
# Don't block command output if the terminal stops responding
# (like if the ssh connection times out for example).
nonblock on

# Allow editors etc. to restore display on exit
# rather than leaving existing text in place
altscreen on

# bind F3 to detach screen session (to background)
bindkey -k k3 detach

# bind F4 to create a new screen
bindkey -k k4 screen

# Bind F5 and F6 to previous and next screen window
bindkey -k k5 prev
bindkey -k k6 next

Nah, fuck that. Just use Mosh.
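For anyone taking the sane option right away: mosh has to be installed on the Pi side, and Blink speaks mosh natively, so something like this should be all it takes:

sudo apt-get install mosh

Then connect from Blink with mosh raspad instead of ssh raspad, and the session survives Blink being put to sleep by iOS.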

Mobile Networking

The more work that goes into this little project, the more this is starting to look like a mobile home lab. While I do not have plans to remotely access the Pi from the Internet, nor do I have plans to serve anything from the Pi to the Internet, there are reasons to use dynamic DNS and an overlay network.

I have written about NeoRouter before as a means for gaining remote access to my home network. I also use it on my internal network to get access to my lab servers from my wireless network. My internal wired network (which is my lab, basically) is separated from the family wireless network. Most of the time, it’s to protect the family from my lab. Sometimes it’s the other way around. My modernized smuggling server sits on my lab network, and I use NeoRouter to access its various web interfaces.

Dynamic DNS is another thing that sounds like it's mostly for remote access, but it comes in handy for other things. I used to use it with my mobile phone to do VOIP when I traveled overseas, back when that was humanly possible. The tool that I prefer to use is DDclient.

sudo apt-get install ddclient

Configuring DDclient depends on the dynamic DNS provider that you are signed up with. But once you have it configured, you can test it with this command:

sudo ddclient -daemon=0 -debug -verbose -noquiet
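What goes in the config varies by provider, but for illustration, a minimal /etc/ddclient.conf for a dyndns2-compatible service looks something like this (the protocol, server, login, password, and hostname are all placeholders for whatever your provider actually gives you):

# /etc/ddclient.conf - example for a dyndns2-compatible provider
protocol=dyndns2
use=web, web=checkip.dyndns.org
server=members.dyndns.org
login=your-username
password='your-password'
yourhost.example.com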

These are the kinds of tools that you set up before you need them. I am not sure if I will ever need them, but it would be nice to have them running properly if I did.

Cool Networking Tools
Now that you can connect to the Pi reliably, and the Pi can connect (in a semi-automated fashion) to the different wireless networks that you may come into contact with, it's time to break out the nifty networking tools to run in your screen session.

  1. WaveMon
    For whatever reason, iOS lacks a decent WiFi scanner. The Wi-Fi settings will show you nearby access points, and will use a couple of bars to show you the quality of your connection, but that's it. Wavemon is a command-line Wi-Fi analyzer; to get useful signal info, you need it:

    sudo apt-get install wavemon

    And you run it from the command line like so:

    sudo wavemon

    You need root privileges to do scans for nearby access points. There are other mischievous tools that you can put to work from there, but mostly I use Wi-Fi scanners to see how crowded a given channel is when helping my friends and family set up their wireless networks.

  2. Nmap
    So you found a wireless network to connect to; let's find out what's on it. Nmap is probably the most complicated command line tool in existence. I am by no means an expert on using it. In fact, I really only know how to do like 3 things with it, so I'm not going to go into using Nmap in any depth. What I can tell you is that if the wireless network you are on has AP-host isolation enabled, you won't see any of the wireless clients. Fortunately, the tool is small and requires very little power, which makes it ideal for running on the Raspberry Pi.
    To install Nmap:

    sudo apt-get install nmap

    To scan a single host (one IP address):

    sudo nmap 192.168.50.1

    I don't remember if you need to be root to run Nmap effectively (root does unlock the faster SYN scans and OS detection; without it, Nmap falls back to plain connect scans). Most of my experience with these tools is from Kali Linux (of which there is a Raspberry Pi distribution), where everything runs as root.
    To scan a whole network (all the IP's in a subnet):

    sudo nmap 192.168.50.0/24

  3. TCPdump
    We are on the wireless network and we have scanned it for cool things. Now let's see what kind of chatter is happening. I don't do it very often, but every once in a while, being able to monitor network traffic comes in handy. On a "real" computer, I prefer to use Wireshark, but tcpdump will work in a pinch. You install it like any other command-line tool:

    sudo apt-get install tcpdump

    And like most scanning and monitoring tools, you need to run it as root. Like Nmap, TCPdump is super complicated. If you want to monitor traffic on your hostAP network, you will need to specify the uap0 interface for your scans. You can filter your results by pretty much anything. For example, you can filter for ICMP traffic like this:

    sudo tcpdump -i uap0 icmp

  4. iPerf3
    Now that you have seen what's happening inside your wireless network, it's time to test network throughput. For this task, I like to use iperf3. You need another computer running iperf3 to send data to, but any Unix host should be capable of running it. I use it on my admin workstation when I am tinkering at home, and I run it on my hosted VM to test Internet links.
    Like WaveMon or Nmap, it’s dead simple to install:

    sudo apt-get install iperf3

    and also dead simple to run, assuming you have the right arguments:
    On your remote end (aka the hosted server):

    iperf3 -s

    On your local machine (in this case, the Pi):

    iperf3 -c hostname
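    A couple of extra flags are worth knowing about: -R reverses the test so that the server sends to the Pi (usually the download direction you actually care about), and -u switches the test to UDP at a given bitrate:

    iperf3 -c hostname -R
    iperf3 -c hostname -u -b 10M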

Now that the Pi has expanded the iPad’s ability to connect and to troubleshoot networks, it’s time to add features that normal people will appreciate, like storage and media streaming.

Using the Raspberry Pi to Turn an iPad into a Real Computer, part 4: RaspAP

My previous post was about pre-configuring the Pi for headless booting that automatically connects to your wireless network.

This is fine for your home network, but it will be difficult to get connected to the Pi when you are traveling. Also, if you are planning to leave the Pi at home and never use it while traveling, that is a waste of a Raspberry Pi. There is a global shortage of Pi's; they are pretty much impossible to get, even at Pandemic-Profiteering prices. If you just want to do occasional Unix shit on an iPad when you are at home, just use a VM. If you do want to use the Pi out in the field, I recommend RaspAP.

If the headless install went well, you should be able to log into the Pi via SSH and run the RaspAP installer script. I originally learned about RaspAP from a video.

All did not go well for me in the beginning. I couldn’t connect to the Pi wirelessly to save my life. If I plugged it in to a switch, connecting was not a problem. I tried and tried dozens of different things. I even tried a completely different Pi. It turns out that I was making a few mistakes:

  1. You absolutely have to use the 32-bit version of Raspberry Pi OS Lite. Not 64-bit. Not the default version that you click on accidentally. 32-bit. Lite. No exceptions.
  2. I was getting owned by my own paranoid network security.

You see, the monster that we fear most is the one that we see in the mirror. Spies sweep for bugs, thieves keep things in safes, and hackers enable AP-host isolation on their wireless networks. Host isolation keeps devices that are connected to an access point from talking to each other. They can see and talk to hosts that are on the same wired network as the AP, so you can connect to servers. The reverse is also true: servers on the wired network can see and talk to hosts that are connected to the AP. What absolutely doesn't work with AP-host isolation is an iPad that is connected via WiFi connecting to a Raspberry Pi that is also connected via WiFi. You get nothing. Good day, sir.

One way to remedy this situation is to use a server on the wired network as a jump box for SSH, plus some Stupid SSH Tricks(tm) for mapping port 80 on the Pi to a local port on your iPad or laptop. Rather than deal with the keyboard situation on the iPad, I just used PuTTY on my Windows laptop for the initial setup.
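The port mapping half of that trick is just a plain SSH local forward. Assuming a jump box on the wired network at 192.168.1.5 and the Pi at 192.168.1.50 (both made-up addresses for this example), it looks something like this:

ssh -L 8080:192.168.1.50:80 user@192.168.1.5

With that running, http://localhost:8080 on the laptop gets you the Pi's web GUI through the tunnel.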

Begin the RaspAP install by running the quick installer:
curl -sL https://install.raspap.com | sudo bash

(I’m a Debian guy that still uses su instead of sudo, so I probably over use sudo in all of these examples.)

At this point, the order of operations is critically important. DO NOT START THE HOTSPOT! You need to configure WiFi Client AP Mode under Hotspot|Advanced before you do anything else. You can't change the AP Mode while the AP is running, and running the AP shuts down wlan0 on the Pi, so once the hotspot is up you can't reach the Pi from your regular wireless network, and shutting the AP back off cuts off your access as well. I had to re-image my SD card a few times because of this error. I guess you can fix it over the wired Ethernet, but I haven't figured out how.

To enable WiFi Client AP Mode, open the hotspot menu, set the SSID and pre-shared key options for your AP, and then click “save”. Then enable the WiFi Client AP Mode under the Advanced tab. THEN AND ONLY THEN can you start the hotspot.

Now configure hostapd to start at boot:

sudo update-rc.d hostapd defaults
sudo update-rc.d hostapd enable
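On newer, systemd-based Raspberry Pi OS images, hostapd ships masked, so if update-rc.d doesn't take, the systemctl equivalent should do the same job:

sudo systemctl unmask hostapd
sudo systemctl enable hostapd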

Then reboot the Pi to make sure the AP comes up:

sudo shutdown -r now

It will take a few minutes for the AP to be visible to the iPad, and once the AP is visible, you should wait for a minute or two before connecting.

The thing to remember from here on out (that you can add as a host in Blink) is that when you are connected to the Pi AP, the Pi’s static IP is 192.168.50.1. From there you can add those sweet delicious network tools that are missing from iOS. My next post will cover some of those tools.

Using the Raspberry Pi to Turn an iPad into a Real Computer, part 3: Pi OS install

In my last post, I talked about setting up the iPad for access to the Raspberry Pi. Now the work on the Pi begins. Before you can do anything with the Pi, you need to install the basic OS. I tried a bunch of things, and in the end, I chose to go with a fully headless install. The Pi is going to be headless for the rest of its life, so it might as well start that way.

Imaging
Most Pi tutorials recommend Balena Etcher or Win32DiskImager to burn your image to an SD card. I have used both tools tons of times, and they are both great. In fact, I have been using Win32DiskImager to read my SD cards back to an image file to make incremental backups of this project. Making backups when you hit a major milestone is super important. Also, SD cards fail all the time, so it pays to have good backups.

For this project, I went with Raspberry Pi Imager because you can configure a bunch of stuff during the imaging process. You do this by clicking the little gear icon to the lower right of the “Write” button.

Also, carefully note that I chose the "Raspberry Pi OS Lite (32-bit)" option. You may be saying to yourself "Nah son, I'm 64bit for life," and I am here to tell you that for this project, 64-bit is a grave mistake. The RaspAP installer shits itself on 64-bit Raspberry Pi OS, and you will catch hell trying to make it work. Don't be like me. Just run the 32-bit version.

Most of the initial config is going to happen over SSH, so scroll down to enable it.

Obviously the Pi needs to connect to the Internet for the initial setup, so you will want to enable WiFi and program it to connect to your wireless network.

The next item is also important: configuring the location sets the radio properties for the WiFi adapter, as well as the UTF-8 character set, which can affect scripts later on.

At this point, you are ready to boot the Pi. You will need a way to find its IP. You can look it up in the DHCP lease table on your router (or other DHCP server). You could run Nmap to scan your wireless subnet and look for an IP running SSHD (see the example below). Or, if you know the MAC address of the Pi's wireless card in advance, you could set up a DHCP reservation. Since I am recycling this Raspberry Pi, I already had a reservation set up from its previous life as an amateur radio workstation.
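The Nmap approach looks something like this, assuming your wireless subnet is 192.168.1.0/24 (adjust to match yours); the --open flag limits the output to hosts that are actually listening:

nmap -p 22 --open 192.168.1.0/24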

What you don’t want to do is set a static IP just yet. In the next step, we will be setting a static IP, so that you never have to determine the IP of the Pi again.

One thing that I like to do with Raspberry Pi servers is reboot them on a regular basis. This step probably isn't necessary on newer model Pi's, but between the small amount of memory and the unreliable nature of SD card storage, the 2b would lock up from time to time. To get around this problem, I just set up a cron job to reboot the Pi every morning at 4am:
sudo crontab -e

The first time, you will be prompted to choose an editor; personally, I prefer nano. Once the file is open in your editor, scroll to the bottom of the file and enter:
0 4 * * * /sbin/shutdown -r now

In my next post, I will cover installing RaspAP, which requires a full update and reboot:
sudo apt update
sudo apt full-upgrade
sudo shutdown -r now

Using the Raspberry Pi to Turn an iPad into a Real Computer, part 2: The iPad

In my last post, I talked about why I wanted to do this project, which was mostly about not wanting an old iPad to sit in a box with all my other crap. It was also about a clever use of a Raspberry Pi. While 99% of the work in this project is setting up the Pi, this post is about setting up the iPad.

Growing up the iPad
My daughter got the iPad when she was 4. We bought it refurbished from Amazon. As much as I dislike Apple as a company and as a platform, the quality of their hardware is impressive. We put the iPad in a pink foam bumper case, and it was subjected to all manner of child-induced terrors: spilled milk, sticky fingers, being left to die in random places. Despite being 2 years old when we got it, and her using it for probably 5 years, it's still in pretty good shape; the screen only cracked in one corner.

I cleaned it up, and I put it in a cheap folio case.

Blink
The only real modification I made to the iPad was to install a Unix shell app called Blink. Blink has some essential tools like ping and ssh, along with the ability to map hostnames to IP addresses in a manner similar to /etc/hosts.

The app is $20 per year; it's meant to be used for home automation, and includes a lot of features for that. If you aren't into that, you can just shake your iPad every day or two and keep using the app for free.

The iPad’s on screen keyboard doesn’t have some essential keys, like CTRL or arrow keys. If you are going to spend time in the Unix shell, you should probably have a hardware keyboard.

Keyboard
My wife has a Bluetooth keyboard that she uses to live-Tweet TV episodes sometimes. It doesn't have a touchpad, and it runs off of AAA batteries instead of being rechargeable, but it does have arrow keys. The F-keys (F1-F10) behave a little strangely, which I will go into more in the next post about configuring the Pi.

The most practical solution would be a keyboard with a dedicated CTRL key, arrow keys, and a touchpad. If I end up using this gear a lot, I might splurge on a cool keyboard.

I spent a little while shopping for 60% mechanical keyboards. Buying a keyboard is definitely outside of the “shit laying around the house” constraint. A retro-gray keyboard would give the iPad some cool Unix style. A 60% keyboard doesn’t solve the touchpad problem, however, and a lot of 60% layouts don’t have dedicated arrow keys.

Battery
The Raspberry Pi doesn’t have a built-in battery, so using it portably requires a battery bank. I have a small collection of batteries thanks to amateur radio and owning a couple of smartphones with terrible battery life.

The Raspberry Pi 4b requires more amperage than the previous models. I have plugged the Pi into the battery for testing purposes, but I haven't tested how long the battery lasts with the Pi plugged in. These batteries can completely charge two tablets from 0 to 100% simultaneously, so I am pretty sure I can get several hours of use with the Pi. Of course, in a situation where I am also charging the iPad and my 4G hotspot, that runtime drops significantly.

And now, the fun begins: Setting up the Raspberry Pi

Using the Raspberry Pi to Turn an iPad into a Real Computer, part 1: Prologue

This Christmas, we upgraded the kids’ iPads, and I inherited my daughter’s old iPad Air 2. I had an iPad years ago, but I didn’t like it.

I like tablets, I just didn't like the iPad. Tablets fill this weird gap between a smartphone and a PC, where you can do what you do on your phone (texts, memes, and games) only more comfortably. A laptop is best used when seated, preferably at a desk or table; it's portable. The smartphone is great when you are out of the house or office and moving around; it's mobile. The tablet fits into that middle space: seated but not at a desk or table, such as in bed, on the couch, on a long flight, or riding a train. Staring at a screen of any kind in a car for a long time makes me nauseous, so I prefer audio for car trips.

I also hate tablets because they come close to doing what a netbook used to, before 10 inch screens went extinct. (Yes, the GPD Pocket is a thing; it's also the price of a gaming laptop. I already have too many laptops as it is, without dropping 12 Benjamins on another one because it's cute.) Netbooks are great for note taking in a meeting or class, or for doing light system administration tasks where you need basic networking tools like ping and ssh, or more serious tools like network scanners and wifi analyzers. Android tablets do ok in this regard, but the lineup of network tools for iOS is not great.

The problem with a tablet is that it isn't a netbook. The problem with a netbook is that it isn't a tablet.

Since inheriting this iPad has cost me nothing (well, I paid for it years ago) I am going to try it again. This time I am also re-creating the netbook experience using recycled technology that I already have. I am trying to create a portable (not necessarily mobile) computing setup that is smaller than a laptop, charges off of 5v DC, does Unix shit reliably, stores files and streams media without Internet access, and fits in my man purse. The theme of this project is “modular off-grid solar powered computing made with shit laying around the house.”

Modular
The essential difference between a tablet and a netbook is the keyboard. The dream is to have either a netbook with a removable screen, or a tablet with a detachable keyboard. Those purpose-built devices are nice, but they are also expensive. For this hand-me-down project I decided to kludge pieces together instead.

Using a tablet keyboard is usually pretty lame, especially a keyboard with no touchpad. Taking my hand off the keyboard to touch the screen is a major distraction. I had a touch screen laptop for years and rarely used that feature. I think I have a decent little Bluetooth keyboard somewhere, one with that ThinkPad nipple-looking thing. It’s probably sitting in a box with a bunch of broken tablets.

As much as I dislike membrane keyboards, they will be significantly better than typing on the iPad when it is propped up in that tilted landscape mode.

Off grid
Traveling with a laptop can be kind of a waste, especially when you end up not using it very much. Wasted suitcase space isn't that big a deal anymore, since I haven't been on an airplane in a couple of years. These days, the traveling I do is outdoor stuff like car trips and camping. Tech in these scenarios is great for keeping the kids busy when it's rainy or cold, or on long car rides. We travel a few times each year to my in-laws' lake house, where there is tons of nature but not much access to the Internet. Offline media requires the kind of storage that tablets are notoriously short on.


In the before times, when international travel was a thing, I used a cheap Android tablet and a Chromebook. The Chromebook had a real keyboard and real web browser, while the Android could run arbitrary apps from the Google Play Store. The combination was a decent small toolkit. Between having kids and COVID, I haven't gone overseas in several years. All that gear is probably obsolete now anyway.

I hate electronic waste, and yet I seem to produce a lot of it.

Shit laying around the house
This project began as a plan to reuse a hand-me-down iPad. I set the old iPad up purely to get access to FaceTime, and as I loaded my old apps on it, I discovered that it was still decently powerful.

I have also collected a few Raspberry Pi’s over the years. I have done maker stuff with them, used them to demonstrate things at 2600, including a Pi PBX one time as a proof of concept. They’re handy little things. As I get more into amateur radio, Pi’s come in handy for different digital and packet modes.

The Pi also runs off 5v DC, albeit at higher than 2A. This isn't a problem with modern phone chargers and portable battery banks, of which I also have a couple.

Adding Solar
Amateur radio has taught me about the importance of charging batteries in the field. “Field rechargeable” is probably a better term than “solar”. Solar is more of a guideline. If something can charge from USB, you can probably charge it off of solar. If you can charge it off of solar, you can probably charge it off either 5v USB, or 12v car electrical. Wall and car chargers for smartphones are great sources of USB power, and in the family travel scenario, car and wall power make more sense. USB ports in computers can also charge USB devices, although they tend to do it very slowly. The Pi 4 can’t run reliably from a laptop USB.

I have a folding solar panel with a 12v power output and USB outputs. I normally use it to charge my portable solar generator. That’s a stupid name for the device. It’s just a big 12v battery, it doesn’t generate anything. I already have a collection of USB battery banks laying around the house, so one of those should run the Pi for a pretty long time. I even have a USB battery bank with an integrated solar panel, though it takes multiple days worth of good sunlight to fully charge it. I haven’t tried laying out the solar panel with the solar banks plugged into it to see how it charges, but I am hoping to try it out when the weather is nicer.

Stay tuned for the next installment where I get started configuring the iPad.

Modernizing the smuggling operation

The winter holidays are a depressing time for me, so the last month or so of the year I like to really throw myself into a game or project. After being ridiculed by one of my DnD bros about how inefficient and antiquated my piracy set up is, I decided to modernize by adding applications to automate the downloading and organizing of my stolen goods.

My old-fashioned manual method was to search trackers like YTS, EZTV, or The Pirate Bay and then add the magnet links to a headless Transmission server that downloads the files to an NFS share on my file server. Once a download completed, I would copy the files to their final destination, which is a Samba share on the same file server. I don't download directly to the Samba share because BitTorrent is rough on hard drives. My file server has several disks; some of them are nice (expensive) WD Red or Seagate IronWolf disks, and some are cheap no-name drives. I use the cheap drives for BT and other forms of short-term storage, and the nicer drives for long-term storage.

For media playback, I used a home theater PC that would access the shared folder, and then play the files in VLC media player. This was the state of the art in 2003, but the game has gotten more fierce.

My new piracy stack now consists of Radarr, Sonarr, Lidarr, and Jackett. These are dedicated web apps for locating (Jackett), downloading (Transmission), and organizing movies (Radarr), TV (Sonarr), and music (Lidarr). Once the media is downloaded, sorted, and properly renamed, it gets streamed to various devices using Plex.

Rather than run a different VM or Linux container for each app, a friend recommended that I use Docker. Docker is a way of packaging applications up into "containers." It has taken me a good while to get my mind around the difference between a Linux container and a Docker container. I don't have a good answer, but I do have a hot take: if you want an application/appliance that you can roll into a file and then deploy to different hosts, you can ask one of two people to do it for you: a system administrator or a developer. If you ask a system administrator to do it, you will end up with LXC, where the Linux config is part of the package, and that package behaves like a whole server with RAM, an IP address, and something to SSH into. If you ask a developer to do it, you just get the app, a weird abstraction of storage and networking, and never having to deal with Unix file permissions. And that's how you get Docker.

Because I am a hardware/operating system dude that dabbles in networking, LXC makes perfect sense to me. If you have a virtualization platform, like an emulator or a hypervisor, and you run a lot of similar systems on it, why not just run one Linux kernel and let the other “machines” use that? You get separate, resource efficient operating systems that have separate IP’s, memory allocation, and even storage.

The craziest thing about Docker is that if you start a plain container, like Debian, it will pull the image, configure it, start it up, and then immediately shut it down. And this is the expected behavior.
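You can watch this happen yourself. With no long-running process to keep it alive, the container runs its default command and exits; give it an interactive terminal and it sticks around:

docker run debian
docker run -it debian bash

The first command pulls the image if necessary, starts bash with no terminal attached, and exits immediately. The second starts the same image with an interactive TTY, so you get a shell inside the container until you exit.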

I like that Docker can pop up an application very quickly with almost no configuration on my part. Docker storage and networking feel weird to me, like a bunch of things stapled on after delivering the finished product. From a networking standpoint, there is an internal IP scheme with randomly generated IPs for the containers that reminds me of a home network set up on a consumer-grade router. If you want a container to have access to the "outer" network, you have to map ports to it. Storage is abstracted into volumes, with each container having a dedicated volume with a randomly generated name. You don't mount NFS on each container/volume; instead, you mount it on the host and point the container at the host's mountpoint. It's kind of like NFS, but internal to Docker, using that weird internal network. Also, in typical developer fashion, there is very little regard for memory management. The VM that I am running Docker on has 16GB of RAM, and its utilization is maxed out 24/7. Maybe Docker doesn't actually use that RAM constantly; it just reserves it and manages it internally? It's been chewing through my media collection for a couple of months now, and slowly but surely new things are showing up in Plex. Weird as the stack is, it's still pretty rad.
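To make the port and volume mapping concrete, here is roughly what starting one of these apps looks like; this sketch uses the linuxserver.io Radarr image, and the host paths are placeholders for wherever the NFS share is actually mounted:

# map Radarr's web UI port to the host, and hand the container
# the host's mountpoints instead of mounting NFS inside it
docker run -d --name=radarr \
  -p 7878:7878 \
  -v /mnt/media/movies:/movies \
  -v /mnt/downloads:/downloads \
  linuxserver/radarr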

All the randomly generated shit makes me feel like I don’t have control. I can probably dig a little deeper into the technology and figure out some manual configuration that would let me micromanage all those details, thereby defeating the whole entire purpose of Docker. Instead, I will just let it do its thing and accept that DevOps is a weird blend of software development and system administration.

Adventures in Proxmox Part 3: Chris don’t know shit about networking

When I first started messing with Proxmox, I crashed my home network. If you aren’t interested in the story of my journey of network sexual awakening, click here.

I have since spent the last several months learning about Proxmox networking using VirtualBox. I have also been working on a parallel project: upgrading my home network to be segregated using VLANs. Like my budget for server hardware, my budget for network gear is practically nonexistent, so I have been reusing a lot of things that should have been replaced years ago.

After a bit of consternation, I settled on a prosumer router and a smart switch, rather than a PC-based router and a managed switch. Mostly because I needed this to work for the family as well as for the lab, and I didn’t want to spend weeks relearning Cisco. Time to tear down the old home network!!

A New Router

My plan is to have 4 “real” networks for my “physical” equipment:

  1. The family’s wireless network – for phones, tablets, game consoles, and tv sticks.
  2. My wired network for my personal workstations and servers.
  3. A VOIP network for POE phones, ATAs, and my PBX.
  4. A server and network lab for me to wreck things.

When I say "real" I really mean "operated by humans", or perhaps "not a Proxmox host". When I say "physical" I also mean "operated by humans", or perhaps "not a Proxmox host". At least half of these "real" networks are VLANs, and at least half of these "physical" devices are VMs. In this scenario, "real" and "physical" networks and devices are the ones that I and the family use, as opposed to the networks that are dedicated to the Proxmox cluster.

The critical distinction is that all of these network segments connect to a different port on the router, and have firewall rules to keep them from connecting to each other. In this scenario, a dumb switch plugged into each port of the router will provide a physically separated network at layer 2 (Ethernet) and a logically separated network at layer 3 (IP). It is here that I have used my first batch of dumb old mini switches:

  1. eth1 – Family Wireless, 192.168.10.0/24
  2. eth2 – Personal Wired, 192.168.11.0/24
  3. eth3 – VOIP, 192.168.12.0/24
  4. eth4 – Lab, 192.168.13.0/24

The family wireless network consists of 2 wireless access points, both with 4 dumb gigabit Ethernet ports:

  1. WAP port 1 -> eth1 on the router, uplink to the Internet
  2. WAP port 2 -> eth0 on the NAS appliance
  3. WAP port 3 -> port 1 on the smart switch
  4. WAP port 4 -> port 1 on the other WAP

So, I had my router set up, and plugging a laptop into each dumb switch let me pull an IP from the DHCP server for the respective network segment. I was also able to browse the Internet. Awesome. I have managed to convert a big, clunky, error-prone network into four smaller error-prone networks. This is progress?

As far as the family is concerned, eth1 on the router is the network: wireless access to both the Internet and to the data and media stored on the NAS. If I never plugged in the smart switch, only I would notice. I have the WAP's dumb switch plugged into the smart switch because I have a media server VM on the Proxmox cluster that I want to put onto the wireless network to stream video to tablets, mobile phones, and smart TVs. Because the cluster nodes only have 4 network ports, I need to put multiple network connections onto one of those ports. This is where VLANs come into play. This is also where upgrading my knowledge of routing, switching, and firewalls comes into play with Proxmox: putting the cluster onto all 4 of my network segments using just one network port from each node.

VLANs: everything you hate about dozens of dumb switches, plus virtualization

With the new router working, it’s time to configure the networks’ core: the smart switch.

VLANs are a great way to divide up a big physical switch into smaller virtual networks. A 24-port switch could be broken down into 4 networks, with a varying number of ports in each network. You can also put a single switch port onto more than one VLAN. The network traffic gets put into the appropriate virtual network by using tags. You can even put a given port into "all" of the VLANs; this is sometimes referred to as a "trunk." Trunks are used to connect multiple switches together, passing all tags between them.

Dumb switches can't tag traffic. So, if you want to mix a smart switch that does VLANs with a dumb switch that doesn't, you need to make sure that your untagged traffic is going out of the right ports. In the hypothetical 24-port managed switch in the example above, if you put port 2 into VLAN 2 and then plug a dumb switch into port 2, then port 2 needs to know what to do with untagged traffic. Traffic coming in from the dumb switch won't have tags, and traffic going back out to the dumb switch needs its tags stripped. This is the essence of "VID" and "PVID". A VID is a VLAN ID; a PVID is a Port VLAN ID. All the ports on the smart switch need to treat all traffic as tagged, even when it's not. Untagged traffic needs to be treated differently than tagged traffic, basically meaning that "untagged" is just a special category of "tagged". The PVID is a kind of "untagged == special tag" way for ports to deal with untagged traffic. This is the exact moment that I developed a migraine.

I have done a decent job keeping the family wireless network packets away from everything, and everything away from the family, by putting each network segment on its own dumb switch. Now it is time to blur those boundaries a bit by plugging each of those dumb switches into the smart switch. My network is broken into 4 subnets, so my VLANs will break down something like this:

  • VLAN 1 – Family Wireless
  • VLAN 2 – Personal Wired
  • VLAN 3 – VOIP
  • VLAN 4 – Lab

I probably don't need a separate /24 (class C) network for each VLAN, but I am not very clever, and I have zero confidence in my ability to design networks or IP schemes. I know how routing works when you are using /24s, so for my implementation, VLAN == /24. Also, as I learned in the VirtualBox lab, network designs get real confusing real fast, so having the VLAN tag roughly correspond to the /24 subnet helps me not go completely insane.

The smart switch is configured by a web interface. This interface has a default IP of 192.168.0.1, so I set a static IP on the Ethernet port of my laptop and logged in. This part of the configuration is important, and it will come into play again later. Once I have all the VLANs set up, I still need to be able to access the switch on this IP address.

I configured the first 4 ports on the switch as access ports or up-links to the dumb switches. Because the dumb switches don’t tag traffic, I needed the uplink ports to treat all “untagged” traffic as tagged to a single VLAN, using the PVID:

  • switch port 1 – VLAN 1, PVID 1
  • switch port 2 – VLAN 2, PVID 2
  • switch port 3 – VLAN 3, PVID 3
  • switch port 4 – VLAN 4, PVID 4

So now, if I change port 5 to VLAN 1 and PVID 1, I can plug in my Windows laptop and pull an IP from the wireless network. Then I can change port 5 to VLAN 2 and PVID 2, and now I can pull an IP from the wired network. Now I need to figure out how to get my Prox cluster nodes to sit on all 4 networks at the same time using a single switch port for each node.

Enter the Management Workstation

Up to this point, I was able to set up my dumb switches and my VLANs with a Windows laptop. I just disabled the WiFi and plugged the Ethernet adapter into the various switches and ports. This was fine for scenarios where one switch port corresponded to just one network segment. But it turns out that Windows can’t do VLANs without proper hardware and software support for the NIC. If you have a VLAN-aware NIC and the Intel or HP enterprise app to configure it, I guess it works fine, but there is no Windows 10 app for the Intel NIC in my crashtop.

In my VirtualBox Proxmox lab, I learned that life is just easier when you have a Linux box dedicated to managing the cluster and testing your network setup, so I decided that before I set up the cluster, I should set up a "Management Workstation." For the BoxProx lab, I used a VirtualBox VM running a GUI to administer the cluster because I needed a browser on the host-only network. Technically, I could have run the management workstation without a GUI and just used SSH tunneling to access the web management interfaces for the Proxmox VMs, but I didn't want to spend any time doing stupid SSH tricks. This time around, the actual hardware cluster isn't running yet, so the management workstation needs to be actual hardware. The hope is that once I get the VLANs and network bridges configured, the workstation will be superfluous. Therefore, the workstation doesn't have to be powerful at all. Literally any old laptop or desktop that is laying around will do nicely.

My operating system of choice is Turnkey Linux Core. I set up an old desktop on port 5 of the smart switch. For the initial install, I left port 5 configured for VLAN 1 and PVID 1. I was able to pull an IP address from the wireless network, install and update the OS, and configure SSH.

Remote access is important because I can’t sit in my basement all day; Internet access is important because I need to install some network tools.

First step is to get the VLAN tools installed:

apt-get install vlan

Then enable VLAN support in the kernel:

echo 8021q | tee -a /etc/modules

Then add your tagged network interfaces:

nano /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0

auto eth0.1
iface eth0.1 inet static
    vlan-raw-device eth0
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4

Then reboot the machine. I know there is a bunch of crap that you can do to avoid that, but this is the only way I can be sure that it works. I also know that if you name the interface eth0.N, you probably don't have to set 'vlan-raw-device', but the Debian VLAN tutorial did it, so I did it too.

What this does is change the IP of the untagged interface eth0 to 192.168.0.10 (remember the IP of the switch from before?), add eth0.1 (VLAN 1) with an IP of 192.168.1.10, and configure a default gateway and DNS for that interface.

Now, the machine should still be connected to the Internet, and you can modify port 5 on the smart switch to be in VLAN 1 and PVID 1.

If you can ping the IP for the smart switch (192.168.0.1), the IP of something on your wireless network (like an access point) as well as Google's DNS (8.8.8.8) then you are in good shape.

At this point, I left the basement and went upstairs. I connected my laptop to the family wireless network (192.168.1.0/24) to SSH into the workstation. Since I will be making modifications to the smart switch configuration, as well as the management workstation, I configured PuTTY to open a local port and forward it to 192.168.0.1:80, so that I can access the web interface of the smart switch from my laptop, with the unencrypted HTTP traffic secured by the SSH tunnel.
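For the record, the OpenSSH equivalent of that PuTTY tunnel looks something like this (assuming you log into the workstation as root, which is the Turnkey default):

ssh -L 8080:192.168.0.1:80 root@192.168.1.10

With the tunnel up, http://localhost:8080 on the laptop brings up the switch's web interface.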

Now I just need to move the Internet access to the "Lab" VLAN and add the remaining VLANs to /etc/network/interfaces:

nano /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0

auto eth0.1
iface eth0.1 inet static
    vlan-raw-device eth0
    address 192.168.1.10
    netmask 255.255.255.0

auto eth0.2
iface eth0.2 inet static
    vlan-raw-device eth0
    address 192.168.2.5
    netmask 255.255.255.0

auto eth0.3
iface eth0.3 inet static
    vlan-raw-device eth0
    address 192.168.3.5
    netmask 255.255.255.0

auto eth0.4
iface eth0.4 inet static
    vlan-raw-device eth0
    address 192.168.4.5
    netmask 255.255.255.0
    gateway 192.168.4.1
    dns-nameservers 8.8.8.8 8.8.4.4

The last step is to make sure that smart switch port 5 is part of VLANs 1, 2, 3, and 4, with PVID 1. If all goes well, the workstation can ping the smart switch IP, Google DNS, and servers on all 4 VLANs.

The next step is to use this same network setup for the management NIC on the Proxmox cluster, using the 4 VLAN interfaces for the network bridges (vmbr1-vmbr4).

Building a Proxmox Test Cluster in VirtualBox Part 5: Shit Happened; Lessons Were Learned

Jesus, it’s been almost a year since I posted part 1 of this series.

Hacking stuff is one of the ways that I cope with depression. Like going to the gym and getting stronger, learning new skills is a productive activity that improves my mind and my career. Also like going to the gym, hacking stuff requires a certain level of energy and focus. When I am having a depressive episode, I just can’t make myself do much more than watch TV. I have emerged from my Fallout 4 binge and I am eager to get this hardware cluster off the ground.

Learning Lessons

In my pursuit of a working Virtual Box + Proxmox cluster (Boxmox? ProxBox? BoxProx!) I discovered a few fatal flaws:

  • My testbed is a single laptop, and I used static IP’s that sat on my internal wireless network.
  • That meant that I could hack and demo the cluster at home, but not out in the world, like at Cinci2600.
  • Ergo, the “Management interface sitting on the internal network” question that I excluded from the exercise should not have been excluded.
  • Thus, the laptop-based lab for this project was missing a few things:
    1. 3 “Host Only” networks for the management interface, cluster network, and migration network.
    2. A router VBVM to route traffic bound for the Internet via a NAT interface.
    3. A management workstation VBVM with a GUI, for managing the router and the BoxProx CLI and UI.

The reason that I have been doing all of this in VirtualBox is that it's easy to recover from these sorts of mistakes. You can think of this exercise as the "Lab Before The Lab", or the development phase before going to an actual hardware lab. I actually gave up on keeping my lab environment separate from my home network because I was always limited by one thing or another. At this point, it's as much lab as it is production, pretty much everywhere.

Shit Happening
Another component of this exercise that I have not documented is the redesign of my home/lab network to accommodate the new cluster. The old “cluster” is down to two old Proxmox servers that aren’t clustered together. It works for getting shit done for the family (PBX, Plex, Bittorrent, OpenVPN, etc.) but it’s not optimal, nor is the network sufficiently segregated to my satisfaction. So, as I have been doing this, I have also been upgrading the home network and learning more about things like VLANs.

So, the material in the first 4 parts of the series is still valid; I just wanted to include the router and workstation bits, which you will probably only need if you want your lab to be portable and working on wireless networks other than your home's.

Modification to the network design

In the first installment, I recommended using a bridged adapter for the management interface. This worked great at home, but once I went anywhere else, the wheels fell off the whole process. I tried things like adding a static IP to my wireless adapter in Windows, and I came to the conclusion that Windows just doesn’t do virtual networking like it’s supposed to.

Hal turns on a light, but the bulb is broken. He takes a new light bulb from the shelf, but the shelf is also broken.

So, when you build your PVE hosts, use 3 host-only networks, and use a router VM to connect the cluster to the Internet. Also, be sure to disable the DHCP service on all of your host-only networks.
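If you would rather do that from the command line than click through the GUI, VBoxManage should handle it. Something like this, assuming Linux-style vboxnet names (the exact flags vary a bit between VirtualBox versions, and Windows names its host-only adapters differently):

VBoxManage dhcpserver modify --ifname vboxnet0 --disable
VBoxManage dhcpserver modify --ifname vboxnet1 --disable
VBoxManage dhcpserver modify --ifname vboxnet2 --disable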

The router

I know I have made simple routers from Debian VMs before, but for this experiment I spent a fair amount of time in the weeds. So do yourself a favor and just use PFSense. Yes, it's waaaay overkill for what you want to use it for, but it will route packets between two networks with minimal configuration, and that's really what you want.

Hal gets a screwdriver to fix the shelf, and the drawer is squeaky. He picks up the WD40 but it's empty.

  1. Put the first interface of the PFSense VM on a NAT network.
  2. Make sure to disable the DHCP server on your host-only network interfaces.
  3. Put the second interface for the PFSense VM on the FIRST host-only network interface.
  4. Once you have the VBVM booted up, configure the WAN interface on the NIC that was configured by DHCP, and the LAN interface on the other NIC.
  5. Using the console on the router VBVM, configure the LAN for DHCP. Use a small address pool because there will probably be only one DHCP client ever. Using DHCP is an easy way to make sure that you are looking at the right NIC/virtual network.
  6. I can tell you from experience that if you find yourself twiddling with PFSense settings, you are doing it wrong. Just factory reset the config and move on. This is a BoxProx lab, not a PFSense lab.

The Workstation

Ok, so now you have a small network on host-only adapter 1, and a router that connects it to the NAT network on your computer. All these NATs make the cluster network portable, but all but useless for anything else. That's fine. All you want at this point is for your Linux workstation VBVM to access the Internet despite the fact that its only network interface is sitting on a host-only network.

Lois asks Hal to fix the light bulb and he is under the car yelling.

For the management workstation, you don’t need more than a browser and an SSH client, so literally any distro will work for you. I am a Debian guy, so when I want a no-frills GUI workstation with zero time spent configuring, I use one of the Ubuntu breeds meant for low end computers, like Lubuntu or Ubuntu Mate.

Regardless of the distro, you will be doing some repetitive typing in SSH. On Windows, I recommended MobaXTerm so you can type into multiple terminals at the same time and feel like a super hacker. In the Linux world, the app that you want is called "Terminator". Like everything else on this blog, there is way more to Terminator than I will bother with. Just know that you can split your terminal into two equal parts horizontally or vertically by right-clicking, and you can turn broadcasting of your keystrokes to all terminals on and off by pressing ALT+A and ALT+O respectively. Sorry Terminator/TMux/TWM fans, but I got shit to do.

This phase of the lab is a success if you can boot your Linux VBVM and use a browser to access Google as well as the web UIs for PFSense. You are now free to begin the lab again from Part 1.

Building a Proxmox Test Cluster in VirtualBox Part 4: Containers, Storage, and Replication

In the last installment in this series, I showed you how to build a cluster with separate interfaces dedicated to the cluster heartbeat traffic and to virtual machine migration. In order to see these in action, you need your PVE hosts to run some VMs of their own. Once there are three VMs running, we can play with the really great features of a Proxmox cluster, like storage replication and high availability.

During the initial setup of the Proxmox hosts on VirtualBox, you probably saw an error message about KVM virtualization. I dismissed it as not a big deal. The truth is that this whole "virtualization inside of virtualization" exercise won't produce any useful PVEVMs. I haven't done much troubleshooting, but I am fairly certain that while the PVEVMs boot up and you can log into their consoles, they don't really talk to each other or to your physical internal network. That's not as cool as I had hoped, but the point of this exercise was to figure out the network design for a Proxmox cluster, not to play with cool double-layer virtual machines. I am sure that you can do some cool router Kung Fu to get them going for real, but for this exercise, PVEVMs booting up is enough. We're just going to move them around the cluster to see how it all works.

Also, because each PVE cluster node is woefully under-powered, RAM is really at a premium.

So even if they did work as advertised, the PVEVM’s probably wouldn’t work well. Linux Containers are awesomely efficient, especially with RAM, but they can’t squeeze blood from a stone.

Downloading Templates and Building Containers

To build your PVEVMs you need to download a container template. It can be any of them; I prefer the LXC Debian 9 template or the Turnkey Linux Core template. How you want to go about building the test PVEVMs is your business, but the goal is 3 PVEVMs up and running. At different points in this exercise there will be one PVEVM running on each PVE cluster node, and all 3 PVEVMs running on one node. It's up to you how you build them out:

  • Download the template to one node, build the container, and then migrate it to another node, OR
  • Download the template to each node and build each PVEVM from there, OR
  • Download the template, build the container, and then clone it.
  • The process is slow as hell no matter how you slice it.

Once you have decided on your approach, build 3 new containers. It takes a long time, but it demonstrates your need for a central data store for non-VM files. This is where a NAS would be handy. You could set up the NAS to store ISOs, container templates, and backups so that they would be accessible to all the nodes.

It’s important to note that you should build privileged containers. You do this by UN-checking the UN-privileged box. It’s stupid; I know.

At this point, you can migrate a container from one node to the other, but it takes a long time because you have to wait for the container to completely shut down, and then for the container files to completely copy from one cluster node to the next. On real hardware, this process will probably go a bit faster, but this is a good illustration of why we need storage replication.

Storage Replication

Before we can enable replication, we need to set up the ZFS storage properly. In the Building the Cluster Nodes post, we set up a ZFS array called ZStore, and now it’s time to set it up for the whole cluster.

In the Storage View for the cluster you have the option of adding storage. Here you will add a ZFS type and include the zstore/vmdata. Make sure to add all 3 nodes. I called mine “ZVMZ.” At this point it should be apparent that a RAID1 mirror, that is going to be replicated to 2 other mirrors is probably overkill in terms of redundancy, so you should probably do something different on your real hardware. If you have small but fast disks, you might RAID0 them to get nice write speeds, then replicate them for redundancy. Or do whatever.

Once the ZFS storage is set up for all of your nodes, you can move the storage for your PVEVMs to the ZVMZ storage. This will take a long time as well because everything has to copy over. This should be the last time you have to sit through a full copy of anything. Now we can set up replication.

Replication is done on a per-VM and per-host basis, so you will want to make sure that each VM has a job created to replicate to each of the other nodes. You only need two jobs for each VM. If you migrate a VM to another host, the replication job will update. The first time a job runs it will take a while. There isn't a progress bar or anything, so you will have to check the ZVMZ storage on each node to make sure that there are copies of all of your VMs.
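Replication jobs can also be created from the shell with pvesr. A sketch for a container with VMID 100 that lives on prox1, replicating to the other two nodes every 15 minutes (the job IDs and node names are whatever yours actually are):

pvesr create-local-job 100-0 prox2 --schedule "*/15"
pvesr create-local-job 100-1 prox3 --schedule "*/15"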

With replication set up, even if a cluster node fails, you will only lose 15 minutes (assuming you went with the default schedule) worth of data on the server and you can start up your server on another node with the snapshot. Migrate some VMs and see for yourself, and stay tuned for the next installment: High Availability.

Building a Proxmox Test Cluster in VirtualBox Part 3: Building The Cluster

In the last installment of this series, I discussed setting up the Proxmox VE hosts. Until now, you could do most of this configuration in triplicate with MobaXTerm. You can still use its multi-execution mode here; just be sure to disable it when you have to customize the configs for each host. This part of the process is a lethal combination of being really repetitive while also requiring a lot of attention to detail. This is also the point where it gets a bit like virtualization-inception: VirtualBox VMs which are PVE hosts to PVEVMs.

Network Adapter Configuration
I did my best to simplify the network design:

  • There are 3 PVE hosts with corresponding management IP’s:
    1. prox1 – 192.168.1.101
    2. prox2 – 192.168.1.102
    3. prox3 – 192.168.1.103
  • Each PVE host has 3 network adapters:
    1. Adapter 1: A Bridged Adapter that connects to the [physical] internal network.
    2. Adapter 2: Host only Adapter #2 that will serve as the [virtual] isolated cluster network.
    3. Adapter 3: Host only Adapter #3 that will serve as the [virtual] dedicated migration network.
  • Each network adapter plugs into a different [virtual] network segment with a different ip range:
    1. Adapter 1 (enp0s3) – 192.168.1.0/24
    2. Adapter 2 (enp0s8) – 192.168.2.0/24
    3. Adapter 3 (enp0s9) – 192.168.3.0/24
  • Each PVE host's IP on each network roughly corresponds to its hostname:
    1. prox1 – 192.168.1.101, 192.168.2.1, 192.168.3.1
    2. prox2 – 192.168.1.102, 192.168.2.2, 192.168.3.2
    3. prox3 – 192.168.1.103, 192.168.2.3, 192.168.3.3

I have built this cluster a few times and my Ethernet adapter names (enp0s3, enp0s8, and enp0s9) have always been the same. That may be a product of all the cloning, so YMMV. Pay close attention here because this can get very confusing.

Open the network interface config file for each PVE host:

nano /etc/network/interfaces

You should see the entry for your first Ethernet adapter (the bridged adapter in VirtualBox), followed by the virtual machines' bridge interface with the static IP that you set when you installed Proxmox. This is a Proxmox virtual Ethernet adapter. The last two entries should be your two host only adapters, #2 and #3 in VirtualBox. These are the adapters that we need to modify. The file for prox1 probably looks like this:

auto lo
iface lo inet loopback

iface enp0s3 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.101
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports enp0s3
        bridge_stp off
        bridge_fd 0

iface enp0s8 inet manual

iface enp0s9 inet manual

Ignore the entries for lo, enp0s3, and vmbr0. Delete the last two entries (enp0s8 and enp0s9) and replace them with this:

#cluster network
auto enp0s8
iface enp0s8 inet static
        address 192.168.2.1
        netmask 255.255.255.0
#migration network
auto enp0s9
iface enp0s9 inet static
        address 192.168.3.1
        netmask 255.255.255.0

Now repeat the process for prox2 and prox3, changing the last octet of the IP's to .2 and .3 respectively. When you are finished, your nodes should be configured like so:

  1. prox1 - 192.168.1.101, 192.168.2.1, 192.168.3.1
  2. prox2 - 192.168.1.102, 192.168.2.2, 192.168.3.2
  3. prox3 - 192.168.1.103, 192.168.2.3, 192.168.3.3

Save your changes on each node. Then reboot each one:

shutdown -r now

Network Testing
When your PVE hosts are booted back up, SSH into them again. Have each host ping every other host IP to make sure everything is working:

ping -c 4 192.168.2.1;\
ping -c 4 192.168.2.2;\
ping -c 4 192.168.2.3;\
ping -c 4 192.168.3.1;\
ping -c 4 192.168.3.2;\
ping -c 4 192.168.3.3

The result should be 4 replies from each IP on each host with no packet loss. I am aware that each host is pinging itself twice. But you have to admit it looks pretty bad ass.

Once you can hit all of your IP's successfully, it's time to make sure that multicast is working properly. This isn't a big deal in VirtualBox, because the virtual switches are configured to handle multicast correctly, but it's important to see the test run so you can do it on real hardware in the future.

First send a bunch of multicast traffic at once:

omping -c 10000 -i 0.001 -F -q 192.168.2.1 192.168.2.2 192.168.2.3

You should see a result that is 10000 sent with 0% loss. Like so:

192.168.2.1 :   unicast, xmt/rcv/%loss = 9406/9395/0%, min/avg/max/std-dev = 0.085/0.980/15.200/1.940
192.168.2.1 : multicast, xmt/rcv/%loss = 9406/9395/0%, min/avg/max/std-dev = 0.172/1.100/15.975/1.975
192.168.2.2 :   unicast, xmt/rcv/%loss = 10000/9991/0%, min/avg/max/std-dev = 0.091/1.669/40.480/3.777
192.168.2.2 : multicast, xmt/rcv/%loss = 10000/9991/0%, min/avg/max/std-dev = 0.173/1.802/40.590/3.794

Then send a sustained stream of multicast traffic for a few minutes:

omping -c 600 -i 1 -q 192.168.2.1 192.168.2.2 192.168.2.3

Let this test run for a few minutes. Then cancel it with CTRL+C.

The result should again be 0% loss, like so:

root@prox1:~# omping -c 600 -i 1 -q 192.168.2.1 192.168.2.2 192.168.2.3
192.168.2.2 : waiting for response msg
192.168.2.3 : waiting for response msg
192.168.2.3 : joined (S,G) = (*, 232.43.211.234), pinging
192.168.2.2 : joined (S,G) = (*, 232.43.211.234), pinging
^C
192.168.2.2 :   unicast, xmt/rcv/%loss = 208/208/0%, min/avg/max/std-dev = 0.236/1.488/6.552/1.000
192.168.2.2 : multicast, xmt/rcv/%loss = 208/208/0%, min/avg/max/std-dev = 0.338/2.022/7.157/1.198
192.168.2.3 :   unicast, xmt/rcv/%loss = 208/208/0%, min/avg/max/std-dev = 0.168/1.292/7.576/0.905
192.168.2.3 : multicast, xmt/rcv/%loss = 208/208/0%, min/avg/max/std-dev = 0.301/1.791/8.044/1.092

Now that your cluster network is up and running, you can finally build your cluster. Up to this point, you have been entering identical commands into all of your SSH sessions. From here on, you can stop using the multi-exec feature of your SSH client.

First, create the initial cluster node on Prox1, like so:

root@prox1:~# pvecm create TestCluster --ring0_addr 192.168.2.1 --bindnet0_addr 192.168.2.0

Then join Prox2 to the new cluster:

root@prox2:~# pvecm add 192.168.2.1 --ring0_addr 192.168.2.2

Followed by Prox3:

root@prox3:~# pvecm add 192.168.2.1 --ring0_addr 192.168.2.3
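Before moving on, it's worth sanity-checking the cluster from any of the nodes:

root@prox1:~# pvecm status
root@prox1:~# pvecm nodes

Both should show all 3 nodes, and status should report that the cluster is quorate.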

One final configuration on Prox1 is to set the third network interface as the dedicated migration network by updating the datacenter.cfg file, like so:

root@prox1:~# nano /etc/pve/datacenter.cfg

keyboard: en-us
migration: secure,network=192.168.3.0/24

Now that the cluster is set up, you can log out of your SSH sessions and switch to the web GUI. When you open the web GUI for Prox1 (https://192.168.1.101:8006), you should see all 3 nodes in your TestCluster.

Now you can manage all of your PVE hosts from one graphical interface. You can also do cool shit like migrating VMs from one host to another, but before we can do that, we need to build a couple of PVEVMs to work with, which I will cover in the next installment: Building PVE Containers.