Adventures in Proxmox Part 1: Words About Boxes

The Proxmox logo
It’s been a few weeks since I exorcised HyperV from my life like an evil demon. I have replaced it with Proxmox and so far it’s been mostly great. With a couple of serious caveats.

Plastic dinosaurs betraying each other.
My transition to Proxmox has been rather involved, not so much because Proxmox is hard to set up (it’s not), but because I am tired of slapping old junky hardware together and hoping it doesn’t die, and then scrambling to fix it when it inevitably betrays me. Unlike most dudes with home servers and labs, I made most of my acquisitions years ago to support an MMO habit. Specifically, multiboxing.

PC case made from peg board.

I call them “computers” because they are computers in the sense that they have CPUs, RAM, and HDDs. But they were low-budget things when they were assembled years ago. The upgrade path works something like this:

  1. A computer begins its life as my main gaming machine that will run my favorite game at a satisfactory speed and resolution.
  2. Then I find a new favorite and upgrade the gaming machine’s guts to run the new game.
  3. The old gaming guts get transplanted into my “server” where they are *barely* able to run a few VMs and things like that.
  4. The final stage is when the server guts are no longer up to the task of running VMs. I then add a few old network cards and the “server” becomes my “router”.
  5. The old router guts then get donated somewhere. They’re not really useful to anyone, so they probably get shipped to Africa where they get mined for gold and copper by children at gunpoint.

Breaking the [Re]Cycle of Violence
Wall-E holding a pile of scrap
In the years since then, I have taken to playing epic single-player games like Skyrim. These games really only need one machine. The rest of the gear I used to run little “servers” for one thing or another, which I have slowly replaced with VMs. The problem with using old junky computers as servers is that you end up running them balls out 24 hours a day. In my search for a replacement VM host, I spent a lot of time researching off-lease servers. My goal was to have 8 cores and 32GB of RAM, with the ability to live migrate VMs to another [lesser] host in an emergency, something that my HyperV setup was lacking. After a lot of consternation, I decided that since a single VM would never actually use more than 4 cores or 8GB of RAM, why not use 2 [or more] desktops?
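For reference, once two or more Proxmox nodes are joined in a cluster, that kind of live migration is a single command on the source node. A minimal sketch, assuming a hypothetical VM with ID 100 and a target node named pve2:

  # live-migrate VM 100 to node pve2 without shutting it down
  # (assumes storage that both nodes can reach)
  qm migrate 100 pve2 --online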

A room full of old PCs.
I found some old off-lease quad-core Intel desktops for about the same retail price as a low-end server processor. I used the RAM from my older gaming machines/VM servers and some hard drives from some old file servers to build out my “new” Proxmox cluster. With two quad-core desktops running maxed-out memory (16GB each), I managed to satisfy my need to be like the other kids with “8 cores with 32GB of RAM” for about the price of an off-lease server chassis, with the added bonus of having a cluster. The goal is to add nodes to grow the cluster to 16 cores and 64GB of RAM, while also adding clustered storage via Ceph to make use of old hard drives from file servers.
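Building the cluster itself is the easy part. A rough sketch of the commands involved, with the cluster name and IP address being placeholders rather than my actual setup (the first command runs on the first box, the second on the box that is joining):

  # on the first node: create the cluster
  pvecm create homelab

  # on the second node: join it, pointing at the first node's IP
  pvecm add 192.168.1.10

  # on either node: confirm both nodes show up
  pvecm status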

New hot servers is old and busted. Old busted clusters is the new hotness.
For me, the clustered model is better for a number of reasons, and it mostly has to do with modularity:

  1. You can build out your infrastructure one paycheck at a time. Part of the problem with off-lease servers is that while the chassis is cheap, the components that go in it are expensive and/or hard to find. The deal with servers is that the cost of the motherboard and CPU is nothing compared to what you will spend on RAM. I was looking for something I could start using for less than $200, and a refurb desktop and RAM from old gaming boxes got me going at that price point.
  2. Desktops stack on top of each other for free. I don’t have any server or telco racks, so in addition to buying ECC RAM, I would also be buying a rack, rails, and all of the other stuff that goes with them. This would easily eat up my $200 startup budget before I powered on a single box.
  3. Moar boxes == moar resiliency. My gear at home is part lab and part production environment. Yes, I use it to hack stuff and learn new things, but my family also uses it in their daily lives. Network shares stream cartoons; VOIP phones connect friends; keeping these things going is probably as important as my day job. Being able to try bold and stupid things without endangering the “Family Infrastructure” is important to my quality of life.
  4. Scaling out is probably more important than scaling up. A typical I.T. Department/Data Center response to capacity problems is to regularly stand up newer/more powerful [expensive] gear and then dump the old stuff. I guess this is a good approach if you have the budget. It certainly has created a market for used gear. I don’t have any budget to speak of, so I want to be able to increase capacity by adding servers while keeping the existing ones in play. There are still cost concerns with this approach, mainly with network equipment. In addition to upping my server game, I am going to have to up my networking game as well.

It works…ish

I have my two cluster nodes *kind of* working, with most of my Linux guests running as containers, which is very memory- and CPU-efficient. I am running two Windows VMs: PORTAL, for remote access and dynamic DNS, and MOONBASE, which I am using for tasks that need wired network access. All of my desktops are currently in pieces, having donated their guts to the “Cluster Collective”, so I am mostly using my laptop for everything. I am not really in the habit of plugging it into Ethernet, or of leaving it turned on, so for now I am using a VM in place of my desktop for long-running tasks like file transfers.
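For anyone wondering what “running as containers” looks like in practice, Proxmox ships with LXC and a pct command-line tool. A minimal sketch of spinning one up; the template name, storage names, and VM ID are placeholders for whatever your install actually has, not a copy of my config:

  # grab a container template (exact names vary by release)
  pveam update
  pveam available --section system
  pveam download local debian-12-standard_12.2-1_amd64.tar.zst

  # create and start a small container from it
  pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname testbox --cores 1 --memory 512 \
    --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp
  pct start 110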

I say that the cluster is only kind of working because my home network isn’t very well segmented and the cluster heartbeat traffic straight up murders my little switch. It took me a while to figure out the problem: the cluster works for a few days, and then my core switch chokes and passes out, knocking pretty much everything offline. For now, the “cluster” is disabled and the second node is powered off until my new network cards arrive and I can configure separate networks for clustering, storage, and the VMs.
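The plan, once the extra NICs show up, is to give each kind of traffic its own interface and subnet. A rough sketch of what /etc/network/interfaces might look like on one node; the interface names and addresses are assumptions about my eventual layout, not a working config:

  # eth0: VM traffic, bridged so guests can reach the LAN
  auto vmbr0
  iface vmbr0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      gateway 192.168.1.1
      bridge_ports eth0
      bridge_stp off
      bridge_fd 0

  # eth1: corosync/cluster heartbeat on its own subnet
  auto eth1
  iface eth1 inet static
      address 10.10.10.10
      netmask 255.255.255.0

  # eth2: storage traffic (Ceph, migrations) on another subnet
  auto eth2
  iface eth2 inet static
      address 10.10.20.10
      netmask 255.255.255.0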

Coming soon: Adventures in Proxmox part 2: You don’t know shit about networking.
