Remote Access Shenanigans in 2021

A couple of years ago, I wrote about how I get access to my home network. At a previous job, I worked nights for a big financial company with a very restrictive network, and I often needed to connect to the work network from home (which I call telecommuting) and to the home network from work (which I call reverse telecommuting). Most of the time it's to fix stuff; sometimes it's because there is a downtime window: at work, that was at night after everyone had gone home, and at home it's during the day when everyone is at work or school.

My dream is to be able to sit at a desk, anywhere in the world, and do whatever it is that I need to do, with minimal fuss on my part, and with no impact on the people (coworkers and family) that I support. It’s a lofty goal that is beset by overprotective firewalls, pandemics, and crappy laptops.

When in doubt, SSH

Most of my remote administration tasks involve logging in to either a system administration web GUI or a command shell. For that, SSH tunneling works great. I have port 22 open on my firewall and mapped to a Linux server. That host does nothing except serve as a jumpbox into my lab network. Once I can SSH in, I can forward a local port through it to reach my management workstation, which sits on the other VLANs. The reason I don't forward port 22 directly to the management workstation is that I have concerns about my internal VLANs being a single hop from the Internet. It's not really a security measure so much as an obscurity measure.
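
For the curious, the tunnel itself is nothing exotic. A one-off version looks something like this (the hostnames and internal IP are made up for the example):

    # local port 8443 -> the management box's HTTPS, by way of the jumpbox
    ssh -L 8443:192.168.10.50:443 me@home.example.com

Or you can set it up once in ~/.ssh/config and just "ssh mgmt" from wherever you happen to be sitting:

    Host jumpbox
        HostName home.example.com
        User me

    Host mgmt
        HostName 192.168.10.50
        User me
        ProxyJump jumpbox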

I haven’t done much traveling in the last 2 years, and on the one trip that I did take, I didn’t have much time for hacker shit. But when I am away from home, and able to do hacker shit, NeoRouter comes in handy.

NeoRouter on a hosted server

I have also written about cloud hosted VMs. Some of these services are fairly inexpensive but not at all reliable, and some are quite reliable but fairly expensive. I would put Cloud At Cost in the first category and Digital Ocean in the second. Cloud hosting is an important upgrade to my remote access arsenal, because in a world of NAT and firewalls, having something directly connected to the Internet with a static IP is a game changer.

In my network travels, I came across the free tier of Google Compute Engine. It does what it says on the tin: a shared-CPU Linux VM with a static IP. It won't cost you much for the first year, but it is extremely underpowered. Fortunately, NeoRouter provides access to plenty of resources hosted on my Proxmox cluster at home, and the service itself doesn't take much compute power. After the free year, the VM costs me around $4 a month, sometimes closer to $6, to run the box 24×7. You can shave off a dollar or so each month by scheduling downtime; for me that was 12:30am to 6:30am. It took me a couple of hours to get the scheduling working, so I guess it's more about principle than actual savings. If you value your time, just get a Digital Ocean droplet for $5 and change per month and get on with your life.
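
If you want to try the scheduled downtime yourself, one low-tech way to do it is a cron job on a box that is always on (the home jumpbox, say) with the gcloud CLI logged in. The instance name and zone here are just placeholders:

    # stop the cloud VM at 12:30am, bring it back at 6:30am
    30 0 * * * gcloud compute instances stop neorouter-vm --zone=us-east1-b --quiet
    30 6 * * * gcloud compute instances start neorouter-vm --zone=us-east1-b --quiet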

Running NeoRouter on a hosted VM gives me an overlay network that allows my Windows desktops and Linux servers to communicate with each other, even though they are on different physical and logical networks.

I have also begun experimenting with graphical Linux desktops instead of Linux servers or Windows desktops, but I will save that for a later post.

VirtualProx experiments 2021

I’ve written a ton of posts about running a Proxmox cluster in VirtualBox.

Part of why I write these things is to help me record work that I did in the past, kind of like a journal. Part of it is the hope that someone will read it and benefit from it. Mostly, building home lab shit and writing about it is how I cope with… *gestures vaguely*

The new Proxmox 7.x release is out, and Proxmox Backup Server has also been released. So I set up another Proxmox cluster in VirtualBox. Here are some observations and things that I learned from the exercise.

  1. I learned enough about host-only networks in VirtualBox to eliminate the need for a management workstation.
    I am a big fan of setting up a workstation with a GUI to test network configurations during the network construction phase. In the old days of hardware, that meant using an old garbage PC, even though it was a waste of electricity. Now that we have virtualization, I still put a low-powered VM on different subnets for troubleshooting. In those same old days, when I was growing my Unix skills, I almost always used a Windows PC with multiple network cards, because Windows has historically been completely stupid about VLANs and the like.

    Also, using a workstation with an OS that you are very comfortable with lets you focus on what you are learning. Trying to figure out a new OS while also figuring out networking, or virtualization, or scripting/programming is overwhelming. So, in previous labs, I recommended spinning up a basic VM that sat on the host-only network for doing firewall/network administration tasks. Well, no more!

    It turns out that the IP address you assign in the VirtualBox Host Network Manager is just the static IP address that your physical host gets on that network interface. It isn't any sort of network-wide configuration. I know that should have been obvious, but like the management workstation mentioned above, I am figuring this out as I go.

    So, when you are setting up your host-only network interfaces, just pick any IP in the range that you want to use. I love the number 23, so that is the last octet I pick for my physical host. As long as that IP isn't .1, .254, or anything inside your DHCP range, you can use the browser on your host computer to configure the Proxmox cluster. You will still need static IPs and multiple network interfaces for management, clustering, and the like.
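
    If you would rather do it from the command line than the Host Network Manager, the same setup looks roughly like this (the interface name and subnet are just examples):

        # create a host-only interface and give the physical host the .23 address
        VBoxManage hostonlyif create
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.200.23 --netmask 255.255.255.0
        # keep VirtualBox's DHCP server from handing out conflicting addresses
        VBoxManage dhcpserver modify --ifname vboxnet0 --disable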

  2. Doing hardcore system administration tasks via the web UI has gotten a lot better
    My Unix/Linux skills are decent. Not as great as professional sysadmins, but better than most professional IT types. The same goes for my knowledge of networking and virtualization: I can hold a conversation with the folks that specialize in it. So, when I am trying to figure out Proxmox shit, I prefer the web UI so I am not getting lost in the weeds chasing down Linux syntax issues or hunting for obscure things in config files, which I like to call Config File Fuckery(tm).

    You can configure the IPs for your interfaces with the UI, which didn't work well in the past. You can also define your VLANs in the web UI and change their names. I like the VLAN ID/tag to correspond to the third octet of the assigned subnet, so VLAN 200 would get 192.168.200.0/24. Yes, you can do vlan0 -> 192.168.0.0/24, but that's no fun 🙂
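
    For reference, this is roughly what lands in /etc/network/interfaces when you build a VLAN-aware bridge in the web UI and hang a VLAN 200 management address off of it (the NIC name and host address are just examples):

        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports enp0s8
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094

        auto vmbr0.200
        iface vmbr0.200 inet static
            address 192.168.200.11/24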

    I have not yet figured out how to create a ZFS pool on a host using the web UI. You can add the pool as storage in the web UI, but configuring your disks for use in the pool still requires the command line, as far as I can tell.
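
    So for now the disks get done the old fashioned way, something like this (the pool layout and device names are just an example):

        # on the node: build the pool from the shell...
        zpool create -f tank mirror /dev/sdb /dev/sdc
        # ...then register it with the datacenter as VM/container storage
        pvesm add zfspool tank --pool tank --content images,rootdir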

    Creating the cluster in the web UI is super simple now, but specifying a network for VM migration to another cluster node still requires editing the datacenter.cfg file as outlined in Part 3: Building the Cluster.
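
    The edit itself is a one-liner in /etc/pve/datacenter.cfg; the subnet here is just an example of a dedicated migration network:

        migration: secure,network=192.168.50.0/24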

  3. Proxmox Backup Server is just for backups

    Having a dedicated server for hosting backups is a great idea. Normally, I set up an NFS server as shared storage between the nodes, where I put container templates, ISO files, and snapshots of machines.

    Proxmox Backup Server integrates into your Proxmox datacenter as storage, and you can use it as a destination for backups. That part is pretty slick, but you can ONLY set it up as a target for backups.
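
    Hooking it up from a cluster node looks something like this (the server, datastore name, and credentials are all placeholders):

        pvesm add pbs backup1 --server 192.168.200.60 --datastore store1 \
            --username backup@pbs --password 'changeme' \
            --fingerprint 'XX:XX:...'    # the PBS certificate fingerprint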

    The other shared storage stuff doesn't look like it's an option, at least not in the web UI.

    I am sure there is a reason for having one server for backups and another for shared storage, which probably has to do with tape drives. For my use case, I would like to download ISOs and container templates to one place and have them available to all the cluster nodes, which requires an NFS server somewhere. I also want to use shared storage for backups, which could be a Proxmox Backup Server OR the same NFS server that I would need for shared storage.
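
    That NFS piece is a one-liner once the export exists; the server and export path below are made up:

        pvesm add nfs shared-nfs --server 192.168.200.50 --export /srv/proxmox \
            --content iso,vztmpl,backup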

  4. Running a backup server and a NAS seems like a waste
    I have seen forum posts about mounting an NFS share and using it as the datastore. I was more interested in doing the opposite, which is exporting an NFS share to the cluster nodes. It’s Debian Linux under the hood, and I can absolutely just create a directory on the root filesystem and export it. That’s not the point.
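
    For what it's worth, the export itself is only a few commands on the Debian side (the directory and subnet are made up):

        apt install nfs-kernel-server
        mkdir -p /srv/shared
        echo '/srv/shared 192.168.200.0/24(rw,no_subtree_check)' >> /etc/exports
        exportfs -ra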

    I have also seen forum posts where users run the backup server as a VM. This is probably the use case for the NFS data store: keeping the files on a NAS and the backup software on a VM. I am contemplating doing the opposite, which is running the backup server on bare metal, and running the file server as a VM. I already have a hardware NAS that I am currently using as the shared storage for my hardware Proxmox cluster.

    In hardware news, I have acquired 3 rackmount servers for my hardware cluster. I don’t have a rack or anything to put them in, so stay tuned for some DIY rack making!