Using the Raspberry Pi to Turn an iPad into a Real Computer, part 2: The iPad

In my last post, I talked about why I wanted to do this project, which was mostly about not wanting an old iPad to sit in a box with all my other crap. It was also about a clever use of a Raspberry Pi. While 99% of the work in this project is setting up the Pi, this post is about setting up the iPad.

Growing up the iPad
My daughter got the iPad when she was 4. We bought it refurbished from Amazon. As much as I dislike Apple as a company, and as a platform, the quality of their hardware is impressive. We put the iPad in a pink foam bumper case, and it was subjected to all manner of child-induced terrors: spilled milk, sticky fingers, being left to die in random places. Despite being 2 years old when we got it, and her using it for probably 5 years, it’s still in pretty good shape. The only real damage is a cracked corner of the screen.

I cleaned it up, and I put it in a cheap folio case.

The only real modification I made to the iPad was to install a Unix shell app called Blink. Blink has some essential tools like ping and ssh, along with the ability to map hostnames to IP addresses in a manner similar to /etc/hosts.

The app is $20 per year; it’s meant to be used for home automation, and it includes a lot of that. If you aren’t into that, you can just shake your iPad every day or two and keep using the app for free.

The iPad’s on-screen keyboard doesn’t have some essential keys, like CTRL or arrow keys. If you are going to spend time in a Unix shell, you should probably have a hardware keyboard.

My wife has a Bluetooth keyboard that she uses to live-Tweet TV episodes sometimes. It doesn’t have a touchpad, and it runs off of AAA batteries instead of being rechargeable, but it does have arrow keys. The F-keys (F1-F10) behave a little strangely, which I will go into more in the next post about configuring the Pi.

The most practical solution would be a keyboard with a dedicated CTRL key, arrow keys, and a touchpad. If I end up using this gear a lot, I might splurge on a cool keyboard.

I spent a little while shopping for 60% mechanical keyboards. Buying a keyboard is definitely outside of the “shit laying around the house” constraint. A retro-gray keyboard would give the iPad some cool Unix style. A 60% keyboard doesn’t solve the touchpad problem, however, and a lot of 60% layouts don’t have dedicated arrow keys.

The Raspberry Pi doesn’t have a built-in battery, so using it portably requires a battery bank. I have a small collection of batteries thanks to amateur radio and owning a couple of smartphones with terrible battery life.

The Raspberry Pi 4B requires more amperage than the previous models. I have plugged the Pi into the battery for testing purposes, but I haven’t tested how long the battery lasts with the Pi plugged in. These batteries can completely charge two tablets from 0 to 100% simultaneously, so I am pretty sure I can get several hours of use with the Pi. Of course, in a situation where I am also charging the iPad and my 4G hotspot, that runtime drops significantly.

And now, the fun begins: Setting up the Raspberry Pi

Using the Raspberry Pi to Turn an iPad into a Real Computer, part 1: Prologue

This Christmas, we upgraded the kids’ iPads, and I inherited my daughter’s old iPad Air 2. I had an iPad years ago, but I didn’t like it.

I like tablets, I just didn’t like the iPad. Tablets fill this weird gap between a smartphone and a PC, where you can do what you do on your phone (texts, memes, and games), only more comfortably. A laptop is best used when seated, preferably at a desk or table; it’s portable. The smartphone is great when you are out of the house or office and moving around; it’s mobile. The tablet fits into that middle space: seated but not at a desk or table, such as in bed, on the couch, on a long flight, or riding a train. Staring at a screen of any kind in a car for a long time makes me nauseous, so I prefer audio for car trips.

I also hate tablets because they come close to doing what a netbook used to, before 10 inch screens went extinct. (Yes, the GPD Pocket is a thing; it’s also the price of a gaming laptop. I already have too many laptops as it is, without dropping 12 Benjamins on another one because it’s cute.) Netbooks are great for note taking in a meeting or class, or for doing light system administration tasks where you need basic networking tools like ping and ssh, or more serious tools like network scanners or wifi analyzers. Android tablets do ok in this regard, but the lineup of network tools for iOS is not great.

The problem with a tablet is that it isn’t a netbook. The problem with a netbook is that it isn’t a tablet.

Since inheriting this iPad has cost me nothing (well, I paid for it years ago) I am going to try it again. This time I am also re-creating the netbook experience using recycled technology that I already have. I am trying to create a portable (not necessarily mobile) computing setup that is smaller than a laptop, charges off of 5v DC, does Unix shit reliably, stores files and streams media without Internet access, and fits in my man purse. The theme of this project is “modular off-grid solar powered computing made with shit laying around the house.”

The essential difference between a tablet and a netbook is the keyboard. The dream is to have either a netbook with a removable screen, or a tablet with a detachable keyboard. Those purpose-built devices are nice, but they are also expensive. For this hand-me-down project I decided to kludge pieces together instead.

Using a tablet keyboard is usually pretty lame, especially a keyboard with no touchpad. Taking my hand off the keyboard to touch the screen is a major distraction. I had a touch screen laptop for years and rarely used that feature. I think I have a decent little Bluetooth keyboard somewhere, one with that ThinkPad nipple-looking thing. It’s probably sitting in a box with a bunch of broken tablets.

As much as I dislike membrane keyboards, they will be significantly better than typing on the iPad when it is propped up in that tilted landscape mode.

Off grid
Traveling with a laptop can be kind of a waste, especially when you end up not using it very much. Wasted suitcase space isn’t that big a deal anymore, since I haven’t been on an airplane in a couple of years. These days, the traveling I do is outdoor stuff like car trips and camping. Tech in these scenarios is great for keeping the kids busy when it’s rainy or cold, or on long car rides. We travel a few times each year to my in-laws’ lake house, where there is tons of nature but not much access to the Internet. Offline media requires the kind of storage that tablets are notoriously short on.

In the before times, when international travel was a thing, I used a cheap Android tablet and a Chromebook. The Chromebook had a real keyboard and a real web browser, while the Android could run arbitrary apps from the Google Play Store. The combination was a decent small toolkit. Between having kids and COVID, I haven’t gone overseas in several years. All that gear is probably obsolete now anyways.

I hate electronic waste, and yet I seem to produce a lot of it.

Shit laying around the house
This project began as a plan to reuse a hand-me-down iPad. I set the old iPad up purely to get access to FaceTime, and as I loaded my old apps on it, I discovered that it was still decently powerful.

I have also collected a few Raspberry Pis over the years. I have done maker stuff with them, and used them to demonstrate things at 2600, including a Pi PBX one time as a proof of concept. They’re handy little things. As I get more into amateur radio, Pis come in handy for different digital and packet modes.

The Pi also runs off 5v DC, albeit at higher than 2A. This isn’t a problem with modern phone chargers and portable battery banks, of which I also have a couple.

Adding Solar
Amateur radio has taught me about the importance of charging batteries in the field. “Field rechargeable” is probably a better term than “solar”. Solar is more of a guideline. If something can charge from USB, you can probably charge it off of solar. If you can charge it off of solar, you can probably charge it off either 5v USB, or 12v car electrical. Wall and car chargers for smartphones are great sources of USB power, and in the family travel scenario, car and wall power make more sense. USB ports in computers can also charge USB devices, although they tend to do it very slowly. The Pi 4 can’t run reliably from a laptop USB.

I have a folding solar panel with a 12v power output and USB outputs. I normally use it to charge my portable solar generator. That’s a stupid name for the device; it’s just a big 12v battery, and it doesn’t generate anything. I already have a collection of USB battery banks laying around the house, so one of those should run the Pi for a pretty long time. I even have a USB battery bank with an integrated solar panel, though it takes multiple days worth of good sunlight to fully charge it. I haven’t tried laying out the solar panel with the solar banks plugged into it to see how it charges, but I am hoping to try it out when the weather is nicer.

Stay tuned for the next installment where I get started configuring the iPad.

The Decline of Facebook

I have read and listened to Joseph Bernstein talk about Facebook being a propaganda tool. There is a lot of defining what misinformation is and what disinformation does, but it’s mostly political. In my opinion, that makes it propaganda. My takeaway from Bernstein is that the existential threat of Facebook is also propaganda, from Facebook.

One very interesting fact about the Facebook whistleblower disclosures to the SEC, and one that got almost no press attention, is that she claims, based on internal Facebook research, that they were badly misleading investors in the reach and efficacy of their ads. And to me, the most damaging thing you could say about Facebook is that this kind of industrial information machine doesn’t actually work.

Knowing that some of Facebook’s power is a lie is somewhat reassuring. It’s not reassuring enough for me to go back or anything, but it does help me to not look down on the people that still use it.

I am also a big fan of Cory Doctorow’s writing. A few years ago, he blasted Facebook for basically ruining western civilization in order to make money off of advertising. There’s way more to it than that, but basically the huge troves of information that Facebook gathers in the interest of targeted advertising have caused all kinds of negative externalities. Much in the way that hydrocarbons have destroyed the environment for the purposes of selling cheap plastic shit.

Facebook isn’t a mind-control ray. It’s a tool for finding people who possess uncommon, hard-to-locate traits, whether that’s “person thinking of buying a new refrigerator,” “person with the same rare disease as you,” or “person who might participate in a genocidal pogrom,” and then pitching them on a nice side-by-side or some tiki torches, while showing them social proof of the desirability of their course of action, in the form of other people (or bots) that are doing the same thing, so they feel like they’re part of a crowd.

I have done more than my fair share of my own complaining about Facebook. It would appear that Facebook is not the all-powerful propaganda machine that it has been made out to be, and if its stock price is any indication, Facebook may very well be on the decline.

Facebook now has to somehow retain users who are fed up to the eyeballs with its never-ending failures and scandals, while funding a pivot to VR, while fending off overlapping salvoes of global regulatory challenges to its business model, while paying a massive wage premium to attract and retain the workers that it needs to make any of this happen. All that, amid an exodus of its most valuable users and a frontal regulatory assault on its ability to extract revenues from those users’ online activities.

Modernizing the smuggling operation

The winter holidays are a depressing time for me, so for the last month or so of the year I like to really throw myself into a game or project. After being ridiculed by one of my DnD bros about how inefficient and antiquated my piracy setup is, I decided to modernize by adding applications to automate the downloading and organizing of my stolen goods.

My old-fashioned manual method was to search trackers like YTS, EZTV, or The Pirate Bay and then add the magnet links to a headless Transmission server that downloads the files to an NFS share on my file server. Once a download completed, I would copy the files to their final destination, which is a Samba share on the same file server. I don’t download directly to the Samba share because BitTorrent is rough on hard drives. My file server has several disks: some of them are nice (expensive) WD Red or Seagate IronWolf drives, and some are cheap no-name drives. I use the cheap drives for BitTorrent and other forms of short-term storage, and the nicer drives for long-term storage.
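That copy step is easily scripted. Here’s a minimal sketch of the idea — the function is mine and the paths are hypothetical examples, so adjust for your own mounts:

```shell
#!/bin/sh
# Move finished downloads off the cheap BitTorrent scratch disk onto the
# long-term share. Paths below are hypothetical examples, not my real ones.
move_completed() {
    src="$1"; dst="$2"
    for f in "$src"/*; do
        [ -e "$f" ] || continue            # nothing finished yet
        cp -R "$f" "$dst"/ && rm -rf "$f"  # copy first, then clear scratch
    done
}

move_completed "${BT_DONE:-/mnt/scratch/complete}" "${MEDIA:-/mnt/media/incoming}"
```

Run it from cron every so often and the scratch disk takes the BitTorrent abuse while the nice drives only ever see a sequential copy.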

For media playback, I used a home theater PC that would access the shared folder, and then play the files in VLC media player. This was the state of the art in 2003, but the game has gotten more fierce.

My new piracy stack now consists of Radarr, Sonarr, Lidarr, and Jackett. These are dedicated web apps for locating (Jackett) and organizing movies (Radarr), TV (Sonarr), and music (Lidarr), with Transmission still handling the actual downloading. Once the media is downloaded, sorted, and properly renamed, it is streamed to various devices using Plex.
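Each of these runs nicely as a container. A rough sketch of part of the stack as a Compose file might look like this — I’m using the linuxserver.io images here, and the host paths and ports are examples, not a recommendation:

```yaml
# docker-compose.yml sketch. Host paths are examples; Lidarr and
# Transmission follow the same pattern.
version: "3"
services:
  jackett:
    image: lscr.io/linuxserver/jackett
    ports: ["9117:9117"]
  sonarr:
    image: lscr.io/linuxserver/sonarr
    ports: ["8989:8989"]
    volumes:
      - /mnt/media/tv:/tv
      - /mnt/media/downloads:/downloads
  radarr:
    image: lscr.io/linuxserver/radarr
    ports: ["7878:7878"]
    volumes:
      - /mnt/media/movies:/movies
      - /mnt/media/downloads:/downloads
```

The important bit is that the apps share the downloads path with Transmission, so the organizers can grab finished files and file them away.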

Rather than run a different VM or Linux container for each app, a friend recommended that I use Docker. Docker is a way of packaging applications up into “containers.” It has taken me a good while to get my mind around the difference between a Linux container and a Docker container. I don’t have a good answer, but I do have a hot take: if you want an application/appliance that you can roll into a file and then deploy to different hosts, you can ask one of two people to do it for you: a system administrator or a developer. If you ask a system administrator, you will end up with LXC, where the Linux config is part of the package, and that package behaves like a whole server with RAM, an IP address, and something to SSH into. If you ask a developer, you get just the app, a weird abstraction of storage and networking, and you never have to deal with Unix file permissions. And that’s how you get Docker.

Because I am a hardware/operating system dude that dabbles in networking, LXC makes perfect sense to me. If you have a virtualization platform, like an emulator or a hypervisor, and you run a lot of similar systems on it, why not just run one Linux kernel and let the other “machines” use that? You get separate, resource-efficient operating systems that have separate IPs, memory allocation, and even storage.

The craziest thing about Docker is that if you start a plain container, like Debian, it will pull the image, configure it, start it up, and then immediately shut it down, because there is no long-running process inside to keep it alive. And this is the expected behavior.
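You can see it with a throwaway container (assuming Docker is installed and your user can talk to it):

```shell
# A container lives exactly as long as its main process. The stock debian
# image just runs bash, and bash with nothing attached exits immediately.
docker run debian
# Keeping stdin and a tty attached (-it) keeps bash, and thus the
# container, alive until you exit the shell.
docker run -it debian bash
```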

I like that Docker can pop up an application very quickly with almost no configuration on my part. Docker storage and networking feel weird to me, though, like a bunch of things stapled on after delivering the finished product.

From a networking standpoint, there is an internal IP scheme with randomly generated IPs for the containers that reminds me of a home network set up on a consumer-grade router. If you want a container to have access to the “outer” network, you have to map ports to it. Storage is abstracted into volumes, with each container having a dedicated volume with a randomly generated name. You don’t mount NFS on each container/volume; instead, you mount it on the host and point the container at the host’s mountpoint. It’s kind of like NFS, but internal to Docker, using that weird internal network.

Also, in typical developer fashion, there is very little regard for memory management. The VM that I am running Docker on has 16GB of RAM, and its utilization is maxed out 24/7. Maybe Docker doesn’t actually use that RAM constantly; it just reserves it and manages it internally? It’s been chewing through my media collection for a couple of months now, and slowly but surely new things are showing up in Plex. Weird as the stack is, it’s still pretty rad.

All the randomly generated shit makes me feel like I don’t have control. I could probably dig a little deeper into the technology and figure out some manual configuration that would let me micromanage all those details, thereby defeating the entire purpose of Docker. Instead, I will just let it do its thing and accept that DevOps is a weird blend of software development and system administration.

Remote Access Shenanigans in 2021

A couple of years ago, I wrote about how I get access to my home network. In a previous job, I worked nights for a big financial company with a very restrictive network. I often connect to the work network from home (which I call telecommuting) and to the home network from work (which I call reverse telecommuting). Most of the time it’s to fix stuff; sometimes it’s because there is a downtime window. At work, that is at night when everyone has gone home; at home, it’s during the day when everyone is at work or school.

My dream is to be able to sit at a desk, anywhere in the world, and do whatever it is that I need to do, with minimal fuss on my part, and with no impact on the people (coworkers and family) that I support. It’s a lofty goal that is beset by overprotective firewalls, pandemics, and crappy laptops.

When in doubt, SSH

Most of my remote administration tasks involve logging in to either a system administration web GUI or a command shell. For that, SSH tunneling works great. I have port 22 open on my firewall and mapped to a Linux server. That host does nothing except serve as a jumpbox into my lab network. Once I can SSH in, I can drop a local port to SSH to my management workstation that sits on the other VLANs. The reason I don’t forward port 22 directly to the management workstation is that I have concerns about my internal VLANs being a single hop from the Internet. It’s not really a security measure so much as an obscurity measure.
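As a sketch, one hop looks something like this. The hostnames and ports are made up; -L opens a local port that gets forwarded, through the jumpbox, to a host and port as the jumpbox sees them:

```shell
# Local port 8006 now reaches the web UI on the management workstation,
# which is otherwise invisible from the Internet. Names are hypothetical.
ssh -L 8006:mgmt-ws.lab:8006 user@jumpbox.example.org
# ...then browse to https://localhost:8006 on this side of the tunnel.
```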

I haven’t done much traveling in the last 2 years, and on the one trip that I did take, I didn’t have much time for hacker shit. But when I am away from home, and able to do hacker shit, NeoRouter comes in handy.

NeoRouter on a hosted server

I have also written about cloud hosted VMs. Some of these services are fairly inexpensive, but not at all reliable, and some of them are quite reliable, but they are very expensive. I would put Cloud At Cost in the first category, and Digital Ocean in the second. Cloud hosting is an important upgrade to my remote access arsenal, because in a world of NAT and firewalls, having something directly connected to the Internet with a static IP is a game changer.

In my network travels, I came across the free tier of Google Compute Engine. It does what it says on the tin: a shared-CPU Linux instance with a static IP. It won’t cost you much for the first year, but it is extremely underpowered. Fortunately, NeoRouter just provides access to plenty of resources hosted on my Proxmox cluster at home, and the service itself doesn’t take much compute power. After the free year, the VM costs me $4 give or take, sometimes going up to almost $6, to run the box 24×7. You can shave off a dollar or so each month by scheduling downtime; for me that was 12:30am to 6:30am. It took me a couple of hours to get that working, which I guess is more about principle than actual savings. If you value your time, just get a Digital Ocean droplet for like $5 and change per month and get on with your life.

NeoRouter running on a hosted VM creates an overlay network that allows my Windows desktops and Linux servers to communicate with each other, even though they are on different physical and logical networks.

I have also begun experimenting with graphical Linux desktops instead of Linux servers or Windows desktops, but I will save that for a later post.

VirtualProx experiments 2021

I’ve written a ton of posts about running a Proxmox cluster in VirtualBox.

Part of why I write these things is to help me record work that I did in the past, kind of like a journal. Part of it is the hope that someone will read it and benefit from it. Mostly, building home lab shit and writing about it is how I cope with… *gestures vaguely*

The new Proxmox 7.x version is out, and Proxmox Backup Server has also been released. So I set up another Proxmox cluster in VirtualBox. Here are some observations and things that I learned from the exercise.

  1. I learned enough about host-only networks in VirtualBox to eliminate the need for a management workstation.
    I am a big fan of setting up a workstation with a GUI to test network configurations during the network construction phase. In the old days of hardware, that meant using an old garbage PC, even though it was a waste of electricity. Now that we have virtualization, I still put a low-powered VM on different subnets for troubleshooting. In those same old days, when I was growing my Unix skills, I almost always used a Windows PC with multiple network cards, because Windows has historically been completely stupid about VLANs and the like.

    Also, using a workstation with an OS that you are very comfortable with lets you focus on what you are learning. Trying to figure out a new OS while also figuring out networking, or virtualization, or scripting/programming is overwhelming. So, in previous labs, I recommended spinning up a basic VM that sat on the host-only network for doing firewall/network administration tasks. Well, no more!

    It turns out that the IP address you assign in the VirtualBox Host Network Manager app is just the static IP address that your physical host has on that network interface. It’s not any sort of network configuration. I know that should have been obvious, but like the management workstation mentioned above, I am figuring this out as I go.

    So, when you are setting up your host-only network interfaces, just pick any IP in the range that you want to use. I love the number 23, so that is the last octet that I pick for my physical host. If you set that IP to something other than .1 or .254, or any IP in your DHCP range, you can use the browser on your host computer to configure the Proxmox cluster. You will still need static IPs and multiple network interfaces for management, clustering, and the like.
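    For the record, you can do the same thing without the GUI. The interface name and addresses here are just examples, following my .23 habit:

```shell
# Create a host-only interface, then give the physical host a static IP
# on it. vboxnet0 and the addresses are examples, not requirements.
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip --netmask
```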

  2. Doing hardcore system administration tasks via the web UI has gotten a lot better
    My Unix/Linux skills are decent. Not as great as professional sysadmins’, but better than most professional IT types’. The same goes for my knowledge of networking and virtualization: I can hold a conversation with the folks that specialize in it. So, when I am trying to figure out Proxmox shit, I prefer the web UI, so I am not getting out into the weeds chasing down Linux syntax issues or finding obscure things in config files, which I like to call Config File Fuckery(tm).

    You can configure the IPs for your interfaces with the UI, which didn’t work well in the past. You can also define your VLANs in the web UI and change their names. I like the VLAN ID/tag to correspond to the third octet of the assigned IP, so VLAN 200 would get an IP with 200 as its third octet. Yes, you can map any VLAN tag to any subnet you like, but that’s no fun 🙂

    I have not yet figured out how to create a ZFS pool on a host using the web UI. You can add the pool as storage in the web UI, but configuring your disks for use in the pool still requires the command line, as far as I can tell.

    Creating the cluster in the web UI is super simple now, but specifying a network for VM migration to another cluster node still requires editing the datacenter.cfg file as outlined in Part 3: Building the Cluster.
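    If memory serves, that edit is a one-liner in /etc/pve/datacenter.cfg — the subnet here is an example, so use your own migration network:

```
# /etc/pve/datacenter.cfg -- send VM migration traffic over a
# dedicated network. The CIDR is an example.
migration: secure,network=
```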

  3. Proxmox backup server is just for backups

    Having a dedicated server for hosting backups is a great idea. Normally, I set up an NFS server as shared storage between the nodes, where I put container templates, ISO files, and snapshots of machines.

    Proxmox Backup Server integrates into your Proxmox datacenter as storage, and you can use it as a destination for backups. That part is pretty slick, but you can ONLY set it up as a target for backups.

    The other shared storage stuff doesn’t look like it’s an option, at least not in the web UI.

    I am sure there is a reason for having one server for backups and another for shared storage, which probably has to do with tape drives. For my use case, I would like to download ISOs and container templates to one place and have it be available to all the cluster nodes, which requires an NFS server somewhere. I also want to use shared storage for backups, which could be a Proxmox Backup server OR the same NFS server that I would need for shared storage.
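    The NFS part of that is tiny, for what it’s worth. On whatever box ends up playing file server, an export along these lines does it — the path and subnet are examples:

```
# /etc/exports fragment -- share one directory with every cluster node.
# Path and subnet are examples; run 'exportfs -ra' after editing.
/srv/pve-share,sync,no_subtree_check)
```

    Then you add it to the datacenter as NFS storage and tick whichever content types (ISO image, container template, backup) you want it to hold.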

  4. Running a backup server and a NAS seems like a waste
    I have seen forum posts about mounting an NFS share and using it as the datastore. I was more interested in doing the opposite: exporting an NFS share to the cluster nodes. It’s Debian Linux under the hood, and I can absolutely just create a directory on the root filesystem and export it. That’s not the point.

    I have also seen forum posts where users run the backup server as a VM. This is probably the use case for the NFS data store: keeping the files on a NAS and the backup software on a VM. I am contemplating doing the opposite, which is running the backup server on bare metal, and running the file server as a VM. I already have a hardware NAS that I am currently using as the shared storage for my hardware Proxmox cluster.

    In hardware news, I have acquired 3 rackmount servers for my hardware cluster. I don’t have a rack or anything to put them in, so stay tuned for some DIY rack making!

Getting in to Ham Radio

I decided to pick up a new quarantine hobby besides playing Fallout 4: amateur radio. I won’t be posting a bunch of radio stuff here because I want to dedicate a site specifically to that. I have also done a bit of traveling, and I want to document those adventures separately too. I did a similar thing with writing about telephones; I didn’t want to clutter up my personal blog with a bunch of radio or telephone shit.

It turns out that amateur radio has a lot of overlap with computers. Hams are big into Raspberry Pis, and I even found a VOIP service specifically for hams!

There is a lot of electronics in amateur radio as well, which is a real weak point for me. I have messed with Arduinos a bit, but I’ve mostly stuck to computers. It’s a fascinating hobby, full of different things to learn about antennas, batteries, and even connecting computers over radio.

So there was an attempted coup

I thought this 2020 bullshit was over with the arrival of 2021.

I probably said this before, but I don’t see how the next couple of years don’t bring about more sectarian violence. My anxiety since the election has been that we are headed for one of 3 outcomes:

  1. A Trump Win – which would ramp up Black Lives Matter protests even more than this summer, with even more crackdowns.
  2. A Biden Win – which ramps up the 2nd amendment and anti-lockdown protests at state houses and the like <-you are here
  3. No Clear Winner – which results in a full blown civil war.

I was not expecting to be right, or for the bullshit to kick in so fast. It really made my head swim.

Now I have to pack up my shit and be ready to go fight somewhere. I’ve been learning about amateur radio, and I think that I could revive some of my military communications skills and serve as a Radio Telephone Operator for BLM protests and the like. I also had some emergency medicine training, so maybe I could put that to use. I’d really like to get through this without a gun, if the universe would allow it.

Machines appearing and disappearing in NeoRouter

I just spent the last half hour scratching my head at a weird problem that I was having with NeoRouter. Two Windows hosts kept appearing and disappearing in my NeoRouter network. Both machines could log in successfully, but neither machine could see the other in the list of computers. They seemed to be knocking each other out of the network.

It turns out that if you clone a Windows machine with NeoRouter pre-installed, you end up with conflicts, even if you set different static IPs for each host. So if you decide to clone hosts, be sure that you install NeoRouter *after* you clone them.

The Back Story

With my new upgraded VLAN home network, plus my quarantine/working-from-home life circumstances, things have changed. In the old flat-network days, I had one desktop computer that ran 24×7 and sat on the same network as all of my servers to support all of my remote access shenanigans. Mostly, the goal of remote access is either:

  1. a shell on a server or router
  2. a webpage on an appliance like a router, switch, or file server, or
  3. a desktop on a Windows machine that would then provide me 1 or 2

With my new network design I have two VLANs for my servers:

  1. a DMZ for things that ultimately face the Internet, and
  2. A personal internal network that is visible to neither the family wireless network nor the Internet

If you will recall, I have a network management workstation that I can use as a jump box to get into each segment. However, this host isn’t accessible via the Internet. For that, I have a couple of Internet-facing hosts that I call ‘hubs’. One is a bottom-tier Google Compute instance; the other is a host sitting in the DMZ with a bunk port forwarded to it. Under the most extreme circumstances, I can tunnel through the Google hub, into the DMZ hub, to get a shell on the network management workstation, where I can either set up a SOCKS proxy for internally hosted web management pages, or drop a remote port for RDP to a Windows host.
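With a reasonably new OpenSSH, that whole chain can collapse into one command. The names here are all made up, but -J handles the hub-to-hub plumbing and -D opens the SOCKS proxy:

```shell
# Two hops with ProxyJump, landing on the management workstation with a
# SOCKS proxy listening locally on 1080. Hostnames are hypothetical.
ssh -J me@google-hub.example.com,me@dmz-hub.example.net -D 1080 me@mgmt-ws.lab
```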


OR, I could just use NeoRouter. When the networking gods are smiling on me, my Windows laptop and Windows desktop can talk to each other directly via the NR overlay network. With NeoRouter, hosts on different VLANs, which are not accessible via the Internet, become accessible to other members of the NR network. When I use Windows or Linux machines that can run browsers, there is no need for Stupid SSH Tricks(tm).

The idea was simple: spin up 2 virtual machines (VMs) running Graphical Desktops (GUIs), one GUIVM on the DMZ network, and one GUIVM on the internal wired network. This way I can do arbitrary tasks sitting on either network by connecting to the appropriate GUIVM. I will call these machines “Portals”. Portal-DMZ will sit on the DMZ network, and Portal-Int will sit on the private internal network.

Since I am spinning these VMs up on Proxmox, I could just build one GUIVM, configure it, and then clone it. I used Windows to get it done fast, but ultimately I would like to conserve RAM by using low powered Linux machines.

Turns out the cloning was the source of my strange problem. Apparently there is some sort of signature that makes each node unique that cannot be duplicated without all hell breaking loose.

Dan Harmon on the Sexuality of Fallout 4

At my core, I am a tabletop role player. I play a lot of video games, but tabletop RPGs are my jam. To paraphrase an obscure Fight Club trailer: after tabletop DnD, playing video games is “like watching porn when you could be having great sex.”

So when I *ahem* play solo video games, there is some part of me that wants to put a backstory to either my choices or the choices the game makes for me. When I married Mjoll the Lioness in Skyrim and her boy Aerin showed up with her, I figured we were doing some kind of Viking-age polyamory type thing.

In Fallout, you get the option to romance a number of your companions, so I just assumed that my dude was a pansexual version of Captain Kirk. Once I loaded the mod that keeps your wife from dying, and she became a companion that you can romance… well, it was Mjoll the Lioness all over again.

Apparently Dan Harmon also plays FO4 and he agonizes over the choice to romance the companions, and his guilt over it is hilarious.

When I play FO4, my survivor identifies as a white male. I run the “Nora Spouse Companion” mod, which allows me to take Nora along as a companion. Once I have reached affinity with her, and then with Codsworth, I run with Preston. I also have the Gunners vs. Minutemen pack from Creation Club, and I take Preston with me for that entire quest, once we have retaken The Castle.

Between retaking the Castle, helping out settlements, and paying the gunners back for the Quincy massacre, I usually hit full affinity with Preston. I then go for the gusto with the romance options and then finally, I set him up as the leader of his own settlement via Sim Settlements, usually Starlight Drive-in.

Preston has had a rough go of things. He watched his idols, the Minutemen, fall apart. He had all of the soldiers and most of the civilians he was responsible for die on his watch, and he ended up in a siege at the Museum of Freedom in Concord. It’s not a stretch to say that when you meet Preston Garvey, it is on the worst day of his life. If anyone deserves love and fire support, it’s Preston.