Building a Proxmox Test Cluster in VirtualBox Part 2: Configuring the Hosts

In the last installment of this series, I discussed setting up the Proxmox VE hosts in VirtualBox. At this stage in the exercise, there should be 3 VirtualBox VMs (VBVMs) running in headless mode.

Before you can set up the cluster, storage replication, and high availability, you need to do a bit of housekeeping on your hosts. In this post, I will go over those steps: making sure that the hosts’ operating systems are up to date, that the network interfaces are set up and communicating with each other, and that your storage is properly configured. Most of these steps can be accomplished via the web UI, but using SSH will be faster and more accurate, especially when you use an SSH client like SuperPuTTY or MobaXterm that lets you type in multiple terminals at the same time.

Log in as root@ip-address for each PVE node. In the previous post, the IPs I chose were 192.168.1.101, 192.168.1.102, and 192.168.1.103.

I don’t want to bog this post down with a bunch of Stupid SSH Tricks, so just spend a few minutes getting acquainted with MobaXterm and thank me later. The examples below will work in a single SSH session, but you will have to paste them into 3 different windows instead of feeling like a superhacker:

Step 1 – Fix The Subscription Thing

No, not the nag screen that pops up when you log into the web UI; I mean the errors that you get when you try to update a PVE host with the enterprise repos enabled.

All you have to do is modify a secondary sources.list file. Open it with your editor, comment out the first line and add the second line:

nano /etc/apt/sources.list.d/pve-enterprise.list

#deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

Save the file, and run your updates:

apt-get update; apt-get -y upgrade

While you are logged in to all 3 hosts, you might as well update the list of available Linux Container templates:

pveam update
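If you want a quick sanity check that the template list actually refreshed, pveam can also show you what is available to download; for example, just the system section:

pveam available --section system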

Finally, if you set up your virtual disk files correctly according to the last post, you can set up your ZFS disk pool:

  1. List your available disks; hopefully you see two 64GB volumes that aren’t in use, on /dev/sdb and /dev/sdc:
    
    root@prox1:~# lsblk
    NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda                  8:0    0   32G  0 disk
    ├─sda1               8:1    0 1007K  0 part
    ├─sda2               8:2    0  512M  0 part
    └─sda3               8:3    0 31.5G  0 part
      ├─pve-swap       253:0    0  3.9G  0 lvm  [SWAP]
      ├─pve-root       253:1    0  7.8G  0 lvm  /
      ├─pve-data_tmeta 253:2    0    1G  0 lvm
      │ └─pve-data     253:4    0   14G  0 lvm
      └─pve-data_tdata 253:3    0   14G  0 lvm
        └─pve-data     253:4    0   14G  0 lvm
    sdb                  8:16   0   64G  0 disk
    sdc                  8:32   0   64G  0 disk
    sr0                 11:0    1  642M  0 rom
    root@prox1:~#
    

    Assuming you see those two disks, and they are in fact ‘sdb’ and ‘sdc’, then you can create your zpool, which you can think of as a kind of software RAID array. There’s way more to it than that, but that’s another post for another day, when I know more about ZFS. For this exercise, I wanted to make a simulated RAID1 array, for “redundancy.” Set up the drives in a pool like so:

    
    zpool create -f -o ashift=12 z-store mirror sdb sdc   # mirrored pool, ashift=12 for 4K sectors
    zfs set compression=lz4 z-store                       # cheap lz4 compression on the whole pool
    zfs create z-store/vmdata                             # dataset that will hold the VM data
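
    To confirm that the mirror came up healthy and that the dataset exists, a quick check never hurts (pool and dataset names as created above):

    zpool status z-store
    zfs list -r z-store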
    

    In a later post, we will use the zpool on each host for Storage Replication. The PVEVM files for each of your guest machines will be copied to the other hosts at regular intervals, so when you migrate a guest from one node to another it won’t take long. This feature pairs very well with High Availability, where your cluster can determine if a node is down and spin up the PVEVMs that went down with it.

    Now that your disks are configured, it’s time to move on to Part 3: Building A Cluster Network.


Building a Proxmox Test Cluster in VirtualBox Part 1: Building The Hosts

In my last post, I set the stage for why I built the VirtualBox cluster, and now it is time to discuss the how.

In researching the best way to design a network for a Proxmox cluster, I found that the bare minimum is one network connection. This one link does the following:

  1. Hosts the web server for the management GUI – The web UI is pretty slick, and it’s great for viewing stats and checking for errors.
  2. Hosts the network bridge for guest VMs – This bridge acts as a kind of virtual network switch for your PVEVMs to talk to the outside world.
  3. Connects the host to the Internet – The PVE host needs to download security updates, Linux container templates, and install packages.

This one network interface is sort of the lifeline for a Proxmox host. It would be a shame if that link got bombed by incessant network traffic. As I discovered (the hard way) one possible source of incessant network traffic is the cluster communication heartbeat. Obviously, that traffic needs to go on its own network segment. Normally, that would be a VLAN or something, but I have some little dumb switches and the nodes have some old quad port NICs, so I wanted to just assign an IP to one port, and plug that port into a switch that is physically isolated from “my” network.

Once a cluster is working, migrating machines happens over the cluster network link. This is OK, but if your cluster network happens to suck (like when some jackass plugs it into a 10 year old switch) it can cause problems with determining if all the cluster nodes are online. So, now I want to set up an additional interface for VM migration. Migration seems like the kind of thing that happens only occasionally, but when you enable Storage Replication, the nodes are copying data every 15 minutes. Constant cluster chatter, plus constant file synchronization, has the potential to saturate a single network link. This gets even worse when you add High Availability, where there is a constant vote on whether a PVEVM is up and running, followed by a scramble to get it going on another node.

So, at minimum we will need 3 network interfaces for the test cluster on VirtualBox. I didn’t want to spend a lot of time tinkering with firewall and NAS appliances, so I am leaving the “Prox management on its own network segment” and the “Dedicated network storage segment” discussions out of this exercise. I can’t decide if the management interface for my physical Proxmox cluster should sit on my internal network, or on its own segment. For this exercise, the management interface is going to sit on the internal network. My Synology NAS has 4 network ports, so I am definitely going to dedicate a network segment for the cluster to talk to the NAS, but that won’t be a part of this exercise.

[Virtual] Hardware Mode(tm)

Once you are booted up and VirtualBox is running, you can start building your VBVMs. I recommend building one VBVM to use as a template and then cloning it 3 times. I found that I kept missing important things and having to start over, so better to fix the master and then destroy the clones.

I called my master image “proxZZ” so it showed up last in the list of VBVMs. I also never actually started up the master image, so it was always powered off and the ZZ’s made it look like it was sleeping.

Create proxZZ with the following:

  • First, make sure that you have created 2 additional Host Only Network Adapters in VirtualBox, for a total of three. In this exercise you will only use two of them, but it can get confusing when you are trying to match enp0s9 to something, so do yourself a favor and make three. Make sure to disable the DHCP server on all of them.
  • Create a new virtual machine with the following characteristics :
    1. Name: ProxZZ
    2. Type: Linux
    3. Version: Debian 64bit (Proxmox is Debian under the hood.)
    4. Memory Size: 2048MB
    5. Hard drive: dynamically allocated, 32GB in size.
  • Make sure that you have created 3 total virtual hard disks as follows:
    1. SATA0: 32GB. This will be your boot drive and system disk. This is where Proxmox PVE will be installed. Fixed-size disks are supposed to be faster, but this isn’t even remotely about speed. My laptop has a 240GB SSD, so I don’t have a ton of space to waste.
    2. SATA1: 64GB, dynamically allocated. This will be one of your ZFS volumes.
    3. SATA2: 64GB, dynamically allocated. This will be your other ZFS volume. Together they will make a RAID1 array.
  • While you are in the storage tab, make sure to mount the Proxmox installer ISO.
  • Make sure that you have created 3 network interfaces as follows:
    1. Adapter 1: Bridged Network – this will be your management interface.
    2. Adapter 2: Host Only Network Adapter #2 – this will be your cluster interface.
    3. Adapter 3: Host Only Network Adapter #3 – this will be your VM migration interface.
    4. You may be tempted to do something clever like unplugging virtual cables or something. Don’t. You will be cloning this machine in a minute and you will have a hard time keeping all of this straight.
  • Before you finish, make sure that the machine is set to boot from the hard drive first, followed by the CD/optical drive. This seems stupid, but you will be booting these things in headless mode, and forgetting to eject the virtual CD-ROM is super annoying. So fix it here and stop being bothered by it.

When it’s done, it should look something like this:

Once you are sure your source VM is in good shape, make 3 clones of it. Don’t install Proxmox before you clone. SSH keys and the like will play a major role in this exercise later, and I am not sure if VirtualBox is smart enough to re-create them when you clone a VM. I ran into trouble with this a few times, so just clone the powered-off VBVM. I called the clones prox1, prox2, and prox3.

[Virtual] Software Mode(tm)

Now it is time to start your 3 clones. This can get pretty repetitive, especially if you start the process over a couple of times. While you will appreciate having cloned the servers, I haven’t discovered a simple way to automate installing the PVE hosts themselves. In a few iterations of this exercise, I misnamed one of the nodes (like pro1 or prx2), and it’s super annoying later when you get the cluster set up and see one of the nodes named wrong. There is a procedure to fix the node name after you build it, but seriously, just take your time and pay attention.

As you do the install, select your 32GB boot drive and configure your IP addresses.
I went with a sequence based on the hostname:
prox1 – 192.168.1.101
prox2 – 192.168.1.102
prox3 – 192.168.1.103
Like I said before, go slowly and pay attention. This part is super repetitive and it’s easy to make a stupid mistake that you have to troubleshoot later. At some point, I guarantee that you will give up, destroy the clones, and start over 🙂

Send In The Clones

Once your hosts are installed, it’s time to shut them down and boot them again, this time in headless mode. This is where fixing the boot order on ProxZZ pays off. With all 3 VBVMs started up, you are ready for the next stage of the exercise: configuring your hosts.
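If you’d rather not click through the VirtualBox GUI three times at every boot, VBoxManage can do the headless startup from a terminal (clone names as above):

VBoxManage startvm prox1 --type headless
VBoxManage startvm prox2 --type headless
VBoxManage startvm prox3 --type headless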

Adventures in Proxmox Part 2: Building a Test Cluster with VirtualBox

If you have read my previous post about my first foray into Proxmox, you know that the infrastructure of my home network is, as the Irish would say, not the best. I have been tinkering with routers and smart switches, learning about VLANs and subnets and all kinds of other things that I thought I understood, but it turns out I didn’t.

Doing stuff with server and network gear at home is a challenge because the family just doesn’t get Hardware Mode(tm). Hardware means being sequestered in the workshop, possibly interfering with our access to the Internet. I have to wait for those rare occasions when I am: 1) at home and 2) not changing diapers and 3) not asleep and 4) no one is actively using the Internet. I have been putting things in place, one piece at a time, but my progress is, well, not the best.

Part of my networking woes is design. I don’t know how to build a network for a Proxmox cluster, because I don’t know the right way to build a Proxmox cluster. I also can’t spend hours in my basement lab tinkering; I need to be upstairs with the family. So I decided to build a little portable test cluster, on my laptop, using VirtualBox.

The network design at my house looks a bit like a plate of spaghetti, with old, unmanaged switches in random spots, like meatballs. Little switches plugged into big ones. No tiers, no plan, just hearty Italian improvisimo. Last year, when I fired up two Proxmox nodes with no consideration for what might happen… Mamma mia! It took a couple of days before the network completely crashed, and a couple more days to figure out the problem.

The great thing about VirtualBox is that you can build Host Only Networks. A host only network behaves like a physical switch with no uplink. VirtualBox virtual machines (VMs) can talk to each other, and to the physical host without talking to the outside world. This seemed like a decent facsimile of my plan to use a small unmanaged switch to isolate cluster traffic from the rest of the network.

The other great thing about VirtualBox is that you can add lots of network interfaces to a VM in order to simulate network interactions. You can build a router using a Linux or BSD distro and use it to connect your various host only networks to a bridge into your real physical network. I tried that at first, but I am not sure that it’s necessary for this exercise.

And last, but not least, VirtualBox lets you clone a VM. As in, make a procedurally generated copy of a VM and then start it up alongside the original. This is a great feature for when you are screwing up configs and installs.

It is the combination of these features that allowed me to create a little virtual lab on a PC so I could figure out how to set up all the cool stuff that Proxmox can do, and figure out what kind of network I will need for it.

Phase 1: The plan

The plan for this exercise is to figure out how to use several features of Proxmox VE. The features are as follows:

  1. Online Backup and Restore – Proxmox has the ability to take and store snapshots of VMs and containers. This is a great feature for a home lab where you are learning about systems and you are likely to make mistakes. Obviously, I use this feature all the time.
  2. Clustering – Proxmox has the ability to run multiple hosts in tandem with the ability to migrate guest VMs and Linux containers from one host to another. In theory, using a NAS as shared storage you can migrate a VM without shutting it down. Since the point of this exercise is to build Proxmox hosts and not NAS appliances, we are going to focus on offline migrations, where you either suspend the guest or shut it down prior to migrating.
  3. Storage Replication – Proxmox natively supports ZFS, and can use the ZFS Send and Receive commands to make regular copies of your VMs onto the other cluster nodes. Having a recent copy of the VM makes migrations go much faster, and saves you from losing more than a few minutes worth of data or configuration changes. I wish I had this feature working when I was building my Swedish Internet router.
  4. High Availability – If you have 3 or more PVE nodes in your cluster, you can set some of your VMs to automatically migrate if there is an outage on the node the VM is hosted on. The decision to migrate is based on a kind of voting system that uses a quorum to decide if a host is offline. I want to use this feature to ensure that my access devices are up and running to support my remote access shenanigans.

Phase 2: Preparation

To build the lab, you will need the following:

  1. A desktop or laptop computer with 2 or more cores and at least 8GB of RAM. You could probably pull this off with 4GB if you are patient. My laptop has an old dual core i5 and 8GB, and it was pretty much maxed out the whole time, so your mileage may vary.
  2. A working OS with a web browser and SSH client. Linux would probably be best, but my laptop was running Win10 Pro. I recommend a tabbed SSH client capable of sending keystrokes to multiple SSH sessions, like MobaXterm.
  3. VirtualBox installed and running.
  4. The latest Proxmox VE ISO file.

With the plan in place, and the necessary software gathered, it’s time to proceed to Building A Proxmox Test Cluster in VirtualBox, Part 1: Building The Nodes.

Upgrading a hosted Debian 8 VM to Debian 9

A long time ago, I extolled the virtues of Cloud at Cost’s developer cloud. It’s a good tool for spinning up a box to mess with, but it’s far from being reliable enough for “production” use. What it is great for is having a box that isn’t constrained by a network (like a VM at work might be), but for which access to it may require modifications to a local firewall (like a VM at home might be), while avoiding the cost of a “real” production VM on Digital Ocean or Amazon.

Using a VM this way is a bit like building your house out of straw. It goes up fast, but it comes down fast too. So I have gotten used to setting up machines quickly and then watching them be corrupted and blowing them away.

Sometimes I do something stupid to corrupt them, sometimes they go corrupt all on their own.

The base Debian install on C@C is getting a bit long in the tooth, so part of my normal setup is now upgrading Debian 8.something all the way to Debian 9.whatever. This procedure will take a pretty long time. A long enough time that you will probably have to leave home to go to work, or vice versa. I recommend locking down SSH and then installing screen so you can attach and detach sessions that are running in the background.
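If you haven’t used screen before, the minimal workflow looks something like this (the session name is my own choice here):

# apt-get install screen
# screen -S upgrade

Run the upgrade inside that session, detach with Ctrl-A then D, and reattach later, even from a brand new SSH connection, with:

# screen -r upgrade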

Step 1 – update your sources and update your current version:

First, you should check your version of Debian, to make sure that you are on some version of Jessie, and not an earlier version for some reason:

# cat /etc/debian_version

The sources on a brand new C@C Debian 8 box are woefully out of date. Use your favorite editor (mine is nano; fight me) to edit the sources list.


# nano /etc/apt/sources.list

### Remove the entries and paste these in ####
deb http://httpredir.debian.org/debian jessie main
deb http://httpredir.debian.org/debian jessie-updates main
deb http://security.debian.org jessie/updates main

Once you have the list updated, save the file and run the upgrade scripts like so:

# apt-get update
# apt-get upgrade
# apt-get dist-upgrade

On a new install this will take a long time. Note that if you are having trouble installing screen or fail2ban, you probably have to do this step before installing them.

Step 2 – See how bad the damage is

Now we see what kind of hell we will be unleashing on this poor little machine by upgrading just about everything. First, see what packages are broken:

# dpkg -C

On a fresh Debian 8 box, there shouldn’t be a lot to report. If there is, you need to fix those packages. Assuming that you got no messages about messed-up packages, you can see what’s been held back from upgrade like so:

# apt-mark showhold

If you got a message that packages are obsolete, you can remove them like so:

# apt-get autoremove

Hopefully you don’t have any messed up packages, and you can proceed to the next step.

Step 3 – Do the thing

Now it’s time to change the sources from Jessie to Stretch and basically do step 1 all over again.

First you update the sources.list file:


# nano /etc/apt/sources.list

### Remove the entries and paste these in ####
deb http://httpredir.debian.org/debian stretch main
deb http://httpredir.debian.org/debian stretch-updates main
deb http://security.debian.org stretch/updates main

And then tag packages to be updated:

# apt-get update

Now do a quick preview of what will be upgraded, to see if it will blow up or not:

# apt list --upgradable

Assuming there are no flashing red lights or whatever, it’s time to pull the trigger.

Step 4 – Hold on to your butt

Once you run the next set of commands, you will be asked if you want to restart services without asking. Assuming that you are doing this in screen, you can lose your SSH connection and the process will still run. In the event of a catastrophic failure, you can probably open the console and attach to your screen session, so say yes and then buckle up.

TIME’S UP! LET’S DO THIS! LEEEEEEEEEERRRRROOOOOOOYYYYY:

# apt-get upgrade
# apt-get dist-upgrade

This will take a long time. Like a really long time. It’ll look cool tho. Having a command line window with text rolling by always makes me feel like Neo from the Matrix.

Step 5 – ??? Profit

Once it’s done, check the Debian version and revel in your victory:

# cat /etc/debian_version

Then check for obsolete packages, for which there will probably be a bunch:

# aptitude search '~o'

And then finally remove them all, like so:

# apt-get autoremove

Just to be safe, you should probably update and upgrade one last time:

# apt-get update
# apt-get upgrade

Step 6 – Diversify your backups

Now that you have gone through all of the difficulty of upgrading your house made of straw, it would be a shame for a big bad wolf to blow it down. For this reason, I recommend an old school Unix backup with tar, and keeping a copy of your backup on another computer. For this second part we will be using scp, and I recommend setting up SSH Keys on another Unix host. This might be a good time to set up ssh key pairs without passphrases for your root accounts to use.

The security model looks something like this:

  1. No one can log into any of the hosts via SSH as root.
  2. No one can log into any of the hosts without a private key.
  3. Your plain user account’s private key should require a passphrase.
  4. Your root password should be super strong, probably randomly generated by and stored in a password manager like KeePass.
  5. If you want to scp a file as root without a passphrase, you should have logged in as a plain user with a private key with a passphrase and then used su to become root.
  6. If you can get past all those hurdles, a second public key passphrase isn’t going to protect much.

Change to the root of the file system (/) and run a giant compressed backup job of the whole filesystem (except for the giant tarball that you are dumping everything into).

# cd /
# tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --one-file-system / 

This will also take a long time, so you should seriously be using screen. Also, there is a lot of stuff in the backup that doesn’t actually need to be backed up, so you could add additional --exclude=/shit/you/dont/need statements to shrink the size of your backup file.
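For example, here is a sketch of the same job with a few more excludes for churn-heavy directories that aren’t worth keeping; the exact list is a judgment call:

# cd /
# tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/tmp --exclude=/var/tmp --exclude=/var/cache/apt/archives --one-file-system /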

Once the backup is done, you can change the name of the backup file to that of your machine name and use SCP to copy the backup file off to another Unix host. In the example below I am calling the backup randoVM. You should change the name because you may be backing up multiple VMs to the same destination. I like to use my HUB VM at home because it has a lot of [virtual] disk space compared to my hosted VMs.

# mv /backup.tar.gz /randoVM.tar.gz
# scp /randoVM.tar.gz steve@home.stevesblog.com:~/backups/randoVM.tar.gz

You can leave the big tarball on your VM’s file system, or you can delete it. There are merits to doing either. You will want to repeat this backup procedure periodically as you add features and services to the VM.

If you find yourself needing to restore the VM because you or the big bad wolf did something stupid, you can simply pull the backup down and expand it.

# cd /
# scp steve@home.stevesblog.com:~/backups/randoVM.tar.gz .
# tar -xvpzf randoVM.tar.gz

Network File Systems and VMs: old school Unix meets the new school virtualization

I have been replacing low end servers with virtual machines for a while now, and it’s been kinda rad. In a previous post I mentioned replacing a physical server with a VM for Bittorrent. The results were fantastic.

The typical problem with BT is that it devours bandwidth and gets you busted by Hollywood. The other problem is that it also devours disk space. I solved the first problem using Swedish Internets, but my disk problem was actually exacerbated by using a VM.

In the past, I would just throw a big drive into a dinky little Atom CPU box and snarf torrents all day. When I set up my Proxmox cluster, my VMs were still using local drives. For a while, my Turnkey Linux Torrent Server VM had a 500GB virtual disk. That worked ok. I would grab videos and whatnot and copy them to my NAS for viewing, and once I seeded my torrents back 300%, I would delete them. This was fine until I set up a RetroPie and started grabbing giant ROM sets from a private tracker.

Private trackers are great for making specialized warez easy to find. The problem is that they track the ratio of what you download compared to what you upload, and grabbing too much without seeding it back is a no-no. I now find myself grabbing terabytes of stuff that I have to seed indefinitely. Time to put more disk(s) into the cluster.

I spent way too much money on my NAS to keep fretting about the hard drives on individual machines, virtual or otherwise. So the obvious choice was to toss a disk in and attach it to the VM through the network. I like using containers for Linux machines because the memory efficiency is insane. My research indicated that the best move with containers was to use CIFS. I couldn’t get that to work, so I went with the tried and true way: NFS. NFS is really the way to go for Unix-to-Unix file sharing. It’s fast, and fairly easy to set up. It also doesn’t seem to work with Proxmox containers, because of kernel mode something or another, based on the twenty minutes I spent looking into the situation.

So I rebuilt my torrent server as a VM, and used NFS to mount a disk from my NAS like so:

In the /etc/fstab on my torrent server I added this line:

192.168.1.2:/volume2/Downloads /srv/storage nfs rw,async,hard,intr,noexec 0 0

Where –

  1. 192.168.1.2 is the IP address of my NAS.
  2. /volume2/Downloads is the NFS export of the shared folder. I have a Synology, so your server config will probably be different.
  3. /srv/storage is the folder that I want the torrent server to mount the shared folder as. On the Turnkey Torrent Server this is where Transmission BT stores its downloaded files by default.
  4. The rest of the options mean it’s read/write and that basically anyone can modify the contents. These are terrible permissions to use for file shares that require privacy and security. They’re fine for stolen videos and games tho.

Once that is in place, you can mount it:

mount /srv/storage

And you’re set.
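If the mount doesn’t come up, two quick checks have saved me before: ask the NAS what it thinks it’s exporting, and confirm that the mount actually took (NAS IP as above; showmount comes with the nfs-common package on Debian):

showmount -e 192.168.1.2
df -h /srv/storage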

Because the disk is on my NAS, I can also share it using CIFS, and mount it to my Windows machines. This is handy for when I download a weekly show, I can watch it directly from the Downloads folder and then delete it once it’s done seeding. I like doing this for programs that will end up on Netflix, where I just want to stay current, rather than hanging on to the finished program.

This worked out so well that I decided to spin up a Turnkey Linux Media Server. For this little project, I basically duplicated the steps above, using the folder I have my videos shared on. So far, I have it working for serving cartoons to my daughter’s Roku TV, and my Amazon Fire Stick. I have plans to set the Emby app up on the kid’s Amazon Fire Tablets soon, once I figure out the app situation which is probably going to involve side loading or some other kind of Android fuckitude.

Of course, my media files aren’t properly named or organized, so I will have to write a script to fix all of that 🙂

UPDATE: During the holidays, the private tracker in question did an event where you could download select ROM sets for free and get a bonus for seeding them, so the brand new disk I bought filled up and I had to buy another. I couldn’t migrate a single disk to RAID0, so I had to move the data off the disk, build the new array, and then move the data to it. An operation that took something like 36 hours for 4TB via USB 3.

Also, not being able to use NFS with a container is apparently a Proxmox limitation that has been remedied in the latest release.

The Great Big Thing(tm): Ra-Ra-Russian Facebook Edition

I have spent the last year angry at basically everyone I know for participating in this Facebook Fake News/Russian psychological warfare campaign. I washed out of Facebook because convincing my friends and relatives that they’d been conned was slowly bleeding away my will to live. I also left because of the revelations about Cambridge Analytica’s role in swaying the election through targeted marketing. Just logging in to Facebook made me complicit in the whole campaign. As time goes on, it would seem that the HyperNormalization of Facebook users has been manipulated to one extent or another to sway a few key votes. Not sure how that helps me to be honest.

I don’t yet see a causal link between Cambridge Analytica, the Guccifer2.0 email leaks, and APT28 hacking the DNC, but there does seem to be a fair amount of correlation. If it wasn’t a coordinated effort, at least a few people must have known these operations were happening. At the very least, some decision maker was advised by someone who was aware of them.

A fair amount of my existential angst over the past year has been the polarization of the views of my Facebook friends. I don’t know if it helps me or hurts me more to know that at least part of it was a deliberate and cynical attempt by well funded groups to make it happen. On the one hand, I can feel smug and superior that I am a freethinker who didn’t get caught up in a Russian PsyOp. On the other hand, feeling smug is pretty much all I’ve got, and it’s just not worth it.

The way that HyperNormalization works, being for a group or against the group doesn’t matter. All that matters is that you are interacting with whatever it is. It could be Nationalism, it could be political corruption; it doesn’t matter, because it needs both supporters and opponents to continue to exist. Fighting it only makes you a part of it. Also, being in exile from Facebook has forced me to confront the fact that I’m a really shitty friend.

The Russia Thing

So my friends and family aren’t idiots, they’re just pawns in someone else’s game. That’s a relief, I guess?

That still leaves the whole Mueller investigation and the steady flow of soul destroying information that comes with it. As I have stated before, I don’t care about the Trump presidency. They had Richard Nixon dead to rights and all he did was resign and be pardoned by his successor. All the pee tapes and white-hot smoking guns in the world won’t make Trump suffer a single bit. So it’s not worth even daydreaming about. Also, if you think Trump is empowering White Power and Christian Identity Nationalism, wait until President Pence takes over.

What I *do* care about is the collateral damage Russia and Cambridge Analytica have caused and will continue to cause. I also care about how we as an electorate deal with the huge vulnerability social media poses to democracy. Allowing the United States to descend into fascist tyranny is a hell of a price to pay for a couple of tax cuts. I like it when the government stays out of basically everything, but replacing the government with unchecked corporate power isn’t even remotely a good idea. On the list of “Top 10 Corporations that shouldn’t be in charge of the United States,” Facebook holds at least two spots.

The thing that worries me the most is the fact that 12 of the Russians indicted in the Mueller investigation are officers with Russian Military Intelligence. Meaning that this was basically a military operation. Maybe GRU is military in the way that the NSA is – national level not operational level – but that still doesn’t make me feel good. If Russia was a country full of brown people, the US would be invading it right now. Fortunately for the world, US foreign policy is pretty racist and Russia is a Caucasian ethno state, so two nuclear superpowers probably aren’t going to go to war directly.

Intelligence isn’t evidence

My involvement with the American intelligence community was minimal at its very best, about 20 years ago. I am not an expert on the subject, but I do understand a couple of basic concepts of military intelligence, such as casus belli: before you can perform a military operation, you need intelligence to justify its necessity.

The difference between intelligence and evidence is that intelligence is information that something either has happened, is happening, or will happen. Evidence is proof of what happened and of the damage done as a result. As we learned about the Gulf Wars, the burden of proof to justify a war is far lower than the burden of proof for a judgement in an American court of law. Also, an indictment is a far cry from a conviction, but an indictment of 12 foreign nationals probably still requires more evidence than it takes to sanction any kind of military operation. Usually spies just get traded back to their home countries under diplomatic cover.

This is why the Russia thing is so concerning. Of course the US interferes in foreign elections. Of course the US is conducting PsyOps of its own against other countries. Of course the US is conducting like 6 shadow wars in Central Asia and North Africa. All that shit is a proxy for conflict with Russia. APT28 and APT29 are taking direct action against the US, which is threatening the delicate balance of terror between the US and Russia. Even without full-scale invasions, proxy wars between the US and Russia are sending the world into chaos. Just look at the baffling situation in Syria.

I can’t help but fear that the pieces on the chess board are lining up into an Assassination of Archduke Ferdinand-type situation. The elements are starting to form: trade disputes, economic sanctions, complicated [even contradictory] diplomatic alliances, and clandestine military operations on foreign soil. The scary thing isn’t a new WWI, it’s the genocidal chaos that followed WWI. Oh, and that it led to WWII and even more genocide.

Stupid SSH Tricks

I use this site for a number of reasons. One of them is to keep a little diary of things that I have figured out so that I can reference them later. The problem is that those little discoveries are buried in rambling posts about why I chose to do something.

I have given 2600 talks about SSH tunnels but I really don’t have a permanent record of my various [mis]uses of SSH, so I thought I would put them all in a post for future reference. I have written about securing SSH with asymmetric encryption keys, but there are many more things that you can do with SSH.

IMPORTANT NOTE: none of these tricks work unless your host has TCP port forwarding enabled. This option is usually enabled by default, but you should double check your /etc/ssh/sshd_config file:

nano /etc/ssh/sshd_config

Then make sure that the option is uncommented and enabled:

 AllowTcpForwarding yes

Also, if all this PuTTY crap looks tough to do, just remember that all of this is way easier with Unix, particularly using the -J option for jumping SSH connections.
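For example, jumping through a bastion host to an internal box is a one-liner with any reasonably modern OpenSSH (hostnames here are placeholders):

ssh -J alice@bastion.example.com alice@internal.example.com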

Dynamic Ports: slip past content filters with this one weird trick

This is the go-to use for an SSH tunnel. University and corporate networks are often overseen by petty tyrants who limit access to sites of questionable content, or sites that are prone to eating bandwidth, like social and streaming media. Ironically, most of these networks allow outbound SSH. SSH kills content filters dead.

You can create a SOCKS proxy using an SSH connection by creating a dynamic port. You can then point your browser to use your local address (127.0.0.1) as a SOCKS proxy to smuggle all of your browser traffic thru the SSH tunnel.

I like using Firefox specifically for this task because Chrome and IE do dumb things when you mess with the proxy settings. Or rather, they did dumb things that one time I used them to tunnel 10 years ago, and I settled on Firefox. If you also want to tunnel your DNS queries, type “about:config” in the Firefox address bar, find the setting “network.proxy.socks_remote_dns”, and set it to “true”. There’s probably a more modern way to do that, but that’s the tried and true way. Tunneling DNS isn’t strictly necessary, but it does help you stay under the radar on a restrictive network.

It should be noted that once you tunnel your browser traffic through the tunnel, you are likely to lose access to intranet sites on your local subnet. For this and a number of other reasons, I like to run Brave (Chrome) for my normal stuff, and Firefox for the tunneled stuff. Running two browsers at the same time seems wasteful, but it saves a bunch of headaches.

On Windows, you can drop the port one of three ways:

If you are into radio buttons and text boxes, you can configure PuTTY to open one every time you connect. On the tree control to the left, click Connection -> SSH -> Tunnels. You’ll need to select dynamic, enter a source port (5555 in this example), and click ‘Add’. Your new port should appear in the list above as ‘D5555’. Be sure to go back to your session and click ‘Save’ if you want the port to be created every time you open the session. As long as you are messing with your PuTTY session, you might as well set your terminal colors to green text so you look like a real hacker.

If using the GUI makes you feel like a wimp, you can just script it with a handy batch file:

putty -D 5555 user@ssh.server.com

You can also feel like a real UNIX badass by copying putty.exe to c:\Windows\System32 and renaming it to ssh.exe so you can kick off your session from DOS like a real console cowboy.

OR you can dump the whole interactive shell pretense and use plink.exe to make your connection and drop your ports without the whole pesky PuTTY window getting in your way:

plink -D 5555 user@ssh.server.com

Plink is basically PuTTY with no window. It functions the same in all other respects.

If you are using a Unix or Linux workstation, you can set up your dynamic port with a similar syntax:

ssh -C -D 5555 user@ssh.server.com

Note: the -C switch compresses traffic going through the tunnel, theoretically increasing network speed.
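Once the tunnel is up, a quick way to prove your traffic is really exiting at the far end is to push a request through the SOCKS port with curl and see which IP comes back (port 5555 as in the examples above; ifconfig.me is just one of many what’s-my-IP services):

curl --socks5-hostname 127.0.0.1:5555 https://ifconfig.me

The --socks5-hostname variant also resolves DNS through the tunnel, which matches the Firefox socks_remote_dns setting above.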

Local Ports: I heard u like shells so we tunneled SSH thru yo SSH so you can get a shell while you gettin a shell

You can use SSH to secure more than just your web browsing. You can use it to secure pretty much any TCP connection. I like using it to secure notoriously insecure VNC and X sessions.

Another use is to get around port restrictions. Some networks may allow outbound SSH, but only on port 22. Some home ISPs get shitty about running servers on reserved ports like 22. This means you have to forward some bunk port on your home router, like 62222, to the SSH server on your home network. I used to do this when I had Time Warner cable. The problem would get worse when I was trying to connect remotely from a restrictive network that only let SSH out on port 22.

To get around this problem, I would have to SSH to a public access Unix system like the Super Dimensional Fortress on port 22 and then drop a local port that forwarded to the bunk SSH port on the IP of my home router. When I did that with different windows that had different text colors it looked like I was on CSI: Hacker Girl back tracing the killer’s IP address.

The setup is pretty much the same as for the dynamic port, only you have to specify a destination IP and port as well. The idea is to connect to a remote host that is on the other side of a restrictive firewall and use the SSH tunnel to make something accessible to your local network. It forwards all traffic hitting your local port to the remote destination through the tunnel. In the example above it was SSH, but it could be RDP or any other TCP connection. I’ll use RDP (port 3389) in the example below.

To tunnel a Microsoft Remote Desktop session through SSH using the PuTTY GUI, use the tree control to the left and click Connection -> SSH -> Tunnels. You’ll need to select local, enter a source port (13389 in this example), set the destination address or host+domain name and the port (192.168.1.10:3389 in this example), and click ‘Add’. 13389 will be the port on your workstation that is now connected to the RDP port on the remote network (3389). Your new port should appear in the list above as ‘L13389 192.168.1.10:3389’. Be sure to go back to your session and click ‘Save’ if you want the port to be created every time you open the session. In your RDP client, you would connect to 127.0.0.1:13389.

If you are scripting this setup, use the -L switch along with your source port, destination IP/host and the destination port. Using the scenario from above, you forward local port 13389 to 192.168.1.10 port 3389 like this, where your SSH username is ‘alice’ and your home network’s dynamic DNS hostname is casa-alice.dynamic.DNS:

putty -L 13389:192.168.1.10:3389 alice@casa-alice.dynamic.DNS

And finally, the syntax is the same with plink:

plink -L 13389:192.168.1.10:3389 alice@casa-alice.dynamic.DNS

You can actually specify multiple local ports to remote destinations. I do this with PuTTY to get direct access to the web interface on my Proxmox cluster and to RDP to a Windows host using just the one tunnel and without having to mess with my proxy settings.
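That is just a matter of stacking -L switches on one command line. Here is a sketch of that Proxmox-plus-RDP arrangement (the Proxmox node IP here is an example; 8006 is the stock Proxmox web UI port):

plink -L 8006:192.168.1.5:8006 -L 13389:192.168.1.10:3389 alice@casa-alice.dynamic.DNS

After that, https://127.0.0.1:8006 lands on the Proxmox web UI, and an RDP client pointed at 127.0.0.1:13389 lands on the Windows box, all through the one tunnel.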

Remote Ports: SSH kills firewalls dead.

In the local port scenario, you are connecting to a remote host behind a firewall and using the SSH tunnel to make a host inside the remote firewall accessible to your local network. You can also do the opposite: connect to a remote host outside of your firewall and use the SSH tunnel to make a host inside your local firewall accessible to either the Internet or to hosts on a remote network.

You do this by dropping a port at the other end of the tunnel, on the remote host. The obvious use is to temporarily punch a hole in the local firewall to expose an internal web server to the Internet. If the remote host that you are connecting to is directly connected to the Internet (like a hosted VM from Cloud At Cost), you can temporarily open a port on the remote server to tunnel traffic to the web server on your internal network.

A more nefarious use for a remote port would be for a leave-behind box (formerly known as a dropbox, before the term became a brand name for cloud storage) to phone home from a target network. Basically, you build a cheap single board PC, like a Raspberry Pi or a plug server, that you physically plug into a network that you plan on hacking, er, testing for security holes. This approach saves a ton of time reverse engineering firewalls to gain access. There are two basic approaches: load up the box with tools and drop it, or use a minimal box as a router/pivot for tools you are running outside the target network.

To do this with the PuTTY GUI, it’s basically the same as setting up a local port. Use the tree control to the left and click Connection -> SSH -> Tunnels. You’ll need to select remote, enter a source port (58080 in this example), set the destination address or host+domain name and the port (192.168.1.10:80 in this example), and click ‘Add’. You also need to check both of the boxes next to “Local ports accept connections from other hosts” and “Remote ports do the same (SSH-2 only)”. Your new port should appear in the list above as ‘R58080:192.168.1.10:80’. Be sure to go back to your session and click ‘Save’ if you want the port to be created every time you open the session.

If you are scripting this setup, use the -R switch along with your source port, destination IP/host and the destination port. Using the scenario from above, you forward remote port 58080 on ssh.server.com to port 80 on your internal home web server with the IP 192.168.1.10 like this:

putty -R 58080:192.168.1.10:80 alice@ssh.server.com

And finally, the syntax is the same with plink:

plink -R 58080:192.168.1.10:80 alice@ssh.server.com

The only gotcha with scripting your remote port drop with putty/plink is that I don’t think that there is a command line switch for enabling connections from other hosts, so you have to enable the remote port sharing on the SSH server side.
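For what it’s worth, OpenSSH on Unix can at least request a wildcard bind by prefixing the remote port specification, though the server still has to permit it with the GatewayPorts option covered in the next section:

ssh -R 0.0.0.0:58080:192.168.1.10:80 alice@ssh.server.com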

Making Local and Remote Ports Accessible to other hosts

Sharing your remote and local ports lets you set up your SSH tunnel on one host and then connect to the tunnel from a different host.

In the case of a local port, you could initiate the SSH session on your home Linux server, and then connect to that port from your Windows workstation. This is handy if you are tunneling RDP and you don’t have PuTTY available on your Windows box. Although PuTTY is super portable so it’s dead simple to smuggle it onto the most locked down of Windows machines.

In the case of the remote port, it’s pretty much mandatory for the web server or leave-behind box use cases. You can still script the connection, you just have to modify the sshd_config file on your SSH server. On a Debian-esque server, you do this by using sudo or su to become root and then typing:

nano /etc/ssh/sshd_config

You then add the GatewayPorts option. You can put it anywhere in the file, but I prefer to keep it in the first few lines of the file where entries for port configuration are.

# What ports, IPs and protocols we listen for
Port 22
GatewayPorts yes

Then save the file and restart SSH:

systemctl restart ssh

Or, equivalently, you can use the ‘service’ wrapper to restart SSH:

service ssh restart

I am sure that this option is a big security risk, so I recommend a cheap, low-powered VM dedicated specifically to bouncing SSH connections. I also recommend securing it with SSH keys. If you are looking to script SSH connections that are secured with SSH keys, I recommend not setting a passphrase on your private key. You can include your private key on the putty/plink command line with the -i switch:

putty -L 13389:192.168.1.10:3389 alice@casa-alice.dynamic.DNS -i c:\path\to\your\key\key.ppk

Turnkey Torrents and Swedish Internets

A few months ago, I wrote about using a Turnkey Linux OpenVPN appliance to route network traffic thru Sweden. Since that time I have gotten my BitTorrent machine running. The other post was mostly about getting the VPN tunneling and routing to work. This post will mostly be about setting up the torrent server.

The Turnkey Torrent Server is neat because it’s a minimal Debian machine with a pre-configured Transmission BitTorrent client, a web interface for managing BitTorrent, a working Samba server, and WebDAV so you can use a browser to download files. Basically, you use the web interface to grab things, the Samba server makes them accessible to your media players on your internal network, and WebDAV makes the files accessible to the rest of the world, assuming you have the right ports forwarded. My preferred method for watching torrented videos is on a PC plugged into a TV, running VLC media player controlled with a wireless keyboard. I know I should be using Plex and shit like that, but I’m old school.

The Swedish Connection
For some of my friends who aren’t pirates (especially the friends that are into British TV), I am like their coke dealer, except I deal in movies and TV shows. That means that sometimes I get asked to find things when I’m not at home. Like a third of my remote access shenanigans, A.K.A. reverse telecommuting, is so that I can pull up BitTorrent and snarf shit for friends and relatives when I’m not at home. Being able to expose the uTorrent remote interface to the web was great for letting my more technical non-hacker friends grab torrents without any assistance from me.

My VPN provider gives me the option of forwarding ports. When I was running uTorrent on a dedicated Windows machine, those forwarded ports were easy to configure. I would just set them up on the VPN site and map them to the ports I configured in uTorrent. One was for BitTorrent transfers, to make sure that my ratios reported correctly on private trackers. The other was for the uTorrent web interface. For a long time I ran Windows for torrenting because I used PeerBlock to help me fly under the radar. Times change tho. Real time block lists is old and busted; VPNs is the new hotness. Unfortunately, this VPN router setup messes up port forwarding: when I set up port forwarding on the VPN provider side, the forwarded ports hit the doorway server rather than the torrent server, so that has to be fixed with more IPTables kung fu on the doorway server.

I know I said that I wasn’t going to write any more about the doorway server, but I lied. I needed to configure the doorway server to open those ports and then forward them to the torrent server. Let’s assume that my internal network is a 192.168.1.0/24 subnet (a class C block, a range of addresses from 192.168.1.1 to 192.168.1.254) with a default gateway of 192.168.1.1. All of my traffic goes through my local router and hits the Internet from my ISP, in the US. If a device asks for an IP via DHCP, this is the network configuration that it will receive, along with red-blooded American Internets. Here is an awful network diagram because why not?

The doorway server has a static IP of 192.168.1.254 and it’s configured to route all of its traffic through the VPN tunnel to Sweden. Any device that is configured to have a default gateway of 192.168.1.254 will also hit the Internet via the tunnel to Sweden, thereby receiving Swedish Internets. At this point, all the configuration is done, and your torrent server will work, but there won’t be any ports forwarded to it, which is lame. No forwarded ports is especially lame when you are using private trackers because it can really mess with your ratios. Now, you could just open a port on your firewall for the web interface on the American side, but that’s also pretty lame. If you want to use your torrent server, you should strictly be using Swedish Internets.
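On the torrent server itself, that just means hard-coding the doorway as the default gateway instead of taking DHCP’s answer. Here is a sketch of /etc/network/interfaces on the torrent server, using the addresses from the assumptions below (the interface name may differ on your VM):

auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.254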

Welcome to Swedish Internet
To forward those ports, first set them up in Transmission, then with your VPN provider. The web interface port [12322] is already configured for you by Turnkey Linux. You can set the other port in Preferences -> Network -> Listening Port. Once the entry points and the end points are configured, it’s time to do more iptables kung fu.

Let’s assume the following:

  1. The web interface port for Transmission is 12322.
  2. The listening port in Transmission is 9001.
  3. The static IP for your torrent server is 192.168.1.10.
  4. The doorway server IP is 192.168.1.254.
  5. The forwarded ports you were able to get from your VPN provider are 9000 and 9001.
  6. You want to use port 9000 on the VPN side for the Transmission web interface.
  7. You want to use port 9001 on the VPN side for the Transmission listening port.

What needs to happen is for the VPN tunnel interface (tun0) to listen on ports 9000 and 9001, then forward traffic on those ports to 192.168.1.10. Then, you want any traffic on those same ports that comes from the doorway’s internal network interface (eth0) to be modified so that it doesn’t look like it came from the tunnel interface. This is super important for TCP handshakes.

First create your rules for accepting/forwarding connections on the VPN side:


iptables -A FORWARD -i tun0 -o eth0 -p tcp --syn --dport 9000 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -p udp --dport 9001 -m conntrack --ctstate NEW -j ACCEPT

This was probably configured fine in the doorway server post, but this specifically allows all the traffic that passes between your VPN and local network connections once a connection is established:


iptables -A FORWARD -i eth0 -o tun0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

Now add the rules to rewrite packets destined for the web interface and then rewrite the responses. Note that the POSTROUTING rule matches on 12322 because, by the time it runs, the DNAT rule has already rewritten the destination port:


iptables -t nat -A PREROUTING -i tun0 -p tcp --dport 9000 -j DNAT --to-destination 192.168.1.10:12322
iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 12322 -d 192.168.1.10 -j SNAT --to-source 192.168.1.254

Add the rules to rewrite all the BitTorrent packets, including responses:


iptables -t nat -A PREROUTING -i tun0 -p udp --dport 9001 -j DNAT --to-destination 192.168.1.10:9001
iptables -t nat -A POSTROUTING -o eth0 -p udp --dport 9001 -d 192.168.1.10 -j SNAT --to-source 192.168.1.254

All the strict rewriting probably isn’t a big deal for the BitTorrent traffic because it’s UDP, and UDP don’t give a fuck.

If it’s working, point your browser to https://the-ip-address-of-your-vpn-server:9000 and you should be prompted to log in to the web interface. Once you’re sure it’s all good, then it’s time to save your working iptables config:

iptables-save | tee /etc/iptables.up.rules

Make sure that your rules work well after you reboot your VM. And then run your backups to make sure that they have your latest config because there’s nothing worse than trying to piece all this crap together for the third time.
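One common Debian trick for making sure the rules come back after a reboot, assuming the file path above, is to restore them when the network interface comes up, with a pre-up line in /etc/network/interfaces:

pre-up iptables-restore < /etc/iptables.up.rules

Put that line in the doorway’s eth0 stanza, then after the next reboot check iptables -L -n and iptables -t nat -L -n to make sure everything came back.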

You can skip having to remember the IP by registering it as a subdomain somewhere, either with a dynamic DNS service, or with the registrar for a domain that you own.

In the unlikely event that I made this, or any other technical thing, look easy, rest assured that it took me at least a couple of hours. Also, I had it working months ago, but I forgot to update my snapshot and had to redo it because I am not a smart man. Then, during this second go-around, I had to restore the VM from a backup because iptables just isn’t my bag. Thankfully, BitTorrent is my bag. Happy pirating!

The Great Big Thing(tm): Weaponized Autism Edition

It’s been a little while since I have been destroyed by existential dread, but when it comes to meaningless suffering, the western world never seems to disappoint. My issue du jour is that some beta chauvinist spent too much time on 4chan and then drove a van through a crowd of people. In an age of mass shootings, murdered journalists, and white power marches, I now have to contend with radicalized misogyny as well. Thanks Obama!

Somewhere along the way, weird white dudes stopped being trolls and started being terrorists. Somehow the Autism spectrum has been weaponized.

As a weird white dude, this disturbs me more than Nazi Bullshit, because I can’t help but feel that something I was once a part of has been co-opted for a truly awful purpose. The alt-right using memes to spread their bullshit was bad enough. Memes never did anything to anyone but be awesome, so why drag them into your racism? This is something much worse. It’s some kind of gateway drug to indoctrinate nerds into this weird form of Radical Fuckery.

Back story time
You will have to forgive my linking to one of those Alt-underground blogs. I am keenly aware of the tendency of crazy blogs to reference other crazy blogs. This particular post captures something that I have been thinking about for a couple of years now: the radicalization of the bowels of the Internet, my former home. Years ago, before I found a home with the hacker community, life “Away From the Keyboard” was tough for me because I felt very much like an outsider. I felt that I was connected to something not of this world. Not just to technology, but to the pro free speech, pro privacy, anti intellectual property and anti corporate counter-culture of the Internet. It was a connection that made me feel like some sort of alien in my Midwestern/corporate/suburban surroundings.

I also felt (and still feel) that the Internet is being slowly ruined by a kind of corporate-led gentrification. The Internet was once the wild west. It was full of weird, dangerous, and scary things that corporations have felt the need to build firewalls around, in both the technological sense and the metaphorical one. Google safe search and the Facebook news feed are the ultimate expressions of those metaphorical firewalls. These companies are complicit in the algorithmic dismantling of the open Internet into “TV with a buy button”. They are hijacking people’s thought processes. And they are neutering one of the last places in the world where Free Speech is possible. In response, I was determined to “keep it weird” by trolling the “Normal People” who would wander into the deep end of the pool. I and others like me would ridicule them for being, for lack of a better word, unenlightened. Trolling people was my way of “Freaking out Squares,” like Homer Simpson did in that one episode of The Simpsons:

“Copyright is based on censorship man!”

I was having a few laughs at Normal People on the Gentrified Internet who weren’t at all equipped to deal with “The Real Internet” creeping into polite society. Dabbling in a bit of satirical and ironic homophobia is not a nice thing to do, but back then, I was not nice. I was angry and territorial. As coping mechanisms go, going on the Internet and ruining someone’s day is basically like shooting Heroin. Life Away From the Keyboard was filled with Normal People, which was a source of frustration and alienation. Pointing out that Normal People don’t belong on the Internet because they’d be happier somewhere else was a form of stress relief for me. I mean, I always knew that everyone belonged on the Internet; I just didn’t want the normies to accidentally fuck it up for the rest of us by confusing the Internet with television.

Fake Internet points are cool and all, but have you ever made someone really mad? When I finally found a place to belong to, I mostly put trolling behind me. Mostly. I had matured. Mostly. I learned to let other people enjoy things. I learned that being yourself on the Internet is actually really brave, and that ridiculing normies was just me being one of those Gen X Cool Guys who don’t believe in anything. I also learned that while starting arguments and saying crazy shit in public forums is fun, that same behavior is now being directed, without satire or sarcasm, at people who are trying to make the world a better place. Also, deadpan sarcasm is a great way to make your Facebook friends think that you have severe mental problems.

They don’t think it be like it is but it do
My point here is that there is a major difference between rudely reminding someone that you can Internet better than they can and what is happening today. Like so major.

You see, the awful parts of the Internet used to be a place of perpetual flux. Sure, there were people stumbling in there to be weird and angry at the world, but there were others making fun of those weirdos and celebrating their failures. Whatever you tried to do, it failed. Being an EdgeLord and trying to make a shocking statement always drew mockery and criticism. Either someone found fault in your logic and you got mocked for it, or someone went harder at it than you and mocked your lack of conviction.

There was no recognition; there was only mockery. In that mockery, I think that growth was supposed to happen. Getting housed by people that Internet better than you forces you to think harder about what you are doing and saying. It sounds awful, but the process of being mercilessly mocked [hopefully] matured you into a calmer, more enlightened person. At least that’s what it did for me.

Today 4chan and other awful Internet spaces are basically terror training camps for weird white dudes to become… some kind of Autistic version of Al Qaeda, I guess? For a lot of these dudes, once being white and male is no longer a competitive advantage, they won’t be able to compete at all. Sure, they’re the master race or whatever, but based on the pictures I’ve seen of their Nazi marches, those bros are inferior specimens. Take away their racism and sexism and all you have left is crippling anxiety and bad skin.

Something happened in the decade between my time as a troll and now. It went off the rails somewhere. Maybe too many people like me abandoned the Real Internet and the EdgeLords took over? I parted ways with that form of Internet culture years ago, and now I feel like a significant piece of my history has been stolen from me. And, maybe I am partially responsible? I don’t really know.

What I do know is that what I once was is not what this is. Doing it for the lulz, however mean-spirited, is not the same as doing it specifically to harm others. Even if there are people still doing it for the lulz, those lulz are somehow empowering other people to do awful things. It was lulzy when I did it, but it’s not lulzy anymore. I was not an incel. I was not a Nazi. These assholes have stolen my history.

MCU Captain America is Best Captain America

Film as a medium is in a state of decline, and it’s the fault of people like me. I don’t turn up to the theater except for big productions like Star Wars and The Avengers. That means that market forces have driven films into being flashy CGI messes. I accept my responsibility for that. I am not perfect; I just don’t have the time and money to turn up for films that I can more easily enjoy on my TV at home. I’m flawed.

In talking to a friend about Flawed Paladins, I remarked that taking the whole Dudley Do-Right idea and adding flaws and nuance made MCU Cap one of the best characters ever. I love that MCU Cap is an exemplar of the American Spirit who is now at odds with modern American society and government. He’s a manifestation of our WWII narrative of American Exceptionalism, of the US saving the world from insensate evil. Cap is fictional, but so is a good deal of the narrative of American Exceptionalism. Cap is an all-American kid from Brooklyn, desperate to serve his country in the face of unfathomable evil. He sees people being hurt, and he steps up. Then, some 70 years later, he gets thawed out and is appalled by what he sees. He says, “When I went under, the world was at war. I wake up, they say we won. They didn’t say what we lost.”

In Cap’s heart, and at the heart of the narrative, is the idea of freedom. I would define this freedom as the freedom of speech and expression, freedom of religion, freedom from fear, and freedom from need. I would posit that modern America runs on religion and fear, is perplexed by freedom of expression, and actively hates the idea of freedom from need. Obviously you need the press and courts and all that other bullshit, but the blueprint is those four basic freedoms. MCU Cap is the personification of the idea of America and his “America is great, but this shit here isn’t America” struggle makes him perfectly imperfect. He has to do what he thinks is right, even if it means working for a group like S.H.I.E.L.D. that he doesn’t trust.

MCU Cap’s internal conflict between his duty as an American hero and the shift in American society after The Avengers [a metaphor for 9/11] is absolutely brilliant. He is at odds with Tony Stark when Stark hacks S.H.I.E.L.D.’s computers, but ends up at odds with Nick Fury by the time he sees what Fury is really up to [a possible metaphor for Snowden/Manning]. Then all of that gets pushed aside by the attack on New York. By the time we see Cap again in The Winter Soldier, he has made a compromise: he is being a hero for America by working for S.H.I.E.L.D., but he is deeply uneasy about the duplicity he keeps seeing. By the time we see him in Civil War, Cap is completely done with S.H.I.E.L.D. (and presumably with being a hero); he walks away from all of it to help Bucky, and they’re coming to get him.

I can’t think of a better criticism of corpofascist America than an all-powerful private army trying to take over the country, and hunting down two of America’s original war heroes in order to do it. Sure, there’s Hydra and Ultron manipulating everything, but the real story is Cap trying to reconcile loving his country, mistrusting his government, and looking out for his best friend, none of which ever truly get reconciled. I can’t think of anything more human than that.

In other posts I have bemoaned aspects of our government, our society, and our political process, but I don’t know that I have ever stated the reason I hate all of it: the NSA, the TSA, the drones, the torture… Obviously it violates our privacy, our free speech, and our freedom from fear. But I also hate all of it because that’s not what America means to me.