Adding a Small Web Server to the Smuggling Operation

One problem with using a single Docker server for a modern smuggling operation is that I end up running a bunch of web applications on different port numbers that I can’t remember. The other challenge is needing to connect to that server in a number of different ways.

I might be connecting through local ports on SSH tunnels, NeoRouter, or via a hostname.

Putting a bunch of links to the different server ports on a webpage *seemed* simple enough: just grab a basic Apache container, fire it up, and create a basic webpage full of hyperlinks. Turns out, there are several challenges with this:

  1. You don’t know what network you will be accessing the server from. The IP, FQDN, or hostname could be different every time you access the webpage. A hyperlink to 192.168.1.211 is of no help if that IP is inaccessible to the client. This *could* be solved by using relative paths in the hyperlinks *but*
  2. Relative links get resolved against the current path, so a link to “:1234” ends up pointing at http://example.com/:1234 rather than http://example.com:1234.
  3. I haven’t created a web page without using a content management system in *at least* 15 years. I am just a bit behind the kids today with their hula-hoops and their rock-and-roll.

So I did what I always do when presented with a technical challenge: fall back on a piece of knowledge that I spent like 30 minutes learning that one time, like 20 years ago.

A long time ago, in a galaxy far away, there used to be these crappy free web hosts like Geocities where people could make their own websites. You could do all kinds of things with Java and JavaScript, but you couldn’t do anything that ran on the web server, like CGI scripts or server-side includes. Server-side includes were important because you could put commonly used code (like the header and footer for the page) in a couple of files, and if you changed one of those files, the change would replicate over your whole site.

You could do something similar with JavaScript. You put the script that you want on every page, and tell JavaScript to load it from a single file. Like so:

<script language="JavaSscript" src="header.js"/>

In the header.js file, I would put in a ton of document.write statements to force the client browser to write out the HTML of the head and body sections of the web page. I called this horrible technique “client-side includes”:

document.write('<body bgcolor="#000000">');

For the current challenge, I just have to rewrite the URL for each hyperlink, based on some variables on the page:

<script language="JavaScript">
document.write('<a href="' + window.location.protocol + '//' + window.location.hostname + ':8989' + window.location.pathname + '">Sonarr</a>');
</script>

Convincing WordPress to mark up JavaScript without actually executing it is kind of fiddly, so forgive any lingering weirdness in the code examples.

The solution works most of the time. I like to browse for torrents with a browser that blocks ads and JavaScript, so I have to enable JS for that tab, and then browse in that tab with caution. Sonarr, Radarr, and the like all rely heavily on JavaScript, so I prefer to use Brave’s shields wherever possible.

Modernizing the smuggling operation

The winter holidays are a depressing time for me, so the last month or so of the year I like to really throw myself into a game or project. After being ridiculed by one of my DnD bros about how inefficient and antiquated my piracy setup is, I decided to modernize by adding applications to automate the downloading and organizing of my stolen goods.

My old-fashioned manual method was to search trackers like YTS, EZTV, or The Pirate Bay and then add the magnet links to a headless Transmission server that downloads the files to an NFS share on my file server. Once the download is complete, I would copy the files to their final destination, which is a Samba share on the file server. I don’t download directly to the Samba share because BitTorrent is rough on hard drives. My file server has several disks: some of them are nice (expensive) WD Red or Seagate IronWolf disks, and some are cheap no-name drives. I use the cheap drives for BT and other forms of short-term storage, and the nicer drives for long-term storage.

For media playback, I used a home theater PC that would access the shared folder, and then play the files in VLC media player. This was the state of the art in 2003, but the game has gotten more fierce.

My new piracy stack consists of Radarr, Sonarr, Lidarr, and Jackett. These are dedicated web apps for locating (Jackett), downloading (Transmission), and organizing movies (Radarr), TV (Sonarr), and music (Lidarr). Once the media is downloaded, sorted, and properly renamed, it will be streamed to various devices using Plex.

Rather than run a different VM or Linux container for each app, a friend recommended that I use Docker. Docker is a way of packaging applications up into “containers.” It has taken me a good while to get my mind around the difference between a Linux container and a Docker container. I don’t have a good answer, but I do have a hot take: if you want an application/appliance that you can roll into a file and then deploy to different hosts, you can ask one of two people to do it for you: a system administrator or a developer. If you ask a system administrator to do it, you will end up with LXC, where the Linux config is part of the package, and that package behaves like a whole server with RAM, an IP address, and something to SSH into. If you ask a developer to do it, you just get the app, a weird abstraction of storage and networking, and never having to deal with Unix file permissions. And that’s how you get Docker.

Because I am a hardware/operating system dude who dabbles in networking, LXC makes perfect sense to me. If you have a virtualization platform, like an emulator or a hypervisor, and you run a lot of similar systems on it, why not just run one Linux kernel and let the other “machines” use that? You get separate, resource-efficient operating systems with their own IPs, memory allocations, and even storage.

The craziest thing about Docker is that if you start a plain container, like Debian, it will pull the image, configure it, start it up, and then immediately shut it down. And this is the expected behavior: a container only lives as long as its main process, and a bare Debian image doesn’t start a long-running one.
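You can watch this happen from the shell:

docker run debian            # pulls the image, runs its default command, exits
docker ps -a                 # the container shows up with an "Exited" status
docker run -it debian bash   # give it an interactive shell and it stays up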

I like that Docker can pop up an application very quickly with almost no configuration on my part. Docker storage and networking feels weird to me, like a bunch of things stapled on after delivering the finished product. From a networking standpoint, there is an internal IP scheme with randomly generated IPs for the containers that reminds me of a home network set up on a consumer-grade router. If you want the container to have access to the “outer” network, you have to map ports to it. Storage is abstracted into volumes, with each container having a dedicated volume with a randomly generated name. You don’t mount NFS on each container/volume; instead, you mount it on the host and point the container at the host’s mountpoint. It’s kind of like NFS, but internal to Docker using that weird internal network.

Also, in typical developer fashion, there is very little regard for memory management. The VM that I am running Docker on has 16GB of RAM, and its utilization is maxed out 24/7. Maybe Docker doesn’t actually use that RAM constantly; maybe it just reserves it and manages it internally? It’s been chewing through my media collection for a couple of months now, and slowly but surely new things are showing up in Plex. Weird as the stack is, it’s still pretty rad.
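Here is roughly what standing up one of those apps looks like. The linuxserver image is real, but the port and host paths below are an illustration, not my exact config:

# Run one app detached, with a mapped port and host-side storage.
# /mnt/nas is where the NFS share is mounted on the host.
docker run -d --name sonarr \
  -p 8989:8989 \
  -v /opt/docker/sonarr:/config \
  -v /mnt/nas/tv:/tv \
  linuxserver/sonarr

Map the port, bind-mount the host directories, and the container neither knows nor cares that the storage behind /mnt/nas is really NFS.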

All the randomly generated shit makes me feel like I don’t have control. I can probably dig a little deeper into the technology and figure out some manual configuration that would let me micromanage all those details, thereby defeating the whole entire purpose of Docker. Instead, I will just let it do its thing and accept that DevOps is a weird blend of software development and system administration.

My Life with Multitops: using multiple types of laptops

It’s the end of the year, and I have a lot on my mind. So rather than deal with it, I am going to write about laptops. I have owned many laptops over the years; most of them have been refurbished or re-purposed from some other role. In many ways, I am a bit like a crazy cat lady, but instead of cats, I am surrounded by laptops. I tend to own and operate a few laptops because I have a few specific use cases with different hardware requirements. Rather than calling them laptops, I like to refer to them by the purpose that they serve for me.

  1. Typetop: A big laptop that is suited for long typing sessions. In the past I wrote (and hacked, and coded) a lot more than I do now. I used to write papers for school, reports or emails for work, blog posts, or creative works. While my ideal writing environment is an office chair, a large monitor, and a buckling spring keyboard, any table with a laptop that has a full-sized keyboard will do. I don’t consider these large and rather heavy machines to be mobile so much as portable. Of my fleet of laptops, the ones optimized for typing also tend to be the most expensive. This is the model that I normally go for when an employer is picking up the tab.
  2. Notetop: A tiny laptop that is suited for note taking. I have spent many hours in lecture halls and the like taking notes for classes. I don’t really use a laptop for notes at work, unless I am the designated minutes-taker, for example when I worked at a startup company out west, or in my time on the board of directors at Hive13. For classroom notes, nothing beats a small netbook, especially if you are also carrying around textbooks and paper notebooks. I found that the accessory pocket in a backpack kept the laptop from being smashed by textbooks. It’s too bad that the iPad pretty much destroyed the market for cheap netbooks, because I dearly loved those old MSIs.
  3. Jettop: A burner laptop for travel. I used to travel to hacker conferences like DefCon, and you would occasionally need a laptop, but there was always a chance that something awful might happen to it. It might get stolen, it might get confiscated by law enforcement at an international border, it might get hacked by someone with way better skills than mine, or someone [like me] might drunkenly vomit on it or throw it out of a window. To minimize this risk, I would take a cheap laptop with minimal personal information and strong encryption. Once I started carrying a smartphone, I would also travel with an old flip phone, just to be safe. Later on, I would just take my work phone and turn off WiFi and Bluetooth. In later years, I bought a refurbished Chromebook and traveled with it. I found that a Chromebook along with a small Android tablet combined to make a good, lightweight toolkit.
  4. Shoptop: A laptop for hardware hacking. In the years I spent with Hive13, I was always in need of multiple ports to connect to things around the shop. I would use multiple serial or USB ports to connect to hacker hardware like Arduinos or old copiers and printers. Even today I occasionally need to plug in multiple large external hard drives to share pirated goods at events like 2600. In the past, I have found older laptops to be indispensable in these “workshop” environments due to their legacy ports. For me, workshops are also fairly dangerous places, where laptops get exposed to power tool mishaps, fire, and on more than one occasion, blood. It is these dangers, combined with the need for old ports, that make me prefer to keep older laptops around, however under-powered they may become. I am not sure what I will do in the future, when even my eldest laptop has only a couple of USB ports. I suppose that a shoptop is the kind of thing that I should probably build myself. I keep wanting to get back into electronics; maybe a DIY shoptop would be a good way to get started.
  5. Crashtop: A laptop for network configuration and troubleshooting. Pretty much always the secondary function of a shoptop, since looking into network crashes almost always requires a laptop. For a dude that tinkers with computers, I like to think that I have a decent grasp of networking. Not just cabling, but also routing, switching, and even telephones. My home network is as much a lab as it is anything else. My main router has a console port, and while most of the network configuring I do is with SSH or a browser, sometimes you just need a laptop that you can physically plug in to a device. Of all the legacy ports to disappear from a modern laptop, I will miss the gigabit Ethernet port the most. Sure, there are USB serial and Ethernet adapters, but those just aren’t the same as having the gear built right in. Also like the shoptop, I often think about either building a device, or maybe refurbishing a vintage device to troubleshoot networks with. I have always wanted a very industrial-looking ’80s device like the old Informer 213 for terminal-type stuff. At one point in my life, I had an old laptop with a voice modem in it so that I could also mess with analog telephone lines.
I am not in the market for a new laptop just yet. My typetop plays Skyrim and Fallout 4 decently. Plus it’s time for me to get into consoles again 🙂

Network File Systems and VMs: old school Unix meets the new school virtualization

I have been replacing low end servers with virtual machines for a while now, and it’s been kinda rad. In a previous post I mentioned replacing a physical server with a VM for Bittorrent. The results were fantastic.

The typical problem with BT is that it devours bandwidth and gets you busted by Hollywood. The other problem is that it also devours disk space. I solved the first problem using Swedish Internets, but my disk problem was actually exacerbated by using a VM.

In the past, I would just throw a big drive into a dinky little Atom CPU box and snarf torrents all day. When I set up my Proxmox cluster, my VMs were still using local drives. For a while, my Turnkey Linux Torrent Server VM had a 500GB virtual disk. That worked ok. I would grab videos and whatnot and copy them to my NAS for viewing, and once I seeded my torrents back 300%, I would delete them. This was fine until I set up a RetroPie and started grabbing giant ROM sets from a private tracker.

Private trackers are great for making specialized warez easy to find. The problem is that they track the ratio of what you download compared to what you upload, and grabbing too much without seeding it back is a no-no. I now find myself grabbing terabytes of stuff that I have to seed indefinitely. Time to put more disk(s) into the cluster.

I spent way too much money on my NAS to keep fretting about the hard drives on individual machines, virtual or otherwise. So the obvious choice was to toss a disk in and attach it to the VM through the network. I like using containers for Linux machines because the memory efficiency is insane. My research indicated that the best move with containers was to use CIFS. I couldn’t get that to work, so I went with the tried and true way: NFS. NFS is really the way to go for Unix-to-Unix file sharing. It’s fast and fairly easy to set up. It also doesn’t seem to work with Proxmox containers, because kernel mode something-or-another… based on the twenty minutes I spent looking into the situation.

So I rebuilt my torrent server as a VM, and used NFS to mount a disk from my NAS like so:

In the /etc/fstab on my torrent server I added this line:

192.168.1.2:/volume2/Downloads /srv/storage nfs rw,async,hard,intr,noexec 0 0

Where:

  1. 192.168.1.2 is the IP address of my NAS.
  2. /volume2/Downloads is the NFS export of the shared folder. I have a Synology, so your server config will probably be different; there is a sketch of the plain-Linux version after this list.
  3. /srv/storage is the folder that I want the torrent server to mount the shared folder as. On the Turnkey Torrent Server this is where Transmission BT stores its downloaded files by default.
  4. The rest of the options mean it’s read/write and that basically anyone can modify the contents. These are terrible settings for file shares that require privacy and security. They’re fine for stolen videos and games tho.
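For reference, on a plain Linux NFS server the export side is a one-liner in /etc/exports. My Synology generates the equivalent through its UI, so treat this as a sketch rather than my actual config:

# /etc/exports on the NAS: share the folder read/write with the whole LAN
/volume2/Downloads 192.168.1.0/24(rw,async,no_subtree_check)

Run exportfs -ra afterward to reload the export table.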

Once that is in place, you can mount it:

mount /srv/storage

And you’re set.

Because the disk is on my NAS, I can also share it using CIFS and mount it on my Windows machines. This is handy when I download a weekly show: I can watch it directly from the Downloads folder and then delete it once it’s done seeding. I like doing this for programs that will end up on Netflix, where I just want to stay current rather than hanging on to the finished program.
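Mounting that same share over CIFS from a Linux box looks something like this; the share name and credentials are placeholders:

# one-off CIFS mount of the NAS share; assumes cifs-utils is installed
# and that the /mnt/downloads mountpoint already exists
mount -t cifs //192.168.1.2/Downloads /mnt/downloads \
  -o username=me,password=changeme,vers=3.0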

This worked out so well that I decided to spin up a Turnkey Linux Media Server. For this little project, I basically duplicated the steps above, using the folder I have my videos shared on. So far, I have it working for serving cartoons to my daughter’s Roku TV and my Amazon Fire Stick. I have plans to set the Emby app up on the kids’ Amazon Fire Tablets soon, once I figure out the app situation, which is probably going to involve sideloading or some other kind of Android fuckitude.

Of course, my media files aren’t properly named or organized, so I will have to write a script to fix all of that 🙂

UPDATE: During the holidays, the private tracker in question did an event where you could download select ROM sets for free and get a bonus for seeding them, so the brand new disk I bought filled up and I had to buy another. I couldn’t migrate a single disk to RAID0, so I had to move the data off the disk, build the new array, and then move the data back, an operation that took something like 36 hours for 4TB via USB 3.

Also, not being able to use NFS with a container is apparently a Proxmox limitation that has been remedied in the latest release.
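If I ever revisit the container route, I believe the fix on a recent Proxmox is a feature flag on the container. Something like this, where 101 is a hypothetical container ID:

# allow LXC container 101 to mount NFS shares (recent Proxmox releases)
pct set 101 --features mount=nfs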

The Great Big Thing(tm): Lost Religion Edition

One of the unique ideas posed by “Hypernormalization” is the contrast between John Perry Barlow’s Declaration of the Independence of Cyberspace and William Gibson’s Neuromancer, where corporate networks control all commerce in secret. For most of my adult life, I have been able to reconcile both ideas: that corporations have a compulsion to amass data in secret – which translates to wealth and power – and that hackers have a duty to expose it, and pirates a duty to redistribute it. I was comfortable with the idea that the gentrified surface web was a front for the deep web (not necessarily the dark web), where the real shit gets done. Lately, I feel like I have lost that faith in… pretty much all of it.

Over the years, the struggle hasn’t just been about The Web. As software consumes more and more of our lives, there is no real difference between life at the keyboard and away from it. Most of us carry at least one Internet-connected computer on our person most of the time; before you know it, it’s no longer going to be the Internet of Things, it’s going to be Software Defined Existence. Before I get all singularity on this, I want to call attention to the idea that advocacy for privacy and free speech, and against copyright and surveillance, is rapidly becoming less about protecting people’s online lives and more about protecting “the real world” from being eaten by shitty software. I feel like I fought hard against Things That Suck on the Internet, just to have all of those things spill out into my daily physical life.

There’s a war going on outside no man is safe from

My life as a hacker, a pirate, and a crypto-anarchist has always centered on the belief that I was part of a movement that was changing things. I knew that the corporations and governments would do their best to turn the Web into “TV with a BUY button.” But I also knew that people like me would keep Barlow’s “Home Of Mind” alive by resisting that gentrification at every turn. There’s a war going on outside no man is safe from, and I was part of a kind of “Fifth Column” of pro-privacy, anti-copyright, and pro-free-expression dissidents, rallying others to fight that war. The people like me were the tip of the spear, but there were also larger, mainstream forces at work. Mainstream forces like Silicon Valley were also doing the pushing. Sure, Google and Facebook were slowly eating our privacy for their own ends, but that was just the surface. Deep below the surface, the hackers, the pirates, and the crypto-anarchists were all keeping it real.

Lately I can’t help but feel like that is no longer the case. Silicon Valley *is* the Gentrified Web. It’s Google Safe Search. It’s the Facebook news feed. It’s Amazon’s Choice for buying cheap plastic shit. It’s using Instagram to post pictures of the things that we love most: ourselves, at the expense of the things that matter the most: everyone else. Silicon Valley betrayed us. It was bad enough that Hollywood tricked us into working jobs that we hate to buy shit that we don’t need to impress people we don’t even like. Silicon Valley has managed to weaponize that very same cocktail of envy and ennui to the point that we are living under the tyranny that is Fear Of Missing Out. The revolution is over. The good guys lost. Nothing left to do now but take a bunch of Xanax and watch American Idol on television… sorry, I mean watch clips of other people going to Coachella on your phone.

Occupy Wall Street and the Anti-SOPA movement were the peak. They got everyone organized, but no one could get their minds around the idea of a real conversation between real people. They can’t do it because no one can really imagine anything other than submission to the same old power of the centrally planned, corporate-sponsored, government state. Big Tech is just going to keep doing the same old rent-seeking and extraction-capitalism that everyone else has done for centuries. Big Tech isn’t revolutionary. It’s evolutionary. They will keep doing it because no one has any idea what something else looks like. Revolutions have been fought, but the infection of the old tyranny persists. The broken machine will stay broken; it doesn’t matter who is sitting in the driver’s seat.

I don’t got time for your petty thinking mind, son. I’m bigger than those…

I guess that this is the essence of The Great Big Thing(tm): that it doesn’t matter what you do, you are part of it. If you support these broken systems, you are part of it. If you fight the broken systems, you are *still* part of it. There is no “capital T” Truth, there is just the pro-machine propaganda locked in a scorched-earth conflict with the anti-machine propaganda. No one can see a way around it; everyone just seeks to stabilize it. The thing is, it won’t stabilize – because it’s broken. Broken systems do not function as designed. They malfunction.

For every good thing the hacker community does, there are these epic dramas between [fragile] egos, and the [toxic] cliques built around those egos. It’s exhausting to be part of it. Part of washing out of Facebook was also washing out of the hacker community. I just don’t have any more patience for dorks with Asperger’s syndrome failing at interacting with other dorks. It’s a lot of talking, and not a lot of hacking. There are a few people out there (most of them female, BTW) that are doing things, but for the most part it’s 10% doing something once, and then 90% holding court. I just can’t do it anymore.

Turnkey Torrents and Swedish Internets

A few months ago, I wrote about using a Turnkey Linux OpenVPN appliance to route network traffic thru Sweden. Since that time I have gotten my BitTorrent machine running. The other post was mostly about getting the VPN tunneling and routing to work. This post will mostly be about setting up the torrent server.

The Turnkey Torrent Server is neat because it’s a minimal Debian machine with a pre-configured Transmission BitTorrent client, a web interface for managing BitTorrent, a working Samba server, and WebDAV so you can use a browser to download files. Basically, you use the web interface to grab things, the Samba server makes them accessible to your media players on your internal network, and WebDAV makes the files accessible to the rest of the world, assuming you have the right ports forwarded. My preferred method for watching torrented videos is on a PC plugged into a TV, running VLC media player controlled with a wireless keyboard. I know I should be using Plex and shit like that, but I’m old school.

The Swedish Connection
For some of my friends who aren’t pirates (especially the friends that are into British TV) I am like their coke dealer except I deal in movies and TV shows. That means that sometimes I get asked to find things when I’m not at home. Like a third of my remote access shenanigans, A.K.A. reverse telecommuting, is so that I can pull up BitTorrent and snarf shit for friends and relatives when I’m not at home. Being able to expose the uTorrent remote interface to the web was great for letting my more technical non-hacker friends grab torrents without any assistance from me.

My VPN provider gives me the option of forwarding ports. When I was running uTorrent on a dedicated Windows machine, those forwarded ports were easy to configure. I would just set them up on the VPN site and map them to the ports I configured in uTorrent. One was for BitTorrent transfers, to make sure that my ratios reported correctly on private trackers. The other was for the uTorrent web interface. For a long time I ran Windows for torrenting because I used PeerBlock to help me fly under the radar. Times change tho. Real-time block lists are old and busted; VPNs are the new hotness. Unfortunately, this VPN router setup messes up port forwarding. When I set up port forwarding on the VPN provider side, the forwarded ports hit the doorway server rather than the torrent server, so that has to be fixed with more iptables kung fu on the doorway server.

I know I said that I wasn’t going to write anymore about the doorway server, but I lied. I needed to configure the doorway server to open those ports and then forward them to the torrent server. Let’s assume that my internal network is a 192.168.1.0/24 subnet (a class C sized block, a range of addresses from 192.168.1.1 to 192.168.1.254) with a default gateway of 192.168.1.1. All of my traffic goes through my local router and hits the Internet from my ISP, in the US. If a device asks for an IP via DHCP, this is the network configuration that it will receive, along with red-blooded American Internets. Here is an awful network diagram because why not?

The doorway server has a static IP of 192.168.1.254 and it’s configured to route all of its traffic through the VPN tunnel to Sweden. Any device that is configured to have a default gateway of 192.168.1.254 will also hit the Internet via the tunnel to Sweden, thereby receiving Swedish Internets. At this point, all the configuration is done, and your torrent server will work, but there won’t be any ports forwarded to it, which is lame. No forwarded ports is especially lame when you are using private trackers because it can really mess with your ratios. Now, you could just open a port on your firewall for the web interface on the American side, but that’s also pretty lame. If you want to use your torrent server, you should strictly be using Swedish Internets.

Welcome to Swedish Internet
To forward those ports, first set them up in Transmission, then with your VPN provider. The web interface port [12322] is already configured for you by Turnkey Linux. You set the other one in the Preferences->Network->Listening Port field. Once the entry points and the end points are configured, it’s time to do more iptables kung fu.
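If you would rather edit a config file than click around the web UI, both ports live in Transmission’s settings.json (a fragment, using the port numbers from the list below; stop transmission-daemon before editing, or it will overwrite your changes when it exits):

{
  "rpc-port": 12322,
  "peer-port": 9001
}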

Let’s assume the following:

  1. The web interface port for Transmission is 12322.
  2. The listening port in Transmission is 9001.
  3. The static IP for your torrent server is 192.168.1.10.
  4. The doorway server IP is 192.168.1.254.
  5. The forwarding ports you were able to get from your VPN provider are 9000 and 9001.
  6. You want to use port 9000 on the VPN side for the Transmission web interface.
  7. You want to use port 9001 on the VPN side for the Transmission listening port.

What needs to happen is for the VPN tunnel interface (tun0) to listen on ports 9000 and 9001, then forward traffic on those ports to 192.168.1.10. Then, you want any traffic on those same ports that comes from the doorway’s internal network interface (eth0) to be modified so that it doesn’t look like it came from the tunnel interface. This is super important for TCP handshakes.

First create your rules for accepting/forwarding connections on the VPN side:


# note: the FORWARD chain sees the post-DNAT port, so match 12322 here, not 9000
iptables -A FORWARD -i tun0 -o eth0 -p tcp --syn --dport 12322 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -p udp --dport 9001 -m conntrack --ctstate NEW -j ACCEPT

This was probably configured fine in the doorway server post, but this specifically allows all the traffic that passes between your VPN and the local network connections once a connection is established:


iptables -A FORWARD -i eth0 -o tun0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

Now add the rules to rewrite packets destined to the web interface and then rewrite the responses:


iptables -t nat -A PREROUTING -i tun0 -p tcp --dport 9000 -j DNAT --to-destination 192.168.1.10:12322
# after the DNAT above, the destination port is 12322, so match on that here
iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 12322 -d 192.168.1.10 -j SNAT --to-source 192.168.1.254

Add the rules to rewrite all the BitTorrent packets, including responses:


iptables -t nat -A PREROUTING -i tun0 -p udp --dport 9001 -j DNAT --to-destination 192.168.1.10:9001
iptables -t nat -A POSTROUTING -o eth0 -p udp --dport 9001 -d 192.168.1.10 -j SNAT --to-source 192.168.1.254

All the strict rewriting probably isn’t a big deal for the BitTorrent traffic because it’s UDP, and UDP don’t give a fuck.

If it’s working, point your browser to https://the-ip-address-of-your-vpn-server:9000 and you should be prompted to log in to the web interface. Once you’re sure it’s all good, then it’s time to save your working iptables config:

iptables-save | tee /etc/iptables.up.rules

Make sure that your rules work well after you reboot your VM. And then run your backups to make sure that they have your latest config because there’s nothing worse than trying to piece all this crap together for the third time.
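If your doorway doesn’t already restore that file at boot, the usual Debian trick (and the one the Turnkey template wires up, as described in the next post) is a pre-up line in /etc/network/interfaces. A sketch using the doorway’s addressing from above:

# /etc/network/interfaces (fragment)
auto eth0
iface eth0 inet static
    address 192.168.1.254
    netmask 255.255.255.0
    gateway 192.168.1.1
    pre-up iptables-restore < /etc/iptables.up.rules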

You can skip having to remember the IP by registering it as a subdomain somewhere, either with a dynamic DNS service, or with the registrar for a domain that you own.

In the unlikely event that I made this, or any other technical thing, look easy, rest assured that it took me at least a couple of hours. Also, I had it working months ago, but I forgot to update my snapshot and had to redo it again because I am not a smart man. Then, during this second go-around, I had to restore the VM from a backup because iptables just isn’t my bag. Thankfully BitTorrent is my bag. Happy pirating!

Additional Remote Access Shenanigans

In my previous post, I expanded on my preferred methods for gaining remote access to my home network. Since then, I have decided to quit using Hyper-V because it’s awful.

I have now decided to move to Proxmox on my server. Proxmox is pretty cool, although the documentation sucks. I recently started using Linux containers for my remote access servers instead of VMs, which Proxmox supports out of the box. A truly compelling feature of Proxmox is its integration with Turnkey Linux. You can download Turnkey Linux container templates directly in Proxmox and spin them up quickly. I used the Turnkey OpenVPN template to rebuild GATE, my OpenVPN server.

The performance improvement is remarkable. On Hyper-V, each Linux VM ate 512MB of RAM just to sit idle 99.9% of the time. So far I have 3 containers configured with 512MB of RAM each, but they use roughly 25-50MB each and leave the rest for the server. PORTAL, my Windows VM, still takes his share of the RAM and doesn’t give it back, but that’s nothing new.

Moar RAM == moar servers!
On the plus side, efficient use of memory means that I can feel better about running a dedicated Linux box (container) for each application. Dedicated boxes mean that when I inevitably screw something up, it doesn’t affect the other applications that are running (that I haven’t screwed up yet.) Also, with pre-built containers and snapshots, you can toss machines that you screwed up without losing much time. I know, I know, rebuilding a Linux box instead of fixing it is sacrilege… but I got other shit to do.

On the minus side, containers don’t really act like VMs, especially when it comes to alternative network configurations. In particular, a Linux container that uses a TUN or TAP interface needs some extra configuring. The TUN interface is how OpenVPN does its thing, so getting GATE, the OpenVPN server that allows access to the DMZ on my internal network, working took a lot of fiddling. I did a bunch of Googling and ended up with this forum post that recommends rebuilding the TUN interface at boot time with a script.

Here is the TUN script that I have graciously stolen so that I don’t have to Google it again (I didn’t even bother to change the German comments):

#! /bin/sh
### BEGIN INIT INFO
# Provides:          tun
# Required-Start:    $network
# Required-Stop:     $openvpn
# Default-Start:     S 1 2
# Default-Stop:      0 6
# Short-Description: Make a tun device.
# Description:       Create a tundev for openvpn
### END INIT INFO

# Aktionen
case "$1" in
    start)
        mkdir -p /dev/net
        mknod /dev/net/tun c 10 200
        chmod 666 /dev/net/tun
        ;;
    stop)
        rm /dev/net/tun
        rmdir /dev/net
        ;;
    restart)
        #do nothing!
        ;;
esac

exit 0

Then you enable the script and turn it on:
chmod 755 /etc/init.d/tun
update-rc.d tun defaults

With this script, I was able to stand up a real OpenVPN server (not just an Access Server appliance) for unlimited concurrent connections! Not that I need them. I’m the only one that uses the VPN and most of the time I just use SSH tunnels anyway.

Since OpenVPN container templates make standing up servers so easy, I thought I’d build another one that works in reverse. In addition to GATE, which lets OpenVPN clients route in to the DMZ, I thought I would use an OpenVPN client to route traffic from some DMZ hosts out to the Internet via Sweden. In the past, I used a VPN service to dump my BitTorrent box’s traffic this way, but I would like to extend that service to multiple machines. EVERYBODY GETS A VPN!

Öppna dörr. Getönda flörr.
I couldn’t figure out what a machine that does this kind of thing is called. It’s a server, but it serves up its client connection to other clients. It’s a router, but it just has the one network interface (eth0) that connects to a tunnel (tun0). It’s basically setting up a site-to-site VPN, but the other site is actually a secure gateway. This identity crisis led to a terminology problem that made finding documentation pretty tough. Fortunately, I found another pirate looking to do the same thing and stole his scripts 🙂

Since it’s a doorway to a VPN gateway to Sweden, I decided to call the box DÖRR, which is Swedish for “door”. I did this to maintain my trans-dimensional gateway theme (HUB, GATE, PORTAL, etc.)

Also, I would like to apologize to the entire region of Scandinavia for what I did to your languages to make the pun above.

The Turnkey Linux OpenVPN template sets up in one of 3 modes: “Server”, “Gateway”, or “Client”. “Server” is the option I went with for GATE, which allows OVPN clients the option of accessing local subnets. This is the “Server” portion of a Site-to-Site VPN or a corporate VPN. “Gateway” forces all OVPN clients to route all traffic through it, this is the config for secure VPN services like NordVPN or AirVPN. “Client” makes a client connection to another OVPN server. If you connect a “Client” to a “Server” you get the full Site-to-Site solution, but there is no documentation on Turnkey about setting up a “Site-to-Site Client” to route traffic from its internal subnet to the “Site-to-Site Server”.

What I am looking to do is configure a “Site-to-Site Client” but point it at a “Gateway”. Another important consideration when setting this up was that I didn’t want to do any meddling with the setup of my DMZ network. I just want to manually configure a host to use DÖRR as its default gateway. No need for proxies, DNSMasq, DHCP, or anything like that. Just static IPs, the way God intended 🙂

Step 1 – The Site-to-Site Client
Once I got the container running, I had to fix the /dev/net/tun problem (with the script above) and then make some config changes to OpenVPN.

Because this is a VPN client, and not a server, you need to get the OpenVPN client profile loaded. The bulk of my experience with OpenVPN clients is on Windows where you start the client when you need it. For this application you need to automatically run the OpenVPN connect process at boot and keep it running indefinitely.

First, you need to obtain a client config. I downloaded my ‘client.ovpn’ file from my VPN provider, and I copied it to /etc/openvpn/client.conf as root. You can name the files whatever you want, just remember what you named them because it’s important later.

cp /root/client.ovpn /etc/openvpn/client.conf

Now test the connection to make sure everything worked

openvpn --config /etc/openvpn/client.conf &

The & is important because it puts the OpenVPN process into the background, so that you get your command prompt back by pressing ENTER a couple of times. You can then test your Internet connection a few different ways to see what your IP is. You can use SSH with a dynamic port forward and tunnel your web traffic thru it with a SOCKS proxy. You could use curl or lynx to view a page that will display your IP. Or you could just use wget. I like to use ifconfig.co like so:

wget -qO- ifconfig.co

If all goes well, you should see your VPN provider’s IP and not your ISP’s.
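For the SSH option mentioned above, a dynamic port forward gives you a SOCKS proxy to test through (the address is the doorway’s static IP, which gets assigned later in this post):

# open a SOCKS5 proxy on localhost:1080, tunneled through the doorway box
ssh -D 1080 root@192.168.1.254
# then point a browser's SOCKS5 proxy at localhost:1080 and check your IP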

Once you get the VPN client working, you then want it to start up and connect at boot time. You do this by setting the ‘autostart’ option in /etc/default/openvpn.

nano /etc/default/openvpn
AUTOSTART="client"

If you used a different filename for ‘/etc/openvpn/client.conf’, change the name here too. The AUTOSTART value is the name of that file minus the ‘.conf’.

Now reboot your server and do your wget test again to make sure that the VPN connection is starting automatically.

Once that is working, you have to route traffic. This means IPTables, because OpenVPN and IPTables go together like pizza and beer.

Step 2 – De Routningen

Normally, to route traffic between interfaces on Linux, you have to enable IP forwarding (echo 1 > /proc/sys/net/ipv4/ip_forward, etc.). In this case, the Turnkey OpenVPN template has already done that for you. All you have to do is add a few forwarding rules:

# allow new connections from the internal network out through the tunnel
iptables -A FORWARD -i eth0 -o tun0 -s 192.168.1.0/24 -m conntrack --ctstate NEW -j ACCEPT
# allow established traffic in both directions
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# rewrite outgoing packets on the tunnel so replies come back to us
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

Now it’s time to test them. For this you need a client computer with a static IP. For the default gateway, you want to use the static IP that you assigned to eth0 on your VPN doorway server; I used 192.168.1.254 for DÖRR. If your test box also shows your VPN provider’s IP when you access a site like ipleak.net, then it’s time to make those rules permanent by saving them to /etc/iptables.up.rules. It is important to save them to that specific file because the Turnkey template calls that file when setting up the eth0 interface in /etc/network/interfaces.

iptables-save | tee /etc/iptables.up.rules

I don’t know why it’s set up that way. I’m just here to make awful jokes about Germanic languages.

Once that’s done, reboot the doorway server one last time and test with your client computer with the alternate default gateway.

Now that my VPN client is working again, I need to rebuild my BitTorrent machine. I am going to try to save some more RAM by going with another Turnkey Linux container template.

EDIT: In my elation over getting something to work, I forgot to change the default gateway back. Unfortunately my test machine was PORTAL, which happens to control my dynamic DNS. So currently all of my hostnames are pointed at Sweden, LÖL.

The Great Big Thing(tm): TV Edition

When I am not playing Skyrim to stave off my existential dread, I watch TV. Needless to say, I have been watching a lot of TV. I used to consider myself more of a cinema nerd, but films just aren’t that good anymore. When I compare some of my favorite films from years past to the franchise dreck that is film today, the drop in quality is stark. Sure, there are good films here and there, like The Dark Knight and Rogue One, but there are a lot of CGI messes too, and some TV shows seem to deliver more consistent quality.

Film sucks for the most part, and I can’t binge watch Adam Curtis documentaries all the time or I will lose my goddamn mind, so I watch TV. Of course I also do family stuff, but with an infant who doesn’t sleep at night, that involves a fair amount of staying up all night holding a sleeping baby, so TV is a big part of my nightly routine.

I have been watching a few new shows and re-watching some old faves, so I’m just going to list them in no particular order and say random things about them.

Stranger Things

I watched Stranger Things for the first time a couple of weeks after it dropped on Netflix. Since then, I’ve probably rewatched it at least 3 times. It’s a great show, full of nods to 80’s movies like E.T. and Stand By Me, but it also captures something essential about my childhood, which was playing Dungeons and Dragons in my friend’s basement for hours at a time and being bullied.

There are lots of neat things to spot in the show (like the fact that Hop’s daughter, Eleven, and Will all have the same stuffed tiger) and I am unreasonably pumped for season 2, which should be out in a few weeks. I have my own theories about what will happen, but I don’t really want to spoil anything if by some odd chance this is the thing that inspires someone to watch the show, and by an even odder chance I turn out to be right. I will say that the kids’ D&D game at the beginning of the season sort of outlines the plot of the season, and their game at the end probably outlines what will happen in the second season, or at least underlines what is still unresolved at the end of the first.

Rick and Morty (obvs.)

The new season of Rick and Morty is awesome. It’s another show full of details and fan theories to obsess over. My existential angst is both alleviated and agitated by the show. The show’s conflicting ideas about finding meaning in an uncaring universe either help or make things worse; I can’t tell which.

The essential point of Rick and Morty is that people with beliefs will have those beliefs tested at every turn. The show actively punishes characters for having any kind of belief, including the devil. The only person that seems to escape this punishment is Rick, and yet Rick is borderline suicidal. Rick has all the answers, and his answer is not to think about it. As power fantasies go, Rick is either the greatest expression because he is essentially all-powerful, or the worst expression because all of his power never seems to get him anywhere. Again, I can’t tell which.

True Detective (season 1)

Speaking of the dichotomy of belief and disbelief, the first season of True Detective is one of the best television shows I have ever seen. Rust (Matthew McConaughey) is incredibly intelligent and yet completely unable to interact with people, except for when he is interrogating them and luring them into making confessions. There are a number of similarities to Rick and Morty, mostly having to do with the juxtaposition of human meaning and savage cruelty, but also the juxtaposition of truth and deception, duty and corruption. There is just barely enough evidence in the show to convince you that Rust is either psychic or psychotic, and somehow not enough to convince you which one.

Rust is working to find truth, and in so doing alienating everyone and choosing to live in madness and misery. Marty on the other hand does the opposite and ends up alienating everyone anyway. The only way that they can uphold the law is to break the law. It’s existential absurdity at its finest.

Season 2 is a good show, it’s just not the masterpiece that is season 1. It’s still worth watching, I just haven’t watched it a dozen times like I have season 1. If you are going to commit to both seasons, you should probably watch season 2 first. Season 2 unfortunately lacks both the Southern Gothic aesthetic of season 1, and the Lovecraftian symbolism. Season 2 takes place in L.A. and without those motifs, it’s just weird L.A. people doing weird L.A. shit. Kind of like a darker version of Bosch.

BoJack Horseman
BoJack Horseman is another “grown up cartoon” that specializes in reflecting my own nihilism back at me. While Rick and Morty is an endorsement for not engaging with reality, BoJack Horseman is an endorsement for [shying away from] your responsibility for your own reality. Like Rick Sanchez, BoJack understands that everything is shitty and pointless. Unlike Rick, BoJack learns that he is responsible for his own happiness. Of course, BoJack does a comically bad job of handling that responsibility, but he is aware that the responsibility exists.

Watching BoJack Horseman and Rick and Morty as a matched set offers two interesting takes on the “whatever you do you will end up feeling empty inside” nature of Western Civilization. I think both shows have an interesting viewpoint: that you can either take responsibility for yourself and your place in the world around you, or you can deny it. No matter which choice you make, you can still fuck it up completely.

I call every cop show on TV CSI

My mother is a big fan of cop shows. My only exposure to them is when I visit her during prime time. Because I either stream or torrent all of the TV I watch, I have completely lost touch with broadcast television and commercials. Watching TV with my mom is fairly surreal for me.

I don’t have anything against cop shows. Some of my favorite shows, like The Wire, True Detective, and Luther, are cop shows. It’s just that there have been so many variations of the police procedural on TV for so many years that it seems like a running joke. So I call every cop show on TV “CSI:” followed by the one feature that I notice.

Here are a few examples:

CSI: Goth Girl – All I know about this show is that the writers don’t understand how IP addresses work. Also, it’s been on the air for like 15 years. Surely Goth Girl has grown up to be a Goth Woman by now? Maybe she settled down, married a Goth Guy? Had a couple of Goth Kids? Sure, I make fun of Goth Girl, but I also deeply respect her commitment to the Goth Lifestyle. I was all about Punk Rock until I went into the military and they shaved my head. Then I never looked back.

CSI: Hacker Girl – This is another show that doesn’t get how comically easy it is to change an IP. It’s totally possible to match up IP info with other details, but it will take a long time and probably require cooperation with foreign governments that may or may not extradite to the US. Or, you know, the cooperation of a huge and totally illegal NSA surveillance program.

Anyway, the technical dialogue in this show is laughable, but the idea that some nerd holed up in a dark place surrounded by computers is a major contributor to the success of a mission is pretty cool. Also, the gear she uses looks pretty cool. One point of order tho: why are there so many screens outside of her field of vision? I am a total diva when it comes to monitors, so I understand the need for many; it’s just that turning your head to see a screen really interferes with your productivity.

CSI: Cop Killer – I love that my mom watches this show because she was suuuuuper worried about the music I listened to in the 90’s, like Bodycount. Our media landscape is funny in that 90’s white people were scared of Ice-T, and then he became a weeknight TV staple. On a cop show.

I love Ice-T memes so much that I have created my own narrative for his CSI character. Ice-T blames the LAPD for the riots in 1992 and has infiltrated the LAPD under a false ID. Now he is slowly destroying the LAPD by tricking the detectives into investigating nonsense crimes and thereby wasting their resources.

CSI: Pun Glasses – I have no idea if my mom watches this show or not, but there’s a meme for it and I have an awful pun about puns, so by law I have to put it on the list.

For all of the crap that I give television, especially cop shows, there are a few cop shows that I’m a big fan of. Only about half of them are American, which even I think is a bit pretentious.

The problem with everything is central control

I have been reading postmortems on the election, and it basically came down to a failure of media and political elites to get a read on the voting public. Basically, a small number of very powerful intellectuals operated in a kind of silo of information.

All the stuff I have read and watched about the 2008 financial meltdown comes down to a failure of large banks. A small number of very powerful banks operated in a kind of silo of finance.

This country is a mess because of centralized control and centralized culture. It’s a mess because of intellectual laziness and emotional cowardice. It’s a mess because we rely on crumbling institutions to help us.

Centralizing seems natural and logical. There is an idea in economics called the economy of scale. Basically, a big operation (a firm, a factory, a project) has better purchasing power and is able to spread fixed costs over large numbers of units. In network topology, the Star Model is the simplest to manage, putting all the resources at the center. I tend to think about economics and computer networks as kind of similar.

One of the primary criticisms of the Star Network is the single point of failure. If the center of the network has any sort of problem, the whole network suffers. This is also a problem with economies of scale. A lot of electronic component manufacturing is centralized in Taiwan; in 1999, an earthquake there caused a worldwide shortage of computer memory. It seems that any time there is bad weather in New York City, flights are delayed across all of North America. In 2008, trouble with undersea fiber cables caused widespread Internet connectivity problems throughout Asia. A lack of biodiversity in potato crops contributed to the Irish Potato Famine. Centralized control is prone to failure.

This isn’t just a business or a technology problem. It can also be a cultural problem. Centralizing stores of information leads to gatekeeping, where a point of distribution controls the access and dissemination of information. This may be for financial gain, in the case of television and cinema, or it may be for political gain, in the case of the White House press corps. Media outlets repeating what the White House said, and the White House using media reports to support its assertions, is how the US ended up invading Iraq under false pretenses.

The diametric opposite of the Star Network is the Mesh network, specifically the Peer-To-Peer network. These models eschew ideas of economy and control in favor of resilience and scalability. Economy of scale eliminates redundancies because they are expensive. Peer-to-peer embraces redundancies because they are resilient.

Embracing peer-to-peer from a cultural standpoint means embracing individuality and diversity. Not just in a left-wing identity politics sort of way, but in a Victorian class struggle kind of way. It means eschewing the gatekeeper-esque ideas of mono-culture in favor of cultural and social diversity. Peer-to-peer culture is messy. It’s full of conflicts and rehashed arguments. It’s not a “safe space” where people of similar mindsets never encounter dissent. It’s a constant barrage of respectful argument and learning.

The cultural division in this country is a failure of our core values. It’s a failure of the right’s anti-intellectualism, and it’s a failure of the left’s elitism. It’s faith by many in crumbling institutions that are out of touch. It’s a failure of corporate media that forces us to turn to our social networks for news that discourages discussion and only seeks to confirm our individual biases.

I’ll be writing more about this opinion (and make no mistake, it’s just an opinion) in future posts. Hopefully it will foster some of the discussion that I am seeking.