Adding a Small Web Server to the Smuggling Operation

One problem with using a single Docker server for a modern smuggling operation is that I end up running a bunch of web applications on different port numbers that I can’t remember. The other challenge is needing to connect to that server in a number of different ways.

I might be connecting through local ports on SSH tunnels, NeoRouter, or via a hostname.

Putting a bunch of links to the different server ports on a webpage *seemed* simple enough: just grab a basic Apache container, fire it up, and create a basic webpage full of hyperlinks. Turns out, there are several challenges with this:

  1. You don’t know what network you will be accessing the server from. The IP, FQDN, or hostname could be different every time you access the webpage. A hyperlink to 192.168.1.211 is of no help if that IP is inaccessible to the client. This *could* be solved by using relative paths in the hyperlinks *but*
  2. A relative link can’t point at a different port. The browser resolves the link against the current page’s path, so a link to “:1234” ends up pointing to http://example.com/:1234.
  3. I haven’t created a web page without using a content management system in *at least* 15 years. I am just a bit behind the kids today with their hula-hoops and their rock-and-roll.

So I did what I always do when presented with a technical challenge: fall back on a piece of knowledge that I spent like 30 minutes learning that one time, like 20 years ago.

A long time ago, in a galaxy far away, there used to be these crappy free web hosts like Geocities where people could make their own websites. You could do all kinds of things with Java and JavaScript, but you couldn’t do anything that ran on the web server, like CGI scripts or server-side includes. Server-side includes were important, because you could keep commonly used code (like the header and footer for the page) in a couple of files, and if you changed one of those files, the change would replicate over your whole site.

You could do something similar with JavaScript. You put a script tag on every page, and have it load the shared code from a single file. Like so:

<script language="JavaScript" src="header.js"></script>

In the header.js file, I would put in a ton of document.write statements to force the client browser to write out the HTML of the head and body sections of the web page. I called this horrible technique “client-side includes”:

document.write('<body bgcolor="000000">');
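
In practice, header.js was just a pile of lines like these. The markup below is only an illustration of the idea, not a faithful copy of anything I actually had back then:

// header.js -- the shared "client-side include" pulled in by every page
document.write('<body bgcolor="000000" text="ffffff">');
document.write('<h1>Welcome to my home page</h1>');
document.write('<hr>');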

For the current challenge, I just have to rewrite the URL for each hyperlink based on the window.location values for whatever address the browser actually used to load the page:

<script language="JavaScript">
    document.write('<a href="' + window.location.protocol + '//' + window.location.hostname + ':8989' +
        window.location.pathname + '">Sonarr</a>');
</script>

Convincing WordPress to mark up JavaScript without actually executing it is kind of fiddly, so apologies for any weird spacing. The code examples look better here.
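
Once there is more than one app to link to, pasting that same blob of script over and over gets old, so the string concatenation can be wrapped in a little helper. This is just a sketch of the idea; the function name is mine, and 8989 is simply Sonarr’s usual port:

<script language="JavaScript">
    // Build a link to a different port on whatever host and protocol this page was loaded over
    function writePortLink(port, label) {
        document.write('<a href="' + window.location.protocol + '//' + window.location.hostname +
            ':' + port + window.location.pathname + '">' + label + '</a><br>');
    }

    writePortLink('8989', 'Sonarr');
</script>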

The solution works most of the time. I like to browse for torrents with a browser that blocks ads and JavaScript, so I have to enable JS for that tab, and then browse in that tab with caution. Sonarr, Radarr, and the like all rely heavily on JavaScript, but I still prefer to keep Brave’s shields up wherever possible.

Modernizing the Smuggling Operation

The winter holidays are a depressing time for me, so for the last month or so of the year I like to really throw myself into a game or project. After being ridiculed by one of my DnD bros about how inefficient and antiquated my piracy setup is, I decided to modernize by adding applications to automate the downloading and organizing of my stolen goods.

My old-fashioned manual method was to search trackers like YTS, EZTV, or The Pirate Bay and then add the magnet links to a headless Transmission server that downloads the files to an NFS share on my file server. Once a download completed, I would copy the files to their final destination, which is a Samba share on the file server. I don’t download directly to the Samba share because BitTorrent is rough on hard drives. My file server has several disks: some of them are nice (expensive) WD Red or Seagate IronWolf disks, and some are cheap no-name drives. I use the cheap drives for BitTorrent and other forms of short-term storage, and the nicer drives for long-term storage.

For media playback, I used a home theater PC that would access the shared folder, and then play the files in VLC media player. This was the state of the art in 2003, but the game has gotten more fierce.

My new piracy stack consists of Radarr, Sonarr, Lidarr, and Jackett. These are dedicated web apps for locating (Jackett), downloading (Transmission, same as before), and organizing movies (Radarr), TV (Sonarr), and music (Lidarr). Once the media is downloaded, sorted, and properly renamed, it will be streamed to various devices using Plex.
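
Tying that back to the link page from the first half of this post, the whole stack fits in one loop. The ports below are just each app’s stock default; Docker port mappings can of course put them somewhere else entirely:

<script language="JavaScript">
    // One link per app, built against whatever hostname the page was reached on
    // Note: Plex's web UI actually lives under /web on its port
    var apps = { 'Jackett': 9117, 'Transmission': 9091, 'Radarr': 7878,
                 'Sonarr': 8989, 'Lidarr': 8686, 'Plex': 32400 };
    for (var name in apps) {
        document.write('<a href="' + window.location.protocol + '//' + window.location.hostname +
            ':' + apps[name] + '">' + name + '</a><br>');
    }
</script>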

Rather than run a different VM or Linux container for each app, a friend recommended that I use Docker. Docker is a way of packaging applications up into “containers.” It has taken me a good while to get my mind around the difference between a Linux container and a Docker container. I don’t have a good answer, but I do have a hot take: if you want an application/appliance that you can roll into a file and then deploy to different hosts, you can ask one of two people to do it for you: a system administrator or a developer. If you ask a system administrator to do it, you will end up with LXC, where the Linux config is part of the package, and that package behaves like a whole server with RAM, an IP address, and something to SSH into. If you ask a developer to do it, you just get the app, a weird abstraction of storage and networking, and no Unix file permissions to deal with. And that’s how you get Docker.

Because I am a hardware/operating system dude who dabbles in networking, LXC makes perfect sense to me. If you have a virtualization platform, like an emulator or a hypervisor, and you run a lot of similar systems on it, why not just run one Linux kernel and let the other “machines” share it? You get separate, resource-efficient operating systems with their own IPs, memory allocation, and even storage.

The craziest thing about Docker is that if you start a plain container, like Debian, it will pull the image, configure it, start it up, and then immediately shut it down. And this is the expected behavior: a container only runs as long as its main process does, and a bare Debian image has no long-running process to keep it alive.

I like that Docker can pop up an application very quickly with almost no configuration on my part. Docker storage and networking feel weird to me, though, like a bunch of things stapled on after the finished product was delivered. From a networking standpoint there is an internal IP scheme with randomly generated IPs for the containers that reminds me of a home network set up behind a consumer-grade router. If you want a container to be reachable from the “outer” network, you have to map ports to it. Storage is abstracted into volumes, with each container getting a dedicated volume with a randomly generated name. You don’t mount NFS inside each container/volume; instead you mount it on the host and point the container at the host’s mountpoint. It’s kind of like NFS, but internal to Docker, over that weird internal network.

Also, in typical developer fashion, there is very little regard for memory management. The VM that I am running Docker on has 16 GB of RAM, and its utilization is maxed out 24/7. Maybe Docker doesn’t actually use that RAM constantly; maybe it just reserves it and manages it internally. It’s been chewing through my media collection for a couple of months now, and slowly but surely new things are showing up in Plex. Weird as the stack is, it’s still pretty rad.

All the randomly generated shit makes me feel like I don’t have control. I can probably dig a little deeper into the technology and figure out some manual configuration that would let me micromanage all those details, thereby defeating the whole entire purpose of Docker. Instead, I will just let it do its thing and accept that DevOps is a weird blend of software development and system administration.