## Upgrading a hosted Debian 8 VM to Debian 9

A long time ago, I extolled the virtues of Cloud at Cost’s developer cloud. It’s a good tool for spinning up a box to mess with, but it’s far from being reliable enough for “production” use. What it is great for is having a box that isn’t constrained by a network (like a VM at work might be) and doesn’t require modifications to a local firewall to reach (like a VM at home might), while avoiding the cost of a “real” production VM on Digital Ocean or Amazon.

Using a VM this way is a bit like building your house out of straw. It goes up fast, but it comes down fast too. So I have gotten used to setting up machines quickly and then watching them be corrupted and blowing them away.

Sometimes I do something stupid to corrupt them, sometimes they go corrupt all on their own.

The base Debian install on C@C is getting a bit long in the tooth, so part of my normal setup is now upgrading Debian 8.something all the way to Debian 9.whatever. This procedure will take a pretty long time. A long enough time that you will probably have to leave home to go to work, or vice versa. I recommend locking down SSH and then installing screen so you can attach and detach sessions that are running in the background.
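If you haven’t used screen before, the short version looks like this (if the install fails, run the apt-get update below first; the session name ‘upgrade’ is just an example):

# apt-get install screen
# screen -S upgrade

Detach with Ctrl-A then D. If your connection drops mid-upgrade, log back in and reattach with:

# screen -r upgrade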

First, you should check your version of Debian, to make sure that you are on some version of Jessie, and not an earlier version for some reason:

# cat /etc/debian_version

The sources on a brand new C@C Debian 8 box are woefully out of date. Use your favorite editor (mine is nano; fight me) to edit the sources list.


# nano /etc/apt/sources.list

### Remove the entries and paste these in ####
deb http://httpredir.debian.org/debian jessie main
deb http://security.debian.org jessie/updates main

Once you have the list updated, save the file, then refresh the package index and bring the box current like so:

# apt-get update
# apt-get upgrade


On a new install this will take a long time. Note that if you are having trouble installing screen or fail2ban, you probably have to do this step before installing them.

Step 2 – See how bad the damage is

Now we see what kind of hell we will be unleashing on this poor little machine by upgrading just about everything. First, see what packages are broken:

# dpkg -C

On a fresh Debian 8 box, there shouldn’t be a lot to report. If there is, you need to fix those packages first. Assuming that you got no messages about messed-up packages, you can see what’s been held back from upgrade like so:

# apt-mark showhold

If you got a message that packages are obsolete, you can remove them like so:

# apt-get autoremove

Hopefully you don’t have any messed up packages, and you can proceed to the next step.

Step 3 – Do the thing

Now it’s time to change the sources from Jessie to Stretch and basically do step 1 all over again.

First you update the sources.list file:


# nano /etc/apt/sources.list

### Remove the entries and paste these in ####
deb http://httpredir.debian.org/debian stretch main
deb http://security.debian.org stretch/updates main


And then refresh the package index against the new sources:

# apt-get update

Now see what’s going to be upgraded, and whether anything looks like it will blow up:

# apt list --upgradable
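If you want an actual dry run, apt-get also has a simulate switch that prints what it would do without touching anything; a quick sanity check before the real thing:

# apt-get -s dist-upgrade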

Assuming there are no flashing red lights or whatever, it’s time to pull the trigger.

Step 4 – Hold on to your butt

Once you run the next set of commands, you will be asked whether you want services to be restarted automatically, without prompting. Assuming that you are doing this in screen, you can lose your SSH connection and the process will still run. In the event of a catastrophic failure, you can probably open the console and reattach to your screen session, so say yes and then buckle up.

TIME’S UP! LET’S DO THIS! LEEEEEEEEEERRRRROOOOOOOYYYYY:

# apt-get upgrade
# apt-get dist-upgrade

This will take a long time. Like a really long time. It’ll look cool tho. Having a command line window with text rolling by always makes me feel like Neo from the Matrix.

Step 5 – ??? Profit

Once it’s done, check the Debian version and revel in your victory:

# cat /etc/debian_version

Then check for obsolete packages, of which there will probably be a bunch (you may need to install aptitude first):

# aptitude search '~o'

And then finally remove them all, like so:

# apt-get autoremove

Just to be safe, you should probably update and upgrade one last time:

# apt-get update
# apt-get upgrade

Step 6 – Diversify your backups

Now that you have gone through all of the difficulty of upgrading your house made of straw, it would be a shame for a big bad wolf to blow it down. For this reason, I recommend an old school Unix backup with tar, and keeping a copy of your backup on another computer. For this second part we will be using scp, and I recommend setting up SSH Keys on another Unix host. This might be a good time to set up ssh key pairs without passphrases for your root accounts to use.

The security model looks something like this:

1. No one can log into any of the hosts via SSH as root.
2. No one can log into any of the hosts without a private key.
3. Your plain user account’s private key should require a passphrase.
4. Your root password should be super strong, probably randomly generated by, and stored in, a password manager like KeePass.
5. If you want to scp a file as root without a passphrase, you should have logged in as a plain user with a private key with a passphrase and then used su to become root.
6. If you can get past all those hurdles, a second public key passphrase isn’t going to protect much.
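Setting up the passphrase-less root key pair is quick; a minimal sketch, assuming the backup host and user from the scp examples below:

# ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa
# ssh-copy-id steve@home.stevesblog.com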

Change to the root of the file system (/) and run a giant compressed backup job of the whole filesystem (except for the giant tarball that you are dumping everything into).

# cd /
# tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --one-file-system / 

This will also take a long time, so you should seriously be using screen. Also, there is a lot of stuff in the backup that doesn’t actually need to be backed up, so you could add additional --exclude=/shit/you/dont/need statements to shrink the size of your backup file.
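For example, the apt package cache and temp files are fair game; a sketch, with paths you would adjust to taste:

# tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/var/cache/apt/archives --exclude=/tmp --one-file-system /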

Once the backup is done, you can change the name of the backup file to that of your machine name and use scp to copy the backup file off to another Unix host. In the example below I am calling the backup randoVM. You should change the name because you may be backing up multiple VMs to the same destination. I like to use my HUB VM at home because it has a lot of [virtual] disk space compared to my hosted VMs.

# mv /backup.tar.gz /randoVM.tar.gz
# scp randoVM.tar.gz steve@home.stevesblog.com:~/backups/randoVM.tar.gz

You can leave the big tarball on your VM’s file system, or you can delete it. There are merits to doing either. You will want to repeat this backup procedure periodically as you add features and services to the VM.

If you find yourself needing to restore the VM because you or the big bad wolf did something stupid, you can simply pull the backup down and expand it.

# cd /
# scp steve@home.stevesblog.com:~/backups/randoVM.tar.gz .
# tar -xvpzf randoVM.tar.gz

## Network File Systems and VMs: old school Unix meets the new school virtualization

I have been replacing low end servers with virtual machines for a while now, and it’s been kinda rad. In a previous post I mentioned replacing a physical server with a VM for Bittorrent. The results were fantastic.

The typical problem with BT is that it devours bandwidth and gets you busted by Hollywood. The other problem is that it also devours disk space. I solved the first problem using Swedish Internets, but my disk problem was actually exacerbated by using a VM.

In the past, I would just throw a big drive into a dinky little Atom CPU box and snarf torrents all day. When I set up my Proxmox cluster, my VMs were still using local drives. For a while, my Turnkey Linux Torrent Server VM had a 500GB virtual disk. That worked ok. I would grab videos and whatnot and copy them to my NAS for viewing, and once I seeded my torrents back 300%, I would delete them. This was fine until I set up a RetroPie and started grabbing giant ROM sets from a private tracker.

Private trackers are great for making specialized warez easy to find. The problem is that they track the ratio of what you download compared to what you upload, and grabbing too much without seeding it back is a no-no. I now find myself grabbing terabytes of stuff that I have to seed indefinitely. Time to put more disk(s) into the cluster.

I spent way too much money on my NAS to keep fretting about the hard drives on individual machines, virtual or otherwise. So the obvious choice was to toss a disk in and attach it to the VM through the network. I like using containers for Linux machines because the memory efficiency is insane. My research indicated that the best move with containers was to use CIFS. I couldn’t get that to work, so I went with the tried and true way: NFS. NFS is really the way to go for Unix to Unix file sharing. It’s fast, and fairly easy to set up. It also doesn’t seem to work with Proxmox containers, because kernel mode something or another… based on the twenty minutes I spent looking into the situation.

So I rebuilt my torrent server as a VM, and used NFS to mount a disk from my NAS like so:

In the /etc/fstab on my torrent server I added this line:

192.168.1.2:/volume2/Downloads /srv/storage nfs rw,async,hard,intr,noexec 0 0

Where:

1. 192.168.1.2 is the IP address of my NAS
2. /volume2/Downloads is the NFS export of the shared folder. I have a Synology, so your server config will probably be different.
3. /srv/storage is the mount point on the torrent server. On the Turnkey Torrent Server this is where Transmission BT stores its downloaded files by default.
4. The rest of the options mean it’s read/write and that basically anyone can modify the contents. These are terrible settings for file shares that require privacy and security. They’re fine for stolen videos and games tho.

Once that is in place, you can mount it:

mount /srv/storage

And you’re set.
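For reference, if your NAS were a plain Linux box instead of a Synology, the matching export would be a line in /etc/exports on the server; the subnet and options here are assumptions for the sketch:

/volume2/Downloads 192.168.1.0/24(rw,async,no_subtree_check)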

Because the disk is on my NAS, I can also share it using CIFS and mount it on my Windows machines. This is handy because when I download a weekly show, I can watch it directly from the Downloads folder and then delete it once it’s done seeding. I like doing this for programs that will end up on Netflix, where I just want to stay current rather than hanging on to the finished program.

This worked out so well that I decided to spin up a Turnkey Linux Media Server. For this little project, I basically duplicated the steps above, using the folder where my videos are shared. So far, I have it working for serving cartoons to my daughter’s Roku TV and my Amazon Fire Stick. I have plans to set the Emby app up on the kids’ Amazon Fire Tablets soon, once I figure out the app situation, which is probably going to involve side loading or some other kind of Android fuckitude.

Of course, my media files aren’t properly named or organized, so I will have to write a script to fix all of that 🙂

UPDATE: During the holidays, the private tracker in question did an event where you could download select ROM sets for free and get a bonus for seeding them, so the brand new disk I bought filled up and I had to buy another. I couldn’t migrate a single disk to RAID0, so I had to move the data off the disk, build the new array, and then move the data to it. An operation that took something like 36 hours for 4TB via USB 3.

Also, not being able to use NFS with a container is apparently a Proxmox limitation that has been remedied in the latest release.

## Stupid SSH Tricks

I use this site for a number of reasons. One of them is to keep a little diary of things that I have figured out so that I can reference them later. The problem is that those little discoveries are buried in rambling posts about why I chose to do something.

I have given 2600 talks about SSH tunnels but I really don’t have a permanent record of my various [mis]uses of SSH, so I thought I would put them all in a post for future reference. I have written about securing SSH with asymmetric encryption keys, but there are many more things that you can do with SSH.

IMPORTANT NOTE: none of these tricks work unless your host has TCP port forwarding enabled. This option is usually enabled by default, but you should double-check your /etc/ssh/sshd_config file:

nano /etc/ssh/sshd_config

Then make sure that the option is uncommented and enabled:

 AllowTcpForwarding yes

Also, if all this PuTTY crap looks tough to do, just remember that all of this is way easier on Unix, particularly using the -J option for jumping SSH connections.
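For a taste of that, hopping through an intermediate box with OpenSSH’s -J switch looks like this (both hostnames are placeholders):

ssh -J alice@jumphost.example.com alice@inside-host.example.com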

Dynamic Ports: slip past content filters with this one weird trick

This is the go-to use for an SSH tunnel. University and corporate networks are often overseen by petty tyrants who limit access to sites of questionable content, or sites that are prone to eating bandwidth, like social and streaming media. Ironically, most of these networks allow outbound SSH. SSH kills content filters dead.

You can create a SOCKS proxy using an SSH connection by creating a dynamic port. You can then point your browser to use your local address (127.0.0.1) as a SOCKS proxy to smuggle all of your browser traffic thru the SSH tunnel.

I like using Firefox specifically for this task because Chrome and IE do dumb things when you mess with the proxy settings. Or rather, they did dumb things that one time I used them to tunnel 10 years ago, and I’ve used Firefox ever since. If you want to also tunnel your DNS queries, type “about:config” into the Firefox address bar, find the setting “network.proxy.socks_remote_dns”, and set it to “true”. There’s probably a more modern way to do that, but that’s the tried and true way. Tunneling DNS isn’t strictly necessary, but it does help you stay under the radar on a restrictive network.

It should be noted that once you tunnel your browser traffic through the tunnel, you are likely to lose access to intranet sites on your local subnet. For this and a number of other reasons, I like to run Brave (Chrome) for my normal stuff, and Firefox for the tunneled stuff. Running two browsers at the same time seems wasteful, but it saves a bunch of headaches.

On Windows, you can drop the port one of three ways:

If you are into radio buttons and text boxes, you can configure PuTTY to open one every time you connect. On the tree control to the left, click Connection -> SSH -> Tunnels. You’ll need to select ‘Dynamic’, enter a source port (5555 in this example), and click ‘Add’. Your new port should appear in the list above as ‘D5555’. Be sure to go back to your session and click ‘Save’ if you want the port to be created every time you open the session. As long as you are messing with your PuTTY session, you might as well set your terminal colors to green text so you look like a real hacker.

If using the GUI makes you feel like a wimp, you can just script it with a handy batch file:

putty -D 5555 user@ssh.server.com

You can also feel like a real UNIX badass by copying putty.exe to c:\Windows\System32 and renaming it to ssh.exe so you can kick off your session from DOS like a real console cowboy.

OR you can dump the whole interactive shell pretense and use plink.exe to make your connection and drop your ports without the whole pesky PuTTY window getting in your way:

plink -D 5555 user@ssh.server.com

Plink is basically PuTTY with no window. It functions the same in all other respects.

If you are using a Unix or Linux workstation, you can set up your dynamic port with a similar syntax:

ssh -C -D 5555 user@ssh.server.com

Note: the -C switch compresses traffic going thru the tunnel, theoretically increasing transfer speeds.

Local Ports: I heard u like shells so we tunneled SSH thru yo SSH so you can get a shell while you gettin a shell

You can use SSH to secure more than just your web browsing. You can use it to secure pretty much any TCP connection. I like using it to secure notoriously insecure VNC and X sessions.

Another use is to get around port restrictions. Some networks may allow outbound SSH, but only on port 22. Some home ISPs get shitty about running servers on reserved ports like 22. This means you have to forward some bunk port on your home router, like 62222, to the SSH server on your home network. I used to do this when I had Time Warner cable. The problem would get worse when I was trying to connect remotely from a restrictive network that only let SSH out on port 22.

To get around this problem, I would have to SSH to a public access Unix system like the Super Dimensional Fortress on port 22 and then drop a local port that forwarded to the bunk SSH port on the IP of my home router. When I did that with different windows that had different text colors it looked like I was on CSI: Hacker Girl back tracing the killer’s IP address.

The setup is pretty much the same as for the dynamic port, only you have to specify a destination IP and port as well. The idea is to connect to a remote host on the other side of a restrictive firewall and use the SSH tunnel to make something on the remote network accessible to your local network. The tunnel forwards all traffic sent to your local port to the remote destination. In the example above it was SSH, but it could be RDP or any other TCP connection. I’ll use RDP (port 3389) in the example below.

To tunnel a Microsoft Remote Desktop session through SSH using the PuTTY GUI, use the tree control to the left and click Connection -> SSH -> Tunnels. You’ll need to select ‘Local’, enter a source port (13389 in this example), set the destination address or hostname and port (192.168.1.10:3389 here), and click ‘Add’. 13389 will be the port on your workstation that is now connected to the RDP port on the remote network (3389). Your new port should appear in the list above as ‘L13389 192.168.1.10:3389’. Be sure to go back to your session and click ‘Save’ if you want the port to be created every time you open the session. In your RDP client, you would connect to 127.0.0.1:13389.

If you are scripting this setup, use the -L switch along with your source port, destination IP/host and the destination port. Using the scenario from above, you forward local port 13389 to 192.168.1.10 port 3389 like this, where your SSH username is ‘alice’ and your home network’s dynamic DNS hostname is casa-alice.dynamic.DNS:

putty -L 13389:192.168.1.10:3389 alice@casa-alice.dynamic.DNS

And finally, the syntax is the same with plink:

plink -L 13389:192.168.1.10:3389 alice@casa-alice.dynamic.DNS
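On a Unix or Linux workstation, the same local forward works with OpenSSH’s -L switch:

ssh -L 13389:192.168.1.10:3389 alice@casa-alice.dynamic.DNS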

You can actually specify multiple local ports to remote destinations. I do this with PuTTY to get direct access to the web interface on my Proxmox cluster and to RDP to a Windows host using just the one tunnel and without having to mess with my proxy settings.
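Here is a sketch of that in a single command; the Proxmox web UI listens on port 8006, and the internal IPs are made up for this example:

plink -L 18006:192.168.1.5:8006 -L 13389:192.168.1.10:3389 alice@casa-alice.dynamic.DNS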

Remote Ports: SSH kills firewalls dead.

In the local port scenario, you are connecting to a remote host behind a firewall and using the SSH tunnel to make a host inside the remote firewall accessible to your local network. You can also do the opposite: connect to a remote host outside of your firewall and use the SSH tunnel to make a host inside your local firewall accessible either to the Internet or to hosts on a remote network.

You do this by dropping a port at the other end of the tunnel, on the remote host. The obvious use is to temporarily punch a hole in the local firewall to expose an internal web server to the Internet. If the remote host that you are connecting to is directly connected to the Internet (like a hosted VM from Cloud At Cost), you can temporarily open a port on the remote server to tunnel traffic to the web server on your internal network.

A more nefarious use for a remote port would be for a leave-behind box (formerly known as a dropbox, before the term became a brand name for cloud storage) to phone home from a target network. Basically, you build a cheap single board PC, like a Raspberry Pi or a plug server, that you physically plug into a network that you plan on ~~hacking~~ testing for security holes. This approach saves a ton of time reverse engineering firewalls to gain access. There are two basic approaches: load up the box with tools and drop it, or use a minimal box as a router/pivot for tools you are running outside the target network.

To do this with the PuTTY GUI, it’s basically the same as setting up a local port. Use the tree control to the left and click Connection -> SSH -> Tunnels. You’ll need to select ‘Remote’, enter a source port (58080 in this example), set the destination address or hostname and port (192.168.1.10:80 here), and click ‘Add’. You also need to check both of the boxes next to “Local ports accept connections from other hosts” and “Remote ports do the same (SSH-2 only)”. Your new port should appear in the list above as ‘R58080 192.168.1.10:80’. Be sure to go back to your session and click ‘Save’ if you want the port to be created every time you open the session.

If you are scripting this setup, use the -R switch along with your source port, destination IP/host and the destination port. Using the scenario from above, you forward remote port 58080 on ssh.server.com to port 80 on your internal home web server with the IP 192.168.1.10 like this:

putty -R 58080:192.168.1.10:80 alice@ssh.server.com

And finally, the syntax is the same with plink:

plink -R 58080:192.168.1.10:80 alice@ssh.server.com
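And for the Unix folks, the OpenSSH equivalent:

ssh -R 58080:192.168.1.10:80 alice@ssh.server.com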

The only gotcha with scripting your remote port drop with putty/plink is that I don’t think there is a command line switch for enabling connections from other hosts, so you have to enable remote port sharing on the SSH server side.

Making Local and Remote Ports Accessible to other hosts

Sharing your remote and local hosts lets you set up your SSH tunnel on one host and then connect to the tunnel from a different host.

In the case of a local port, you could initiate the SSH session on your home Linux server, and then connect to that port from your Windows workstation. This is handy if you are tunneling RDP and you don’t have PuTTY available on your Windows box. Although PuTTY is super portable so it’s dead simple to smuggle it onto the most locked down of Windows machines.

In the case of the remote port, it’s pretty much mandatory for the web server or dropped box use cases. You can still script the connection, you just have to modify the sshd_config file on your SSH server. On a Debian-esque server you do this by using sudo or su to become root and then typing:

nano /etc/ssh/sshd_config

You then add the GatewayPorts option. You can put it anywhere in the file, but I prefer to keep it in the first few lines of the file where entries for port configuration are.

# What ports, IPs and protocols we listen for
Port 22
GatewayPorts yes

Then save the file and restart SSH:

systemctl restart ssh

Or you can use the older ‘service’ syntax, which still works:

service ssh restart
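To confirm that the option actually took effect, sshd’s extended test mode prints the running config (run it as root):

sshd -T | grep -i gatewayports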

I am sure that this option is a big security risk, so I recommend a cheap, low-powered VM dedicated specifically to bouncing SSH connections. I also recommend securing it with SSH keys. If you are looking to script SSH connections that are secured with SSH keys, I recommend not setting a passphrase on your private key. You can include your private key on the putty/plink command line with the -i switch:

putty -L 13389:192.168.1.10:3389 alice@casa-alice.dynamic.DNS -i c:\path\to\your\key\key.ppk

## Turnkey Torrents and Swedish Internets

A few months ago, I wrote about using a Turnkey Linux OpenVPN appliance to route network traffic thru Sweden. Since that time I have gotten my BitTorrent machine running. The other post was mostly about getting the VPN tunneling and routing to work. This post will mostly be about setting up the torrent server.

The Turnkey Torrent Server is neat because it’s a minimal Debian machine with a pre-configured Transmission BitTorrent client, a web interface for managing BitTorrent, a working Samba server, and a webDAV client so you can use a browser to download files. Basically, you use the web interface to grab things, the Samba server makes them accessible to your media players on your internal network, and webDAV makes the files accessible to the rest of the world, assuming you have the right ports forwarded. My preferred method for watching torrented videos is on a PC plugged into a TV running VLC Media Player, controlled with a wireless keyboard. I know I should be using Plex and shit like that, but I’m old school.

The Swedish Connection
For some of my friends who aren’t pirates (especially the friends that are into British TV) I am like their coke dealer except I deal in movies and TV shows. That means that sometimes I get asked to find things when I’m not at home. Like a third of my remote access shenanigans, A.K.A. reverse telecommuting, is so that I can pull up BitTorrent and snarf shit for friends and relatives when I’m not at home. Being able to expose the uTorrent remote interface to the web was great for letting my more technical non-hacker friends grab torrents without any assistance from me.

My VPN provider gives me the option of forwarding ports. When I was running uTorrent on a dedicated Windows machine, those forwarded ports were easy to configure. I would just set them up on the VPN site and map them to the ports I configured in uTorrent. One was for BitTorrent transfers, to make sure that my ratios reported correctly on private trackers. The other was for the uTorrent web interface. For a long time I ran Windows for torrenting because I used PeerBlock to help me fly under the radar. Times change tho. Real time block lists is old and busted. VPNs is the new hotness. Unfortunately, this VPN router setup messes up port forwarding. When I set up port forwarding on the VPN provider side, the forwarded ports hit the doorway server rather than the torrent server, so that has to be fixed with more IPTables kung fu on the doorway server.

I know I said that I wasn’t going to write any more about the doorway server, but I lied. I needed to configure the doorway server to open those ports and then forward them to the torrent server. Let’s assume that my internal network is a 192.168.1.0/24 subnet (a class C block, a range of addresses from 192.168.1.1 to 192.168.1.254) with a default gateway of 192.168.1.1. All of my traffic goes through my local router and hits the Internet from my ISP, in the US. If a device asks for an IP via DHCP, this is the network configuration that it will receive, along with red-blooded American Internets. Here is an awful network diagram because why not?

The doorway server has a static IP of 192.168.1.254 and it’s configured to route all of its traffic through the VPN tunnel to Sweden. Any device that is configured to have a default gateway of 192.168.1.254 will also hit the Internet via the tunnel to Sweden, thereby receiving Swedish Internets. At this point, all the configuration is done, and your torrent server will work, but there won’t be any ports forwarded to it, which is lame. No forwarded ports is especially lame when you are using private trackers because it can really mess with your ratios. Now, you could just open a port on your firewall for the web interface on the American side, but that’s also pretty lame. If you want to use your torrent server, you should strictly be using Swedish Internets.

Welcome to Swedish Internet
To forward those ports, first set them up in Transmission, then with your VPN provider. The web interface port [12322] is already configured for you by Turnkey Linux. You can set the other port in the Preferences->Network->Listening Port field. Once the entry points and the end points are configured, it’s time to do more iptables kung fu.

Let’s assume the following:

1. The web interface port for Transmission is 12322.
2. The listening port in Transmission is set to 9001.
3. The static IP for your torrent server is 192.168.1.10
4. The doorway server IP is 192.168.1.254.
5. The forwarding ports you were able to get from your VPN provider are 9000 and 9001.
6. You want to use port 9000 on the VPN side for the Transmission web interface.
7. You want to use port 9001 on the VPN side for the Transmission listening port.

What needs to happen is for the VPN tunnel interface (tun0) to listen on ports 9000 and 9001, then forward traffic on those ports to 192.168.1.10. Then, you want any traffic on those same ports that comes from the doorway’s internal network interface (eth0) to be modified so that it doesn’t look like it came from the tunnel interface. This is super important for TCP handshakes.

First create your rules for accepting/forwarding connections on the VPN side:


iptables -A FORWARD -i tun0 -o eth0 -p tcp --syn --dport 9000 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -p udp --dport 9001 -m conntrack --ctstate NEW -j ACCEPT


This was probably already configured in the doorway server post, but this specifically allows all the traffic that passes between your VPN and local network connections once a connection is established:


iptables -A FORWARD -i eth0 -o tun0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT


Now add the rules to rewrite packets destined to the web interface and then rewrite the responses:


iptables -t nat -A PREROUTING -i tun0 -p tcp --dport 9000 -j DNAT --to-destination 192.168.1.10:12322
iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 12322 -d 192.168.1.10 -j SNAT --to-source 192.168.1.254


Add the rules to rewrite all the BitTorrent packets, including responses:


iptables -t nat -A PREROUTING -i tun0 -p udp --dport 9001 -j DNAT --to-destination 192.168.1.10:9001
iptables -t nat -A POSTROUTING -o eth0 -p udp --dport 9001 -d 192.168.1.10 -j SNAT --to-source 192.168.1.254


All the strict rewriting probably isn’t a big deal for the BitTorrent traffic because it’s UDP, and UDP don’t give a fuck.

If it’s working, point your browser to https://the-ip-address-of-your-vpn-server:9000 and you should be prompted to log in to the web interface. Once you’re sure it’s all good, then it’s time to save your working iptables config:

iptables-save | tee /etc/iptables.up.rules

Make sure that your rules work well after you reboot your VM. And then run your backups to make sure that they have your latest config because there’s nothing worse than trying to piece all this crap together for the third time.
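If you ever rebuild this setup on a stock Debian box instead of a Turnkey template, the usual trick for reloading the saved rules at boot is a pre-up line in /etc/network/interfaces; a sketch, assuming eth0 with DHCP:

auto eth0
iface eth0 inet dhcp
    pre-up iptables-restore < /etc/iptables.up.rules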

You can skip having to remember the IP by registering it as a subdomain somewhere, either with a dynamic DNS service, or with the registrar for a domain that you own.

In the unlikely event that I made this, or any other technical thing, look easy, rest assured that it took me at least a couple of hours. Also, I had it working months ago, but I forgot to update my snapshot and had to redo it again because I am not a smart man. Then during this second go-around I had to restore the VM from a backup because iptables just isn’t my bag. Thankfully BitTorrent is my bag. Happy pirating!

## Adventures in Proxmox, part 1

It’s been a few weeks since I exorcised Hyper-V from my life like an evil demon. I have replaced it with Proxmox, and so far it’s been mostly great, with a couple of serious caveats.

My transition to Proxmox has been rather involved, not so much because Proxmox is hard to set up (it’s not), but because I am tired of slapping old junky hardware together and hoping it doesn’t die, and then scrambling to fix it when it inevitably betrays me. Unlike most dudes with home servers and labs, most of my acquisitions were made years ago to support an MMO habit. Specifically, multiboxing.

I call them “computers” because they are computers in the sense that they have CPUs, RAM, and HDDs. But they were low-budget things when they were assembled years ago. The upgrade path works something like this:

1. A computer begins its life as my main gaming machine that will run my favorite game at a satisfactory speed and resolution.
2. Then I find a new favorite and upgrade the gaming machine’s guts to run the new game.
3. The old gaming guts get transplanted into my “server” where they are *barely* able to run a few VMs and things like that.
4. The final stage is when the server guts are no longer up to the task of running VMs. I then add a few old network cards and the “server” becomes my “router”.
5. The old router guts then get donated somewhere. They’re not really useful to anyone, so they probably get shipped to Africa where they get mined for gold and copper by children at gunpoint.

Breaking the [Re]Cycle of Violence
In the years since then, I have taken to playing epic single-player games like Skyrim. These games really only need one machine. The rest of the gear I used to run little “servers” for one thing or another, which I have slowly replaced with VMs. The problem with using old junky computers as servers shows up when you run them balls-out 24 hours a day. In my search for a replacement VM host, I spent a lot of time researching off-lease servers. My goal was to have 8 cores and 32GB of RAM, with the ability to live migrate VMs to another [lesser] host in an emergency, something that my Hyper-V setup was lacking. After a lot of consternation, I decided that since a single VM would never actually use more than 4 cores or 8GB of RAM, why not use 2 [or more] desktops?

I found some old off-lease quad-core Intel desktops for about the same retail price as a low end server processor. I used the RAM from my older gaming machines/VM servers and some hard drives from some old file servers to build out my “new” Proxmox cluster. With two quad-core desktops running maxed-out memory (16GB each), I managed to satisfy my need to be like the other kids with “8 cores and 32GB of RAM” for about the price of an off-lease server chassis, with the added bonus of having a cluster. The goal is to add nodes to grow the cluster to 16 cores and 64GB of RAM, while also adding clustered storage via Ceph to make use of old hard drives from file servers.

New hot servers is old and busted. Old busted clusters is the new hotness.
The clustered model is better, in my opinion, for a number of reasons. It mostly has to do with modularity:

1. You can build out your infrastructure one paycheck at a time. Part of the problem with off-lease servers is that while the chassis is cheap, the components that go in it are expensive and/or hard to find. The deal with servers is that the cost of the motherboard and CPU are nothing compared to what you will spend on RAM. I was looking for something I could start using for less than $200, and a refurb desktop and RAM from old gaming boxes got me going at that price point.
2. Desktops stack on top of each other for free. I don’t have any server or telco racks, so in addition to buying ECC RAM, I would also be buying a rack, rails, and all of the other stuff that goes with them. This would easily eat up my $200 startup budget before I powered on a single box.
3. Moar boxes == moar resiliency. My gear at home is part lab and part production environment. Yes, I use it to hack stuff and learn new things, but my family also uses it in their daily lives. Network shares stream cartoons; VOIP phones connect friends; keeping these things going is probably as important as my day job. Being able to try bold and stupid things without endangering the “Family Infrastructure” is important to my quality of life.
4. Scaling out is probably more important than Scaling Up. A typical I.T. Department/Data Center response to capacity problems is to regularly stand up newer/more powerful [expensive] gear and then dump the old stuff. I guess this is a good approach if you have the budget. It certainly has created a market for used gear. I don’t have any budget to speak of, so I want to be able to increase capacity by adding servers while keeping the existing ones in play. There are still cost concerns with this approach, mainly with network equipment. In addition to upping my server game, I am going to have to up my networking game as well.

It works…ish

I have my two cluster nodes *kind of* working, with most of my Linux guests running as containers, which is very memory and CPU efficient. I am running two Windows VMs, PORTAL for remote access and dynamic DNS, and MOONBASE which I am using for tasks that need wired network access. All of my desktops are currently in pieces, having donated their guts to the “Cluster Collective” so I am mostly using my laptop for everything. I am not really in the habit of plugging it in to Ethernet, or leaving it turned on, so for now I am using a VM in place of my desktop for long running tasks like file transfers.

I say that the cluster is only kind of working because my home network isn’t very well segmented and the cluster heartbeat traffic straight up murders my little switch. It took me a while to figure out the problem. So the cluster works for a few days and then my core switch chokes and passes out, knocking pretty much everything offline. For now, the “cluster” is disabled and the second node is powered off until my new network cards arrive and I can configure separate networks for the clustering, storage, and the VMs.

Coming soon: Adventures in Proxmox part 2: You don’t know shit about networking.

## Mouse Without Borders

My relationship with Mouse Without Borders is complicated. On the one hand I dearly love it and rely on it for a lot of my workday. On the other hand it stops working for various reasons and it drives me absolutely insane. I have used Synergy in the past with Linux and MacOS, but if you are just connecting Windows machines, MWoB is the way to go.

The reasons to love MWoB are numerous. It lets you use one keyboard and mouse to control multiple computers. This is different than using a KVM switch because there is no video involved. Instead, you place up to 4 computers side by side and MWoB lets you move the mouse off of the screen on one machine and onto the screen of another. This is significant if you use several machines at once. Most video setups support 1 or 2 monitors, but I am hardcore and like to use 3 or more screens at the same time. I like to pretend that I work at NASA.

The reason to hate MWoB is that it sits at the intersection of two explosive elements: human interface devices and Windows network security.

The keyboard and mouse are the human interface to a computer system. They are of tremendous psychological significance to the human operating said computer. If the human interface malfunctions in any way, the emotional impact on the human is swift and severe. Keyboard and mouse malfunctions are Hulk-level rage inducing. This really isn’t MWoB’s fault, but it did decide to play a dangerous game.

MWoB uses networking to connect two Windows systems together. This means that MWoB is at the tender mercy of Windows Defender, a fickle beast. Windows networking can make file shares randomly disappear; it can quit seeing print queues; it’s utter chaos. I really dread messing with firewall rules on Unix systems, but I actively avoid it on Windows. The same goes for editing Group Policy. You can spend hours tuning both just to see a Windows security update wipe all of it out. Using MWoB means you have to get two Windows systems to play nicely with each other reliably, no small task. That’s two Windows operating systems, two MWoB installs, and two panicky firewalls to appease. I have reinstalled Windows on more than one occasion just to realize that the problem that I am having is actually with the *other* computer. Sure, Windows systems and networks are easy to set up, but like a house made of sticks, they’re easy to knock down. Again, this isn’t necessarily MWoB’s fault, but it’s a piece of software that has decided to play a [doubly] dangerous game.

When you force a vital computing component like your keyboard to operate in a volatile environment like Windows networking, you get a service that alleviates a tremendous strain. However, the sudden re-introduction of that strain is eye-gougingly frustrating.

## Turnkey Linux OpenVPN on Proxmox

In my previous post, I expanded on my preferred methods for gaining remote access to my home network. Since then, I have decided to quit using Hyper-V because it’s awful.

I have now decided to move to Proxmox on my server. Proxmox is pretty cool, although the documentation sucks. I recently started using Linux containers for my remote access servers instead of VMs, which Proxmox supports out of the box. A truly compelling feature of Proxmox is its integration with Turnkey Linux: you can download Turnkey Linux container templates directly in Proxmox and spin them up quickly. I used the Turnkey OpenVPN template to rebuild GATE, my OpenVPN server.

The performance improvement is remarkable. On Hyper-V, each Linux VM ate 512MB of RAM just to sit idle 99.9% of the time. So far I have 3 containers configured with 512MB of RAM each, but they use roughly 25-50MB each and leave the rest for the server. PORTAL, my Windows VM, still takes his share of the RAM and doesn’t give it back, but that’s nothing new.

Moar RAM == moar servers!
On the plus side, efficient use of memory means that I can feel better about running a dedicated Linux box (container) for each application. Dedicated boxes mean that when I inevitably screw something up, it doesn’t affect the other applications that are running (that I haven’t screwed up yet.) Also, with pre-built containers and snapshots, you can toss machines that you screwed up without losing much time. I know, I know, rebuilding a Linux box instead of fixing it is sacrilege… but I got other shit to do.

On the minus side, containers don’t really act like VMs, especially when it comes to alternative network configurations. In particular, a Linux container that uses a TUN or TAP interface needs some extra configuring. The TUN interface is how OpenVPN does its thing, so getting GATE, the OpenVPN server that allows access to the DMZ on my internal network, working took a lot of fiddling to get right. I did a bunch of Googling and ended up with a forum post that recommends rebuilding the TUN interface at boot time with a script.

Here is the TUN script that I have graciously stolen so that I don’t have to Google it again (I didn’t even bother to change the German comments):

#! /bin/sh
### BEGIN INIT INFO
# Provides:          tun
# Required-Start:    $network
# Required-Stop:     $openvpn
# Default-Start:     S 1 2
# Default-Stop:      0 6
# Short-Description: Make a tun device.
# Description:       Create a tundev for openvpn
### END INIT INFO

# Aktionen
case "$1" in
    start)
        mkdir /dev/net
        mknod /dev/net/tun c 10 200
        chmod 666 /dev/net/tun
        ;;
    stop)
        rm /dev/net/tun
        rmdir /dev/net
        ;;
    restart)
        #do nothing!
        ;;
esac

exit 0

Save it as /etc/init.d/tun, then enable the script and turn it on:

chmod 755 /etc/init.d/tun
update-rc.d tun defaults

With this script, I was able to stand up a real OpenVPN server (not just an Access Server appliance) for unlimited concurrent connections! Not that I need them. I’m the only one that uses the VPN and most of the time I just use SSH tunnels anyway.

Since OpenVPN container templates make standing up servers so easy, I thought I’d build another one that works in reverse. In addition to GATE that lets OpenVPN clients route in to the DMZ, I thought I would use an OpenVPN client to route traffic from some DMZ hosts out to the Internet via Sweden. In the past, I used a VPN service to dump my Bittorrent box’s traffic this way, but I would like to extend that service to multiple machines. EVERYBODY GETS A VPN!

Öppna dörr. Getönda flörr.
I couldn’t figure out what a machine that does this kind of thing is called. It’s a server, but it serves up its client connection to other clients. It’s a router, but it just has the one network interface (eth0) that connects to a tunnel (tun0). It’s basically setting up a site-to-site VPN, but the other site is actually a secure gateway. This identity crisis led to a terminology problem that made finding documentation pretty tough. Fortunately, I found another pirate looking to do the same thing and stole his scripts 🙂

Since it’s a doorway to a VPN gateway to Sweden, I decided to call the box DÖRR, which is Swedish for “door”. I did this to maintain my trans-dimensional gateway theme (HUB, GATE, PORTAL, etc.)

Also, I would like to apologize to the entire region of Scandinavia for what I did to your languages to make the pun above.

The Turnkey Linux OpenVPN template sets up in one of 3 modes: “Server”, “Gateway”, or “Client”. “Server” is the option I went with for GATE, which allows OVPN clients the option of accessing local subnets. This is the “Server” portion of a Site-to-Site VPN or a corporate VPN. “Gateway” forces all OVPN clients to route all traffic through it, this is the config for secure VPN services like NordVPN or AirVPN. “Client” makes a client connection to another OVPN server. If you connect a “Client” to a “Server” you get the full Site-to-Site solution, but there is no documentation on Turnkey about setting up a “Site-to-Site Client” to route traffic from its internal subnet to the “Site-to-Site Server”.

What I am looking to do is configure a “Site-to-Site Client” but point it to a “Gateway”. Another important consideration when setting this up was that I didn’t want to do any meddling with the setup of my DMZ network. I just want to manually configure a host to use DÖRR as its default gateway. No need for proxies, DNSMasq, DHCP or anything like that. Just static IP’s, the way God intended it 🙂

Step 1 – The Site-to-Site Client
Once I got the container running, I had to fix the /dev/tun problem (the script above) and then make some config changes to OpenVPN.

Because this is a VPN client, and not a server, you need to get the OpenVPN client profile loaded. The bulk of my experience with OpenVPN clients is on Windows where you start the client when you need it. For this application you need to automatically run the OpenVPN connect process at boot and keep it running indefinitely.

First, you need to obtain a client config. I downloaded my ‘client.ovpn’ file from my VPN provider, and I copied it to /etc/openvpn/client.conf as root. You can name the files whatever you want, just remember what you named them because it’s important later.

cp /root/client.ovpn /etc/openvpn/client.conf

Now test the connection to make sure everything worked

openvpn --config /etc/openvpn/client.conf &

The & is important because it puts the OpenVPN process into the background, so that you get your command prompt back by pressing ENTER a couple of times. You can then check what your external IP is a few different ways. You can use SSH with a dynamic port and tunnel your web traffic thru a SOCKS proxy. You could use curl or lynx to view a page that will display your IP. Or you could just use wget. I like to use ifconfig.co like so:

wget -qO- ifconfig.co
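Or the curl version, if curl is installed:

curl ifconfig.co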

If all goes well, you should see your VPN provider’s IP and not your ISP’s.

Once you get the VPN client working, you then want it to start up and connect at boot time. You do this by setting the ‘autostart’ option in /etc/default/openvpn.

nano /etc/default/openvpn
AUTOSTART="client"

If you changed your ‘/etc/openvpn/client.conf’ filename, you change the name here. The AUTOSTART value is the name of that file minus the ‘.conf’

Now reboot your server and do your wget test again to make sure that the VPN connection is starting automatically.

Once that is working, you have to route traffic. This means IPTables, because OpenVPN and IPTables go together like pizza and beer.

Step 2 – De Routningen

Normally, to route traffic between interfaces on Linux, you have to enable IP forwarding (echo 1 > /proc/sys/net/ipv4/ip_forward, etc.). In this case, the Turnkey OpenVPN template has already done that for you. All you have to do is add a few forwarding rules:

iptables -A FORWARD -o tun0 -i eth0 -s 192.168.1.0/24 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A POSTROUTING -t nat -j MASQUERADE

Now it’s time to test them. For this you need a client computer with a static IP. For the default gateway, use the static IP that you assigned to eth0 on your VPN doorway server; I used 192.168.1.254 for DÖRR. If your test box also shows your VPN provider’s IP when you access a site like ipleak.net, then it’s time to make those rules permanent by saving them to /etc/iptables.up.rules. It is important to save them to that specific file because the Turnkey template calls that file when setting up the eth0 interface in /etc/network/interfaces.

iptables-save | tee /etc/iptables.up.rules

I don’t know why it’s set up that way. I’m just here to make awful jokes about Germanic languages.

Once that’s done, reboot the doorway server one last time and test with your client computer with the alternate default gateway.
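If the test client is also a Debian-flavored box, its static config in /etc/network/interfaces would look something like this sketch (the addresses are borrowed from the examples in these posts; adjust them for your network):

auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.254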

Now that the my VPN client is working again, I need to rebuild my BitTorrent machine. I am going to try to save some more RAM by going with another Turnkey Linux container template.

EDIT: In my elation over getting something to work, I forgot to change the default gateway back. Unfortunately my test machine was PORTAL, which happens to control my dynamic DNS. So currently all of my hostnames are pointed at Sweden, LÖL.

## Remote Access Shenanigans

A while back, I wrote about using Windows Hyper-V Server. The reason that I set up that server was to use the combination of a Linux server and a Windows desktop to get remote access to my home network. I thought that I would elaborate on the tools that I use to get into my home network from work or while traveling.

I use several methods, each with certain advantages and disadvantages. Mostly I prefer SSH over pretty much anything else in order to connect to a Linux host, and I prefer Remote Desktop over pretty much anything else in order to connect to a Windows host. As a backup, I will use Teamviewer. It’s not ideal, but it works where other services fail.

SSH is pretty much the Swiss Army knife of network tools. You can do waaaay more with it than just log into a Unix box and execute commands. It’s a tool for creating encrypted tunnels; it just so happens that 90% of those tunnels connect to remote shells. In addition to connecting to a remote shell, you can open ports on a host. I am fortunate enough to have Cincinnati Bell Fioptics, which lets me open almost any port on my firewall without any bother. I forward port 22 directly to a Linux box named HUB, and I secure it with SSH keys. I can then use SSH to tunnel traffic into my home network, be that browser traffic through a SOCKS proxy and dynamic port, or RDP traffic with a local port. This works well when I am on a restrictive network that still allows outbound SSH traffic, as long as I have my PuTTY session set up ahead of time with my private key. This is the technique that I use when I am not able to access my network through NeoRouter.

Remote Desktop (RDP) is another Swiss Army Knife for connecting to computers. I use Windows as my primary desktop OS. I like to use Linux mostly for server stuff and for running specific tools like Clonezilla or Kali. As a matter of fact, I prefer Linux for servers and tools over Windows. I know, I’m an odd duck. RDP not only gives you remote access to the Windows Desktop, it lets you map drives remotely to transfer files and it lets you connect at a desktop resolution that is greater or lesser than that of the machine that you are connecting to. This is a big deal when you are using RDP on a wide-screen monitor to control a server that is plugged into an old CRT monitor, or when you are using a tiny netbook to control your multi-screen desktop. Teamviewer (and the VNC server that it is based on) cannot do that.

In order to make my SSH and RDP connections, I like to use either NeoRouter or OpenVPN. NeoRouter is technically a split-tunneling VPN solution, but I like to think of it as creating a network of computers that is independent of their actual networks. Split-tunneling VPN is a fancy term for VPN connections that don’t mess with your Internet access. There are lots of other features for split-tunnels, but under most circumstances, I want my computers to talk to each other differently than they talk to the Internet.

The NeoRouter network explorer tool lets me see which of my computers are up and connected. I run the NeoRouter server on HUB, which is sitting behind my firewall, with port 32976 forwarded to it as well. Running the server inside my firewall lets me do some neat networking tricks, like having my BitTorrent VM connect to the internal IP for HUB instead of using the Internet. My BitTorrent box uses a VPN client to route all Internet traffic through Sweden, which really slows down my Remote Desktop session. I run the NeoRouter client on my desktops and laptops, and also on my file servers so that I can access shared folders remotely. File transfers this way can be really slow, so I also use One Drive to share big files like videos or ISO images.

OpenVPN is my tool of choice for open WiFi networks at hotels and coffee shops. I can access my home network while also securing all of my network traffic. I run OpenVPN Access Server on a dedicated VM named GATE. Access Server is easy to use and configure, and it’s free for two concurrent connections. For occasional use, especially by people other than me, it works really well. There’s even a ready made Hyper-V appliance that you can just boot up and go. I used to run OpenVPN on HUB, but the networking/subnet stuff meant that I had to remember the internal IP for the OpenVPN network segment and change it to connect to NeoRouter. So I just use two separate machines and it all works out. I have built OpenVPN servers without Access Server in the past. I like to use the Turnkey Linux OpenVPN appliance, and setup couldn’t be easier.

If I cannot get in via NeoRouter, OpenVPN, or old school SSH tunneling, then I fall back on using TeamViewer. It can get me in when pretty much all other tools fail me, but it’s not as nice as using RDP. Also, it should be noted that TeamViewer can only be used to control graphical desktops, there is no command line equivalent. In order to alleviate some of the frustrations of TeamViewer’s desktop resolution, I run a dedicated Windows VM that I call Portal. I keep the native (console) resolution fairly low, and I have RDP and Putty sessions set up so I can quickly connect to my other computers.

One other thing that I use Portal for is to move files into and out of my home network. You can use RDP or TeamViewer to copy files, but for big files like videos and ISO’s, One Drive does a much better job. I have a dedicated One Drive account that I use specifically for moving files this way. I just grab a file from somewhere, copy it to the One Drive folder on Portal, and it automagically uploads. Then, some time later, I can use the One Drive website to download the file, at much faster speeds than using RDP, SCP (SSH), or TeamViewer’s file transfer tool. It’s an extra step, but one worth taking, especially if I find myself in an oh-shit-i-forgot-that-important-file situation.

## Da Mystery of Multiboxing – A brief tale of Automated Heroics Inc.

I have long been a fan of playing Massively Multiplayer Online games, but I really don’t like MMO gamers because they tend to be jackasses. At the time my MMO of choice was City of Heroes, which was popular with teenagers. Needless to say, the jackass factor was high. The game is best played with others tho, so I was often stuck playing with jackasses. You do what you gotta do to unlock those badges.

My gaming experience was suboptimal. So, I did what any hacker does when confronted with a problem: I started hacking. I found that I could multiplex keyboard commands through some networked software and came up with a workable multibox solution. The trick was that it needed multiple computers. So I cobbled together some old desktops to make barely-passable gaming machines. At one point I had 8 of them running. It took a half hour to get all my bots logged into the game and another half hour to enter an instance, but being able to play on superhero teams where everyone did what I told them to do was sheer joy. My group was all robot-themed; my supergroup was called “Automated Heroics Inc.” and all of the player-character bios read like product descriptions in a catalog. I also had macros programmed so that all of them could do “The Robot” in sync. It was hilarious. Why didn’t I get any video of that?

Multiboxing can be tricky because each MMO is different about how it handles its controls, sessions, authentication, you name it. In the case of CoH, running multiple instances of the game on the same computer didn’t work well. It was fine if I alt-tabbed between the sessions and controlled the toons manually, but having sessions in windowed mode made them crash. The software that I used, AutoHotkey, worked well when testing scripts with Notepad windows, but when it came time to run them with CoH, it was a shit show.

So I decided to keep AHK, but I used some junk PCs and old video cards to run the game. AHK has some networking features that let you push groups of keystrokes out to clients, so that if I pressed ‘0’ on my main PC, it would send a series of key presses and pauses to the other 7 machines. Because I am writing this several years after I did the project, I no longer have any of the files I used. Also CoH has been shut down for years, so example code wouldn’t be all that useful even if I had it. Here are a few things to consider though:

1. Hopefully your game has a free-to-play or freemium option so that you can set up multiple accounts for not much money. Running just one bot toon is a very different tactical proposition from running seven of them.
2. Hopefully your game has an auto-follow function, where you target a player and your toon moves whenever and wherever the target moves. This is so important for moving all of your bots in an orderly fashion.
3. Hopefully your game has an assist or auto-target function, where you target a player and your toon targets that player’s target. Much like the auto-follow feature, assist keeps everyone shooting at the same thing. I found that concentrating fire on the big critters first was the most effective way to initiate combat. If you time it right, you put them down fast and then mop up the minions.
4. If you have both auto-follow and assist, then you can round up your bot crew by mapping a key to tell each bot to target you, follow you, and assist you (there’s a sketch of what this looked like after the list). Being able to get your toons to focus on you is an essential function because targeting can cause your bots to do dumb things like take off running or shoot at the wrong thing. On my “main” PC, I mapped this script to the same key that I used to target the enemy closest to me.
5. Multiboxed toons work best with ranged combat, especially area of effect attacks. You will want your crew to be mostly squishy DPS types and dudes that can heal and buff squishy DPS types. My bot crew was entirely ranged. I called them “The Firing Squad.”
6. An AOE that is centered on the player (A Player Based Area Of Effect, PBAOE, in CoH parlance) is great for mopping up a mob once it has closed distance with your crew.
7. Another great use is AOE heals. Even if they’re weak, you can have two or more toons dropping their heals as part of their attack sequence. Often, your toons will either have a PBAOE attack, or a PBAOE heal. If you are dropping PBAOEs when the enemy moves into melee range, you will likely need AOE heals too, so just have everyone drop them at once.
8. I mostly used my bots to level my support toons that were hard to solo, like controllers and tanks. It’s decent practice for keeping a team alive, but it’s not the same skill at playing with real humans.
9. Multiboxing isn’t about playing an individual bot toon well. It’s about using the entire group of bot toons to support your main toon(s). There are some key differences between playing a main toon vs. playing a bot toon:
• Your bots will probably never be alone, so there’s no need to balance offense with defense. A “real” toon needs to be well rounded; bot toons are highly specialized insects.
• Your bots should have two basic specialties: shooting or healing. They should be going pew pew pew or heal heal heal pretty much all the time.
• Putting up shields and other buffs can be a pain to script, but it’s worth it: target a team member, drop one or more buffs on them, target the next team member, and so on.
• There will be multiple buffers dropping different buffs, so don’t focus so much on making each buff powerful; focus on making each buff mana/energy efficient, with short cooldown periods so you can lay them down fast and often. Once the buff process is scripted, running it between each mob isn’t a big deal.
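
Since I don’t have my original scripts, here is a minimal AutoHotkey v1 sketch of the “round up the crew” key from item 4. Everything in it is a hypothetical reconstruction: the F10 trigger, and the assumption that each bot toon has in-game macros on F1 through F3 for targeting my main, following, and assisting.

; Hypothetical “round up the crew” hotkey (AutoHotkey v1)
; Assumes in-game macros: F1 = target my main, F2 = follow, F3 = assist
F10::
Send, {F1} ; target my main toon
Sleep, 250 ; give the game a beat to register the new target
Send, {F2} ; follow the target
Sleep, 250
Send, {F3} ; assist the target (shoot what I am shooting)
return

With something like that running on every bot’s machine, one relayed key press gets the whole firing squad targeting, following, and assisting my main toon.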

In CoH, there were two character classes, the Corruptor and the Defender, that both combined blasting stuff with healing and buffs. The Corruptor’s primary power set was offense and its secondary was support, while the Defender was the exact opposite. A third class, the Blaster, was exclusively focused on offense. I had two Blasters, four Corruptors, and one Defender. The Corruptors could buff everyone up before a fight, then my main toon would pull a mob, the bots would open fire, and if the mob got close, I had the Blasters drop their PBAOE blasts and then the Defender and the Corruptors dropped heals. The benefit of their damage abilities was obvious, but the shields and heals were equally important for helping to level my tank and controller. At higher levels, the bots all had a sniper-type attack that was long range, accurate, and did lots of damage, with a long cooldown timer. I could generally have everyone target a mob’s boss/lieutenant and drop him in order to pull the rest of the mob. I would then use my tank or controller to tie up the mob while the firing squad picked off minions one at a time. If anything survived that and actually made it to melee range, I would drop the PBAOE blasts, AKA “The Nukes”, along with the heals. The stragglers then got picked off by the firing squad, and we rebuffed and took on another mob.

The things you learn about keyboards
Getting your bot toons to do things involved creating macros for each toon to execute certain actions, timing how long certain animations took, and then mapping those macros to shortcut keys and using AHK to script the key presses for those shortcuts. You have to learn a lot about your game’s behavior, but you also have to learn about keyboards.
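
To give you an idea, an attack chain was just a hotkey that pressed power tray slots with pauses matched to each power’s animation time. This is a hypothetical AutoHotkey v1 fragment; the slot keys and Sleep values are made up, the real ones came from timing the animations:

; Hypothetical attack chain (AutoHotkey v1)
; Assumes attacks sit in power tray slots 1, 2, and 3
0::
Send, 1 ; fire the first attack
Sleep, 1700 ; wait out its animation (measured per power)
Send, 2 ; fire the second attack
Sleep, 2100
Send, 3 ; fire the third attack
return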

Keyboard behavior plays a major part in getting your scripts right. I had the hardest time getting my bots to do simple things like run because I didn’t understand that pushing a key down and letting go of it are two different events. It was so hard to get those bastards to run that I ended up relying on the auto-follow feature for basically all movement.
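
In AHK terms, a tap and a hold are separate down and up events that you have to send yourself. A trivial fragment (the two-second hold is an arbitrary example):

; Holding a key is a down event, a pause, then an up event (AutoHotkey v1)
Send, {w down} ; press and hold W: start running
Sleep, 2000 ; keep it held for two seconds
Send, {w up} ; release W: stop running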

It’s hard to imagine all of the realtime events that go into pressing keys on a keyboard until you have to simulate key presses with software. One thing I wanted to make the bots do was spread out so that they didn’t all get hit with enemy AOEs. I never did get it right, so I just kept everyone close together and used lots of heals.

I miss all my robot minions. I hope that some day a similar MMO will emerge that will let me rebuild Automated Heroics Inc. so I can record some goddamn video of my dancing robots.

## I hate separating hackers based on morality.

I have given a few talks recently to non-hacker audiences. In so doing, I learned that even at its most basic level, the idea of what hacking is, is kind of lost on “normal people.” The “Wanna Cry” malware couldn’t have better illustrated the things I was trying to teach.

It’s not that normies aren’t capable of understanding, it’s that they have been given the wrong information by the government, the media, and popular culture for years. There is this fairly lame idea of hackers following a sort of monochromatic gradient borrowed from the Old West: the good guys wear white hats, the bad guys wear black hats, and there is a spectrum of moralities in between. There are legitimate ethics that guide hackers, they just aren’t the kinds that you hear about in movies and on TV:

1. The Sharing Imperative – Hacking is a gift economy. You get tools, knowledge and code for free, so you have to share what you have learned to keep growing the pool.
2. The Hands-On Imperative – Just like “real” science, you have to learn by doing. Take things apart, break them even, and learn how they work. Use that knowledge to create interesting things.
3. The Community Imperative – Communities (geographic, philosophical, etc.) are how it gets done. Crews, clubs, chat rooms, hackerspaces, conferences, email lists, are all places for n00bs to ask questions and get flamed, and for l33ts to hold court.

Monochromatic Morality
The typical whitehat is a security researcher, penetration tester, or security consultant that only hacks the computers and networks that they have permission to hack. This can either be a lab environment built for research, a client who has retained security services, or an employer who has granted express permission. Whitehats then disclose their findings. This disclosure may be for the benefit of a client or an employer, or it may be to benefit the public. The key difference is that the whitehat first seeks permission and then shares their discovery for the benefit of others.

The typical blackhat is generally considered to be a criminal. They hack systems that do not belong to them and then do not disclose their findings. The exploits that they develop are hoarded and stockpiled for their benefit alone. The key difference is that blackhats do not seek permission, they do not disclose their findings, and they hack for the benefit of themselves.

The gray areas have to do with the degree to which a hacker has permission, discloses their findings, and profits from their activities. Whitehats are supposed to have “real” jobs and share everything; blackhats supposedly don’t have jobs and therefore hack for money. A typical grayhat might hack systems that don’t belong to them but then anonymously share their findings, or they might develop their exploits in a lab but then sell those exploits rather than disclosing them.

In my professional life, I routinely employ hacking tools for the benefit of my employer, whether it’s scanning networks to find and fix problems, or cracking passwords to help users who have lost access to their computers. In previous jobs, I have exfiltrated research data from one network to another at the request of the data’s owner. While I don’t always have my employer’s explicit permission to do what I do, they hired me to fix problems for their users, so I do what it takes. The things that I learn, I then share and teach to others, whether that’s talks at conferences or Cinci2600 meetings, or posts on this blog. I have no idea where that falls in the white/gray spectrum.

Chromatic Pragmatism
Instead of black and white, I prefer to look at hacking from a red vs. blue perspective. Regardless of your moral compass (or that of your employer), you are either on the offensive end, which is the red team, or the defensive end, which is the blue team.

Teams are the better terms to think in because hacking is a social activity. You may or may not be physically alone, but you are always learning from others. You read docs and code, you try stuff, you get stuck, you look up answers, and ultimately you ask someone for help. The idea of hackers as introverted smart kids living in their moms’ basements isn’t nearly as accurate as TV would have you believe.

Regardless of the reason why you are hacking a computer or a network, you are either the attacker or the defender. You are either probing defenses looking for a way in, or you are hardening defenses to keep others out. You can further divide these activities into application vs. network security, but at that point the discussion is more about tools.

A great example of this is the people who run botnets. Once a bot-herder gets control of a computer (bad), they will then patch that computer (good) so that some other bot-herder doesn’t snatch it away from them (???).

Thinking about hacking in terms of offense and defense takes away all of the politics, business, and patriotism of your red and blue teams. If you are a red teamer backed by your country’s military, you might be doing black hat stuff like seizing control of things that don’t belong to you for a “good” cause. You might be a blue teamer working for an organized crime syndicate, doing white hat stuff like analyzing malware for “bad” people. You might be a whistle-blower or a journalist, exfiltrating stolen data to expose bad acts by a government.

Wanna Cry: with the good comes the bad, with the bad comes the good
The Wanna Cry debacle is interesting because of its timing, its origin, its disclosure, and its impact.

Its timing is interesting because nation-state political hacking made up like half of all discussion around the Presidential election. Turns out that the USA hacks as much shit as Russia does, or more.

Its origin is interesting because the tools in the leaked sample appear to come from the NSA. The leak comes from a group known as the “Shadow Brokers,” who said they would auction off the rest for a large sum of money. The world got a head start on an inevitable malware outbreak thanks to some bad guys doing a good thing by releasing something that they discovered. Something that the US Government had been hoarding to use against its enemies.

The disclosure is interesting because the first release is a free sample to prove the quality of the goods they intend to auction. This is the Golden Key problem in a nutshell: a tool, used by the good guys, falls into the hands of the bad guys, and chaos ensues.

The zero-day exploit exposed by the leaked tools was then used to implement a large-scale ransomware attack that severely affected systems in Europe and the UK. A researcher located a kill switch in the ransomware, a call out to an unregistered domain, and by registering that domain stopped the attack dead in its tracks. There are lots of theories about this strange turn of events, but my personal theory is that the ransomware campaign was a warning shot. Possibly to prove out a concept, possibly to urge everyone to patch against the vulnerability before a proper villain did some real damage with it.

The idea that NSA tools were compromised and disclosed by a criminal organization turns the whole black hat/white hat thing on its head. The NSA was hoarding exploits and not disclosing them, which is a total black hat move. Shadow Brokers exposed the tools, prompting a widespread campaign to fix a number of vulnerabilities, which is a total white hat move. So you have a government agency, a “good guy”, doing black hat things, and a criminal organization, a “bad guy”, doing white hat things.

If you want to talk about the specifics of the hack, the NSA’s blue team didn’t do its job, and the Shadow Brokers’ red team ate the NSA’s lunch. The blue team’s principal asset was a server from which attacks were launched or controlled. That server was the red team’s target. It’s a pretty epic win for the red team, because the NSA is a very advanced hacking group, possibly the best in the world.