Adventures in Proxmox Part 3: Chris don’t know shit about networking

When I first started messing with Proxmox, I crashed my home network. If you aren’t interested in the story of my journey of network sexual awakening, click here.

I have since spent the last several months learning about Proxmox networking using VirtualBox. I have also been working on a parallel project: upgrading my home network to be segregated using VLANs. Like my budget for server hardware, my budget for network gear is practically nonexistent, so I have been doing a lot of reusing things that should have been replaced years ago.

After a bit of consternation, I settled on a prosumer router and a smart switch, rather than a PC-based router and a managed switch. Mostly because I needed this to work for the family as well as for the lab, and I didn’t want to spend weeks relearning Cisco. Time to tear down the old home network!!

A New Router

My plan is to have 4 “real” networks for my “physical” equipment:

  1. The family’s wireless network – for phones, tablets, game consoles, and tv sticks.
  2. My wired network for my personal workstations and servers.
  3. A VOIP network for POE phones, ATAs, and my PBX.
  4. A server and network lab for me to wreck things.

When I say “real” I really mean “operated by humans”, or perhaps “not a Proxmox host”. When I say “physical” I also mean “operated by humans”, or perhaps “not a Proxmox host”. At least half of these “real” networks are actually VLANs, and at least half of these “physical” devices are VMs. In this scenario, the “real” and “physical” networks and devices are the ones that I and the family use, as opposed to the networks that are dedicated to the Proxmox cluster.

The critical distinction is that each of these network segments connects to a different port on the router, with firewall rules to keep them from talking to each other. In this scenario, a dumb switch plugged into each port of the router provides a physically separated network at layer 2 (Ethernet) and a logically separated network at layer 3 (IP). It is here that I have used my first batch of dumb old mini switches:

  1. eth1 – Family Wireless, 192.168.10.0/24
  2. eth2 – Personal Wired, 192.168.11.0/24
  3. eth3 – VOIP, 192.168.12.0/24
  4. eth4 – Lab, 192.168.13.0/24

The family wireless network consists of 2 wireless access points, both with 4 dumb gigabit Ethernet ports:

  1. WAP port 1 -> eth1 on the router, uplink to the Internet
  2. WAP port 2 -> eth0 on the NAS appliance
  3. WAP port 3 -> port 1 on the smart switch
  4. WAP port 4 -> port 1 on the other WAP

So, I had my router set up, and plugging a laptop into each dumb switch let me pull an IP from the DHCP server for the respective network segment. I was also able to browse the Internet. Awesome. I have managed to convert a big, clunky, error-prone network into four smaller error-prone networks. This is progress?

As far as the family is concerned, eth1 on the router is the network: wireless access to both the Internet and to the data and media stored on the NAS. If I never plugged in the smart switch, only I would notice. I have the WAP’s dumb switch plugged into the smart switch because I have a media server VM on the Proxmox cluster that I want to put onto the wireless network to stream video to tablets, mobile phones, and smart TVs. Because the cluster nodes only have 4 network ports, I need to put multiple network connections onto 1 of those ports. This is where VLANs come into play. This is also where upgrading my knowledge of routing, switching, and firewalls comes into play with Proxmox: putting the cluster onto all 4 of my network segments using just one network port from each node.

VLANs: everything you hate about dozens of dumb switches, plus virtualization

With the new router working, it’s time to configure the network’s core: the smart switch.

VLANs are a great way to divide up a big physical switch into smaller virtual networks. A 24 port switch could be broken down into 4 networks, with a varying number of ports in each network. You can also put a single switch port onto more than one VLAN. The network traffic gets put into the appropriate virtual network by using tags. You can even put a given port into “all” of the VLANs; such a port is sometimes referred to as a “trunk.” Trunks are used to connect multiple switches together, passing all tags between them.

Dumb switches can’t tag traffic. So, if you want to mix a smart switch that does VLANs with a dumb switch that doesn’t, you need to make sure that your untagged traffic is going out of the right ports. In the hypothetical 24 port managed switch in the example above, if you put port 2 into VLAN 2, and then plug a dumb switch into port 2, then port 2 needs to know what to do with untagged traffic. Traffic coming out of the dumb switch won’t have tags, and traffic heading from the smart switch into the dumb switch has to have its tags stripped. This is the essence of “VID” and “PVID”. A VID is a VLAN ID; a PVID is a Port VLAN ID. The smart switch treats all traffic as tagged, even when it’s not: untagged traffic is just a special category of “tagged,” and the PVID is the tag that a port assigns to untagged traffic arriving on it. This is the exact moment that I developed a migraine.

I have done a decent job keeping the family wireless network packets away from everything, and everything away from the family, by putting each network segment on its own dumb switch. Now it is time to blur those boundaries a bit by plugging each of those dumb switches into the smart switch. My network is broken into 4 subnets, so my VLANs will break down something like this:

  • VLAN 1 – Family Wireless
  • VLAN 2 – Personal Wired
  • VLAN 3 – VOIP
  • VLAN 4 – Lab

I probably don’t need a separate /24 (class C) network for each VLAN, but I am not very clever and I have zero confidence in my ability to design networks or IP schemes. I know how routing works when you are using /24’s, so for my implementation VLAN == /24. Also, as I learned in the VirtualBox lab, network designs get real confusing real fast, so having the VLAN tag roughly correspond to the /24 subnet helps me to not go completely insane.

The smart switch is configured by a web interface. This interface has a default IP of 192.168.0.1, so I set a static IP on the Ethernet port of my laptop and logged in. This part of the configuration is important, and it will come into play again later. Once I have all the VLANs set up, I still need to be able to access the switch on this IP address.

I configured the first 4 ports on the switch as access ports, or uplinks to the dumb switches. Because the dumb switches don’t tag traffic, I needed the uplink ports to treat all “untagged” traffic as belonging to a single VLAN, using the PVID:

  • switch port 1 – VLAN 1, PVID 1
  • switch port 2 – VLAN 2, PVID 2
  • switch port 3 – VLAN 3, PVID 3
  • switch port 4 – VLAN 4, PVID 4

So now, if I change port 5 to VLAN 1 and PVID 1, I can plug in my Windows laptop and pull an IP from the wireless network. Then I can change port 5 to VLAN 2 and PVID 2, and now I can pull an IP from the wired network. Now I need to figure out how to get my Prox cluster nodes to sit on all 4 networks at the same time using a single switch port for each node.

Enter the Management Workstation

Up to this point, I was able to set up my dumb switches and my VLANs with a Windows laptop. I just disabled the WiFi and plugged the Ethernet adapter into the various switches and ports. This was fine for scenarios where one switch port corresponded to just one network segment. But it turns out that Windows can’t do VLANs without proper hardware and software support for the NIC. If you have a VLAN-aware NIC and the Intel or HP enterprise app to configure it, I guess it works fine, but there is no Windows 10 app for the Intel NIC in my crashtop.

In my VirtualBox Proxmox lab, I learned that life is just easier when you have a Linux box dedicated to managing the cluster and testing your network setup, so I decided that before I set up the cluster, I should set up a “Management Workstation.” For the BoxProx lab, I used a VirtualBox VM running a GUI to administer the cluster because I needed a browser on the host-only network. Technically, I could have run the management workstation without a GUI and just used SSH tunneling to access the web management interfaces for the Proxmox VMs, but I didn’t want to spend any time doing stupid SSH tricks. I also don’t have the actual hardware cluster running yet, so this time I need to do my testing with actual hardware. The hope is that once I get the VLANs and network bridges configured, the workstation will be superfluous. Therefore, the workstation doesn’t have to be powerful at all. Literally any old laptop or desktop that is laying around will do nicely.

My operating system of choice is Turnkey Linux Core, which I set up on an old desktop plugged into port 5 of the smart switch. For the initial install, I left port 5 configured for VLAN 1 and PVID 1. I was able to pull an IP address from the wireless network, install and update the OS, and configure SSH.

Remote access is important because I can’t sit in my basement all day; Internet access is important because I need to install some network tools.

First step is to get the VLAN tools installed:

apt-get install vlan

Then enable VLAN support in the kernel:

echo 8021q | tee -a /etc/modules
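Adding the module to /etc/modules only loads it at the next boot. Since I reboot later anyway this isn’t strictly necessary, but if you want to load the module right away and confirm it’s there, something like this works:

modprobe 8021q
lsmod | grep 8021q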

Then add your tagged network interfaces:

nano /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0

auto eth0.1
iface eth0.1 inet static
    vlan-raw-device eth0
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4

Then reboot the machine. I know there is a bunch of crap that you can do to avoid that, but this is the only way I can be sure that it works. I also know that if you name the interface eth0.N you probably don't have to specify the 'vlan-raw-device', but the Debian VLAN tutorial did it so I did it too.

What this does is change the IP of the untagged interface eth0 to 192.168.0.10 (remember the IP of the switch from before?), add eth0.1 (VLAN 1) with an IP of 192.168.1.10, and configure a default gateway and DNS for that interface.

Now, the machine should still be connected to the Internet once you modify port 5 on the smart switch so that VLAN 1 is tagged on that port, while keeping PVID 1 for the untagged traffic.

If you can ping the IP for the smart switch (192.168.0.1), the IP of something on your wireless network (like an access point) as well as Google's DNS (8.8.8.8) then you are in good shape.
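From the workstation, that test looks something like this (192.168.1.1 is the gateway from the interfaces file above; substitute whatever lives on your wireless network):

ping -c 3 192.168.0.1   # the smart switch management IP
ping -c 3 192.168.1.1   # the gateway on the wireless network
ping -c 3 8.8.8.8       # Google DNS, out on the Internet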

At this point, I left the basement and went upstairs. I connected my laptop to the family wireless network (192.168.1.0/24) to SSH into the workstation. Since I will be making modifications to the smart switch configuration, as well as to the management workstation, I decided to configure PuTTY to open a local port and forward it to 192.168.0.1:80, so that I can access the web interface of the smart switch from my laptop, with the unencrypted HTTP traffic secured by the SSH tunnel.
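For anyone doing the same thing without PuTTY, the equivalent OpenSSH one-liner would look roughly like this (192.168.1.10 is the workstation’s wireless-VLAN address from the interfaces file; the account name and local port 8080 are just examples):

ssh -L 8080:192.168.0.1:80 root@192.168.1.10
# then browse to http://localhost:8080 on the laptop to reach the switch UI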

Now I just need to move the Internet access to the 'Lab' VLAN and add the remaining VLANs to /etc/network/interfaces:

nano /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0

auto eth0.1
iface eth0.1 inet static
    vlan-raw-device eth0
    address 192.168.1.10
    netmask 255.255.255.0

auto eth0.2
iface eth0.2 inet static
    vlan-raw-device eth0
    address 192.168.2.5
    netmask 255.255.255.0

auto eth0.3
iface eth0.3 inet static
    vlan-raw-device eth0
    address 192.168.3.5
    netmask 255.255.255.0

auto eth0.4
iface eth0.4 inet static
    vlan-raw-device eth0
    address 192.168.4.5
    netmask 255.255.255.0
    gateway 192.168.4.1
    dns-nameservers 8.8.8.8 8.8.4.4

The last step is to make sure that smart switch port 5 is a tagged member of VLANs 1, 2, 3, and 4, with PVID 1. If all goes well, the workstation can ping the smart switch IP, Google DNS, and servers on all 4 VLANs.
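A quick way to double check that all four tagged interfaces actually came up on the workstation (the 8021q module keeps its bookkeeping under /proc):

cat /proc/net/vlan/config
ip -d link show eth0.2   # the -d flag shows the 802.1Q VLAN id on the interface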

The next step is to use this same network setup for the management NIC on the Proxmox cluster, using the 4 VLAN interfaces for the network bridges (vmbr1-vmbr4), something like the sketch below.
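To give a rough idea of where this is headed, here is a sketch of what I expect each node’s /etc/network/interfaces to look like, with one VLAN sub-interface and one bridge per network segment. This is a plan, not a tested cluster config, and the bridge option names may differ slightly between Proxmox versions:

auto eth0
iface eth0 inet manual

auto eth0.1
iface eth0.1 inet manual
    vlan-raw-device eth0

auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth0.1
    bridge_stp off
    bridge_fd 0

# ...and the same pattern for eth0.2/vmbr2, eth0.3/vmbr3, and eth0.4/vmbr4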

Adventures in Proxmox Part 1: Words About Boxes

The Proxmox logo
It’s been a few weeks since I exorcised Hyper-V from my life like an evil demon. I have replaced it with Proxmox and so far it’s been mostly great. With a couple of serious caveats.

My transition to Proxmox has been rather involved, not so much because Proxmox is hard to set up (it’s not), but because I am tired of slapping old junky hardware together and hoping it doesn’t die, and then scrambling to fix it when it inevitably betrays me. Unlike most dudes with home servers and labs, most of my acquisitions were made years ago to support an MMO habit. Specifically multiboxing.

PC case made from peg board.

I call them “computers” because they are computers in the sense that they have CPU’s, RAM, and HDD’s. But they were low-budget things when they were assembled years ago. The upgrade path works something like this:

  1. A computer begins its life as my main gaming machine that will run my favorite game at a satisfactory speed and resolution.
  2. Then I find a new favorite and upgrade the gaming machine’s guts to run the new game.
  3. The old gaming guts get transplanted into my “server” where they are *barely* able to run a few VMs and things like that.
  4. The final stage is when the server guts are no longer up to the task of running VMs. I then add a few old network cards and the “server” then becomes my “router”.
  5. The old router guts then get donated somewhere. They’re not really useful to anyone, so they probably get shipped to Africa where they get mined for gold and copper by children at gunpoint.

Breaking the [Re]Cycle of Violence
In the years since then, I have taken to playing epic single player games like Skyrim. These games really only need one machine. The rest of the gear I used to run little “servers” for one thing or another, which I have slowly replaced with VMs. The problem with using old junky computers as servers shows up when you run them balls out 24 hours a day. In my search for a replacement VM host, I spent a lot of time researching off-lease servers. My goal was to have 8 cores and 32GB of RAM, with the ability to live migrate VMs to another [lesser] host in an emergency, something that my Hyper-V setup was lacking. After a lot of consternation, I decided that since a single VM would never actually use more than 4 cores or 8GB of RAM, why not use 2 [or more] desktops?

I found some old off-lease quad-core Intel desktops for about the same retail price as a low end server processor. I used the RAM from my older gaming machines/VM servers and some hard drives from some old file servers to build out my “new” Proxmox cluster. With two quad core desktops running maxed-out memory (16GB each), I managed to satisfy my need to be like the other kids with “8 cores with 32GB of RAM” for about the price of an off-lease server chassis, with the added bonus of having a cluster. The goal is to add nodes to grow the cluster to 16 cores and 64GB of RAM, while also adding clustered storage via Ceph to make use of old hard drives from file servers.

New hot servers is old and busted. Old busted clusters is the new hotness.
For me, the clustered model is better for a number of reasons, mostly having to do with modularity:

  1. You can build out your infrastructure one paycheck at a time. Part of the problem with off-lease servers is that while the chassis is cheap, the components that go in it are expensive and/or hard to find. The deal with servers is that the cost of the motherboard and CPU is nothing compared to what you will spend on RAM. I was looking for something I could start using for less than $200, and a refurb desktop and RAM from old gaming boxes got me going at that price point.
  2. Desktops stack on top of each other for free. I don’t have any server or telco racks, so in addition to buying ECC RAM, I would also be buying a rack, rails, and all of the other stuff that goes with them. This would easily eat up my $200 startup budget before I powered on a single box.
  3. Moar boxes == moar resiliency. My gear at home is part lab and part production environment. Yes, I use it to hack stuff and learn new things, but my family also uses it in their daily lives. Network shares stream cartoons; VOIP phones connect friends; keeping these things going is probably as important as my day job. Being able to try bold and stupid things without endangering the “Family Infrastructure” is important to my quality of life.
  4. Scaling out is probably more important than Scaling Up. A typical I.T. Department/Data Center response to capacity problems is to regularly stand up newer/more powerful [expensive] gear and then dump the old stuff. I guess this is a good approach if you have the budget. It certainly has created a market for used gear. I don’t have any budget to speak of, so I want to be able to increase capacity by adding servers while keeping the existing ones in play. There are still cost concerns with this approach, mainly with network equipment. In addition to upping my server game, I am going to have to up my networking game as well.

It works…ish

I have my two cluster nodes *kind of* working, with most of my Linux guests running as containers, which is very memory and CPU efficient. I am running two Windows VMs, PORTAL for remote access and dynamic DNS, and MOONBASE which I am using for tasks that need wired network access. All of my desktops are currently in pieces, having donated their guts to the “Cluster Collective” so I am mostly using my laptop for everything. I am not really in the habit of plugging it in to Ethernet, or leaving it turned on, so for now I am using a VM in place of my desktop for long running tasks like file transfers.

I say that the cluster is only kind of working because my home network isn’t very well segmented and the cluster heartbeat traffic straight up murders my little switch. It took me a while to figure out the problem. So the cluster works for a few days and then my core switch chokes and passes out, knocking pretty much everything offline. For now, the “cluster” is disabled and the second node is powered off until my new network cards arrive and I can configure separate networks for the clustering, storage, and the VMs.

Coming soon: Adventures in Proxmox part 2: You don’t know shit about networking.

Mouse Without Borders

My relationship with Mouse Without Borders (MWoB) is complicated. On the one hand I dearly love it and rely on it for a lot of my workday. On the other hand it stops working for various reasons and it drives me absolutely insane. I have used Synergy in the past with Linux and MacOS, but if you are just connecting Windows machines, MWoB is the way to go.

The reasons to love MWoB are numerous. It lets you use one keyboard and mouse to control multiple computers. This is different than using a KVM switch because there is no video involved. Instead, you place up to 4 computers side by side and MWoB lets you move the mouse off of the screen on one machine and onto the screen of another. This is significant if you use several machines at once. Most video setups support 1 or 2 monitors, but I am hardcore and like to use 3 or more screens at the same time. I like to pretend that I work at NASA.

The reason to hate MWoB is that it sits at the intersection of two explosive elements: human interface devices and Windows network security.

The keyboard and mouse are the human interface to a computer system. They are of tremendous psychological significance to the human operating said computer. If the human interface malfunctions in any way, the emotional impact on the human is swift and severe. Keyboard and mouse malfunctions are Hulk-level rage inducing. This really isn’t MWoB’s fault, but it did decide to play a dangerous game.

MWoB uses networking to connect two Windows systems together. This means that MWoB is at the tender mercy of Windows Defender, a fickle beast. Windows networking can make file shares randomly disappear; it can quit seeing print queues; it’s utter chaos. I really dread messing with firewall rules on Unix systems, but I actively avoid it on Windows. The same goes for editing Group Policy. You can spend hours tuning both just to see a Windows security update wipe all of it out. Using MWoB means you have to get two Windows systems to play nicely with each other reliably, no small task. That’s two Windows operating systems, two MWoB installs, and two panicky firewalls to appease. I have reinstalled Windows on more than one occasion just to realize that the problem that I am having is actually with the *other* computer. Sure, Windows systems and networks are easy to set up, but like a house made of sticks, they’re easy to knock down. Again, this isn’t necessarily MWoB’s fault, but it’s a piece of software that has decided to play a [doubly] dangerous game.

When you force a vital computing component like your keyboard to operate in a volatile environment like Windows networking, you get a service that alleviates a tremendous strain. However, the sudden re-introduction of that strain is eye-gougingly frustrating.

Additional Remote Access Shenannegans

In my previous post, I expanded on my preferred methods for gaining remote access to my home network. Since then, I have decided to quit using Hyper-V because it’s awful.

I have now decided to move to Proxmox on my server. Proxmox is pretty cool, although the documentation sucks. I recently started using Linux containers for my remote access servers instead of VMs, which Proxmox supports out of the box. A truly compelling feature of Proxmox is its integration with Turnkey Linux. You can download Turnkey Linux container templates directly in Proxmox and spin them up quickly. I used the Turnkey OpenVPN template to rebuild GATE, my OpenVPN server.

The performance improvement is remarkable. On Hyper-V, each Linux VM ate 512MB of RAM just to sit idle 99.9% of the time. So far I have 3 containers configured with 512MB of ram each, but they use roughly 25-50MB each and they leave the rest for the server. PORTAL, my Windows VM, still takes his share of the RAM and doesn’t give it back, but that’s nothing new.

Moar RAM == moar servers!
On the plus side, efficient use of memory means that I can feel better about running a dedicated Linux box (container) for each application. Dedicated boxes mean that when I inevitably screw something up, it doesn’t affect the other applications that are running (that I haven’t screwed up yet.) Also, with pre-built containers and snapshots, you can toss machines that you screwed up without losing much time. I know, I know, rebuilding a Linux box instead of fixing it is sacrilege… but I got other shit to do.

On the minus side, containers don’t really act like VMs, especially when it comes to alternative network configurations. In particular, a Linux container that uses a TUN or TAP interface needs some extra configuring. The TUN interface is how OpenVPN does its thing, so getting GATE (the OpenVPN server that allows access to the DMZ on my internal network) working took a lot of fiddling to get right. I did a bunch of Googling and ended up with this forum post that recommends rebuilding the TUN interface at boot time with a script.

Here is the TUN script that I have graciously stolen so that I don’t have to Google it again (I didn’t even bother to change the German comments):

#! /bin/sh
### BEGIN INIT INFO
# Provides:          tun
# Required-Start:    $network
# Required-Stop:     $openvpn
# Default-Start:     S 1 2
# Default-Stop:      0 6
# Short-Description: Make a tun device.
# Description:       Create a tundev for openvpn
### END INIT INFO

# Aktionen
case "$1" in
    start)
        mkdir /dev/net
        mknod /dev/net/tun c 10 200
        chmod 666 /dev/net/tun
        ;;
    stop)
        rm /dev/net/tun
        rmdir /dev/net
        ;;
    restart)
        #do nothing!
        ;;
esac

exit 0

Then you enable the script and turn it on:
chmod 755 /etc/init.d/tun
update-rc.d tun defaults

With this script, I was able to stand up a real OpenVPN server (not just an Access Server appliance) for unlimited concurrent connections! Not that I need them. I’m the only one that uses the VPN and most of the time I just use SSH tunnels anyway.

Since OpenVPN container templates make standing up servers so easy, I thought I’d build another one that works in reverse. In addition to GATE, which lets OpenVPN clients route into the DMZ, I thought I would use an OpenVPN client to route traffic from some DMZ hosts out to the Internet via Sweden. In the past, I used a VPN service to dump my Bittorrent box’s traffic this way, but I would like to extend that service to multiple machines. EVERYBODY GETS A VPN!

Öppna dörr. Getönda flörr.
I couldn’t figure out what a machine that does this kind of thing is called. It’s a server, but it serves up its client connection to other clients. It’s a router, but it just has the one network interface (eth0) that connects to a tunnel (tun0). It’s basically setting up a site-to-site VPN, but the other site is actually a secure gateway. This identity crisis led to a terminology problem that made finding documentation pretty tough. Fortunately, I found another pirate looking to do the same thing and stole his scripts 🙂

Since it’s a doorway to a VPN gateway to Sweden, I decided to call the box DÖRR, which is Swedish for “door”. I did this to maintain my trans-dimensional gateway theme (HUB, GATE, PORTAL, etc.)

Also, I would like to apologize to the entire region of Scandinavia for what I did to your languages to make the pun above.

The Turnkey Linux OpenVPN template sets up in one of 3 modes: “Server”, “Gateway”, or “Client”. “Server” is the option I went with for GATE, which allows OVPN clients the option of accessing local subnets. This is the “Server” portion of a Site-to-Site VPN or a corporate VPN. “Gateway” forces all OVPN clients to route all traffic through it, this is the config for secure VPN services like NordVPN or AirVPN. “Client” makes a client connection to another OVPN server. If you connect a “Client” to a “Server” you get the full Site-to-Site solution, but there is no documentation on Turnkey about setting up a “Site-to-Site Client” to route traffic from its internal subnet to the “Site-to-Site Server”.

What I am looking to do is configure a “Site-to-Site Client” but point it to a “Gateway”. Another important consideration when setting this up was that I didn’t want to do any meddling with the setup of my DMZ network. I just want to manually configure a host to use DÖRR as its default gateway. No need for proxies, DNSMasq, DHCP or anything like that. Just static IP’s, the way God intended it 🙂

Step 1 – The Site-to-Site Client
Once I got the container running, I had to fix the /dev/tun problem (the script above) and then make some config changes to OpenVPN.

Because this is a VPN client, and not a server, you need to get the OpenVPN client profile loaded. The bulk of my experience with OpenVPN clients is on Windows where you start the client when you need it. For this application you need to automatically run the OpenVPN connect process at boot and keep it running indefinitely.

First, you need to obtain a client config. I downloaded my ‘client.ovpn’ file from my VPN provider, and I copied it to /etc/openvpn/client.conf as root. You can name the files whatever you want, just remember what you named them because it’s important later.

cp /root/client.ovpn /etc/openvpn/client.conf

Now test the connection to make sure everything worked

openvpn --config /etc/openvpn/client.conf &

The & is important because it puts the OpenVPN process into the background, so that you get your command prompt back by pressing ENTER a couple of times. You can then test your Internet connection to see what your IP is in a few different ways. You can use SSH with a dynamic port and tunnel your web traffic through it with a SOCKS proxy. You could use curl or lynx to view a page that will display your IP. Or you could just use wget. I like to use ifconfig.co like so:

wget -qO- ifconfig.co

If all goes well, you should see your VPN provider’s IP and not your ISP’s.

Once you get the VPN client working, you then want it to start up and connect at boot time. You do this by setting the ‘autostart’ option in /etc/default/openvpn.

nano /etc/default/openvpn
AUTOSTART="client"

If you named your config file something other than ‘/etc/openvpn/client.conf’, use that name here; the AUTOSTART value is the name of that file minus the ‘.conf’.

Now reboot your server and do your wget test again to make sure that the VPN connection is starting automatically.

Once that is working, you have to route traffic. This means IPTables, because OpenVPN and IPTables go together like pizza and beer.

Step 2 – De Routningen

Normally, to route traffic between interfaces on Linux, you have to enable IP forwarding (echo 1 > /proc/sys/net/ipv4/ip_forward etc.). In this case, the Turnkey OpenVPN template has already done that for you. All you have to do is add a few forwarding rules:

iptables -A FORWARD -o tun0 -i eth0 -s 192.168.1.0/24 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A POSTROUTING -t nat -j MASQUERADE
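Before testing, it doesn’t hurt to confirm that the template really did turn forwarding on; if this prints 0, the FORWARD rules won’t do anything:

sysctl net.ipv4.ip_forward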

Now it’s time to test them. For this you need a client computer with a static IP. For the default gateway you want to use the static IP that you assigned to eth0 on your VPN doorway server. I used 192.168.0.254 for DÖRR. If your test box also shows your VPN provider’s IP when you access a site like ipleak.net, then it’s time to make those rules permanent by saving them to /etc/iptables.up.rules. It is important to save them to that specific file because the Turnkey template calls that file when setting up the eth0 interface in /etc/network/interfaces.

iptables-save | tee /etc/iptables.up.rules

I don’t know why it’s set up that way. I’m just here to make awful jokes about Germanic languages.
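For reference, the hook that makes this work is usually a ‘pre-up’ line on the eth0 stanza in /etc/network/interfaces. I haven’t dissected the Turnkey version, but the usual Debian idiom looks something like this (addresses taken from the DÖRR example above; the pre-up line is the part that matters):

auto eth0
iface eth0 inet static
    address 192.168.0.254
    netmask 255.255.255.0
    pre-up iptables-restore < /etc/iptables.up.rules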

Once that’s done, reboot the doorway server one last time and test with your client computer with the alternate default gateway.

Now that my VPN client is working again, I need to rebuild my BitTorrent machine. I am going to try to save some more RAM by going with another Turnkey Linux container template.

EDIT: In my elation over getting something to work, I forgot to change the default gateway back. Unfortunately my test machine was PORTAL, which happens to control my dynamic DNS. So currently all of my hostnames are pointed at Sweden, LÖL.

Remote Access Shenannegans

A while back, I wrote about using Windows HyperV server. The reason that I set up this server was to use the combination of a Linux server and a Windows desktop to get remote access to my home network. I thought that I would elaborate on the tools that I use to get into my home network from work or while traveling.

I use several methods, each with certain advantages and disadvantages. Mostly I prefer SSH over pretty much anything else in order to connect to a Linux host, and I prefer Remote Desktop over pretty much anything else in order to connect to a Windows host. As a backup, I will use Teamviewer. It’s not ideal, but it works where other services fail.

SSH is pretty much a Swiss Army Knife of network tools. You can use it to do waaaay more than just log into a Unix box and execute commands. It’s a tool for creating encrypted tunnels; it just so happens that 90% of those tunnels connect to remote shells. In addition to connecting to a remote shell, you can forward ports to and from a host. I am fortunate enough to have Cincinnati Bell Fioptics which lets me open almost any port on my firewall without any bother. I forward port 22 directly to a Linux box named HUB, and I secure it with SSH keys. I can then use SSH to tunnel traffic into my home network, be that browser traffic through a SOCKS proxy and dynamic port, or RDP traffic with a local port. This works well when I am in a restrictive network that still allows outbound SSH traffic, as long as I have my Putty session set up ahead of time with my private key. This is the technique that I use when I am not able to access my network through NeoRouter.
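The command-line equivalents of those two PuTTY tunnels look roughly like this (the user name, host name, internal IP, and ports here are placeholders, not my actual setup):

# browser traffic: SOCKS proxy on local port 1080, point the browser at localhost:1080
ssh -D 1080 user@hub.example.com

# RDP traffic: forward local port 3390 to an internal Windows box's RDP port
ssh -L 3390:192.168.1.50:3389 user@hub.example.com
# then point the RDP client at localhost:3390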

Remote Desktop (RDP) is another Swiss Army Knife for connecting to computers. I use Windows as my primary desktop OS. I like to use Linux mostly for server stuff and for running specific tools like Clonezilla or Kali. As a matter of fact, I prefer Linux for servers and tools over Windows. I know, I’m an odd duck. RDP not only gives you remote access to the Windows Desktop, it lets you map drives remotely to transfer files and it lets you connect at a desktop resolution that is greater or lesser than that of the machine that you are connecting to. This is a big deal when you are using RDP on a wide-screen monitor to control a server that is plugged into an old CRT monitor, or when you are using a tiny netbook to control your multi-screen desktop. Teamviewer (and the VNC server that it is based on) cannot do that.

In order to make my SSH and RDP connections, I like to use either NeoRouter or OpenVPN. NeoRouter is technically a split-tunneling VPN solution, but I like to think of it as creating a network of computers that is independent of their actual networks. Split-tunneling VPN is a fancy term for VPN connections that don’t mess with your Internet access. There are lots of other features for split-tunnels, but under most circumstances, I want my computers to talk to each other differently than they talk to the Internet.

The NeoRouter network explorer tool lets me see which of my computers are up and connected. I run the NeoRouter server on HUB, which is sitting behind my firewall, with port 32976 forwarded to it as well. Running the server inside my firewall lets me do some neat networking tricks, like having my BitTorrent VM connect to the internal IP for HUB, instead of going out over the Internet. My BitTorrent box uses a VPN client to route all Internet traffic through Sweden, which really slows down my Remote Desktop session. I run the NeoRouter client on my desktops and laptops, and also on my file servers so that I can access shared folders remotely. File transfers this way can be really slow, so I also use One Drive to share big files like videos or ISO images.

OpenVPN is my tool of choice for open WiFi networks at hotels and coffee shops. I can access my home network while also securing all of my network traffic. I run OpenVPN Access Server on a dedicated VM named GATE. Access Server is easy to use and configure, and it’s free for two concurrent connections. For occasional use, especially by people other than me, it works really well. There’s even a ready made Hyper-V appliance that you can just boot up and go. I used to run OpenVPN on HUB, but the networking/subnet stuff meant that I had to remember the internal IP for the OpenVPN network segment and change it to connect to NeoRouter. So I just use two separate machines and it all works out. I have built OpenVPN servers without Access Server in the past. I like to use the Turnkey Linux OpenVPN appliance, and setup couldn’t be easier.

If I cannot get in via NeoRouter, OpenVPN, or old school SSH tunneling, then I fall back on using TeamViewer. It can get me in when pretty much all other tools fail me, but it’s not as nice as using RDP. Also, it should be noted that TeamViewer can only be used to control graphical desktops, there is no command line equivalent. In order to alleviate some of the frustrations of TeamViewer’s desktop resolution, I run a dedicated Windows VM that I call Portal. I keep the native (console) resolution fairly low, and I have RDP and Putty sessions set up so I can quickly connect to my other computers.

One other thing that I use Portal for is to move files into and out of my home network. You can use RDP or TeamViewer to copy files, but for big files like videos and ISO’s, One Drive does a much better job. I have a dedicated One Drive account that I use specifically for moving files this way. I just grab a file from somewhere, copy it to the One Drive folder on Portal, and it automagically uploads. Then, some time later, I can use the One Drive website to download the file, at much faster speeds than using RDP, SCP (SSH), or TeamViewer’s file transfer tool. It’s an extra step, but one worth taking, especially if I find myself in an oh-shit-i-forgot-that-important-file situation.

I hate separating hackers based on morality.

I have given a few talks recently to non-hacker audiences. In so doing, I learned that even at its most basic level, the idea of what hacking is, is kind of lost on “normal people.” The “Wanna Cry” malware couldn’t have better illustrated the things I was trying to teach.

It’s not that normies aren’t capable of understanding, it’s that they have been given the wrong information by the government, the media, and popular culture for years. There is this fairly lame idea of hackers following this sort of monochromatic gradient matching that of the old west: the good guys wear white hats, the bad guys wear black hats, and there is a spectrum of moralities in between. There are legitimate ethics that guide hackers, they just aren’t the kinds that you hear about in movies and on TV:

  1. The Sharing Imperative – Hacking is a gift economy. You get tools, knowledge and code for free, so you have to share what you have learned to keep growing the pool.
  2. The Hands-On Imperative – Just like “real” science, you have to learn by doing. Take things apart, break them even, and learn how they work. Use that knowledge to create interesting things.
  3. The Community Imperative – Communities (geographic, philosophical, etc.) are how it gets done. Crews, clubs, chat rooms, hackerspaces, conferences, email lists, are all places for n00bs to ask questions and get flamed, and for l33ts to hold court.

Monochromatic Morality
The typical whitehat is a security researcher, penetration tester, or security consultant that only hacks the computers and networks that they have permission to hack. This can either be a lab environment built for research, a client who has retained security services, or an employer who has granted express permission. Whitehats then disclose their findings. This disclosure may be for the benefit of a client or an employer, or it may be to benefit the public. The key difference is that the whitehat first seeks permission and then shares their discovery for the benefit of others.

The typical blackhat is generally considered to be a criminal. They hack systems that do not belong to them and then do not disclose their findings. The exploits that they develop are then hoarded and stockpiled for their benefit alone. The key difference is that blackhats do not seek permission, they do not disclose their findings, and they hack for the benefit of themselves.

The gray areas have to do with the degree to which a hacker has permission, discloses their findings, and how they profit from their activities. Whitehats are supposed to have “real” jobs and share everything, blackhats supposedly don’t have jobs and therefore hack for money. A typical grayhat might hack systems that don’t belong to them but then anonymously share their findings, or they might develop their exploits in a lab, but then sell those exploits rather than disclosing them.

In my professional life, I routinely employ hacking tools for the benefit of my employer, whether it’s scanning networks to find and fix problems, or cracking passwords to help users who have lost access to their computers. In previous jobs, I have exfiltrated research data from one network to another at the request of the data’s owner. While I don’t always have my employer’s explicit permission to do what I do, they hired me to fix problems for their users, so I do what it takes. The things that I learn, I then share and teach to others, whether that’s talks at conferences or Cinci2600 meetings, or posts on this blog. I have no idea where that falls in the white/gray spectrum.

Chromatic Pragmatism
Instead of black and white, I prefer to look at hacking from a red vs. blue perspective. Regardless of your moral compass (or that of your employer), you are either on the offensive end, which is the red team, or the defensive end, which is the blue team.

Teams are better terms to think in because hacking is a social activity. You may or may not be physically alone, but you are always learning from others. You read docs and code, you try stuff, you get stuck, you look up answers and ultimately ask someone for help. The idea of hackers as introverted smart kids living in their mom’s basements isn’t nearly as accurate as TV would have you believe.

Regardless of the reason why you are hacking a computer or a network, you are either the attacker or the defender. You are either probing defenses looking for a way in, or you are hardening defenses to keep others out. You can further divide these activities into application vs. network security, but at that point the discussion is more about tools.

A great example of this is the people that run botnets. Once a bot-herder gets control of a computer (bad), they will then patch that computer (good) so that some other bot-herder doesn’t snatch it away from them (???).

Thinking about hacking in terms of offense and defense takes away all of the politics, business, and patriotism of your red and blue teams. If you are a red teamer, backed by your country’s military, you might be doing black hat stuff like seizing control of things that don’t belong to you for a “good” cause. You might be a blue teamer working for an organized crime syndicate, doing white hat stuff like analyzing malware for “bad” people. You might be a whistle-blower or a journalist, exfiltrating stolen data to expose bad acts by a government.

Wanna Cry: with the good comes the bad, with the bad comes the good
The Wanna Cry debacle is interesting because of its timing, its origin, its disclosure, and its impact.

Its timing is interesting because nation-state political hacking is like half of all discussions when it comes to the Presidential election. Turns out that the USA hacks as much or more shit than Russia does.

Its origin is interesting because the tools in the leaked sample appear to come from the NSA. The leak comes from a group known as “Shadow Brokers.” They said they would auction the rest for a large sum of money. The world got a head start on an inevitable malware outbreak thanks to some bad guys doing a good thing by releasing something that they discovered. Something that the US Government had been hoarding to use against its enemies.

The disclosure is interesting because the first release is a free sample to prove the quality of the goods they intend to auction. This is the Golden Key problem in a nutshell: a tool, used by the good guys, falls into the hands of the bad guys, and chaos ensues.

The zero-day exploit exposed by the leaked tools was then used to implement a large scale ransomware attack that severely affected systems in Europe and the UK. A researcher was able to locate a kill switch in the ransomware (a check against an unregistered domain name), which stopped the attack dead in its tracks. There are lots of theories about this strange turn of events, but my personal theory is that the ransomware campaign was a warning shot. Possibly to prove out a concept, possibly to urge everyone to patch against the vulnerability before a proper villain did some real damage with it.

The idea that NSA tools were compromised and disclosed by a criminal organization turns the whole black hat/white hat thing on its head. The NSA was hoarding exploits and not disclosing them, which is a total black hat move. Shadow Brokers exposed the tools, prompting a widespread campaign to fix a number of vulnerabilities, which is a total white hat move. So you have a government agency, a “good guy”, doing black hat things, and a criminal organization, a “bad guy”, doing white hat things.

If you want to talk about the specifics of the hack, the NSA’s blue team didn’t do its job, and the Shadow Brokers’ red team ate the NSA’s lunch. The blue team’s principal asset was a server where attacks were either launched or controlled, and that server was the red team’s target. It’s a pretty epic win for the red team, because the NSA is a very advanced hacking group, possibly the best in the world.

Windows Hyper-V Manager is Stupid

I spend many hours at work in the middle of the night. Sometimes I work on my own things by connecting to my gear at home. I call this telecommuting in reverse. In order to facilitate my reverse telecommute, I use a couple of machines: one Linux box I call Hub, for OpenVPN, SSH, and NeoRouter, and one Windows machine I call Portal, for Teamviewer, Remote Desktop, and to run my DNS host’s Windows-only dynamic DNS client. Hub died, and so I figured I would run the two machines on one box via XenServer or VirtualBox. It turns out that the hardware for Portal doesn’t do Linux very well. So I decided to take a run at virtualization with Hyper-V. Hyper-V Server 2012 R2 lets you evaluate the product indefinitely, so I thought that would be a good place to start.

After downloading the ISO, which is hard to locate on the MS TechNet site, I burned it to disk, wiped Portal, loaded Hyper-V Server, and configured a static IP for it. This isn’t a high end box; it’s a dual core AMD with 8GB of RAM. It’s fine for using Windows 7 as a springboard to get into my home network. I just want to spin up a couple of low end Linux boxes and a Windows machine. The sconfig.cmd tool is fine for the basics of setting up the box, but since I am not much of a PowerShell guy, I wanted to use the Hyper-V manager on another workstation. I was trying to do this without having to pirate anything, and it turned out to be a complete waste of time.

Hyper-V Manager and the Hyper-V Server that it can manage are basically a matched set. You can use the manager on Windows 7 to connect to Hyper-V on Server 2008 and earlier. You can’t really use Win7 or Win10 to manage 2012 R2. So, I basically have to either pirate Server 2008, pirate Win8.1, or pirate Server 2016. Or, I can just use ProHVM, a third-party tool from a Swedish company that seems to have been invented specifically because Hyper-V Manager is the worst.

Even with ProHVM, it’s not all champagne and roses. Accessing the console of a VM causes wonky keyboard performance. This is mildly frustrating, so I recommend using a mouse as much as possible for configuration of a VM. The only real showstopper is logging in to a Linux box with no GUI. Having only 50% of your keystrokes register makes logging into the console completely impossible because you don’t see the *** to let you know which character you are on.

My workaround for Debian VMs is to not set a root password, which forces Debian to disable root in favor of sudo, like Ubuntu. Then you set a very short password for your user account (like 12345, same as the combination to my luggage) and make certain that you set up an SSH server during setup. Then you can SSH to the box and use the ‘passwd’ command to reset the password to something more secure. Then you can configure SSH keys for your logins.

So if you find yourself in a situation where you need to do virtualization on Windows, and you are deeply invested in the idea of using 2012 R2, don’t bother with Hyper-V manager. Instead, download ProHVM, and then use ProHVM as little as possible. It’s free for non-commercial use and you can build new VMs and all that stuff that you *should* be able to use Hyper-V Manager for.

My guide to setting up SSH keys with Putty

TL;DR: if you just want to set up keys with putty: IDGAF about Cloud At Cost take me to the Putty screenshots. If you are sitting in front of a computer that you don’t have keys set up on, and you are trying to log into a remote server that you have already locked down: I haven’t set up keys on my other computer like an absolute walnut.

Fun with Cloud At Cost
I have become a kind of fan of Cloud At Cost. Their one-time-fee servers and easy build process are great for spinning up test machines. I would hardly recommend running anything that I would consider “production” or mission critical on a Cloud At Cost VM, but it is a cheap, quick, and simple way to spin up boxes to play with until you are ready for more expensive/permanent hosting (like with Digital Ocean or Amazon). Spinning up a new box means securing SSH. So here is my guide.

The major problem with a hosted server of any kind is drive-by scans. There are folks out there that scan huge swaths of the Internet looking for vulnerable machines. There are two basic varieties: scanning a single host for all vulnerabilities, and scanning a large number of hosts for a specific vulnerability. A plain box should really only be running SSH, so that is the security focus of this post. There should also be a firewall running that rejects connections on all ports except the services you absolutely need, but that’s a blog post for another day.

It should be noted that your security measures don’t necessarily have to be top notch; your box just has to be less convenient than the next host on the scanners’ lists. It’s not hard to scan a large subnet and find hosts to hammer on. Drive-by scans are a numbers game; it’s all about the low hanging fruit. With C@C, it’s a question of timing. You have to get onto the box and lock it down quickly. Maybe I’m just being paranoid, but I have had boxes that I didn’t log in to right after spinning them up, and I have seen very high CPU utilization on them when they aren’t really running anything, which leads me to believe that the host has been compromised. Also, beware that the web-based stats can be wildly inaccurate.

This guide will only lock down SSH. If you are running a web server, this guide will not lock down the web server. If you are running Asterisk, this guide will not lock down Asterisk. If you are running MySQL, for the love of god make sure that it’s only accessible from localhost (127.0.0.1) and that it is not accessible from the Internet. All this guide will do is shore up a couple of vulnerabilities with SSH. I recommend running these steps *BEFORE* installing anything on your VM.

My use case for Cloud At Cost is something like this: there are times when I need a box that is easier to get to than hosting a box on my home network, but doesn’t really justify the monthly cost of running a server on Linode, Digital Ocean, or Amazon. For me, I spend a lot of time working all night inside a very restrictive corporate network, so it’s hard to get access to my stuff at home, especially since TeamViewer is compromised. C@C is cheap and easy, which probably means it’s a playground for scammers and other bad actors. This means it’s a good idea to lock down your box before you do anything useful with it, and keep the useful things that you do with it to the bare minimum.

You can get started with C@C for around $35, but if you follow them closely, you can catch some of their discount deals and get a very low end developer box for around $10. I took advantage of a few of these promotions and now I have a bucket of resources at my disposal for all of my tinkering needs. Also, if your box starts to misbehave (loads of network traffic, high cpu utilization, etc.) it’s probably compromised, so just torch it and build a new one.

Getting Started

You can learn about the basics of the Cloud At Cost panel here; the info will be useful later on.

Once you have signed up with C@C, bought some resources, and fired up your Linux VM, it’s time to do some housekeeping. I prefer Debian, and it’s what I am using in this guide, but it doesn’t really matter what you choose.

As soon as the box is up, log in with SSH, using the root password given in the information button. I use putty*, because most of my time in front of a computer is spent working or gaming, so I use Windows a lot. I know it upsets a lot of folks to hear that, but hey, those folks can feel secure in knowing that their “Unix Beards” are mightier than mine.

The very first thing that I do is change the root password. Make it 30 or more random characters. You shouldn’t actually need to type it in after this point, but keep it somewhere encrypted just in case. I also comment out the non-US repo that C@C Debian machines are still pointed to in the sources list:

passwd
nano /etc/apt/sources.list

Just locate the line that begins with “deb http://non-us.debian.org” and put a # in front of it. On a C@C Debian 8 box, it should be the first line.

With that pesky non-US entry commented out, you are clear to update your packages:
apt-get update
apt-get upgrade

I also run these commands from the Nerd Vittles blog to make sure the password doesn’t revert to the Cloud At Cost root password:

sed -i '/exit 0/d' /etc/rc.local
killall plymouthd
echo killall plymouthd >> /etc/rc.local
rm -f /etc/rc3.d/S97*
echo "exit 0" >> /etc/rc.local

I don’t know if they are strictly necessary, but the dudes at Nerd Vittles recommend it, and they spend waaaay more time doing this stuff than I do, so there you have it.

After that, it’s time to install fail2ban, and then create a non-root user:

apt-get install fail2ban
adduser steve

Hopefully, in a few minutes fail2ban will be made superfluous by our additional security measures. In the meantime it will stop brute force attempts. Some of my hacker buddies change the default port for SSH to throw off driveby scans, but the restrictive corporate network I mentioned before doesn’t like arbitrary ports, so that’s a hard no in this case.

Enable Sudo for a Non-Root User

To start implementing our security measures, we will install sudo, add ‘steve’ (our non-root user) to the sudo group, and then make sure steve has the right permissions in the sudoers file:
apt-get install sudo
adduser steve sudo
nano /etc/sudoers

At this point the /etc/sudoers file should open in the Nano text editor. I know I should be using vi, but I am too busy #YOLOing to do that Unix Beard crap. 🙂

Press ‘ctrl+w’ to open the search box, and type ‘%sudo’ to find the permissions line.
Press ‘ctrl+k’ to cut the ‘%sudo ALL=(ALL:ALL) ALL’ line, and then ‘ctrl+u, ctrl+u’ (hold ctrl and press ‘u’ twice) to paste the line in twice.
Edit the second line to read ‘steve ALL=(ALL:ALL) ALL’ and press ‘ctrl+x’ to exit, and press enter to save.
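If all of that ctrl-key gymnastics worked, the relevant section of /etc/sudoers should end up looking something like this:

%sudo   ALL=(ALL:ALL) ALL
steve   ALL=(ALL:ALL) ALL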

Setting up sudo is important because we are going to disable root logins here in a minute, but first we are going to set up SSH Keys for logins and then disable clear text logins. SSH does use clear text passwords, but it passes them through an encrypted tunnel. This means that while your password isn’t likely to be sniffed, it could be guessed or brute forced. Using SSH keys means you have to have the right private key to match with a public key on the server. But before we can do any of that, we need to test the new non-root account by logging in with it.

Once you are logged in as steve, test sudo:
sudo whoami

Which should return ‘root’.

Securing SSH with Asymmetric Keys

Once the non-root account is working and sudo-ing, we can proceed to lock down SSH with public+private key pairs. I will explain how to do this with putty for Windows, but it’s actually way easier to do this with Unix.

The first step is to make sure you have puttygen.exe handy. Download it and launch it, change the bits for your keys to 4096 (in the lower right corner) then click the ‘Generate’ button.

puttygen1
Wiggle the mouse around for a bit, and in a minute or so you will see your public key, with a key comment and blanks for your passphrase. You don’t have to change the comment, or enter a passphrase, but I recommend it. I like to change the comment to match the username and server (‘steve@stevesblog.com’ in the screenshot below), since I have lots of different keys. The passphrase keeps things safe in case your private key file falls into enemy hands.**

puttygen2

At this point, you may be tempted to use the same passphrase for your private key as you use for your non-root user account. This is a bad idea, because your non-root password is now basically your root password. Do yourself a favor and use two completely different passwords.

Next, click ‘Save private key’ and save the resulting .ppk file in a safe location, but don’t close the puttygen window just yet. If you use multiple computers, putty will let you re-use your private key file between Windows machines, if that’s what you’re into. OpenSSH on Linux, however, will not accept a puttygen .ppk file directly. (Based on that one time I tried it and it didn’t work for me.) So just keep that in mind.

Also, it’s no big deal to have multiple private/public key pairs on the same server. You can use a different pair for each client computer, which is probably safer and more convenient than using a shared key pair. If you lose access to a client machine for whatever reason, you can just delete the public key off of the server and that machine won’t be able to connect to your server.

Leave your puttygen window up and switch back to your putty/SSH window. Create a .ssh folder, then a text file inside it to store your public keys:
mkdir ~/.ssh
nano ~/.ssh/authorized_keys

Paste the Public Key text from the top of the puttygen window onto a single line in the file. This will be a Very Long Line Of Text(tm) (VLLOT). The VLLOT should begin with ‘ssh-rsa’ and end with ‘rsa-key-yyyymmdd’, where yyyymmdd is the date you created the key. Sometimes the key comment (‘steve@stevesblog.com’ in the screenshot above) is the last bit of text instead. I haven’t quite nailed down why that is, presumably an order of operations thing. Anyway, be sure that the VLLOT begins with ssh-rsa, or you didn’t grab all the text in the public key.
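
For reference, the pasted line ends up looking something like this (the key material here is truncated and made up for illustration):
ssh-rsa AAAAB3NzaC1yc2EAAA...much more text... rsa-key-yyyymmdd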

Save and close the file (‘ctrl+x’ and then ‘enter’) and then set the permissions for the file:
chmod 600 ~/.ssh/authorized_keys

UPDATE: on CentOS 7 the home directory (~), the .ssh directory, and the authorized_keys file should all be writable only by the owner, so do this instead:
chmod 0700 ~
chmod 0700 ~/.ssh
chmod 0600 ~/.ssh/authorized_keys
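
To double-check yourself, you can list the permissions; the two directories should show drwx------ and the authorized_keys file should have no group or world access:
ls -ld ~ ~/.ssh ~/.ssh/authorized_keys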

Now exit your ssh session and reopen putty. Enter the IP address of your server as the host name. I prefer an IP address to a host name because DNS can’t always be trusted. Give your session a useful name.

[Screenshot: putty_2 – PuTTY Session settings with the server IP and a saved session name]

Under ‘Connection -> Data’ add the username for your non-root account. In this example, I named my account ‘steve’.

[Screenshot: putty3 – Connection -> Data with the auto-login username ‘steve’]

Under ‘Connection -> SSH -> Auth’ browse to the safe place you saved your private key. You pasted your public key onto the server, and you have your private key stored on your computer. You will want to keep the private key file safe because if you lose it you have to set up a new pair while logged in at the console, which is a total pain. I keep mine in Dropbox so I can use them on multiple PCs, but I keep them secured with a passphrase.**

[Screenshot: putty4 – Connection -> SSH -> Auth with the private key file selected]

Now go back to Session and save your session profile. Henceforth you can connect simply by double clicking ‘steve’s server’ under ‘Saved Sessions’.

Now it’s time to test your new key pair. Just double click ‘steve’s server’ and you should be prompted for the passphrase that you set for your private key. Once you enter it, you should be logged in to the server as user ‘steve’. If you were able to log in using your key, you are all set to move on. You are now free to close PuttyGen.

* Protip: put your putty.exe file in ‘c:\windows\system32’ so you can run putty from the command line or the run line. If you want to be a real hard rock, rename putty.exe to ssh.exe. Did you know putty accepts commandline args? It does, so you can do awesome Unixy shit from the command line like type ‘ssh steve@testbox.stevesblog.com’ to connect to a remote host. It still pops up your connection in the putty window, but it keeps your hands on the keyboard. 🙂
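
For example, something like this works from the run line (the key path here is made up; point it at wherever you saved your .ppk file):
putty.exe -ssh steve@testbox.stevesblog.com -i C:\keys\steve.ppk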

** Another Protip: not setting a passphrase is handy for automating ssh connections, especially if you want to move files back and forth with ‘scp’ or mess with tunneling via local and remote ports. I haven’t found a decent scp command line app for Windows, other than the Unix utils in CygWin.

If The Server Rejects Your Key

It’s most likely that you didn’t paste the public key correctly. This is why we left the PuttyGen window open. 🙂

Log in with your non-root username and password (‘steve’ in this example) and open your ~/.ssh/authorized_keys file in nano again:
nano ~/.ssh/authorized_keys

In the PuttyGen window, make sure that you scroll to the top of the public key text. It should begin with ‘ssh-rsa’. Now click and drag down to the end of the public key text, then right click and select ‘copy’.

In the Putty window, with your authorized_keys file open in nano, delete the incomplete key and paste the complete text of the public key on a single VLLOT.

Save and exit nano, then exit your SSH session and try again.

Also make sure that you changed the permissions of the authorized_keys file:
chmod 600 ~/.ssh/authorized_keys

If your key is still being rejected, generate a new public and private key by clicking the ‘Generate’ button and starting the whole key process over again.

Disable Root and Cleartext Logins

Once your keypair is working (and you are able to log in with it), it’s time to eliminate root logins and cleartext logins. Some folks will tell you that root logins are fine with SSH because passwords don’t get sent in the clear. While that’s true, ‘root’ is still the one username that is guaranteed to exist on every Unix-based machine, so if someone is going to brute force an account, that’s the one they will focus their efforts on. Disabling root logins and clear text logins is all done in the sshd_config file:
sudo nano /etc/ssh/sshd_config

Press ‘ctrl+w’ and search for the word ‘root’. You are looking for this entry:
# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes

Change ‘PermitRootLogin yes’ to ‘PermitRootLogin no’. (If the line is commented out with a ‘#’ on your system, uncomment it as well.)

Then press ‘ctrl+w’ and search for the words ‘clear text’. You are looking for this entry:
# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes

Change ‘#PasswordAuthentication yes’ to ‘PasswordAuthentication no’ (uncomment and change from ‘yes’ to ‘no’.)
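
Before restarting anything, it’s worth eyeballing the end state. The two directives should now read:
PermitRootLogin no
PasswordAuthentication no

You can also run ‘sudo sshd -t’ first; it checks the config for syntax errors and prints nothing if everything parses cleanly.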

Once these changes are made, DO NOT LOG OFF OF YOUR SSH SESSION. If you made a mistake, it will be hard to log back in to undo it afterwards. You should have already tested and succeeded with your ssh-key based login, because we are about to restart the ssh daemon and shut off clear text logins:
sudo systemctl restart ssh

UPDATE: on Debian 9 you can restart SSH with
service ssh restart
UPDATE: on CentOS 7 you can restart SSH with
systemctl restart sshd.service
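
Whichever command applies, it doesn’t hurt to confirm that the daemon came back up before you test (the unit is ‘ssh’ on Debian and ‘sshd’ on CentOS):
sudo systemctl status ssh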

To test ssh logins, connect to the IP of your server with putty using the ‘Default Settings’ profile. Your login attempt should fail because only people with private keys are allowed to the party:

[Screenshot: putty_failed – the rejected login attempt]

At this point you are far from being hack-proof, but you are a bit more locked down than you were before, and there are always more convenient targets out there 🙂

Hardening web servers is another story, which really isn’t my bag to be honest. There’s a reason that I host my blogs with Google or WordPress 🙂

OH SHIT! I set this up on my Windows machine at home and I don’t have access to my private key at work/school/Aunt Tillie’s computer!

So you’ve set up your keys, you’ve disabled clear text logins and now you are trying to get a new public key onto your locked down box, but you can’t log in because you don’t have the private key. How screwed are you?

Not completely.

I find myself in this situation when I am setting up a new Linux or Unix workstation. With Windows, I use a cloud storage service to keep my keys (which have passphrases), I use one key pair per server, and I just reuse each private key on all of my Windows machines. The passphrase protects the private key on the cloud service (should the cloud service experience a breach or some other security failure) and on the local drive of my PC, should it fall into the wrong hands.

In the Unix world, I do the opposite. As I stated above, SSH keys are way easier to do with Unix. It’s no problem to produce key pairs and upload them to remote servers, so I use one key pair per workstation, and then I use that pair on each of my servers.

This is probably a much safer practice than my Windows+Dropbox approach, but I use encryption tools like BitLocker and KeePass to add a little more security when I use Windows.

All you have to do on your new Unix box is create a new key pair like so:
ssh-keygen -t rsa -b 4096

And then use the ever so handy tool ssh-copy-id to add the newly created public key:
ssh-copy-id -i ~/.ssh/id_rsa user@remote-server
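
If ssh-copy-id isn’t installed for some reason, you can do the same thing by hand with a one-liner along these lines (a rough sketch; like ssh-copy-id, it assumes password logins still work on the remote side):
cat ~/.ssh/id_rsa.pub | ssh user@remote-server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'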

It couldn’t be easier, except when you have disabled password logons like an absolute walnut. In that case, you will need to log into both your new Unix box and your locked down remote Unix box from a computer that already has a key pair configured with the locked down remote server. For this reason, I recommend setting up a VM, a container, or a shell account to use as an intermediary.

By default, RSA key pairs will be stored in ~/.ssh/id_rsa (private) and ~/.ssh/id_rsa.pub (public). Once you are logged in on your third computer that is already set up with a key pair, open an SSH connection to the new Unix machine and an SSH connection to the locked down remote server. For the purposes of this demonstration, I will call the new box “newbox” and the locked down remote server “remotebox”. Both non-root user accounts will be called “steve”.

In the window for your session on newbox, view your id_rsa.pub file like so:
steve@newbox:~$ cat ~/.ssh/id_rsa.pub

You will see the familiar Very Long Line Of Text(tm) (VLLOT), which is the public key. The text will wrap across several lines in your terminal, but it is really one long line. This is important to note, because you will need to copy all of it in a moment.

In the window for your session on remotebox, edit your authorized_keys file like so:
steve@remotebox:~$ nano ~/.ssh/authorized_keys

You will see the VLLOT that you pasted in when you locked down the server initially. It will be all on one line, and will most likely be truncated. This is important to note because you will be pasting text into this window in a moment.

Now you just copy the VLLOT from the window with your session on newbox into the editor window with your session on remotebox. If you are using putty for this operation, you can copy text by pressing ALT+C, instead of CTRL+C. CTRL+C cancels things in the Unix shell, and in Nano it will show the current cursor position.

The window with your session on remotebox should now have two Very Long Lines Of Text(tm). Use the arrow keys on your keyboard to verify that the whole line is there. If you are certain that the VLLOT was pasted correctly, simply save the authorized_keys file and exit. In Nano, CTRL+O saves and CTRL+X exits.

Once you have saved the authorized_keys file, switch to your window with your session on newbox. You can now attempt to connect to remotebox via SSH:

steve@newbox:~$ ssh steve@remotebox

If all goes well, you should be prompted for your passphrase for “/home/steve/.ssh/id_rsa”. If not, you probably didn’t paste the VLLOT correctly.

Public Access Unix Rocks

As I stated before, I recommend having a VM, container, or a Unix shell account to use as an intermediary for accessing locked down remote servers. Getting remote access to your gear is important. So important that I run multiple VMs to make sure that I can access everything remotely.

If you are not fortunate enough to be blessed with an embarrassment of hardware like I am, you can still use a dedicated server as an intermediary by signing up for a shell account with SDF.

I have had a shell account at SDF for decades. It enabled me to learn about large Unix systems without needing to set up a Linux box. Even if you have a server at home and a hosted server somewhere else, having a Unix shell account is still a great tool to have in your arsenal for way more than just stupid SSH tricks.

My .screenrc

I am a huge fan of screen. It’s indispensable for working on a Unix host via SSH. It lets me have multiple terminals (screens) up at a time. There are dudes that use screen to split their terminals into multiple views, like a tiling window manager, but for the command line.

My needs are not nearly as sophisticated, since I mostly use putty to connect to Linux servers from Windows.

I use 4 special keys:
F9 to detach from the screen session. This leaves your session running in the background. I mostly use this to idle in IRC. Once detached, you can view your active screen sessions by typing:
screen -ls

Which will return something like this:

user@localhost:~$ screen -ls
There is a screen on:
2030.pts-0.localhost (05/25/2016 06:45:51 PM) (Detached)
1 Socket in /var/run/screen/S-user.

To reconnect to a detached screen session, type
screen -r 2030.pts-0.localhost

If the session is attached somewhere else, add the -D option to -r:
screen -D -r 2030.pts-0.localhost

This will detach the screen session from wherever it is in use, log off the SSH session that initiated it, and then reattach it to your current SSH session.

Or, if you’re like me and never have any idea whether you have a screen running or not, just combine -D and -R, quit worrying about sockets, and get on with your life:
screen -DR

And, if you are also like me and forget the switches for screen, just use the alias command in your .bashrc to have screen do -DR every time:
alias screen='screen -DR'

F10 to open a new terminal in screen.

This option lets you have multiple terminals in the same SSH session. This is handy for having a full screen app (like irssi) in one term, and one or more additional terms for running other commands. To close a terminal, type
exit

F11 and F12 to switch terminals
When you have multiple terminals open you can navigate them from left to right: per the bindings below, F11 selects the terminal to the left (previous) and F12 selects the terminal to the right (next).

The File
To use this file, simply paste the contents below into a file called .screenrc in your home directory. So here it is, the .screenrc that I have been using for years:


startup_message off

# Window list at the bottom.
# I got the long line of vars from https://bbs.archlinux.org/viewtopic.php?pid=423481#p423481
hardstatus alwayslastline
hardstatus string "%{.kW}%-w%{.W}%n %t%{-}%{=b kw}%?%+w%? %=%c %d/%m/%Y" #B&W & date&time

# From Stephen Shirley
# Don't block command output if the terminal stops responding
# (like if the ssh connection times out for example).
nonblock on

# Allow editors etc. to restore display on exit
# rather than leaving existing text in place
altscreen on

# bind F9 to detach screen session (to background)
bindkey -k k9 detach

# bind F10 to create a new screen
bindkey -k k; screen

# Bind F11 and F12 (NOT F1 and F2) to previous and next screen window
bindkey -k F1 prev
bindkey -k F2 next

The Drama With My New Laptop: the High Cost of Saving $350 (part 3)

This post contains a lot of profanity. Like a shitload.

When we last left our heroes, I had finally managed to encrypt my SSD, and after running clonezilla probably a hundred times to back up and restore the drive after fucking it up, I decided to try and simplify the backup process.

Part of the hassle was the fact that I had removed the optical drive and installed the original mechanical drive into that bay. This meant booting from an external DVD drive, or from a USB stick in order to do the backups. I was also using GParted a lot, which meant a second cd-rom disc or thumb drive. Thankfully I was using an i-Odd external hard drive to do this, but it still meant plugging something in so that I could copy files to an internal hard drive. Backing up has to be convenient or backups simply won’t happen.

My first thought was to install linux on an external drive. This would give me the option of using the drive on different computers. Maybe it’s possible, but I never got it to go. I wiped an external drive a couple of times. I used to use Sardu Linux, but it was not that reliable, and the project seldom kept pace with new versions of live CDs. Also the primary developer started putting spammy spyware in the installer at one point.

After a lot of formatting and re-partitioning, this time on my secondary backup drive, I decided to go with a simpler approach and just put the Clonezilla live install on a small partition on the backup drive. This hadn’t worked on my USB external drive, but I wanted to try it with the internal drive, based on this document. Basically, I created an 800MB FAT32 partition and extracted the Clonezilla zip to that partition. I used the rest of the disk for a large NTFS partition. I skipped all the GRUB stuff, and I just use the alternate boot menu to boot from the other drive when I want to do my backups. I then set the FAT32 partition to be hidden so it won’t show up in Windows. It would have been great to have a small Linux install for times when I am in a hurry and I don’t want to decrypt my Windows drive, but this will do fine for now.

holy shit! i got it working!