I hate separating hackers based on morality.

I have given a few talks recently to non-hacker audiences. In so doing, I learned that even at its most basic level, the idea of what hacking is is kind of lost on "normal people." The "Wanna Cry" malware couldn't have done a better job of illustrating the things I was trying to teach.

It's not that normies aren't capable of understanding; it's that they have been given the wrong information by the government, the media, and popular culture for years. There is this fairly lame idea of hackers following a sort of monochromatic gradient matching that of the old West: the good guys wear white hats, the bad guys wear black hats, and there is a spectrum of moralities in between. There are legitimate ethics that guide hackers, they just aren't the kinds that you hear about in movies and on TV:

  1. The Sharing Imperative – Hacking is a gift economy. You get tools, knowledge and code for free, so you have to share what you have learned to keep growing the pool.
  2. The Hands-On Imperative – Just like “real” science, you have to learn by doing. Take things apart, break them even, and learn how they work. Use that knowledge to create interesting things.
  3. The Community Imperative – Communities (geographic, philosophical, etc.) are how it gets done. Crews, clubs, chat rooms, hackerspaces, conferences, email lists, are all places for n00bs to ask questions and get flamed, and for l33ts to hold court.

Monochromatic Morality
The typical whitehat is a security researcher, penetration tester, or security consultant who only hacks the computers and networks that they have permission to hack. This can either be a lab environment built for research, a client who has retained security services, or an employer who has granted express permission. Whitehats then disclose their findings. This disclosure may be for the benefit of a client or an employer, or it may be to benefit the public. The key differentiator is that the whitehat gets permission and then shares their discovery for the benefit of others.

The typical blackhat is generally considered to be a criminal. They hack systems that do not belong to them and then do not disclose their findings. The exploits that they discover are hoarded and stockpiled for their benefit alone. The key differentiator is that blackhats do not seek permission, they do not disclose their findings, and they hack for personal benefit.

The gray areas have to do with the degree to which a hacker has permission, discloses their findings, and profits from their activities. Whitehats have "real" jobs and share everything; blackhats don't have jobs and therefore hack for money. A typical grayhat might hack systems that don't belong to them but then anonymously share their findings, or they might develop their exploits in a lab but then sell those exploits rather than disclosing them.

In my professional life, I routinely employ hacking tools for the benefit of my employer, whether it’s scanning networks to troubleshoot problems, or cracking passwords to help users who have lost access to their computers. In previous jobs, I have exfiltrated research data from one network to another at the request of the data’s owner. While I don’t always have my employer’s explicit permission to do what I do, they hired me to fix problems for their users, so I do what it takes. The things that I learn, I then share and teach to others, whether that’s talks at conferences or Cinci2600 meetings, or posts on this blog. I have no idea where that falls in the white/gray spectrum.

Chromatic Pragmatism
Instead of black and white, I prefer to look at hacking from a red vs. blue perspective. Regardless of your moral compass (or that of your employer), you are either on the offensive end, which is the red team, or the defensive end, which is the blue team.

Teams are a better way to think about it because hacking is a social activity. You may or may not be physically alone, but you are always learning from others. You read docs and code, you try stuff, you get stuck, you look up answers, and ultimately you ask someone for help. The idea of hackers as introverted smart kids living in their moms' basements isn't nearly as accurate as TV would have you believe.

Regardless of the reason why you are hacking a computer or a network, you are either the attacker or the defender. You are either probing defenses looking for a way in, or you are hardening defenses to keep others out. You can further divide these activities into application vs. network security, but at that point the discussion is more about tools.

Thinking about hacking in terms of offense and defense takes away all the politics, business, and patriotism of your red and blue teams. If you are a red teamer backed by your country's military, you might be doing black hat stuff for a "good" cause. You might be a blue teamer working for an organized crime syndicate, doing white hat stuff for "bad" people. You might be a whistleblower or a journalist, exposing bad acts by a government.

Wanna Cry: with the good comes the bad, with the bad comes the good
The Wanna Cry debacle is interesting because of its timing, its origin, its disclosure, and its impact.

Its timing is interesting because nation-state political hacking has dominated so much of the discussion around the presidential election. Its origin is interesting because the tools in the leaked sample appear to come from the NSA, and the leak itself comes from a group known as the "Shadow Brokers," who said they would auction off the rest for a large sum of money. The disclosure is interesting because the first release was a free sample to prove the quality of the goods they intended to auction.

The zero-day exploit exposed by the leaked tools was then used to implement a large-scale ransomware attack that severely affected systems in Europe and the UK. A researcher was able to locate a kill-switch call in the ransomware and use it to deactivate the malware, which stopped the attack dead in its tracks. There are lots of theories about this strange turn of events, but my personal theory is that the ransomware campaign was a warning shot: possibly to prove out a concept, possibly to urge everyone to patch against the vulnerability.

The idea that NSA tools were compromised and disclosed by a criminal organization turns the whole black hat/white hat thing on its head. The NSA was hoarding exploits and not disclosing them, which is a total black hat move. The Shadow Brokers exposed the tools, prompting a widespread campaign to fix a number of vulnerabilities, which is a total white hat move. So you have a government agency, a "good guy," doing bad things, and a criminal organization, a "bad guy," doing white hat things.

If you want to talk about the specifics of the hack, the NSA's blue team didn't do its job, and the Shadow Brokers' red team ate their lunch. The blue team's principal asset was a server where attacks were either launched or controlled. This server was the red team's target. It's a pretty epic win for the red team, because the NSA is a very advanced hacking group, possibly the best in the world.

Windows Hyper-V Manager is Stupid

I spend many hours at work in the middle of the night. Sometimes I work on my own things by connecting to my gear at home. I call this telecommuting in reverse. In order to facilitate my reverse telecommute, I use a couple of machines: one Linux box I call Hub, for OpenVPN, SSH, and NeoRouter, and one Windows machine I call Portal, for TeamViewer, Remote Desktop, and for running my DNS host's Windows-only dynamic DNS client. Hub died, so I figured I would run the two machines on one box via XenServer or VirtualBox. It turns out that the hardware for Portal doesn't do Linux very well, so I decided to take a run at virtualization with Hyper-V. Hyper-V Server 2012 R2 lets you evaluate the product indefinitely, so I thought that would be a good place to start.

After downloading the ISO, which is hard to locate on the MS TechNet site, I burned it to disc, wiped Portal, loaded Hyper-V Server, and configured a static IP for it. This isn't a high-end box; it's a dual-core AMD with 8 GB of RAM. It's fine for using Windows 7 as a springboard to get into my home network. I just want to spin up a couple of low-end Linux boxes and a Windows machine. The sconfig.cmd tool is fine for the basics of setting up the box, but since I am not much of a PowerShell guy, I wanted to use the Hyper-V Manager on another workstation. I was trying to do this without having to pirate anything, and it turned out to be a complete waste of time.

Hyper-V Manager and the Hyper-V Server that it can manage are basically a matched set. You can use the manager on Windows 7 to connect to Hyper-V on Server 2008 and earlier, but you can't really use Win7 or Win10 to manage 2012 R2. So I basically have to either pirate Server 2008, pirate Win8.1, or pirate Server 2016. Or I can just use ProHVM, a third-party tool from a Swedish company that seems to have been invented specifically because Hyper-V Manager is the worst.

Even with ProHVM, it’s not all champagne and roses. Accessing the console of a VM causes wonky keyboard performance. This is mildly frustrating, so I recommend using a mouse as much as possible for configuration of a VM. The only real showstopper is logging in to a Linux box with no GUI. Having only 50% of your keystrokes register makes logging into the console completely impossible because you don’t see the *** to let you know which character you are on.

My workaround for Debian VMs is to not set a root password, which forces Debian to disable root in favor of sudo, like Ubuntu. Then set a very short password for your user account (like 12345, same as the combination to my luggage) and make certain that you set up an SSH server during setup. Then you can SSH to the box, use the 'passwd' command to reset the password to something more secure, and configure SSH keys for your logins.
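In practice, the sequence looks something like this from another machine on the network (the IP address and username here are placeholders, not my actual setup):

ssh user@192.168.1.50
passwd

Once the password is something sane, you can also push an SSH key over from a Linux or Mac client so you never have to type it again:

ssh-copy-id user@192.168.1.50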

So if you find yourself in a situation where you need to do virtualization on Windows, and you are deeply invested in the idea of using 2012 R2, don’t bother with Hyper-V manager. Instead, download ProHVM, and then use ProHVM as little as possible. It’s free for non-commercial use and you can build new VMs and all that stuff that you *should* be able to use Hyper-V Manager for.

Cub Linux as a kid’s computer

One of the things that my daughter wanted for Christmas was to be able to play some of the web games she’s seen on TV. I have a strict policy about not letting anyone touch any of my computers, so I rehabilitated an old HTPC for her to use.

The PC portion was mostly incidental; her main gifts were her cool keyboard, cool mouse, awesome Peppa Pig headphones, and of course, her game subscription.

The donor PC was an old Intel Atom box with 2 GB of RAM, which basically made Windows impossible. I toyed with the idea of using Lubuntu, but then I came across Cub Linux. It's basically a lightweight version of Linux that boots straight into the Chromium browser, like an open source version of Chrome OS.

Getting the machine set up was fairly straightforward. I set it to auto-login and to go to sleep after a half hour. She knows how to turn the monitor off, and that's good enough for a four-year-old. I also installed VLC media player so she can watch cartoons that I have downloaded for her.

I almost always install Samba on Linux machines because it makes it easy to move files from Windows. The process is documented fairly well here. I just shared out the home directory like before so I could put videos in the Videos folder.
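For reference, the useful bits boil down to something like this (a rough sketch, not the exact steps from that guide; swap in your own username):

sudo apt-get install samba
sudo smbpasswd -a username

Then in /etc/samba/smb.conf, make the [homes] share writable:

[homes]
   browseable = no
   read only = no

and restart the service:

sudo service smbd restart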

One problem with kids' computers, especially for kids that are learning to use a computer while also learning to read, is that they need constant assistance. I use SSH for the low-level operating system stuff, but a lot of it is just her not yet knowing what to do when something pops up on the screen. So I decided to share the desktop so I didn't have to get up and walk over to the PC just to click OK or type in a password. One of the best tools for remote access to a Linux desktop is VNC.

VNC is a technology that I have been using off and on for years. I even used it on Windows in the NT and Win2K days before RDP basically obsoleted it. Every now and then VNC comes in super handy.

There are a number of ways to set up VNC, and a number of packages that deliver its functionality. Basically, you can run multiple X Window servers that let multiple users have graphical desktops at the same time. It can be super confusing for Windows users, so bear with me. Unix is multi-user; it's meant to be used by multiple people at the same time. These users may be sitting at one or more physical consoles, virtual consoles, or remote shells. VNC is one way to get a graphical (window that you click with a mouse) console remotely on a system. You start a VNC server on a given display x (:1, :2, :3, etc.) and then connect a VNC client to it on TCP port 5900+x (5901 for :1, 5902 for :2). Multiple users can run multiple servers and launch pretty much any number of graphical shells.
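As a quick illustration (assuming the tightvncserver package; other VNC servers behave similarly), starting a server on display :1 looks like this:

sudo apt-get install tightvncserver
vncserver :1

Then you point a VNC client at your-host-ip:5901 (most viewers also accept the shorthand your-host-ip:1).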

VNC is awesome, but a kid's computer is seriously single-user. What I need is to be able to pull up her Linux desktop on my [often] Windows desktop, without any intervention from her, and without getting up from my desk. She is still learning to use a computer, so I want to demonstrate things on her screen. Not getting up from my desk is important because she needs assistance fairly often. Also, I happen to be a lazy slug.

Fortunately, there is a tool for doing this known as X11VNC. The key difference for X11VNC is that it shares the physical console display, :0, which is the display of the user sitting at the keyboard. This is ideal because when I connect to her computer, I see what she’s seeing, and either of us can type or move the mouse.

To set up X11VNC, I first had to get the software installed from repos:
sudo apt-get install x11vnc

After you’ve installed it, you want to create a remote access password and then edit the config to start at boot. I use the same password for the remote session that I use to log into the user account. Thanks to the auto login, no one but me should ever have to type it in.
sudo x11vnc -storepasswd /root/.vnc/passwd
sudo nano /etc/init/x11vnc.conf

Then paste this into the editor:

# description "Start x11vnc on system boot"

description "x11vnc"

start on runlevel [2345]
stop on runlevel [^2345]

console log

respawn
respawn limit 20 5

exec /usr/bin/x11vnc -auth guess -forever -loop -noxdamage -repeat -rfbauth /root/.vnc/passwd -rfbport 5900 -shared


Then you can use any VNC viewer to access the desktop remotely by entering the IP of the computer. My personal favorite viewer is TightVNC.

With the remote access portion set up, I am now able to help her with her computer without getting up from mine. She has discovered that we can both type on the same computer at the same time, so a game has emerged. One of us types in a text editor and the other tries to delete what the other has written. It’s a race to either type or delete gibberish and she laughs like a maniac when we play it.

The problem with everything is central control

I have been reading postmortems on the election, and it basically came down to a failure of media and political elites to get a read on the voting public. Basically, a small number of very powerful intellectuals operated in a kind of silo of information.

All the stuff I have read and watched about the 2008 financial meltdown comes down to a failure of large banks. A small number of very powerful banks operated in a kind of silo of finance.

This country is a mess because of centralized control and centralized culture. It’s a mess because of intellectual laziness and emotional cowardice. It’s a mess because we rely on crumbling institutions to help us.

Centralizing seems natural and logical. There is an idea in economics called the economy of scale. Basically, a big operation (a firm, a factory, a project) has better purchasing power and is able to spread fixed costs over large numbers of units. In network topology, the Star Model is the simplest to manage, putting all the resources at the center. I tend to think about economics and computer networks as kind of similar.

One of the primary criticisms of the Star Network is the single point of failure. If the center of the network has any sort of problem, the whole network suffers. This is also a problem with economies of scale. A lot of electronic component manufacturing is centralized in Taiwan; in 1999, an earthquake there caused a worldwide shortage of computer memory. It seems that any time there is bad weather in New York City, flights are delayed across all of North America. In 2008, trouble with undersea fiber cables caused widespread Internet connectivity problems throughout Asia. A lack of biodiversity in potato crops contributed to the Irish Potato Famine. Centralized control is prone to failure.

This isn't just a business or a technology problem. It can also be a cultural problem. Centralizing stores of information leads to gatekeeping, where a point of distribution controls the access and dissemination of information. This may be for financial gain, in the case of television and cinema, or it may be for political gain, in the case of the White House press corps. Media outlets repeating what the White House said, and the White House using media reports to support its assertions, is how the US ended up invading Iraq under false pretenses.

The diametric opposite of the Star Network is the Mesh network, specifically the Peer-To-Peer network. These models eschew ideas of economy and control in favor of resilience and scalability. Economy of scale eliminates redundancies because they are expensive. Peer-to-peer embraces redundancies because they are resilient.

Embracing peer-to-peer from a cultural standpoint means embracing individuality and diversity. Not just in a left-wing identity politics sort of way, but in a Victorian class struggle kind of way. It means eschewing the gatekeeper-esque ideas of mono-culture in favor of cultural and social diversity. Peer-to-peer culture is messy. It's full of conflicts and rehashed arguments. It's not a "safe space" where people of similar mindsets never encounter dissent. It's a constant barrage of respectful argument that you can actually learn from.

The cultural division in this country is a failure of our core values. It's a failure of the right's anti-intellectualism, and it's a failure of the left's elitism. It's faith by many in crumbling institutions that are out of touch. It's a failure of corporate media that forces us to turn to our social networks for news, which discourages discussion and only seeks to confirm our individual biases.

I’ll be writing more about this opinion (and make no mistake, it’s just an opinion) in future posts. Hopefully it will foster some of the discussion that I am seeking.

My guide to setting up SSH keys with Putty

I have become kind of a fan of Cloud At Cost. Their one-time-fee servers and easy build process are great for spinning up test machines. I would hardly recommend running anything that I would consider "production" or mission critical on a Cloud At Cost VM, but it is a cheap, quick, and simple way to spin up boxes to play with until you are ready for more expensive/permanent hosting (like with Digital Ocean or Amazon). Spinning up a new box means securing SSH, so here is my guide.

The major problem with a hosted server of any kind is drive-by scans. There are folks out there who scan huge swaths of the Internet looking for vulnerable machines. There are two basic varieties: scanning a single host for all vulnerabilities, and scanning a large number of hosts for a specific vulnerability. A plain box should really only be running SSH, so that is the security focus of this post. There should also be a firewall running that rejects connections on all ports except the services you absolutely need.
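For a box that only needs to expose SSH, a minimal ufw setup is one way to get there (a sketch, assuming ufw is available in your distro's repos; make sure the SSH rule is in place before you enable it, or you will lock yourself out):

apt-get install ufw
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw enable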

It should be noted that your security measures don't necessarily have to be top notch; your box just has to be less convenient than the next host on the scanners' lists. It's not hard to scan a large subnet and find hosts to hammer on. Drive-by scans are a numbers game; it's all about the low-hanging fruit. With C@C, it's a question of timing. You have to get onto the box and lock it down quickly. Maybe I'm just being paranoid, but I have had boxes that I didn't log in to right after spinning them up, and I have seen very high CPU utilization on them when they aren't really running anything, which leads me to believe that the host has been compromised. Also, beware that the web-based stats can be wildly inaccurate.

This guide will only lock down SSH. If you are running a web server, this guide will not lock down the web server. If you are running Asterisk, this guide will not lock down Asterisk. All this guide will do is shore up a couple of vulnerabilities with SSH. I recommend running these steps *BEFORE* installing anything on your VM.

My use case for Cloud At Cost is something like this: there are times when I need a box that is easier to get to than hosting a box on my home network, but that doesn't really justify the monthly cost of running a server on Digital Ocean or Amazon. I spend a lot of time working all night inside a very restrictive corporate network, so it's hard to get access to my stuff at home, especially since TeamViewer is compromised. C@C is cheap and easy, which probably means it's a playground for scammers and other bad actors. This means it's a good idea to lock down your box before you do anything useful with it.

You can get started with C@C for around $35, but if you follow them closely, you can catch some of their discount deals and get a very low end developer box for around $10. I took advantage of a few of these promotions and now I have a bucket of resources at my disposal for all of my tinkering needs. Also, if your box starts to misbehave (loads of network traffic, high cpu utilization, etc.) it’s probably compromised, so just torch it and build a new one.

Getting Started

You can learn about the basics of the Cloud At Cost panel here; the info will be useful later on.

Once you have signed up with C@C, bought some resources, and fired up your Linux VM, it’s time to do some housekeeping. I prefer Debian, and it’s what I am using in this guide, but it doesn’t really matter what you choose.

As soon as the box is up, log in with SSH, using the root password given in the information button. I use putty*, because most of my time in front of a computer is spent working or gaming, so I use Windows a lot. I know it upsets a lot of folks to hear that, but hey, those folks can feel secure in knowing that their “Unix Beards” are mightier than mine.

The very first thing that I do is change the root password. Make it like 30 or more random characters. You shouldn't actually need to type it in after this point, but keep it somewhere encrypted just in case. I also comment out the non-US repo that C@C Debian machines are still pointed to in sources.list:

passwd
nano /etc/apt/sources.list

Just locate the line that begins with "deb http://non-us.debian.org" and put a # in front of it. On a C@C Debian 8 box, it should be the first line.
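If you would rather not open an editor at all, a one-liner like this should comment it out (assuming the entry looks like it does on the stock C@C image):

sed -i 's|^deb http://non-us.debian.org|#deb http://non-us.debian.org|' /etc/apt/sources.list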

With that pesky non-US entry removed, you are clear to update your packages:
apt-get update
apt-get upgrade

I also run these commands from the Nerd Vittles blog to make sure the password doesn’t revert to the Cloud At Cost root password:

sed -i '/exit 0/d' /etc/rc.local
killall plymouthd
echo killall plymouthd >> /etc/rc.local
rm -f /etc/rc3.d/S97*
echo "exit 0" >> /etc/rc.local

I don’t know if they are strictly necessary, but the dudes at Nerd Vittles recommend it, and they spend waaaay more time doing this stuff than I do, so there you have it.

After that, it’s time to install fail2ban, and then create a non-root user:

apt-get install fail2ban
adduser steve

Hopefully, in a few minutes fail2ban will be made superfluous by our additional security measures. In the meantime it will stop brute force attempts. Some of my hacker buddies change the default port for SSH to throw off driveby scans, but the restrictive corporate network I mentioned before doesn’t like arbitrary ports, so that’s a hard no in this case.
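If you want to confirm that fail2ban is actually watching SSH, you can ask it directly (the jail name may be 'ssh' or 'sshd' depending on the version your distro ships):

fail2ban-client status
fail2ban-client status ssh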

Enable Sudo for a Non-Root User

To start implementing our security measures, we will install sudo, add ‘steve’ (our non-root user) to the sudo group, and then make sure steve has the right permissions in the sudoers file:
apt-get install sudo
adduser steve sudo
nano /etc/sudoers

At this point the /etc/sudoers file should open in the Nano text editor. I know I should be using vi, but I am too busy #YOLOing to do that Unix Beard crap. 🙂

Press ‘ctrl+w’ to open the search box, and type ‘%sudo’ to find the permissions line.
Press ‘ctrl+k’ to cut the ‘%sudo ALL=(ALL:ALL) ALL’ line, and then ‘ctrl+u, ctrl+u’ (hold ctrl and press ‘u’ twice) to paste the line in twice.
Edit the second line to read ‘steve ALL=(ALL:ALL) ALL’ and press ‘ctrl+x’ to exit, and press enter to save.
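When you are done, the relevant part of /etc/sudoers should look something like this:

%sudo   ALL=(ALL:ALL) ALL
steve   ALL=(ALL:ALL) ALL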

Setting up sudo is important because we are going to disable root logins here in a minute, but first we are going to set up SSH Keys for logins and then disable clear text logins. SSH does use clear text passwords, but it passes them through an encrypted tunnel. This means that while your password isn’t likely to be sniffed, it could be guessed or brute forced. Using SSH keys means you have to have the right private key to match with a public key on the server. But before we can do any of that, we need to test the new non-root account by logging in with it.

Once you are logged in as steve, test sudo:
sudo whoami

Which should return ‘root’.

Securing SSH with Asymmetric Keys

Once the non-root account is working and sudo-ing, we can proceed to lock down SSH with public+private key pairs. I will explain how to do this with putty for Windows, but it’s actually way easier to do this with Unix.

The first step is to make sure you have puttygen.exe handy. Download it and launch it, change the bits for your keys to 4096 (in the lower right corner) then click the ‘Generate’ button.

[Screenshot: PuTTYgen key generation]
Wiggle the mouse around for a bit, and in a minute or so you will see your public key, with a key comment and blanks for your passphrase. You don’t have to change the comment, or enter a passphrase, but I recommend it. I like to change the comment to match the username and server (‘steve@stevesblog.com’ in the screenshot below), since I have lots of different keys. The passphrase keeps things safe in case your private key file falls into enemy hands.**

[Screenshot: PuTTYgen showing the generated public key, key comment, and passphrase fields]

At this point, you may be tempted to use the same passphrase for your private key as you use for your non-root user account. This is a bad idea, because your non-root password is now basically your root password. Do yourself a favor and use two completely different passwords.

Next, click 'Save private key' and save the resulting .ppk file in a safe location, but don't close the puttygen window just yet. If you use multiple computers, putty will let you re-use your private key file between Windows machines, if that's what you're into. OpenSSH on Linux, however, won't accept a puttygen .ppk file directly. (Based on that one time I tried it and it didn't work for me.) So just keep that in mind.
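That said, if you ever do want to reuse a PuTTY key on a Linux client, the command-line puttygen from the putty-tools package can supposedly convert a .ppk to OpenSSH format; something along these lines should do it (a sketch I haven't battle-tested):

sudo apt-get install putty-tools
puttygen mykey.ppk -O private-openssh -o ~/.ssh/mykey
chmod 600 ~/.ssh/mykey

after which you would connect with 'ssh -i ~/.ssh/mykey steve@yourserver'.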

Also, it’s no big deal to have multiple private/public key pairs on the same server. You can use a different pair for each client computer, which is probably safer and more convenient than using a shared key pair. If you lose access to a client machine for whatever reason, you can just delete the public key off of the server and that machine won’t be able to connect to your server.

Leave your puttygen window up and switch back to your putty/SSH window. Create a .ssh folder and then a text file inside it to store your keys:
mkdir ~/.ssh
nano ~/.ssh/authorized_keys

Paste the Public Key text in the top of the puttygen window onto a single line in the file. This will be a Very Large Line Of Text(tm) (VLLOT). The VLLOT should begin with 'ssh-rsa' and end with 'rsa-key-yyyymmdd', where yyyymmdd is the date you created the key. Sometimes the key comment (steve@stevesblog.com in the example below) is the last bit of text instead. I haven't quite nailed down why that is, presumably an order of operations thing. Anyway, be sure that the VLLOT begins with ssh-rsa, or you didn't grab all the text in the public key.
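For illustration, a complete entry looks roughly like this, all on one line (the key material is shortened here, obviously):

ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...lots more key material...Qr8= steve@stevesblog.com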

Save and close the file (‘ctrl+x’ and then ‘enter’) and then set the permissions for the file:
chmod 600 ~/.ssh/authorized_keys
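It also doesn't hurt to lock down the .ssh directory itself while you are at it:
chmod 700 ~/.ssh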

Now exit your ssh session, and reopen putty. You need to set the IP address of your server as the hostname. I prefer this to host names because DNS can’t always be trusted. Give your session a useful name.

[Screenshot: PuTTY session configuration with the server IP and a saved session name]

Under ‘Connection -> Data’ add the username for your non-root account. In this example, I named my account ‘steve’.

[Screenshot: PuTTY Connection -> Data with the auto-login username set to 'steve']

Under ‘Connection -> SSH -> Auth’ browse to the safe place you saved your private key. You pasted your public key onto the server, and you have your private key stored on your computer. You will want to keep the private key file safe because if you lose it you have to set up a new pair while logged in at the console, which is a total pain. I keep mine in Dropbox, but I keep them secured with a passphrase.**

[Screenshot: PuTTY Connection -> SSH -> Auth with the private key file selected]

Now go back to Session and save your session profile. Henceforth you can connect simply by double clicking ‘steve’s server’ under ‘Saved Sessions’.

Now it’s time to test your new key pair. Just double click ‘steve’s server’ and you should be prompted for the passphrase that you set for your private key. Once you enter it, you should be logged in to the server as user ‘steve’. If you were able to log in using your key, you are all set to move on. You are now free to close PuttyGen.

If The Server Rejects Your Key

It’s most likely that you didn’t paste the public key correctly. This is why we left the PuttyGen window open. 🙂

Log in with your non-root username and password (‘steve’ in this example) and open your ~/.ssh/authorized_keys file in nano again:
nano ~/.ssh/authorized_keys

In the PuttyGen window, make sure that you scroll to the top of the public key text. It should begin with ‘ssh-rsa’. Now click and drag down to the end of the public key text, then right click and select ‘copy’.

In the Putty window, with your authorized_keys file open in nano, delete the incomplete key and paste the complete text of the public key on a single VLLOT.

Save and exit nano, then exit your SSH session and try again.

Also make sure that you changed the permissions of the authorized_keys file:
chmod 600 ~/.ssh/authorized_keys

If your key is still being rejected, generate a new public and private key by clicking the ‘Generate’ button and starting the whole key process over again.

Disable Root and Cleartext Logins

Once your keypair is working (and you are able to log in with it), it's time to eliminate root logins and cleartext logins. Some folks will tell you that root logins are fine with SSH because passwords don't get sent in the clear. While that's true, 'root' is still the one username that is guaranteed to be on every Unix-based machine, so if you are going to brute force an account, this is the one to focus your efforts on. Disabling root logins and clear text logins is all done in the sshd_config file:
sudo nano /etc/ssh/sshd_config

Press ‘ctrl+w’ and search for the word ‘root’. You are looking for this entry:
# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes

Change 'PermitRootLogin yes' to 'PermitRootLogin no'. (Uncomment the line first if there is a # in front of it.)

Then press ‘ctrl+w’ and search for the words ‘clear text’. You are looking for this entry:
# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes

Change '#PasswordAuthentication yes' to 'PasswordAuthentication no'. (Uncomment it and change 'yes' to 'no'.)

Once these changes are made, DO NOT LOG OFF OF YOUR SSH SESSION. Once these changes are implemented, it will be hard to log back in to undo anything if you make a mistake. You should have tested and succeeded with your ssh-key based login because we are about to restart the ssh daemon and prevent clear text logins:
sudo systemctl restart ssh
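If you are worried that a typo in sshd_config will lock you out, sshd has a test mode that checks the file for syntax errors without touching the running daemon; no output means the config parsed cleanly:
sudo /usr/sbin/sshd -t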

To test ssh logins, connect to the IP of your server with putty using the ‘Default Settings’ profile. Your login attempt should fail because only people with private keys are allowed to the party:

[Screenshot: PuTTY login attempt rejected without a key]

At this point you are far from being hack-proof, but you are a bit more locked down than you were before, and there are always more convenient targets out there 🙂

Hardening web servers is another story, which really isn’t my bag to be honest. There’s a reason that I host my blogs with Google or WordPress 🙂

* Protip: put your putty.exe file in ‘c:\windows\system32’ so you can run putty from the command line or the run line. If you want to be a real hard rock, rename putty.exe to ssh.exe. Did you know putty accepts commandline args? It does, so you can do awesome Unixy shit from the command line like type ‘ssh steve@testbox.stevesblog.com’ to connect to a remote host. It still pops up your connection in the putty window, but it keeps your hands on the keyboard. 🙂

** Another Protip: not setting a passphrase is handy for automating ssh connections, especially if you want to move files back and forth with ‘scp’ or mess with tunneling via local and remote ports. I haven’t found a decent scp command line app for Windows, other than the Unix utils in CygWin.

My .screenrc

I am a huge fan of screen. It’s indispensable for working on a Unix host via SSH. It lets me have multiple terminals (screens) up at a time. There are dudes that use screen to split their terminals into multiple views, like a tiling window manager, but for the command line.

My needs are not nearly as sophisticated, since I mostly use putty to connect to Linux servers from Windows.

I use 4 special keys:
F9 to detach from the screen session. This leaves your session running in the background. I mostly use this to idle in IRC. Once detached, you can view your active screen sessions by typing:
screen -ls

Which will return something like this:

user@localhost:~$ screen -ls
There is a screen on:
2030.pts-0.localhost (05/25/2016 06:45:51 PM) (Detached)
1 Socket in /var/run/screen/S-user.

To reconnect to a detached screen session, type
screen -r 2030.pts-0.localhost

If the session is in use elsewhere, use the -D and -r options together:
screen -D -r 2030.pts-0.localhost

This will disconnect the screen session that’s in use, log off the SSH session that initiated it, and then reattach the active SSH session to the screen session.
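If you don't feel like copying session names around, there is also a lazier shortcut that reattaches your session, detaching it elsewhere and creating it if it doesn't exist (check 'man screen' for the exact semantics on your version):
screen -dRR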

F10 to open a new terminal in screen.

This option lets you have multiple terminals in the same SSH session. This is handy for having a full screen app (like irssi) in one term, and one or more additional terms for running other commands. To close a terminal, type
exit

F11 and F12 to switch terminals
When you have multiple terminals open, you can navigate them with the F11 key to select the terminal to the left (previous) and the F12 key to select the terminal to the right (next).

The File
To use this file, simply paste the contents below into a file called .screenrc in your home directory. So here it is, the .screenrc that I have been using for years:


startup_message off

# Window list at the bottom.
# I got the long line of vars from https://bbs.archlinux.org/viewtopic.php?pid=423481#p423481
hardstatus alwayslastline
hardstatus string "%{.kW}%-w%{.W}%n %t%{-}%{=b kw}%?%+w%? %=%c %d/%m/%Y" #B&W & date&time

# From Stephen Shirley
# Don't block command output if the terminal stops responding
# (like if the ssh connection times out for example).
nonblock on

# Allow editors etc. to restore display on exit
# rather than leaving existing text in place
altscreen on

# bind F9 to detach screen session (to background)
bindkey -k k9 detach

# bind F10 to create a new screen
bindkey -k k; screen

# Bind F11 and F12 (NOT F1 and F2) to previous and next screen window
bindkey -k F1 prev
bindkey -k F2 next

The FBI asking Apple to Backdoor an iPhone is a Rubicon for Privacy

The US District Court of California has asked Apple to backdoor a locked iPhone for the FBI. This isn't a request to unlock a single phone; it is a request for Apple to build a tool that lets the FBI circumvent the security on the iPhone… as in basically all iPhones, which will then set a precedent for all smartphones.

“Make no mistake: This is unprecedented, and the situation was deliberately engineered by the FBI and Department of Justice to force a showdown that could define the limits of our civil rights for generations to come. This is an issue with far-reaching implications well beyond a single phone, a single case, or even Apple itself.”

In case this is your first time reading about why government mandated back doors are a universally bad idea, here is the quick list:

  1. A digital backdoor, much like a real back door, can be used by anyone, not just those authorized to access it. Back doors make excellent targets for criminals, spies, and other bad actors. These things get discovered, and then they get misused. If you are a criminal, and you are looking to steal data, knowing that there is a backdoor in a system lets you focus your cracking efforts.
  2. Encryption is only good when it's secure. Insecure crypto is worse than useless because it creates a false sense of safety and control. This is why Digital Rights Management technologies never work. No matter how you slice it, a purpose-built entry point is a vulnerability. Once you introduce a back door, or a "Golden Key," it invalidates the security (and value) of the entire system (see point 1). An insecure phone just isn't worth as much as a secure one.
  3. The bad guys you are trying to catch are bad guys. They don’t give a single runny shit about government regulations. This means that the bad guys who use crypto will simply switch to new illegal tools that don’t have back doors. When the SOPA bill threatened to block DNS for sites accused of piracy, tools immediately began to surface that would defeat the blocks, before the bill was even voted on.
  4. In the case of criminals, government mandated back doors would create a market for secure tools. These tools wouldn’t be Made In America like the *iPhone. Back doors would devalue the iPhone (see point 3) and add value to technologies that aren’t made in the US. Meanwhile, Federal Law Enforcement still couldn’t access phones that belong to terrorists. All the damage done by this would be collateral because the only people affected by this mandate would be innocent bystanders.

There are *tons* of other reasons why back doors are bad, but those are the top 4. Cory Doctorow sums up the argument against back doors fairly succinctly in an article in The Guardian:

That’s really the argument in a nutshell. Oh, we can talk about whether the danger is as grave as the law enforcement people say it is, point out that only a tiny number of criminal investigations run up against cryptography, and when they do, these investigations always find another way to proceed. We can talk about the fact that a ban in the US or UK wouldn’t stop the “bad guys” from getting perfect crypto from one of the nations that would be able to profit (while US and UK business suffered) by selling these useful tools to all comers. But that’s missing the point: even if every crook was using crypto with perfect operational security, the proposal to back-door everything would still be madness.

The Law Enforcement community declares war on crypto in one form or another once or twice a decade. Every time they do, we as digital citizens need to stand up and say “NO!” They will keep trying, and we have to keep fighting, every time. It really is that important.

*The iPhone isn’t made in America either, but Apple does employ Americans around the country. Russian mobsters or Romanian cyber-criminals presumably don’t employ many Americans.

The Drama With My New Laptop: the High Cost of Saving $350 (part 3)

This post contains a lot of profanity. Like a shitload.

When we last left our heroes, I had finally managed to encrypt my SSD, and after running clonezilla probably a hundred times to back up and restore the drive after fucking it up, I decided to try and simplify the backup process.

Part of the hassle was the fact that I had removed the optical drive and installed the original mechanical drive into that bay. This meant booting from an external DVD drive, or from a USB stick in order to do the backups. I was also using GParted a lot, which meant a second cd-rom disc or thumb drive. Thankfully I was using an i-Odd external hard drive to do this, but it still meant plugging something in so that I could copy files to an internal hard drive. Backing up has to be convenient or backups simply won’t happen.

My first thought was to install linux on an external drive. This would give me the option of using the drive on different computers. Maybe it’s possible, but I never got it to go. I wiped an external drive a couple of times. I used to use Sardu Linux, but it was not that reliable, and the project seldom kept pace with new versions of live CDs. Also the primary developer started putting spammy spyware in the installer at one point.

After a lot of formatting and re-partitioning, this time on my secondary backup drive, I decided to go with a simpler approach and just put the Clonezilla live install on a small partition on the backup drive. This hadn't worked on my USB external drive, but I wanted to try it with the internal, based on this document. Basically I created an 800 MB FAT32 partition and extracted the zip to that partition. I used the rest of the disk for a large NTFS partition. I skipped all the GRUB stuff, and I just use the alternate boot menu to boot from the other drive when I want to do my backups. I then set the FAT32 partition to be hidden so it won't show up in Windows. It would have been great to have a small Linux install for times when I am in a hurry and I don't want to decrypt my Windows drive, but this will do fine for now.

holy shit! i got it working!

The Drama With My New Laptop: the High Cost of Saving $350 (part 2)

This post contains a lot of profanity. Like a shitload.

When we last left our heroes, I had finally gotten Windows working on an SSD after trying a bunch of things, and then basically giving up and then reinstalling everything. Now that the SSD was working, the time had come to encrypt the SSD.

I am a fan of block crypto. I encrypt lots of things, not because I am worried about the government seizing my gear (well, not *that* worried) but because gadgets get lost and stolen. I lost my mobile phone a couple of years ago, and if I hadn’t encrypted it, it would have been nerve wracking worrying about what someone might do with the data that’s on it. So rather than worry about what is or isn’t protected, I just encrypt the whole drive. Full drive encryption is important because Physical Access is Total Access. I have rescued untold amounts of data for others from their crashed or otherwise misbehaving hard drives by removing them and plugging them into a different computer. I don’t normally encrypt the drives on my gaming rigs because if the FBI or whomever needs my Goat Simulator game saves that badly, they are welcome to them. This was a special case because it’s a gaming laptop. My rule is that if it leaves the house, it has to be encrypted.

Modern computers use UEFI to "securely" boot the operating system. I guess this is a security measure to prevent someone from booting your laptop from a CD and stealing all your shit, but since this laptop doesn't have a Trusted Platform Module, Secure Boot doesn't protect you from someone plugging your drive into another computer and stealing all your shit. I think it's more trouble than it's worth. If you have to ask Windows for permission to boot off a CD, it's just going to stop the user from doing what he or she wants; it will not stop Proper Villainy(tm).

My favorite disk encryption tool, TrueCrypt, vanished under mysterious circumstances. I won’t get into the conspiracy theories behind its demise, but I have decided to keep encrypting my drive, and that leads me to the next chapter of this saga, where I get punished for using the basic version of Windows.

Part 2 – Solid State Drama’s Revenge

I prefer to run Windows on laptops because of all the bullshit proprietary hardware that goes into them. I am probably showing my age here, but there was a time when hardware support in Linux was spotty. I have swapped out Intel WiFi cards for Atheros cards in laptops to make sure I can do packet injection, but I now have a dedicated Kali laptop for that sort of thing. For my daily driver/EDC laptop, life is just easier with Windows. I know that fucking with Linux makes a lot of dudes feel superior, and they probably are. For me, I prefer to use Linux for specific tasks (i.e. Kali and Clonezilla) or for servers. With that being said, I am not such a Windows fanboy that I care about the differences between Windows versions. My personal laptop won't be joining an Active Directory domain, so I just go with whatever version came with my laptop, which I replaced with whatever version MS let me download when I migrated to the SSD.

This path of least resistance philosophy led me to entertain thoughts of using BitLocker to encrypt my hard drive, only I am not running Windows 8.1 Professional or Enterprise, so I guess that BitLocker isn’t included with my version. There is no fucking way that I’m forking over $150 for a new version of Windows after working so hard to save $200 on the RAM and SSD. No TrueCrypt? Fine. No BitLocker? Whatever. I don’t give a fuck. I’ll just use a fork of TrueCrypt called VeraCrypt. Well, VeraCrypt’s boot loader doesn’t play nicely with UEFI and GPT partitions. It only works on MBR disks. feelsbadman.jpg

So after days of messing with various tools to get Windows working on my SSD, and then enduring the hassle of setting up Windows all over again, and waiting on my Steam library to download again, I am faced with yet another hard disk challenge: converting my GPT partitioned drive to MBR without deleting anything. Honestly, now that Steam is in the Debian repos, I am sorely tempted to make my next gaming rig run Linux.

I tried a bunch of things and ended up using the pirated AOMEI tool to do the conversion, and it worked, sort of. The drive booted, and VeraCrypt didn't bitch about GPT anymore. However, when I went to back up the drive one last time before encrypting it, I discovered that AOMEI had half-assed the conversion. According to Clonezilla, my drive had some remnant of the GPT boot stuff left on it that I had to fix with the Linux version of fdisk for GPT, a.k.a. gdisk. I have screwed up plenty of working partitions with fdisk, so I was nervous to say the least. Also, the magical -z option that I needed was buried in the "expert" menu section (AKA Here There Be Dragons!), which added to the danger. Clonezilla said to run gdisk -z, but -z isn't a valid option from the command line.

I read this tutorial to figure out what had to be done, and in the end I just closed my eyes, clenched up my butt cheeks, and hit enter. I got it working, and thankfully I had already made plenty of backups, just in case. Speaking of backups, I should find a way to make running Clonezilla easier…

Update 8/16 – A few months ago, I tried migrating to Win10, but it was a shitshow. I just pirated Win10 Pro (thanks to KMSPico portable, JFGI) and used BitLocker without a TPM. This was less stressful since I set up easy bare metal backups in Part 3.

Stay tuned for the thrilling conclusion in Part 3 – Making Backups Easy to do is Hard 🙂