David Ramsden
30Jul/16

Creating a Highly Interactive Honeypot With HonSSH

HonSSH is essentially an SSH proxy, acting as a man-in-the-middle. It sits between the attacker and a honeypot and proxies the SSH connections. By doing this it can log all interactions, spoof (rewrite) login passwords and even capture files the attacker downloads onto the honeypot for later analysis.

Below is my topology:

[Diagram: HonSSH topology]

Configuring the Honeypot Server

For the honeypot server (the server attackers will log in to), I'm using Ubuntu 14.04, although an older, unsupported Linux distribution might yield more interesting results. I'm using QEMU as the hypervisor and tried to configure the honeypot to be as "real" as possible: an emulated Intel Gigabit Ethernet NIC with a valid Intel MAC address, an emulated SATA adapter and so on.

I installed the following on the honeypot:

  • OpenSSH server (essential!).
  • Apache.
  • PostgreSQL.
  • ProFTPd.
  • Build tools (GCC, make etc).

I set a password of 123456 for the system-created "ftp" and "postgres" users and ensured their shells were valid. I created two new users called "setup" and "test", also with a password of 123456, and gave the "setup" user root privileges via sudo. Finally, I enabled the root account with a password of 123456.

The honeypot server needs to route via the HonSSH server and not via the firewall, because HonSSH's advanced networking features will be used. These allow the HonSSH server to automatically source NAT (SNAT) the SSH connection to the honeypot server, so if the attacker runs netstat they see their own public IP address rather than the IP address of the HonSSH server, making the honeypot less suspicious.

After everything was configured I took a snapshot of the honeypot server so it could easily be restored to its non-pwned state.

Configuring the HonSSH Server

For the HonSSH server I'm using Debian. This time fully updated, supported, patched etc.

Clone HonSSH from git:
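Assuming the project's usual GitHub location:

    git clone https://github.com/tnich/honssh.git
    cd honssh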

Look at the requirements file and install the packages listed.
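I don't have the exact package list to hand, but with pip it would be along the lines of:

    sudo pip install -r requirements.txt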

Copy the default honssh and users cfg files:
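Assuming the repository ships .default templates alongside the code:

    cp honssh.cfg.default honssh.cfg
    cp users.cfg.default users.cfg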

Edit the users.cfg file. This maps the usernames and passwords sent by the attacker to HonSSH on to the honeypot server. There are two modes: fixed, where you supply a list of valid passwords that HonSSH will accept, and random, where you specify a chance that any password will be accepted. You also define the users and the "real password" that HonSSH will send to the honeypot server. My users.cfg looks something like:
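A reconstruction based on the description below; check users.cfg.default for the exact key names your HonSSH version expects:

    [root]
    fake_passwords = jiamima, wubac, toor
    real_password = 123456

    [setup]
    random_chance = 25
    real_password = 123456

    [test]
    random_chance = 25
    real_password = 123456

    [postgres]
    random_chance = 25
    real_password = 123456

    [ftp]
    random_chance = 25
    real_password = 123456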

In this scenario, if an attacker tries to log in as root, only the passwords jiamima, wubac and toor will be accepted. HonSSH will in turn log in to the honeypot server as root using the real_password defined. For the other users defined (setup, test, postgres, ftp), HonSSH will accept any password, but only with a 25% chance of success. When successful, HonSSH will log in to the honeypot server as that user using the real_password defined.

Note that random mode can confuse the attacker once they're logged in and raise suspicion: if the attacker tries sudo, the password they logged in with will not be the same as the password actually configured on the honeypot server.

For a list of popular usernames and password combinations used in SSH scanning, Dragon Research Group produces a handy SSH Username and Password Tag Cloud.

Next edit honssh.cfg.

Under the [honeypot] section:

  • ssh_addr: Set this to the IP address of the NIC on the HonSSH server connected to the "outside" (192.168.0.1).
  • ssh_port: Set this to the port that the HonSSH daemon should listen on for incoming SSH connections (2222).
  • client_addr: Set this to the IP address of the NIC on the HonSSH server connected to the "inside" (192.168.1.254).

Under the [honeypot-static] section:

  • sensor_name: Set this to something meaningful, for example honssh-honeypot1.
  • honey_ip: Set this to the IP address of the honeypot server (192.168.1.193).

Under the [advNet] section:

  • enabled: Set this to true to enable the SNAT feature described previously.

Under the [spoof] section:

  • enabled: Set this to true.

Under the [download] section:

  • passive: Set this to true so HonSSH locally captures all files uploaded to the honeypot server via SFTP/SCP.
  • active: Set this to true so HonSSH locally captures all files downloaded to the honeypot server via wget.

Enabling email notifications under the [output-email] section is also useful to keep tabs on when a login has been successful and what was done.
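Putting all of that together, the relevant parts of honssh.cfg end up looking something like this (key names as per the shipped default config; your version may differ slightly):

    [honeypot]
    ssh_addr = 192.168.0.1
    ssh_port = 2222
    client_addr = 192.168.1.254

    [honeypot-static]
    sensor_name = honssh-honeypot1
    honey_ip = 192.168.1.193

    [advNet]
    enabled = true

    [spoof]
    enabled = true

    [download]
    passive = true
    active = true

    [output-email]
    enabled = true
    # SMTP server and from/to addresses go here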

Since the HonSSH server will also be acting as a router for the honeypot server, enable IP forwarding:
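The usual way:

    sysctl -w net.ipv4.ip_forward=1

To make it persistent, also set net.ipv4.ip_forward=1 in /etc/sysctl.conf.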

Secure the HonSSH server with suitable firewall rules using iptables:
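My exact ruleset isn't reproduced here, but a minimal sketch looks like this (interface names and the management source address are assumptions):

    # Drop everything inbound by default
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # The HonSSH listener
    iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
    # Management SSH, from a trusted host only
    iptables -A INPUT -p tcp -s 192.168.0.10 --dport 22 -j ACCEPT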

It's a good idea to implement some basic filtering from the honeypot server. When the honeypot server becomes compromised it's highly likely it'll be used to scan other systems, take part in DDoS attacks etc. The below rules will implement some basic rate limiting as well as filtering egress access to SMTP and SSH:
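Again a sketch, assuming eth1 is the inside (honeypot-facing) interface:

    # Allow return traffic
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Block SMTP and SSH egress from the honeypot
    iptables -A FORWARD -i eth1 -p tcp --dport 25 -j DROP
    iptables -A FORWARD -i eth1 -p tcp --dport 22 -j DROP
    # Rate limit new outbound connections, then drop the excess
    iptables -A FORWARD -i eth1 -m state --state NEW -m limit --limit 10/second --limit-burst 20 -j ACCEPT
    iptables -A FORWARD -i eth1 -m state --state NEW -j DROP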

Save the iptables rules:
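On Debian, with the iptables-persistent package installed:

    iptables-save > /etc/iptables/rules.v4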

Start the HonSSH daemon:
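HonSSH ships a control script for this; the invocation is from memory, so check the repository:

    ./honsshctrl.sh START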

Finally, test connectivity and ensure that everything works as expected. Remember to forward TCP port 22 from the Internet to the HonSSH server on TCP port 2222.

Other Notes

As mentioned previously, I'm running the honeypot server under QEMU. On the host I have a cronjob that restores the honeypot server to a default state every few hours. The script looks like this:
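A sketch of the idea, assuming the guest is managed through libvirt and has a snapshot named "clean" (both names are placeholders):

    #!/bin/sh
    # Revert the honeypot guest to its known-good snapshot.
    GUEST="honeypot1"
    SNAPSHOT="clean"

    virsh snapshot-revert "$GUEST" "$SNAPSHOT" --running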

HonSSH can run scripts on certain events, so it may be possible to have HonSSH notify the host to do this automatically after an attacker has closed an SSH session (maybe via an email that's picked up directly on the host and triggers a snapshot revert?).

I'm also running a packet capture on the HonSSH server for better analysis of compromises:
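Something along these lines, where eth1 is the inside interface (the capture filter and ring buffer size match the description below):

    screen -dmS hpcap tshark -i eth1 \
        -f "host 192.168.1.193 and not port 22" \
        -b filesize:20480 -w /var/log/honeypot/capture.pcap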

This runs tshark in screen (detaching the session), capturing traffic to and from the honeypot server (192.168.1.193) but not capturing the SSH traffic itself; HonSSH is already capturing the SSH sessions, which can be replayed. A new capture file is created when the current capture reaches 20MB.

Finally

Be careful. You're allowing a machine connected to the Internet to become compromised. When (not if) this happens, it's likely the attacker will:

  1. Attempt to root the server.
  2. Start scanning for other vulnerable devices.
  3. Add the server to a botnet.
  4. Start sending out spam.
  5. All of the above.

Use a honeypot to learn more about attacks and attackers, but always try to avoid adding to the problem.

Analyse any captured files in a sandboxed, offline, environment.

Don't run a honeypot on any public IP addresses/space you care about.

Share your experiences, captures, files and sessions! I'm sure the guys over at SANS ISC would be interested.

30Jul/16

A Guide to Using Let’s Encrypt


Up until a few moments ago, I was using CAcert, a free service offering SSL/TLS certificates, for all my certificate needs. The only issue with CAcert is that their root certificate is not included in mainstream Operating Systems or browsers, meaning users will get a certificate error unless they choose to install the root certificate themselves.

But now Let's Encrypt is on the scene: a free, open and automated certificate authority that is trusted and should work right off the bat in most modern Operating Systems and browsers.

And in the modern world encryption (and trust) is important and needed.

Here's a quick guide to using Let's Encrypt (LE). I'm using Debian with Apache, Dovecot and Postfix.

First download the official LE client from GitHub by cloning the repository. You'll want to run the LE client as a user who can gain root via sudo:
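At the time of writing the client lives in the letsencrypt repository; the first run of letsencrypt-auto bootstraps everything:

    git clone https://github.com/letsencrypt/letsencrypt
    cd letsencrypt
    ./letsencrypt-auto --help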

The LE client will build itself, pulling down any required dependencies, and configure itself. The build is self-contained in $HOME/.local/share/letsencrypt.

If you're running grsecurity and get a Segmentation Fault, you need to use paxctl to fix the Python binary. Whenever letsencrypt-auto is run it'll self-update, so you may need to run paxctl often:
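For example (the path to the bundled Python binary is an assumption; find yours under $HOME/.local/share/letsencrypt):

    # -c creates the PaX program header, -m disables MPROTECT
    paxctl -cm $HOME/.local/share/letsencrypt/bin/python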

Now the LE client is installed, you're ready to start creating certificates. Normally when you purchase a certificate there are (or should be) several background checks that the issuer performs, and LE is no different, except it's all automated using something called ACME (Automatic Certificate Management Environment). This is essentially a two-factor authentication mechanism. ACME sends an HTTP request to the Common Name (CN) and, if used, the Subject Alternative Names (SANs) in the certificate request, asking for a unique file with unique content. It's two-factor because a) you need access to DNS to configure the records for the CN and SANs in the certificate request and b) you need access to the machine those DNS records point at in order to serve the ACME challenge/response files. This stops people from requesting certificates for things they don't own.

But here's the problem. This works great when requesting certificates for websites because you've got a web server running. Not so great for a mail server which may not be running a web server on the same box. If you're in this situation you may have to temporarily stand up a web server on the mail server for ACME. But if your mail server is on the same box as your web server, you can use the following trick (specific to Apache but can be adapted easily for nginx).

You've probably got a default vhost. One that specifies the ServerName as _default_. This matches any other request where no vhost is defined. You've probably got this to redirect to your main site. For example:
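Something like this (the hostname is a placeholder):

    <VirtualHost _default_:80>
        Redirect permanent / http://www.domain.com/
    </VirtualHost>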

The problem, if you're trying to request a certificate for a mail server, is that it won't have a virtual server configured in your web server. Therefore when ACME makes an HTTP request to http://smtp.domain.com/.well-known/acme-challenge, the verification is going to fail.

If you're running Apache one workaround is to configure an Alias outside of any virtual hosts so it applies everywhere. Create /etc/apache/conf.d/letsencrypt-acme containing:
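A reconstruction along these lines:

    Alias /.well-known/acme-challenge /var/www/.well-known/acme-challenge
    <Directory /var/www/.well-known/acme-challenge>
        Require all granted
    </Directory>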

And reload Apache.

Create a file in /var/www/.well-known/acme-challenge called test.txt with some text in it. Now when you visit http://smtp.domain.com/.well-known/acme-challenge/test.txt you shouldn't get a 404 Not Found. Also note that if you're using the Redirect directive in an existing virtual host (e.g. to redirect http://domain.com to http://www.domain.com), you should be using the RedirectMatch directive like so:
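One form that works, preserving the request path in the redirect:

    RedirectMatch permanent ^/(.*)$ http://www.domain.com/$1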

ACME will follow redirects, and the above ensures that any request to http://domain.com/.well-known/acme-challenge is redirected to http://www.domain.com/.well-known/acme-challenge; just using the standard Redirect directive will break things.

Why do this, though? Because it makes the following a lot easier.

As mentioned each CN or SAN has to be validated. By using the trick above you only need to tell the LE client one web root to use. Here's the command that will request a certificate for www.domain.com with webmail.domain.com as a SAN:
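Using the webroot plugin, with the paths and domains from the examples above:

    ./letsencrypt-auto certonly --webroot -w /var/www \
        -d www.domain.com -d webmail.domain.com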

The ACME challenge/response files get written (temporarily) to /var/www/.well-known/acme-challenge. ACME sends HTTP requests to http://www.domain.com/.well-known/acme-challenge and http://webmail.domain.com/.well-known/acme-challenge, and both get served from /var/www/.well-known/acme-challenge.

If everything worked you should now have a certificate and private key in /etc/letsencrypt/live/www.domain.com, as well as a full chain certificate.

When it comes to renewing certificates, simply run:
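The client records the details of every certificate it has issued, so one command covers them all:

    ./letsencrypt-auto renew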

Now there's no excuse to ever use self-signed certificates!

26Jul/16

Reviving an Acer Aspire One ZG5 Netbook

I was given an Acer Aspire One ZG5 (A110) and asked to try to update it. There were a few problems with it. Firstly, it was running Ubuntu but an upgrade to 12.04 (Precise Pangolin) had broken and wasn't easily recoverable. Secondly, the battery appeared to be dead and wouldn't charge. I also found that a BIOS password ("user" and "supervisor") had been set but the password wasn't known.

A Modern OS

When the Aspire One first came out there was a wide range of Operating Systems to choose from. It was capable of running Windows XP, various Linux flavours, FreeBSD and even OS X. A lot of customised Linux distributions started to appear, designed specifically for netbooks. However, fast forward a few years and Windows XP is dead and a lot of the Linux distributions for netbooks are no longer actively developed and have fallen behind the times.

Wikipedia has a page dedicated to comparisons of netbook-orientated Linux distributions; most are no longer actively developed, apart from a few. First I tried Lubuntu but, after it installed successfully, found it would get stuck booting and I didn't have the inclination to troubleshoot it.

Next I tried Manjaro Netbook Edition, a community developed flavour of Manjaro Linux. This can be downloaded here. Specifically I downloaded the latest and greatest 32-bit version (direct link to ISO). Since the Aspire One doesn't have a CD/DVD drive, I used UNetBootin to create a bootable USB thumbdrive from the ISO, booted, installed and rebooted successfully.

After logging in for the first time I was prompted to install the linux-netbook-manjaro package and palemoon-atom package. The former is a Linux kernel optimised for netbooks and the latter is an optimised version of the Palemoon browser.

I found that the package manager was extremely slow to download anything. This turned out to be because the mirror list had selected a location in South Africa first. The Manjaro wiki has some handy pacman tips to resolve this.

I was then able to perform an upgrade to the latest Manjaro release, 16.06.1 (Daniella), released 11th June 2016. So finally the Aspire One had a modern and fully functional OS, including sound, wireless, webcam etc. I also installed the Flash player plugin for Palemoon (although testing BBC iPlayer is slow with sound and video out of sync), AbiWord and Skype.

Battery and BIOS

As mentioned, the other issue was the battery: it wouldn't charge. This was either because the battery was dead or because there was something wrong with the charging circuitry. One clue was that after I had installed Manjaro Linux, the battery was reported as "unknown". Maybe clearing the BIOS settings and/or re-flashing the BIOS would help.

As I pressed F2 to enter the BIOS setup I was prompted for a user password. Unfortunately no one knew what this was. My first thought was to remove the CMOS battery but, after a quick Google, taking the Aspire One apart looked like a little too much trouble.

Introducing Hiren's Boot CD. I downloaded the ISO and again used UNetBootin to create a bootable USB thumbdrive. However, after booting from it I found the boot menu was broken. To fix this I had to copy the isolinux.cfg file found inside the HBCD directory on the USB thumbdrive to the root of the USB thumbdrive, replacing the syslinux.cfg file. In one of the menus is a bunch of BIOS/CMOS tools and one of those tools can dump the plaintext strings found in the CMOS. Here, I was able to find the "user" and "supervisor" BIOS passwords.

Now I had access to the BIOS, I tried loading the default settings. Unfortunately this didn't resolve the battery issue, so next I tried flashing the BIOS. Enter the SNID (found on the underside of the Aspire One) into Acer's download and support site to get the latest BIOS download. The problem here is that the BIOS update utility is geared towards Windows users.

Acer do have a help page covering updating the BIOS on the A110 or A115 models, which involves copying some files to a FAT32-formatted USB thumbdrive, holding Fn+Esc and powering on the netbook, which should initiate the BIOS upgrade. But I found this didn't work (all the right lights flashed but nothing actually happened).

The easiest (and probably safest) way to update the BIOS is to again use UNetBootin to create a bootable USB thumbdrive of FreeDOS. After that, copy the DOS folder from the Acer BIOS update download to the root of the USB thumbdrive and boot the netbook into FreeDOS. Change to the C: drive (this will be the USB thumbdrive), change directory into the DOS folder and run the batch file to start the BIOS upgrade.

Success. BIOS upgraded and as a bonus, the battery started charging and did hold charge too.

29Mar/16

Disabling WordPress XML-RPC and banning offenders with fail2ban

This isn't something new; SANS ISC reported on it two years ago. The bad guys love anything that can be used in a reflection DoS and the WordPress XML-RPC functionality is a prime candidate. There are various ways to disable it, through WordPress plugins for example, or by hacking away at code. All of these are fine if you're in control of what gets installed on the web server. In a shared hosting environment you've got to rely on your users.

If you're running Apache, you can disable XML-RPC globally and simply with the following:
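A sketch using Apache 2.4 syntax; the address range shown for Jetpack (Automattic) is an assumption you should verify:

    <FilesMatch "^xmlrpc\.php$">
        # Allow Automattic (Jetpack) servers, serve a 403 to everyone else
        Require ip 192.0.64.0/18
    </FilesMatch>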

The configuration should be placed as part of the global Apache configuration. When any file called xmlrpc.php is requested, on any vhost, from an IP address not listed on the Require ip line, a 403 Forbidden error will be served instead. This configuration should ensure that WordPress plugins like Jetpack continue to work.

I've seen a few examples where even after doing this the bad guys still continuously request xmlrpc.php even though they're being served a 403 error. To further protect the web server fail2ban can be deployed.

Firstly create a filter definition:
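A reconstruction, matching 403 responses for xmlrpc.php in the Apache access log (the file name and regex are assumptions):

    # /etc/fail2ban/filter.d/wordpress-xmlrpc.conf
    [Definition]
    failregex = ^<HOST> .* "(GET|POST) /xmlrpc\.php.*" 403
    ignoreregex =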

Then create the jail:
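And in jail.local, something like the following; maxretry matches the three requests mentioned below, the other values are illustrative:

    [wordpress-xmlrpc]
    enabled  = true
    port     = http,https
    filter   = wordpress-xmlrpc
    logpath  = /var/log/apache2/*access.log
    maxretry = 3
    findtime = 600
    bantime  = 86400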

Now when someone requests xmlrpc.php 3 times within the defined findtime their IP address will be blocked.

21Mar/16

Banning repeat offenders with fail2ban

More and more I see fail2ban banning the same hosts repeatedly. One way to tackle this could be to increase the ban time but you could also have fail2ban monitor itself to find "repeat offenders" and then ban them for an extended period of time.

Firstly, create a filter definition:
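A reconstruction: it matches Unban lines in the fail2ban log, with a negative lookahead so the repeat-offender jail's own entries are ignored:

    # /etc/fail2ban/filter.d/repeat-offender.conf
    [Definition]
    failregex = fail2ban\.actions.*\[(?!repeat-offender\])[^\]]+\] Unban <HOST>
    ignoreregex =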

This will be used against the fail2ban log and will find any hosts that have been unbanned. We don't want to monitor hosts that have been banned because, er, they're already banned. We also want to ignore any log entries that are generated by the jail itself.

Next edit jail.local to add a new jail:
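Something like this; the findtime and bantime match the figures described below:

    [repeat-offender]
    enabled   = true
    filter    = repeat-offender
    logpath   = /var/log/fail2ban.log
    banaction = iptables-allports
    maxretry  = 3
    findtime  = 18000
    bantime   = 172800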

This jail will monitor the /var/log/fail2ban.log file and use the repeat-offender filter defined earlier. If 3 unbans are seen within 5 hours, the host will be banned for 48 hours. You could adjust the banaction to use the route action, which may give some performance benefits on a very busy server.

19May/15

loadbalancer.org – Linux feedback agent

I've been working with some loadbalancer.org appliances recently, load balancing traffic over MySQL and Apache servers running Linux. The load balancer supports a feedback agent where it can query the real server to gauge how utilised it is based on, for example, CPU load and then distribute the request to the real server that should perform the best.

Over on the loadbalancer.org blog is an article about the feedback agent and how to implement it for Linux servers. The shell script suggested is:
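I haven't preserved the original, but it was essentially a one-shot vmstat reading reporting the 1-second CPU idle percentage:

    #!/bin/bash
    # Report the 1-second CPU idle percentage to the load balancer.
    LOAD=$(vmstat 1 2 | tail -1 | awk '{print $15}')
    echo "$LOAD%"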

Although this does give you a 1-second CPU average, it's too instantaneous and doesn't give much headroom: if the CPU spiked very briefly and then returned to normal, the load balancer wouldn't know. Indeed, watching the real server weighting change on the load balancer versus what top is reporting confirms this. The weighting on a real server can jump up and down drastically.

A better feedback agent script is:
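A sketch of the averaging version; NUM_CHECKS 1-second samples are taken and the mean reported:

    #!/bin/bash
    # Average the CPU idle percentage over NUM_CHECKS 1-second samples.
    NUM_CHECKS=3
    TOTAL=0

    for i in $(seq 1 $NUM_CHECKS); do
        IDLE=$(vmstat 1 2 | tail -1 | awk '{print $15}')
        TOTAL=$((TOTAL + IDLE))
    done

    echo "$((TOTAL / NUM_CHECKS))%"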

This takes a 3-second average reading (three 1-second samples) and reports back the overall average, which prevents the real server weighting on the load balancer from fluctuating so much. Change the NUM_CHECKS variable to take more or fewer readings as required.

29Dec/14

Root shell on DrayTek AP 800

The DrayTek AP 800 is a 2.4GHz 802.11n Access Point with the ability to become dual band (2.4GHz and 5GHz) with an optional USB dongle. It supports multi-SSID with VLAN tagging, a built-in RADIUS server, per-SSID/station bandwidth control, and can act as a bridge, repeater etc.

As with all of these SOHO products, it's built on Linux. Which means somewhere there is a root shell lurking.

The DrayTek AP 800 has telnet enabled out of the box. Establish a telnet connection and log in as the admin user. You'll be dropped into a restricted busybox shell. To make it slightly less restrictive, type rddebug. This will let you use commands such as ps and echo.

Now spawn telnetd on a different port and invoke a full shell, with:
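With the busybox telnetd these devices ship, something like:

    telnetd -p 2323 -l /bin/sh &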

Telnet to the AP on port 2323 to be dropped into a root shell.

This will likely also work with the AP 900 and the 2860 series. Leave a comment if you've tried it.

27Aug/14

Custom kernel on a DigitalOcean droplet – the right way

A few days ago I decided to create a VPS, known as a "droplet", with DigitalOcean. They claim a deployment time of 55 seconds, and 55 seconds after hitting the button I had a Debian 7 x64 droplet running. The plan was to migrate my current VPS to this DigitalOcean droplet. The first task I always undertake with any Linux deployment is to build a custom stripped-down kernel patched with grsecurity. However, unknown to me, the way DigitalOcean boots your droplet with KVM means that you can only use a kernel of their choice.

But don't panic, because there is a workaround utilising kexec. kexec is normally used to speed up reboots: when you power on a machine it goes through a POST procedure to initialise the hardware, and when you reboot it goes through POST again, which really isn't needed if the hardware is already initialised. Normally kexec is invoked during the reboot runlevel to load a kernel into memory and jump straight into it, bypassing the hardware initialisation. The trick is to reverse this and have the machine utilise kexec during the startup runlevel, jumping into the custom kernel straight away and only using DigitalOcean's choice of kernel as a bootstrap.

I've seen a lot of hacks where people suggest modifying the init scripts such as /etc/init.d/rcS to kexec the custom kernel on boot before any init scripts are executed. But this is very hacky and you could easily end up in a situation where the custom kernel doesn't boot but because there's no way to stop kexec running you could have a completely unusable droplet.

In my opinion the correct way to do this is to write a proper LSB init script. Note that the below is specific to Debian and Ubuntu. The init script should also give you the option to abort the kexec process in case something goes wrong. At least that way the droplet will still boot with DigitalOcean's kernel and you can get in and fix things.

The LSB init script is as follows:
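The original script isn't reproduced here, so below is a minimal sketch of the approach described: an LSB header, a 10-second window to abort with Ctrl+C, a loop guard based on a "kexeced" cmdline marker, and the kexec load/execute itself. Variable names and the defaults-file layout are assumptions.

    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          droplet-kernel
    # Required-Start:
    # Required-Stop:
    # Default-Start:     S
    # Default-Stop:      6
    # Short-Description: kexec into a custom kernel at boot
    ### END INIT INFO

    # Pull in ENABLED, KERNEL, INITRD and APPEND
    [ -r /etc/default/droplet-kernel ] && . /etc/default/droplet-kernel

    case "$1" in
      start)
        [ "$ENABLED" = "true" ] || exit 0

        # Loop guard: if we've already been kexec'd, do nothing.
        if grep -q "kexeced$" /proc/cmdline; then
            exit 0
        fi

        # Give the operator a chance to abort a broken custom kernel.
        echo "Booting custom kernel in 10 seconds. Press Ctrl+C to abort."
        trap "echo 'Aborted: continuing with provider kernel.'; exit 0" INT
        sleep 10
        trap - INT

        # Reuse the running kernel's cmdline unless overridden.
        [ -n "$APPEND" ] || APPEND="$(cat /proc/cmdline)"

        kexec -l "$KERNEL" --initrd="$INITRD" --append="$APPEND kexeced"
        kexec -e
        ;;
      stop)
        # At reboot (runlevel 6), unload any staged kernel so the
        # "kexeced" marker is gone and the next boot prompts again.
        kexec -u 2>/dev/null || true
        ;;
      *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
    esac

    exit 0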

The above should be placed in /etc/init.d/droplet-kernel and chmod 755.

In addition you need /etc/default/droplet-kernel:
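Again a sketch, matching the variables used above:

    # /etc/default/droplet-kernel
    # Set to "true" to kexec into the custom kernel at boot.
    ENABLED=true

    # Custom kernel and initrd to boot (example paths).
    KERNEL=/boot/vmlinuz-custom
    INITRD=/boot/initrd.img-custom

    # Optional cmdline override (rootfs, verbosity etc).
    # Leave empty to reuse the running kernel's cmdline.
    APPEND=""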

Now use update-rc.d to install the init script:
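update-rc.d reads the LSB header to work out the ordering, so the simplest invocation is:

    update-rc.d droplet-kernel defaults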

Now when the droplet boots, one of the first things it does is attempt to kexec a custom kernel. The /etc/default/droplet-kernel file contains all the customisable options, such as enabling/disabling the whole process, where the custom kernel and initrd images live, and the option to override the kernel arguments (such as rootfs, verbosity etc).

The script runs interactively and will hold the boot routine for 10 seconds, giving you the option to press Ctrl+C to abort the kexec process. This is especially useful if something has gone wrong with the kernel that would be kexec'd. Pressing Ctrl+C means the droplet continues to boot using DigitalOcean's kernel and you should be able to fix things up.

You'll notice that when the droplet is kexec'd, "kexeced" is appended to the kernel cmdline. This is done to prevent loops: the init script checks if "kexeced" is the last argument and, if it is, won't try to kexec the kernel. Otherwise the droplet would get into an endless reboot loop.

When the droplet is rebooted (runlevel 6), it removes the "kexeced" argument so that we get the option to Ctrl+C during the bootup again. This avoids the need to shutdown the droplet and power it on again for kexec to kick in.

Here it is in action:

2Feb/12

Virtual Hosting With mod_proxy

The other day I had someone ask if there's a nice solution to the following problem:

Multiple web development virtual machines but only one external IP address.

The quick solution is to port forward different ports to each virtual machine. For example, 81 goes to VM1, 82 goes to VM2, 83 goes to VM3 etc. Granted, this would work, but it isn't a "neat" solution.

Using mod_proxy under Apache is a much better solution to this problem.

Deploy a "front-end" server running Apache and mod_proxy. Create a virtual host for each virtual server and then using mod_proxy, reverse proxy to the virtual server. Port forward from the WAN to your front-end Apache server running mod_proxy.

Here's what an example config would look like on the front-end Apache server:
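Along these lines, using the hostnames and addresses from the example below:

    <VirtualHost *:80>
        ServerName cust1.dev.domain.com
        ProxyPass        / http://192.168.0.100/
        ProxyPassReverse / http://192.168.0.100/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName cust2.dev.domain.com
        ProxyPass        / http://192.168.0.101/
        ProxyPassReverse / http://192.168.0.101/
    </VirtualHost>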

Requests for cust1.dev.domain.com would be reverse proxied to 192.168.0.100 and requests for cust2.dev.domain.com would be reverse proxied to 192.168.0.101. All with one external IP address and one port forward rule.

Just one of the many uses of mod_proxy. You can also use it for SSL bridging and SSL offloading. Neat!

2Jan/12

Creating a HA iSCSI Target Using Linux

Some time ago I created a High Availability iSCSI target using Ubuntu Linux, iscsi-target, DRBD and heartbeat. The HA cluster consisted of two nodes and the iSCSI initiators were Windows Server 2008. I was able to mount the LUN, copy a video to it and play it back, then pull the power from the primary iSCSI target. A few seconds later the second iSCSI target took over and the video continued to play.

Pretty cool, huh?

Here is my guide if you want to try this, although I've not gone back through it to make sure it's still correct. If you spot anything that's wrong or not very clear, please leave a comment.
