David Ramsden

Creating a Highly Interactive Honeypot With HonSSH

HonSSH is essentially an SSH proxy, acting as a man-in-the-middle: it sits between the attacker and a honeypot and proxies the SSH connections. By doing this it can log all interactions, spoof (rewrite) login passwords and even capture files the attacker downloads onto the honeypot for later analysis.

Below is my topology:

[Figure: HonSSH topology]

Configuring the Honeypot Server

For the honeypot server (the server attackers will log in to), I'm using Ubuntu 14.04, though an unsupported Linux distribution might yield more interesting results. I'm using QEMU as the hypervisor and tried to make the honeypot look as "real" as possible: an emulated Intel Gigabit Ethernet NIC with a valid Intel MAC address, an emulated SATA adapter and so on.

I installed the following on the honeypot:

  • OpenSSH server (essential!).
  • Apache.
  • PostgreSQL.
  • ProFTPd.
  • Build tools (GCC, make etc).
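
On Ubuntu 14.04 that's roughly the following one-liner (package names may differ slightly):

sudo apt-get install openssh-server apache2 postgresql proftpd-basic build-essential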

I set a password of 123456 for the system-created "ftp" and "postgres" users and ensured their shells were valid. I created two new users called "setup" and "test" with a password of 123456. I gave the "setup" user root privileges via sudo. Finally, I also enabled the root account with a password of 123456.
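
For reference, that setup is roughly equivalent to:

# Deliberately weak passwords: 123456 tops most SSH brute-force lists.
sudo useradd -m -s /bin/bash setup
sudo useradd -m -s /bin/bash test
echo 'setup:123456' | sudo chpasswd
echo 'test:123456' | sudo chpasswd
sudo usermod -aG sudo setup
sudo chsh -s /bin/bash ftp
sudo chsh -s /bin/bash postgres
echo 'ftp:123456' | sudo chpasswd
echo 'postgres:123456' | sudo chpasswd
echo 'root:123456' | sudo chpasswd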

The honeypot server needs to route via the HonSSH server and not via the firewall. This is because the advanced networking features in HonSSH will be used, allowing the HonSSH server to automatically source NAT (SNAT) the SSH connection to the honeypot server. If the attacker runs netstat they'll see their own public IP address rather than the IP address of the HonSSH server, making things less suspicious.

After everything was configured I took a snapshot of the honeypot server so it could easily be restored to its non-pwned state.

Configuring the HonSSH Server

For the HonSSH server I'm using Debian; this one fully updated, supported, patched etc.

Clone HonSSH from git:
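
# Repository location at the time of writing; adjust if it has moved.
git clone https://github.com/tnich/honssh.git
cd honssh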

Look at the requirements file and install the packages listed.

Copy the default honssh and users cfg files:
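
# Assuming the .default naming used in the repo.
cp honssh.cfg.default honssh.cfg
cp users.cfg.default users.cfg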

Edit the users.cfg file. This maps usernames and passwords sent by the attacker to HonSSH on to the honeypot server. There are two modes: fixed, where you supply a list of valid passwords that HonSSH will accept, and random, where you specify a percentage chance that any password will be accepted. You also define the users and the "real password" that HonSSH will send to the honeypot server. My users.cfg looks something like:
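
; Illustrative only; check users.cfg.default for the exact option names.
[root]
fake_passwords = jiamima, wubac, toor
real_password = REAL-ROOT-PASSWORD

[setup]
random_chance = 25
real_password = 123456

[test]
random_chance = 25
real_password = 123456

[postgres]
random_chance = 25
real_password = 123456

[ftp]
random_chance = 25
real_password = 123456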

In this scenario if an attacker tries to login with root, only the passwords jiamima, wubac and toor will be accepted. HonSSH will in turn login to the honeypot server as root but using the real_password defined. For the other users defined (setup, test, postgres, ftp), HonSSH will accept any password but only with a 25% chance of success. When successful HonSSH will login to the honeypot server as the user and using the real_password defined.

Note that random mode can confuse the attacker once logged in and raise suspicion: if the attacker tries sudo, the password they logged in with won't be the same as the password actually configured on the honeypot server.

For a list of popular usernames and password combinations used in SSH scanning, Dragon Research Group produces a handy SSH Username and Password Tag Cloud.

Next edit honssh.cfg.

Under the [honeypot] section:

  • ssh_addr: Set this to the IP address of the NIC on the HonSSH server connected to the "outside".
  • ssh_port: Set this to the port that the HonSSH daemon should listen on for incoming SSH connections (2222).
  • client_addr: Set this to the IP address of the NIC on the HonSSH server connected to the "inside".

Under the [honeypot-static] section:

  • sensor_name: Set this to something meaningful, for example honssh-honeypot1.
  • honey_ip: Set this to the IP address of the honeypot server.

Under the [advNet] section:

  • enabled: Set this to true, which will enable the SNAT feature discussed previously.

Under the [spoof] section:

  • enabled: Set this to true.

Under the [download] section:

  • passive: Set this to true so HonSSH locally captures all files uploaded to the honeypot server via SFTP/SCP.
  • active: Set this to true so HonSSH locally captures all files downloaded to the honeypot server via wget.

Enabling email notifications under the [output-email] section is also useful to keep tabs on when a login has been successful and what was done.
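
Putting those together, the relevant parts of honssh.cfg end up looking roughly like this (all addresses are placeholders for my topology):

[honeypot]
ssh_addr = 203.0.113.10
ssh_port = 2222
client_addr = 10.0.0.1

[honeypot-static]
sensor_name = honssh-honeypot1
honey_ip = 10.0.0.2

[advNet]
enabled = true

[spoof]
enabled = true

[download]
passive = true
active = true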

Since the HonSSH server will also be acting as a router for the honeypot server, enable IP forwarding:
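
sysctl -w net.ipv4.ip_forward=1
# Persist across reboots by uncommenting net.ipv4.ip_forward=1 in /etc/sysctl.conf.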

Secure the HonSSH server with suitable firewall rules using iptables:
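
Something along these lines, assuming eth0 is the outside NIC and eth1 the inside NIC (placeholders for my topology):

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Attacker-facing HonSSH daemon.
iptables -A INPUT -i eth0 -p tcp --dport 2222 -j ACCEPT
# Management SSH from the inside only.
iptables -A INPUT -i eth1 -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP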

It's a good idea to implement some basic filtering from the honeypot server. When the honeypot server becomes compromised it's highly likely it'll be used to scan other systems, take part in DDoS attacks etc. The below rules will implement some basic rate limiting as well as filtering egress access to SMTP and SSH:
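
A sketch, again assuming 10.0.0.2 is the honeypot:

# Allow return traffic, block outbound mail and SSH from the honeypot,
# then rate limit any other new connections to slow down scanning.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 10.0.0.2 -p tcp -m multiport --dports 25,465,587 -j REJECT
iptables -A FORWARD -s 10.0.0.2 -p tcp --dport 22 -j REJECT
iptables -A FORWARD -s 10.0.0.2 -m state --state NEW -m limit --limit 20/min --limit-burst 40 -j ACCEPT
iptables -A FORWARD -s 10.0.0.2 -j DROP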

Save the iptables rules:
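
# On Debian the iptables-persistent package handles this.
apt-get install iptables-persistent
iptables-save > /etc/iptables/rules.v4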

Start the HonSSH daemon:
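
HonSSH ships with a control script; from the clone directory it should be something like this (check the script's usage, as the exact arguments may differ):

./honsshctrl.sh start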

Finally, test connectivity and ensure that everything works as expected. Remember to forward TCP port 22 from the Internet to the HonSSH server on TCP port 2222.
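
If your edge firewall happens to be Linux too, that forward might look like this (203.0.113.10 being a placeholder for the HonSSH outside address):

iptables -t nat -A PREROUTING -p tcp --dport 22 -j DNAT --to-destination 203.0.113.10:2222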

Other Notes

As mentioned previously, I'm running the honeypot server under QEMU. On the host I have a cronjob that restores the honeypot server to a default state every few hours. The script looks like this:
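
Assuming the guest is managed by libvirt, with a domain called honeypot1 and a snapshot called clean (both placeholder names):

#!/bin/sh
# Force the guest off and roll back to the clean snapshot.
virsh destroy honeypot1 >/dev/null 2>&1
virsh snapshot-revert honeypot1 clean --running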

HonSSH can run scripts on certain events, so it may be possible to have HonSSH notify the host to do this automatically after an attacker has closed an SSH session (maybe via an email that's picked up directly on the host and triggers a snapshot revert?).

I'm also running a packet capture on the HonSSH server for better analysis of compromises:
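
Assuming eth1 faces the honeypot and 10.0.0.2 is its address (placeholders again):

# -b filesize is in kB, so 20480 rolls the file at roughly 20MB.
screen -dmS hpcap tshark -i eth1 -f "host 10.0.0.2 and not tcp port 22" \
    -b filesize:20480 -w /var/captures/honeypot.pcap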

This runs tshark in screen (detaching the session), capturing traffic to and from the honeypot server but not capturing SSH traffic; HonSSH is already capturing the SSH sessions, which can be replayed. A new capture file is created when the current capture reaches 20MB.


Be careful. You're allowing a machine connected to the Internet to become compromised. When (not if) this happens, it's likely the attacker will:

  1. Attempt to root the server.
  2. Start scanning for other vulnerable devices.
  3. Add the server to a botnet.
  4. Start sending out spam.
  5. All of the above.

Use a honeypot to learn more about attacks and attackers but always try to mitigate against adding to the problem.

Analyse any captured files in a sandboxed, offline, environment.

Don't run a honeypot on any public IP addresses/space you care about.

Share your experiences, captures, files and sessions! I'm sure the guys over at SANS ISC would be interested.


A Guide to Using Let’s Encrypt


Up until a few moments ago, I was using CAcert, a free service offering SSL/TLS certificates, for all my certificate needs. The only issue with CAcert is that their Root Certificate is not included in all mainstream Operating Systems or browsers, meaning users will get a certificate error unless they choose to install the Root Certificate.

But now Let's Encrypt is on the scene. A free, open and automated certificate authority that is trusted and should work right off the bat in most modern Operating Systems and browsers.

And in the modern world encryption (and trust) is important and needed.

Here's a quick guide to using Let's Encrypt (LE). I'm using Debian with Apache, Dovecot and Postfix.

First download the official LE client from GitHub by cloning the repository. You'll want to run the LE client as a user who can gain root via sudo:
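
# The official client repository at the time of writing.
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto --help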

The LE client will build itself, pulling down any required dependencies, and configure itself. The build is self-contained in $HOME/.local/share/letsencrypt.

If you're running grsecurity and get a Segmentation Fault, you need to use paxctl to fix the Python binary. Whenever letsencrypt-auto is run it'll self-update, so you may need to run paxctl often:
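
# The path/binary name is an assumption; point paxctl at the python
# inside the virtualenv that letsencrypt-auto builds.
sudo paxctl -cm ~/.local/share/letsencrypt/bin/python2.7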

Now the LE client is installed, you're ready to start creating certificates. Normally when you purchase a certificate there are (or should be) several background checks that the issuer performs, and LE is no different except it's all automated using something called ACME (Automatic Certificate Management Environment). This is essentially a two-factor authentication mechanism. ACME sends an HTTP request to the Common Name (CN) and, if used, the Subject Alternative Names (SANs) in the certificate and requests a unique file with unique content. It's two-factor because a) you need access to DNS to configure the records for the CN and SANs in the certificate request and b) you need access to the machine the DNS records are pointing at to get the ACME challenge/response files on to. This stops people from requesting certificates for stuff they don't own.

But here's the problem: this works great when requesting certificates for websites, because you've got a web server running. It's not so great for a mail server, which may not be running a web server on the same box. If you're in this situation you may have to temporarily stand up a web server on the mail server for ACME. But if your mail server is on the same box as your web server, you can use the following trick (specific to Apache but easily adapted for nginx).

You've probably got a default vhost. One that specifies the ServerName as _default_. This matches any other request where no vhost is defined. You've probably got this to redirect to your main site. For example:
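
# Catch-all vhost: anything without a more specific vhost lands here.
<VirtualHost *:80>
    ServerName _default_
    Redirect permanent / http://www.domain.com/
</VirtualHost>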

The problem if you're trying to request a certificate for a mail server is that it won't have a virtual host configured in your web server. Therefore when ACME makes an HTTP request to http://smtp.domain.com/.well-known/acme-challenge, the verification is going to fail.

If you're running Apache one workaround is to configure an Alias outside of any virtual hosts so that it applies everywhere. Create /etc/apache2/conf.d/letsencrypt-acme containing:
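
# Serve ACME challenges for any Host header from one location.
Alias /.well-known/acme-challenge /var/www/.well-known/acme-challenge

<Directory /var/www/.well-known/acme-challenge>
    # Apache 2.4 syntax; on 2.2 use "Order allow,deny" / "Allow from all".
    Require all granted
</Directory>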

And reload Apache.

Create a file in /var/www/.well-known/acme-challenge called test.txt with some text in it. Now when you visit http://smtp.domain.com/.well-known/acme-challenge/test.txt you shouldn't get a 404 Not Found. Also note that if you're using the Redirect directive in an existing virtual host (e.g. to redirect http://domain.com to http://www.domain.com), you should be using the RedirectMatch directive like so:
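
RedirectMatch permanent ^/(.*)$ http://www.domain.com/$1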

ACME will follow redirect requests and the above ensures that any request to http://domain.com/.well-known/acme-challenge is redirected to http://www.domain.com/.well-known/acme-challenge, otherwise just using the standard Redirect directive will break things.

Why do this, though? Because it makes the following a lot easier.

As mentioned each CN or SAN has to be validated. By using the trick above you only need to tell the LE client one web root to use. Here's the command that will request a certificate for www.domain.com with webmail.domain.com as a SAN:
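
./letsencrypt-auto certonly --webroot -w /var/www \
    -d www.domain.com -d webmail.domain.com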

The ACME challenge/response gets written (temporarily) to /var/www/.well-known/acme-challenge. ACME sends an HTTP request to http://www.domain.com/.well-known/acme-challenge and http://webmail.domain.com/.well-known/acme-challenge and both get served from /var/www/.well-known/acme-challenge.

If everything worked you should now have a public and private key in /etc/letsencrypt/live/www.domain.com as well as a full chain certificate.

When it comes to renewing certificates, simply run:
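
./letsencrypt-auto renew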

Now there's no excuse to ever use self-signed certificates!


Reviving an Acer Aspire One ZG5 Netbook

I was given an Acer Aspire One ZG5 (A110) and asked to try to update it. There were a few problems with it. Firstly, an upgrade to Ubuntu 12.04 (Precise Pangolin) had broken part way through and wasn't easily recoverable. Secondly, the battery appeared to be dead and wouldn't charge. In addition, I found that BIOS passwords ("user" and "supervisor") had been set but weren't known.

A Modern OS

When the Aspire One first came out there was a wide range of Operating Systems to choose from. It was capable of running Windows XP, various Linux flavours, FreeBSD and even OS X. A lot of customised Linux distributions started to appear, designed specifically for netbooks. However, fast forward a few years and Windows XP is dead and a lot of the Linux distributions for netbooks are no longer actively developed and have fallen behind the times.

Wikipedia has a page dedicated to comparisons of netbook-orientated Linux distributions, most of which are no longer actively developed. First I tried Lubuntu but, after a successful install, found it would get stuck booting and I didn't have the inclination to troubleshoot it.

Next I tried Manjaro Netbook Edition, a community developed flavour of Manjaro Linux. This can be downloaded here. Specifically I downloaded the latest and greatest 32-bit version (direct link to ISO). Since the Aspire One doesn't have a CD/DVD drive, I used UNetBootin to create a bootable USB thumbdrive from the ISO, booted, installed and rebooted successfully.

After logging in for the first time I was prompted to install the linux-netbook-manjaro package and palemoon-atom package. The former is a Linux kernel optimised for netbooks and the latter is an optimised version of the Palemoon browser.

I found that the package manager was extremely slow to download anything. This turned out to be because the mirror list had selected a mirror in South Africa first. The Manjaro wiki has some handy pacman tips to resolve this.

I was then able to perform an upgrade to the latest Manjaro release, 16.06.1 (Daniella), released 11th June 2016. So finally the Aspire One had a modern and fully functional OS, including sound, wireless, webcam etc. I also installed the Flash player plugin for Palemoon (although testing BBC iPlayer is slow with sound and video out of sync), AbiWord and Skype.

Battery and BIOS

As mentioned, the other issue was the battery: it wouldn't charge. This was either because the battery was dead or because there was something wrong with the charging circuitry. One clue was that after I had installed Manjaro Linux, the battery was reported as "unknown". Maybe clearing the BIOS settings and/or re-flashing the BIOS would help.

When I pressed F2 to enter the BIOS setup I was prompted for a user password. Unfortunately no one knew what this was. My first thought was to remove the CMOS battery but, after a quick Google, taking the Aspire One apart looked like too much trouble.

Introducing Hiren's Boot CD. I downloaded the ISO and again used UNetBootin to create a bootable USB thumbdrive. However, after booting from it I found the boot menu was broken. To fix this I had to copy the isolinux.cfg file found inside the HBCD directory on the USB thumbdrive to the root of the USB thumbdrive, replacing the syslinux.cfg file. In one of the menus is a bunch of BIOS/CMOS tools and one of those tools can dump the plaintext strings found in the CMOS. Here, I was able to find the "user" and "supervisor" BIOS passwords.

Now that I had access to the BIOS I tried loading the default settings. Unfortunately this didn't resolve the battery issue, so next I tried flashing the BIOS. Enter the SNID (found on the underside of the Aspire One) into Acer's download and support site to get the latest BIOS download. The problem here is that the BIOS update utility is geared towards Windows users.

Acer do have a help page covering updating the BIOS on the A110 or A115 models, which involves copying some files to a FAT32 formatted USB thumbdrive, holding Fn+Esc and powering on the netbook, which should initiate the BIOS upgrade. But I found this didn't work (all the right lights flashed but nothing actually happened).

The easiest (and probably safest) way to update the BIOS is to again use UNetBootin to create a bootable USB thumbdrive of FreeDOS. After that, copy the DOS folder from the Acer BIOS update download to the root of the USB thumbdrive and boot the netbook into FreeDOS. Change to the C: drive (this will be the USB thumbdrive) and then change directory into the DOS folder. Run the batch file to start the BIOS upgrade.

Success. BIOS upgraded and, as a bonus, the battery started charging and held its charge too.


Disabling WordPress XML-RPC and banning offenders with fail2ban

This isn't something new. SANS ISC reported on this 2 years ago. The bad guys love anything that can be used in a reflection DoS and the WordPress XML-RPC functionality is a prime candidate. There are various ways to disable it, through WordPress plugins for example, or by hacking away at code. All of these are fine if you're in control of what gets installed on the web server. In a shared hosting environment you've got to rely on your users.

Running Apache you can disable XML-RPC globally and simply with the following:
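
<Files "xmlrpc.php">
    # 192.0.64.0/18 is the range Automattic publish for Jetpack;
    # verify it's still current before relying on it.
    Require ip 192.0.64.0/18
</Files>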

The configuration should be placed in the global Apache configuration. When any file called xmlrpc.php is requested, on any vhost, from an IP address not listed on the Require ip line, a 403 Forbidden error will be served instead. This configuration should ensure that WordPress plugins like Jetpack continue to work.

I've seen a few examples where even after doing this the bad guys still continuously request xmlrpc.php even though they're being served a 403 error. To further protect the web server fail2ban can be deployed.

Firstly create a filter definition:
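
Something like the following (the filter name is my choice):

# /etc/fail2ban/filter.d/apache-xmlrpc.conf
[Definition]
failregex = ^<HOST> .*"(GET|POST) /xmlrpc\.php.*" 403
ignoreregex =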

Then create the jail:
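
# In jail.local; the logpath is a placeholder for your layout.
[apache-xmlrpc]
enabled  = true
port     = http,https
filter   = apache-xmlrpc
logpath  = /var/log/apache2/*access.log
maxretry = 3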

Now when someone requests xmlrpc.php 3 times within the defined findtime their IP address will be blocked.


Cisco two armed VPN concentrator and default route

Take the following scenario:

  • You have a hub site.
  • Branch (spoke) sites connect to the hub with a L2L IPsec tunnel.
  • All traffic must traverse the tunnel (no local breakout to the Internet).
  • At the hub, your VPN concentrator is separate from your firewall and runs in two-armed mode: one interface sits outside the firewall (public) to terminate the incoming tunnels and another sits within a DMZ. No NAT is configured on the concentrator since the firewall handles that, along with routing and filtering traffic.

The VPN concentrator will have its default gateway pointing out of the public interface. This becomes a problem when you're tunnelling all traffic from the spokes over the L2L tunnel, especially for traffic destined for the Internet, which should go via the hub site's central firewall.

On a Cisco ASA two default gateways can be specified: one for non-tunneled traffic and one for traffic exiting from a tunnel.
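
For example (111.222.333.444 being the upstream next hop and 10.1.2.1 a placeholder for the firewall's DMZ interface):

route outside 0.0.0.0 0.0.0.0 111.222.333.444 1
route inside 0.0.0.0 0.0.0.0 10.1.2.1 tunneled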

In the example above, any traffic exiting from a tunnel on the inside interface and not matching a more specific route will be routed towards the tunneled default gateway (10.1.2.1 in the sketch). Without this the traffic would be routed towards 111.222.333.444.

If a device running IOS is being used the same can be achieved using a route-map to match the traffic exiting the tunnel and then setting the next hop IP.
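
A sketch of that, with placeholder addressing (exact interface placement depends on how the tunnels terminate):

ip access-list extended SPOKE-TRAFFIC
 permit ip 192.168.0.0 0.0.255.255 any
!
route-map TUNNELED-DEFAULT permit 10
 match ip address SPOKE-TRAFFIC
 set ip next-hop 10.1.2.1
!
interface GigabitEthernet0/0
 ip policy route-map TUNNELED-DEFAULT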



Banning repeat offenders with fail2ban

More and more I see fail2ban banning the same hosts repeatedly. One way to tackle this could be to increase the ban time but you could also have fail2ban monitor itself to find "repeat offenders" and then ban them for an extended period of time.

Firstly, create a filter definition:
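
Something along these lines (the filter name is my choice):

# /etc/fail2ban/filter.d/repeat-offender.conf
[Definition]
failregex = fail2ban\.actions.*\sUnban\s+<HOST>
ignoreregex = \[repeat-offender\]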

This will be used against the fail2ban log and will find any hosts that have been unbanned. We don't want to monitor hosts that have been banned because, er, they're already banned. We also want to ignore any log entries that are generated by the jail itself.

Next edit jail.local to add a new jail:
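
[repeat-offender]
enabled   = true
filter    = repeat-offender
logpath   = /var/log/fail2ban.log
banaction = iptables-allports
maxretry  = 3
findtime  = 18000   ; 5 hours
bantime   = 172800  ; 48 hours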

This jail will monitor the /var/log/fail2ban.log file and use the repeat-offender filter that was defined earlier. If 3 unbans are seen within 5 hours, the host will be banned for 48 hours. You could adjust the banaction to use the route action, which may give some performance benefits on a very busy server.


DHCP Option 43 generator for Cisco Lightweight APs

I got lazy after having to create a load of DHCP scopes for Cisco Lightweight Access Points, each requiring Option 43 in TLV format. And now you can be lazy too.

Save the following as an HTML file and open it in your favorite browser. In the text area enter your WLC IP addresses, one per line, and hit submit. This will generate the hex string to use in DHCP Option 43. Alternatively I've also put the code into jsfiddle here.
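
A minimal version of the page (element names are my choice) that emits the standard Cisco TLV: type 0xf1, a length byte of four per WLC, then each IP in hex:

<!DOCTYPE html>
<html>
<head><title>DHCP Option 43 generator</title></head>
<body>
<form onsubmit="generate(); return false;">
  <textarea id="wlcs" rows="5" cols="30"></textarea><br>
  <input type="submit" value="Submit">
</form>
<pre id="result"></pre>
<script>
// Zero-pad a byte to two hex digits.
function toHex(n) {
  var h = n.toString(16);
  return h.length < 2 ? "0" + h : h;
}
function generate() {
  var ips = document.getElementById("wlcs").value.split("\n")
    .map(function (s) { return s.trim(); })
    .filter(function (s) { return s.length > 0; });
  var value = "";
  for (var i = 0; i < ips.length; i++) {
    var octets = ips[i].split(".");
    for (var j = 0; j < 4; j++) {
      value += toHex(parseInt(octets[j], 10));
    }
  }
  // Type 0xf1 (241), length = 4 bytes per WLC IP, then the IPs themselves.
  document.getElementById("result").textContent =
    "f1" + toHex(ips.length * 4) + value;
}
</script>
</body>
</html>

As a worked example, a single WLC of 192.168.1.10 produces f104c0a8010a.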




loadbalancer.org – Linux feedback agent

I've been working with some loadbalancer.org appliances recently, load balancing traffic over MySQL and Apache servers running Linux. The load balancer supports a feedback agent where it can query the real server to gauge how utilised it is based on, for example, CPU load and then distribute the request to the real server that should perform the best.

Over on the loadbalancer.org blog is an article about the feedback agent and how to implement it for Linux servers. The shell script suggested is:
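
It boils down to something like this, reporting the idle CPU percentage from a single 1-second vmstat sample:

#!/bin/bash
# One 1-second sample; column 15 of vmstat's output is idle CPU %.
IDLE=$(vmstat 1 2 | tail -1 | awk '{print $15}')
echo "${IDLE}%"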

Although this does give you a 1-second CPU average, it's a snapshot with little headroom: if the CPU spiked very quickly and then returned to normal, the load balancer wouldn't know. And indeed, watching the real server weighting change on the load balancer versus what top is reporting confirms this: the weighting on a real server can drastically jump up and down.

A better feedback agent script is:
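
Again assuming vmstat, with NUM_CHECKS controlling how many samples are averaged:

#!/bin/bash
# Average several 1-second idle readings to smooth out short spikes.
NUM_CHECKS=3
TOTAL=0
for i in $(seq 1 "$NUM_CHECKS"); do
    IDLE=$(vmstat 1 2 | tail -1 | awk '{print $15}')
    TOTAL=$((TOTAL + IDLE))
done
echo "$((TOTAL / NUM_CHECKS))%"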

This will take a 3 second average reading and then report back the overall average. This prevents the real server weighting on the load balancer from fluctuating so much. Change the NUM_CHECKS variable to take more or less readings as required.


Cisco VSS, domain ID and virtual MAC addresses

The other weekend I connected a L2 circuit between two sites. At both ends were Cisco 6500 Catalyst switches running VSS. The interfaces they connected to were configured as L3 and EIGRP was run between the two sites to share routes. But as soon as they were connected the neighbors started flapping.

Troubleshooting started and as always you start at the lowest OSI layer and work up. Bingo! The issue was at Layer 2 as I could see ARP was incomplete on both sides for the neighbor addresses. Checking the MAC address for the interface the L2 circuit was connected to at site A and the MAC address for the interface the L2 circuit was connected to at site B showed the same MAC. How could this happen?

As mentioned in the first sentence, both ends had Cisco 6500 Catalyst switches running VSS. One of the first things you do when configuring VSS is set the switch virtual domain ID. Cisco recommend that you enable virtual MAC addresses (mac-address use-virtual) under the switch virtual domain, and I'll explain why. When the first switch comes up, VSS uses the MAC address pool from that member and uses that pool across all L3 interfaces. This MAC address pool is maintained by VSS when one (and only one) switch is reloaded. But if the entire VSS is reloaded and the other switch happens to come up first, the MAC address pool will change. This shouldn't be a huge deal, but any other devices out there that ignore gratuitous ARP will require manual intervention to get them working again, causing further service disruption.

Hence Cisco recommend using mac-address use-virtual under the switch virtual domain ID. This ensures the same MAC address pool is used at all times. No exceptions. But the switch virtual domain ID is significant in determining the virtual MAC address pool. It's used in the formula to calculate this pool. As per the Cisco documentation:

The MAC address range reserved for the VSS is derived from a reserved pool of addresses with the domain ID encoded in the leading 6 bits of the last octet and trailing 2 bits of the previous octet of the mac-address. The last two bits of the first octet is allocated for protocol mac-address which is derived by adding the protocol ID (0 to 3) to the router MAC address.

When I checked both switches I found they both had a switch virtual domain ID of 10. Therefore the virtual MAC address on the L3 interfaces were both 0008.e3ff.fc28. We can use the formula to check this:

6th octet (28) to binary: 00101000
Remove trailing 2 bits: 001010
001010 (bin) to decimal: 10

But what are the options for fixing the problem where the MAC addresses are the same on both sides?

  1. On one side, under the L3 interface use mac-address H.H.H.H
  2. Change the switch virtual domain ID on one VSS - Possible to do but requires a complete outage as a VSS reload is required.
  3. Remove mac-address use-virtual from the switch virtual domain ID - Not recommended as discussed previously.

Option 1 seems like the most viable option but how do you guarantee the MAC address you manually assign is unique? Will there be issues in the future? If we pick an arbitrary number between 1 and 255 (a notional switch virtual domain ID) we can then use the formula to calculate a "safe" MAC address, as long as no one in the future connects a VSS with this arbitrary number as its switch virtual domain ID. I decided to choose 99.

Virtual MAC address with domain ID 10: 0008.e3ff.fc28
Last two octets (fc28) hex to binary: 1111110000101000
99 (dec) to binary: 01100011
Insert 01100011 after leading 6 bits and before trailing 2 bits: 1111110110001100
1111110110001100 (bin) to hex: fd8c

Hence if a switch virtual domain ID of 99 was used the virtual MAC address assigned to a L3 interface would be 0008.e3ff.fd8c.

Problem solved. It was just unfortunate that the switch virtual domain IDs happened to be the same; no one ever foresaw the two sites being connected this way. If you're deploying VSS in your organisation, work smart and use unique switch virtual domain IDs everywhere. If you happen to connect to a 3rd party, first check if they're using VSS and, if they are, check what they have as their switch virtual domain ID. If there's a conflict someone will need to manually set a MAC address on an interface.


Root shell on DrayTek AP 800

The DrayTek AP 800 is a 2.4GHz 802.11n Access Point that can be made dual-band (2.4GHz and 5GHz) with an optional USB dongle. It supports multi-SSID with VLAN tagging, a built-in RADIUS server, per-SSID/station bandwidth control and can act as a bridge, repeater etc.

As with all of these SOHO products it's built on Linux. Which means somewhere there is a root shell lurking.

The DrayTek AP 800 has telnet enabled out of the box. Establish a telnet connection and log in as the admin user. You'll be dropped into a restricted busybox shell. To make it slightly less restrictive, type rddebug. This will let you use commands such as ps and echo.

Now spawn telnetd on a different port and invoke a full shell, with:
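
# busybox telnetd: -p sets an alternate port, -l the program run on connect.
telnetd -p 2323 -l /bin/sh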

Telnet to the AP on port 2323 to be dropped into a root shell.

This will likely also work with the AP 900 and the 2860 series. Leave a comment if you've tried it.
