David Ramsden

A Guide to Using Let’s Encrypt

Up until a few moments ago, I was using CAcert, a free service offering SSL/TLS certificates, for all my certificate needs. The only issue with CAcert is that its Root Certificate is not included in all mainstream Operating Systems or browsers, meaning users will get a certificate error unless they choose to install the Root Certificate.

But now Let's Encrypt is on the scene: a free, open and automated certificate authority that is trusted and should work right off the bat in most modern Operating Systems and browsers.

And in the modern world encryption (and trust) is important and needed.

Here's a quick guide to using Let's Encrypt (LE). I'm using Debian with Apache, Dovecot and Postfix.

First download the official LE client from GitHub by cloning the repository. You'll want to run the LE client as a user who can gain root via sudo:
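Something along these lines (the repository has since been renamed to Certbot, so treat the exact path as a sketch):

```shell
# Clone the official Let's Encrypt client from GitHub
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

# First run builds the client and its environment; it will
# prompt for sudo where root access is needed
./letsencrypt-auto --help
```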

The LE client will build itself, pulling down any required dependencies, and configure itself. The build is self-contained in $HOME/.local/share/letsencrypt.

If you're running grsecurity and get a segmentation fault, you need to use paxctl to fix the Python binary. Whenever letsencrypt-auto is run it'll self-update, so you may need to run paxctl often:
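A sketch of the paxctl fix — the exact path to the Python binary inside the client's self-contained environment is an assumption, so locate yours first:

```shell
# -c creates a PT_PAX header, -m disables MPROTECT so the
# interpreter can run under grsecurity's PaX restrictions
# (binary path is an assumption; adjust for your build)
paxctl -cm ~/.local/share/letsencrypt/bin/python2.7
```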

Now the LE client is installed, you're ready to start creating certificates. Normally when you purchase a certificate there are (or should be) several background checks that the issuer performs, and LE is no different, except that it's all automated using something called ACME (Automatic Certificate Management Environment). This is essentially a two-factor authentication mechanism: ACME sends an HTTP request to the Common Name (CN) and, if used, the Subject Alternative Names (SANs) in the certificate request, asking for a unique file with unique content. It's two-factor because a) you need access to DNS to configure the records for the CN and SANs in the certificate request, and b) you need access to the machine those DNS records point at in order to put the ACME challenge/response files in place. This stops people from requesting certificates for things they don't own.

But here's the problem. This works great when requesting certificates for websites because you've got a web server running. Not so great for a mail server which may not be running a web server on the same box. If you're in this situation you may have to temporarily stand up a web server on the mail server for ACME. But if your mail server is on the same box as your web server, you can use the following trick (specific to Apache but can be adapted easily for nginx).

You've probably got a default vhost, one defined as _default_, which matches any request for which no other vhost is defined. You've probably got this to redirect to your main site. For example:
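Something like this (the domain is a placeholder):

```apache
<VirtualHost _default_:80>
    # Catch-all: send anything without its own vhost to the main site
    Redirect permanent / http://www.domain.com/
</VirtualHost>
```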

The problem if you're trying to request a certificate for a mail server is that it won't have a virtual host configured in your web server. Therefore when ACME makes an HTTP request to http://smtp.domain.com/.well-known/acme-challenge, the verification is going to fail.

If you're running Apache one workaround is to configure an Alias outside of any virtual hosts so it applies everywhere. Create /etc/apache/conf.d/letsencrypt-acme containing:
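The original file isn't reproduced here; a minimal version might look like this:

```apache
# Serve ACME challenges from one location for every vhost,
# including hostnames with no vhost of their own
Alias /.well-known/acme-challenge /var/www/.well-known/acme-challenge

<Directory /var/www/.well-known/acme-challenge>
    # Apache 2.4 syntax; on 2.2 use "Order allow,deny" / "Allow from all"
    Require all granted
</Directory>
```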

And reload Apache.

Create a file called test.txt, with some text in it, in /var/www/.well-known/acme-challenge. Now when you visit http://smtp.domain.com/.well-known/acme-challenge/test.txt you shouldn't get a 404 Not Found. Also note that if you're using the Redirect directive in an existing virtual host (e.g. to redirect http://domain.com to http://www.domain.com), you should use the RedirectMatch directive like so:
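A sketch of the idea (placeholder domains again):

```apache
# Preserve the full request path, so a challenge request for
# domain.com/.well-known/acme-challenge/... lands on the same
# path at www.domain.com, where the Alias above serves it
RedirectMatch permanent ^/(.*)$ http://www.domain.com/$1
```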

ACME will follow redirects, and the above ensures that any request to http://domain.com/.well-known/acme-challenge is redirected to http://www.domain.com/.well-known/acme-challenge; just using the standard Redirect directive will break things.

Why do this, though? Because it makes the following a lot easier.

As mentioned, each CN or SAN has to be validated. By using the trick above, you only need to tell the LE client one web root to use. Here's the command that will request a certificate for www.domain.com with webmail.domain.com as a SAN:
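Exact flags vary by client version; something along these lines, with placeholder domains:

```shell
# The first -d becomes the CN, subsequent -d flags become SANs;
# --webroot/-w points the client at the shared challenge directory
./letsencrypt-auto certonly --webroot -w /var/www \
    -d www.domain.com -d webmail.domain.com
```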

The ACME challenge/response gets written (temporarily) to /var/www/.well-known/acme-challenge. ACME sends HTTP requests to http://www.domain.com/.well-known/acme-challenge and http://webmail.domain.com/.well-known/acme-challenge, and both get served from /var/www/.well-known/acme-challenge.

If everything worked, you should now have a certificate and private key in /etc/letsencrypt/live/www.domain.com, as well as a full chain certificate.

When it comes to renewing certificates, simply run:
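Depending on the client version, renewal is either a matter of re-running the original certonly command, or, in later versions, a dedicated subcommand:

```shell
# Renews any certificates that are close to expiry
./letsencrypt-auto renew
```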

Now there's no excuse to ever use self-signed certificates!


Banning repeat offenders with fail2ban

More and more I see fail2ban banning the same hosts repeatedly. One way to tackle this could be to increase the ban time, but you could also have fail2ban monitor itself to find "repeat offenders" and then ban them for an extended period of time.

Firstly, create a filter definition:
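The filter isn't reproduced here; a sketch matching the description below might look like this, saved as /etc/fail2ban/filter.d/repeat-offender.conf (the filter name, and the exact regex, are assumptions — check it against your fail2ban.log format with fail2ban-regex):

```ini
[Definition]
# Match "Unban" lines in fail2ban's own log, but not ones
# produced by the repeat-offender jail itself
failregex = fail2ban\.actions.*\[(?!repeat-offender\])[^\]]+\]\s+Unban\s+<HOST>
ignoreregex =
```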

This will be used against the fail2ban log and will find any hosts that have been unbanned. We don't want to monitor hosts that have been banned because, er, they're already banned. We also want to ignore any log entries that are generated by the jail itself.

Next edit jail.local to add a new jail:
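The original jail isn't shown, but from the description (3 unbans within 5 hours earns a 48 hour ban) it would look something like this; the jail name and banaction are assumptions:

```ini
[repeat-offender]
enabled   = true
filter    = repeat-offender
logpath   = /var/log/fail2ban.log
banaction = iptables-allports
# 3 unbans (maxretry) within 5 hours (findtime, in seconds)
# earns a 48 hour ban (bantime, in seconds)
maxretry  = 3
findtime  = 18000
bantime   = 172800
```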

This jail will monitor the /var/log/fail2ban.log file and use the repeat-offender filter that was defined earlier. If 3 unbans are seen within 5 hours, the host will be banned for 48 hours. You could adjust the banaction to use the route action, which may give some performance benefits on a very busy server.


loadbalancer.org – Linux feedback agent

I've been working with some loadbalancer.org appliances recently, load balancing traffic over MySQL and Apache servers running Linux. The load balancer supports a feedback agent where it can query the real server to gauge how utilised it is based on, for example, CPU load and then distribute the request to the real server that should perform the best.

Over on the loadbalancer.org blog is an article about the feedback agent and how to implement it for Linux servers. The shell script suggested is:
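The original script isn't reproduced here, but it was along these lines — a sketch that reports the idle CPU percentage from a single 1 second vmstat sample:

```shell
#!/bin/bash
# Column 15 of vmstat's output is "id" (idle CPU %).
# "vmstat 1 2" takes two samples a second apart; the second
# line is the 1 second average, the first is since boot.
IDLE=$(vmstat 1 2 | tail -1 | awk '{print $15}')

# The feedback agent protocol expects a percentage
echo "$IDLE%"
```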

Although this does give you a 1 second CPU average, it's a bit too fine-grained and doesn't give much headroom. If the CPU suddenly spiked very quickly and then returned to normal, the load balancer wouldn't know. And indeed, watching the real server weighting change on the load balancer versus what top is reporting confirms this: the weighting on a real server can drastically jump up and down.

A better feedback agent script is:
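A sketch of the improved version — the same vmstat reading taken several times, with the overall average reported back:

```shell
#!/bin/bash
# Take NUM_CHECKS one-second readings of idle CPU and report
# the overall average, smoothing out short-lived spikes
NUM_CHECKS=3
TOTAL=0

for i in $(seq 1 "$NUM_CHECKS"); do
    # Second line of "vmstat 1 2" is the 1 second average; $15 is idle %
    IDLE=$(vmstat 1 2 | tail -1 | awk '{print $15}')
    TOTAL=$((TOTAL + IDLE))
done

echo "$((TOTAL / NUM_CHECKS))%"
```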

This will take a 3 second average reading and then report back the overall average, which prevents the real server weighting on the load balancer from fluctuating so much. Change the NUM_CHECKS variable to take more or fewer readings as required.