Let's Encrypt in 15 minutes

I was looking for a simple way to use Let’s Encrypt to enable https for a web site, and I found a Docker image, nmarus/docker-haproxy-certbot, which met my needs.

Remember, Let’s Encrypt represents a complete break from traditional certificate issuers in that:
(a) it’s free.
(b) certificate creation, installation and renewal are fully automated.

These are huge advantages over traditional certificate issuers, and anyone who deploys anything to the internet should take advantage of them immediately. Let’s Encrypt’s audacious goal is to improve the whole internet by getting everyone to use https.

Let’s Encrypt provides a “certbot” which handles the whole lifecycle of the certificates for you. There’s plenty of Let’s Encrypt documentation on how to install certbot into popular web servers (like Apache) or proxy servers (like HAProxy). However, what we are doing below is packaging an HAProxy instance with certbot installed as a Docker container, so you can simply put it in front of one or more web properties you want certificates for. That way, you don’t need to touch your existing server or proxy configuration to use Let’s Encrypt certificates.

In our case, our web site was already a Docker container, so I just had to modify the docker-compose file from:

version: "2"
services:
 web-site:
  image: web-site
  restart: always
  ports:
   - 80:80

to:

version: "2"
services:
 web-site:
  image: web-site
  restart: always

 haproxy-certbot:
  image: nmarus/haproxy-certbot
  container_name: haproxy-certbot
  restart: always
  ports:
   - 80:80
   - 443:443
  links:
   - web-site
  cap_add:
   - NET_ADMIN
  volumes:
   - ~/data/config:/config
   - ~/data/letsencrypt:/etc/letsencrypt
   - ~/data/certs:/usr/local/etc/haproxy/certs.d

Here, haproxy-certbot is our certificate-issuing, SSL-terminating proxy: it takes care of all certificate-related activities and then passes http requests on to our original service.

I first needed to create the three directories “~/data/config”, “~/data/letsencrypt” and “~/data/certs” on my docker host (which the haproxy-certbot container needs for persistent storage of its proxy configuration file and the certificates).
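
On the host, that was a one-liner (adjust the paths if you keep your data somewhere else):

mkdir -p ~/data/config ~/data/letsencrypt ~/data/certs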

I then took the example haproxy.cfg file provided (see https://hub.docker.com/r/nmarus/haproxy-certbot), copied it to the ~/data/config directory, and changed the backend “my_http_backend” in the haproxy.cfg file to:

backend my_http_backend
  mode http
  balance leastconn
  option tcp-check
  option log-health-checks
  server web-site web-site:80 check port 80

This means that the proxy now forwards requests to port 80 (http) on the host “web-site”, which resolves to the web-site container via the links entry in the compose file.
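
(For completeness, getting the example config into place is just a copy; the local filename below is whatever you saved the example as.)

cp haproxy.cfg ~/data/config/haproxy.cfg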

I brought both containers up with “docker-compose up -d”, and checked that my web-site was still available over http.
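
Concretely, that check was something like the following (the hostname placeholder is whatever DNS name points at your Docker host):

docker-compose up -d
curl -I http://<hostname>/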

At this point, our new proxy is passing http requests through to the backend. It is also ready to handle the HTTP challenge requests which Let’s Encrypt will use to verify that you control the domain before issuing the requested certificate.
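
Under the hood, the example haproxy.cfg does this by routing the ACME challenge path to the certbot process inside the container. The rules look roughly like this (the backend name and port here are illustrative; check the actual example config for the exact values):

frontend http
  bind *:80
  acl letsencrypt-request path_beg -i /.well-known/acme-challenge/
  use_backend certbot if letsencrypt-request
  default_backend my_http_backend

backend certbot
  server certbot 127.0.0.1:8080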

I then asked Let’s Encrypt to create a certificate with the command:

docker exec haproxy-certbot certbot-certonly --domain <hostname> --email <your email address>

This caused Let’s Encrypt to verify that I really controlled the domain by making a request to the hostname I provided and checking that it reached the container. Let’s Encrypt then issued the certificate, which was stored in the ~/data/certs directory I had mounted into the container.

I then refreshed the proxy with:

docker exec haproxy-certbot haproxy-refresh

And I was then immediately able to visit the website with https.

Let’s Encrypt certificates are short-lived (90 days), but the haproxy-certbot container automatically renews them for you before they expire.

On another server I had several different microservices that I wanted to expose over https, so I configured a second HAProxy instance as the backend rather than a web site. That way, one proxy instance handled SSL termination and certificate administration, while the other routed requests to the various microservices based on HAProxy host-header rules, as sketched below.
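
As a rough sketch (the hostnames, backend names and service addresses are made up for illustration), the routing rules in that second HAProxy instance looked something like:

frontend http-in
  bind *:80
  acl is_api hdr(host) -i api.example.com
  acl is_app hdr(host) -i app.example.com
  use_backend api_backend if is_api
  use_backend app_backend if is_app

backend api_backend
  mode http
  server api api-service:80 check

backend app_backend
  mode http
  server app app-service:80 check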

The great thing about this approach is that you don’t have to mess around with your existing http services or proxies; you simply put this container in front of them.
