HAProxy option redispatch & backup servers


When building a fault-tolerant API, to reduce the number of single points of failure, we often use Postgres BDR, a multi-master database. This lets us provide two backend servers, each on a separate physical host and each with its own database. However, we'd prefer to keep most requests on one server as long as it's available, for efficiency of database caching.

The standard way to do this in HAProxy is by designating one backend server as “backup”:

backend api_backend
        option forwardfor except 127.0.0.1
        option redispatch
        option httpchk GET /check.txt HTTP/1.0
        server server1 10.0.0.5:8080 check inter 5000
        server server2 10.0.0.6:8080 check backup inter 5000

However, during releases, when we updated server2 and then server1, we noticed clients getting 503 errors, even though at least one backend was available at all times.

I created a client-side bash one-liner to test this issue: it sends a GET request to the API every 500 ms.

while true; do curl -s -o /dev/null -w '%{http_code}\n' http://host/api/v1/system/health; sleep 0.5; done

I then performed a rolling update across the servers: taking down server2, updating it, bringing it back up, and then repeating on server1. Sure enough, one or two requests always failed with an HTTP 503 error.
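To quantify the failures rather than eyeball them, the loop's output can be logged (one status code per line, e.g. with curl's -w '%{http_code}\n') and tallied afterwards. A minimal sketch, using sample data in place of a real run; the /tmp/status.log path is illustrative:

```shell
# Sample data standing in for the logged output of the curl loop
# (two 503s among the 200s, as seen during the rolling update):
printf '200\n200\n503\n200\n503\n200\n' > /tmp/status.log

# Tally responses by status code:
sort /tmp/status.log | uniq -c

# Count requests that did not return 200:
errors=$(grep -c -v '^200$' /tmp/status.log)
echo "errors: $errors"
```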

The “option redispatch” should take care of this by redispatching failed requests to the other server, but it turns out that it doesn’t work as expected when one backend is designated as backup. So I removed the backup option and retested.

backend api_backend
        option forwardfor except 127.0.0.1
        option redispatch
        option httpchk GET /check.txt HTTP/1.0
        server server1 10.0.0.5:8080 check inter 5000
        server server2 10.0.0.6:8080 check inter 5000
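Worth noting: "option redispatch" only acts on connection retries, and by default it is the last retry that gets redispatched to another server, so it depends on the "retries" setting (3 by default). A sketch of a defaults section this relies on; the timeout values here are illustrative, not our production settings:

```
defaults
        mode http
        retries 3
        # with option redispatch, the last retry may go to another server
        option redispatch
        timeout connect 5s
```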

Now the rolling updates worked smoothly without the client getting any 503s. But I don't want requests distributed evenly over both backends (for the caching reasons discussed above).

After some research, it turns out we can achieve the same effect as the backup option by using a stick-table and giving server1 a much greater weight, as follows:

backend api_backend
        option forwardfor except 127.0.0.1
        option redispatch
        option httpchk GET /check.txt HTTP/1.0
        stick match dst_port
        stick-table type integer size 100 expire 96h
        server server1 10.0.0.5:8080 check weight 100
        server server2 10.0.0.6:8080 check weight 1
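Since every request arrives on the same destination port, the stick-table holds a single entry, pinning all traffic to one server; the weights bias the initial choice heavily towards server1. To see which server the entry currently points at, HAProxy's runtime API can be queried. A sketch, assuming a stats socket is enabled in the global section (the path and permissions here are illustrative):

```
global
        stats socket /var/run/haproxy.sock mode 600 level admin
```

With socat installed, `echo "show table api_backend" | socat stdio /var/run/haproxy.sock` dumps the table entries, including the server each key is currently stuck to.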

Now we can do the rolling update of the backend servers and no 503s are generated. Yay!
