
Docker workshop

Overview:

We recently carried out a short introductory Docker workshop, starting from scratch by installing Docker and taking it through to the point where a software stack, consisting of several linked containers, is deployed using docker-compose. Here’s what we covered.

Docker concepts:

Docker containers are easy-to-deploy units of software, analogous to the shipping containers used by the transport industry, which simplify the job of shipping diverse goods around the world.

Docker images are the templates for the containers. Every Docker container is started from an image. Images are defined by a Dockerfile which contains instructions for building the image, based on an existing image (for instance, a web-server image will be based on an OS image, simply adding a layer of web-server software to it).

A Docker registry is where images are stored. Every machine where Docker is installed has a local registry. Additionally, Docker provides a central registry (from which images are fetched if they aren’t available locally). And finally, you can host your own private registries.

Starting point:

A freshly installed Ubuntu 16.04 server, called docker-test.

Installing Docker:

We won’t use the standard Docker package available from the Ubuntu repositories because Docker is changing fast – instead we’ll add the Docker apt-repository and install the latest version from there.
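
On Ubuntu 16.04 that looks roughly like this (the repository URL and package names follow Docker’s standard install instructions – check them against the current documentation):

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
sudo groupadd docker                  # the group may already exist
sudo usermod -aG docker $USER         # log out and back in for this to take effect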

The last two commands ensure that we can run Docker commands without sudo.

Check that Docker is correctly installed with:
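
The usual smoke test is to run the hello-world image:

docker run hello-world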

If everything is OK, Docker will download the hello-world image from the central Docker registry and run it.

Building a simple website container:

The purpose of Docker is to allow you to build and deploy something like a website without having to worry about details like which OS is to be used, which web-server, etc. What we want is an image called, say, “website” which takes some files we give it and publishes them via a web-server.

First create a directory “website” where we will work on creating our container.

Create a text file called “Dockerfile” containing the following:
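
A minimal sketch (the package name and paths are the Alpine defaults) might look like this:

# start from the compact Alpine Linux base image
FROM alpine
# install the lighttpd web-server
RUN apk add --no-cache lighttpd
# copy our content into lighttpd's document root
COPY index.html /var/www/localhost/htdocs/
# run the web-server in the foreground
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]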

This defines a new image, based on the existing image “alpine”, a compact Linux OS image. A web-server, “lighttpd”, is installed. Then our website content (index.html) is copied to the web-server content folder, and finally the web-server is started.

Next create a simple index.html as content for the website:
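
Anything will do, for example:

<html>
  <body>
    <h1>Hello World!</h1>
  </body>
</html>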

Now build an image from the Dockerfile and call it “website”.
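
From within the website directory:

docker build -t website .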

The “docker images” command will show us the newly built image.

We can now run a container based on our new image as follows:
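
For example (the container name is arbitrary):

docker run -d --name website -p 8088:80 website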

This will bring up the website container based on our “website” image and publish the web content on port 8088 (the -p parameter maps the host port 8088 to the standard web-server port 80 within the container).

The “docker ps” command will show us the running container:

Point a browser at http://docker-test:8088 and you’ll see our simple “Hello World!” web-page, served by our new container.

Basic Docker commands:
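
Some of the commands you’ll use most often:

docker images                      # list locally stored images
docker ps                          # list running containers (-a includes stopped ones)
docker logs <container>            # show a container's output
docker exec -it <container> sh     # open a shell inside a running container
docker stop <container>            # stop a running container
docker rm <container>              # remove a stopped container
docker rmi <image>                 # remove an image from the local registry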

Persistent data (stateful containers):

When a container is rebuilt from an image, it loses any changes which were made to it since it was last built. In order to preserve state (for instance, a database container will usually need to preserve its database contents, even if the container is rebuilt), this state must be maintained by the host and provided to the container by means of “volumes”.

To illustrate this, we’ll create a database container using the postgresql database server. First we create a data directory on the docker host, which will maintain the persistent state of the database.

Then we create a database container based on a standard “postgres” image from the main Docker registry. We pass the -v parameter instructing docker to map the host directory (~/data)  to the container path /var/lib/postgresql/data (where postgresql stores its database contents).
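
Something like this (the container name “database” and the password match what we use below):

docker run -d --name database -p 5432:5432 -v ~/data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=postgres postgres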

If you now look in the host directory ~/data, you will see that postgresql has created a set of database files there. Note: you’ll need to use sudo to list the files because postgresql has modified the file permissions.

Now connect to the new database server (docker-test:5432, user: postgres, password: postgres) with a postgres client (e.g. pgadmin) and create a database called “test”.

To demonstrate that the host is maintaining state for the container, we’ll now recreate the container and image from scratch.
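
That amounts to something like:

docker stop database
docker rm database
docker rmi postgres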

Then we’ll run the container again (which will again fetch the image from the main Docker registry because we removed it from the local Docker registry with the “docker rmi” command).

Reconnect with the postgres client – our new “test” database is still there even though we rebuilt the container (because we specified a persistent volume).

docker-compose – deploying a whole stack

The philosophy of a container is that it’s supposed to do just one thing well – this reduces complexity and increases reusability. So you shouldn’t use a single container to deploy several components. For instance, if you have a web application stack which consists of a database, a REST server, a client web application and a proxy server, then this stack should be deployed as four containers.

For this workshop, we’ll deploy a database and a REST server as a stack, using docker-compose to deploy the stack in a single operation.

We first need to install docker-compose (it’s an add-on tool for Docker).
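
One way is to download the binary from the GitHub releases page (the version number here is only an example – use the current release):

sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose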

Now we’ll create a small REST server in python and deploy it in a container.

Enter the following python code into test.py:
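
A minimal sketch using Flask (the workshop could equally have used another small web framework):

from flask import Flask

app = Flask(__name__)

# respond to GET /hello with a fixed text string
@app.route('/hello')
def hello():
    return "Hello from the rest server!"

if __name__ == '__main__':
    # listen on all interfaces so the server is reachable from outside the container
    app.run(host='0.0.0.0', port=5000)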

Our sample REST server will respond to a GET request to /hello with “Hello from the rest server!”

Next we’ll create a Dockerfile for the REST server image.

And enter and save the following content into Dockerfile:
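
For example (the base image is an assumption):

# official Python base image
FROM python:2.7
# install the Flask web framework
RUN pip install flask
# copy the REST server code into the image
COPY test.py /
# the server listens on port 5000
EXPOSE 5000
# start the REST server
CMD ["python", "/test.py"]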

Now we can build and run the container.
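
Naming both the image and the container “rest” (we’ll refer to them by that name later):

docker build -t rest .
docker run -d --name rest -p 5000:5000 rest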

Point a browser at http://docker-test:5000/hello and the REST server should return “Hello from the rest server!”

So now we have a REST server running as a container. The next step is to hook up the REST server to the database, so that instead of always returning a fixed text string, it can do the more real-world task of returning the result of a query against the database.

Using the postgres client, connect to the “test” database and create a table “test” with a single int column “test”. Insert a few values which our REST server will sum.
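
For example (any values will do – these two sum to the 20 we expect below):

CREATE TABLE test (test int);
INSERT INTO test VALUES (5), (15);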

Now we’ll update the code of our REST server to sum the values from the test table.
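
Sketched with psycopg2 (the connection details match the database container we started earlier):

from flask import Flask
import psycopg2

app = Flask(__name__)

@app.route('/hello')
def hello():
    # connect to the postgres container published on the docker host
    conn = psycopg2.connect(host='docker-test', dbname='test',
                            user='postgres', password='postgres')
    cur = conn.cursor()
    # sum the values in the test table and return the result as text
    cur.execute('SELECT SUM(test) FROM test')
    result = cur.fetchone()[0]
    conn.close()
    return str(result)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)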

We’ll need to update the REST server image to install the psycopg2 database library.
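
In the Dockerfile the install line becomes something like this (on a slim or Alpine base image you would also need the postgres client headers and a compiler to build psycopg2):

RUN pip install flask psycopg2

Then rebuild the image and recreate the container:

docker build -t rest .
docker rm -f rest
docker run -d --name rest -p 5000:5000 rest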

Point a browser at http://docker-test:5000/hello and the REST server should now return “20” (the sum of the two values in the database).

So we now have two containers, one of which uses the other.

However, we are still starting both containers separately and in a specific order. We also currently have ports 5000 (REST server HTTP) and 5432 (postgres) open, and we have a hard-coded reference to “docker-test” in the REST server code. We could of course allow the database server to be passed in as a command-line argument or as an environment variable, but docker-compose provides a better way: it links the containers, so that the database is started first, a private network is created to link the two containers, and the address of the database server on that network is passed to the REST container, which can then communicate privately with its database container.

Let’s update our REST server code to reference the database as “database” instead of the host name “docker-test”. We’ll then use docker-compose to ensure that the host name “database” points to the database container.
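
The only change is the host name in the connection call:

conn = psycopg2.connect(host='database', dbname='test',
                        user='postgres', password='postgres')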

Now we’ll create a docker-compose.yml file to link the REST server to the database server, by means of the host-name “database” (defined by the name of the service in the docker-compose.yml file). We no longer need to expose the postgresql port (5432), since docker-compose will provide a private network between the containers, allowing the REST server to access port 5432 inside the database container without it being exposed to the host.
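
A sketch of the file (compose file format 2; the password is carried over from above):

version: '2'
services:
  database:
    image: postgres
    volumes:
      - ~/data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgres
    # note: no ports entry – 5432 stays on the private network
  rest:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - database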

The depends_on instruction ensures that the REST server container is started only after the database server container has been started.

Remove the current “rest” and “database” containers, rebuild the REST server image (with the updated test.py) and bring them back up as a stack with docker-compose.
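
That boils down to something like:

docker rm -f rest database
docker-compose build
docker-compose up -d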

Check it again by browsing to http://docker-test:5000/hello (the REST server should still return “20”).

If you run the “docker ps” command, you’ll see that the containers are no longer called “rest” and “database”, but that docker-compose has constructed names based on the service names in the docker-compose.yml file.

You’ll also notice that the database container is no longer publishing port 5432 – it’s only available on the private network which docker-compose creates between the containers. This means that only port 5000 – the port published by the REST server – is now exposed to the outside world.

Note that we’re doing everything (building the image and deploying the container) on a single machine – in the real world, the Docker images are built during the development cycle on a developer workstation (or a continuous integration server like Jenkins) and pushed to a remote registry (like Docker Hub or a private registry). During deployment, the images are pulled from the remote registry and the containers started on the production server.

Docker swarm

Docker swarm is a very interesting extension to Docker which allows you to deploy your software stacks across a cluster of worker machines. It’s easy to set up – one Docker host creates the swarm and becomes the swarm manager, and other hosts join the swarm. Using an extension to the docker-compose.yml format, software stacks can be deployed across the cluster with replicas for fault tolerance and load balancing. We didn’t have time to cover Docker swarm in this workshop, but we’ll cover it soon.

Contact us!

We’ve already gathered a lot of experience using Docker to help our customers efficiently deploy their software stacks. If you’re interested in having us help your organisation get up and running with containers, get in touch with us at info@armstrongconsulting.com.

eCobertura makes code coverage easy

We used to wait for Jenkins to produce a Cobertura report for us, and of course nobody read it until delivery time arrived and we realised we hadn’t met the SLAs. Enter the eCobertura Eclipse plugin, which provides visual code coverage directly within the Eclipse editor. Just run your unit tests and you immediately see the source lines in green and red. Wow! How did we ever do without this?

Unable to start VMs on ESX5i

We came across a weird problem on ESX5i. Occasionally, one of our hosts would suddenly be unable to start any VMs – the running VMs were fine, but any attempt to start new ones would fail with an “Unknown internal error”. The first time this happened, I restarted the management agents and finally suspended all VMs and rebooted the server, after which everything was OK again for a few months and then the problem occurred again.

This time I decided to figure out what was going on. The log files in /var/log contain a lot of useful information and I was able to see that the problem was actually caused by ESX being out of disk space on the device used for /var/log. What was happening was that the driver for the Adaptec 5405z RAID controller in the machine was writing a huge log file which was not being rotated, so after a few months it consumed all the disk space.

The workaround was to add a line to the crontab (note: you also have to add a line to /etc/rc.local to re-add the line to the crontab, otherwise it’ll be lost on the next reboot) which deletes the Adaptec log file periodically:

/etc/rc.local

echo "0 0 * * 0 rm /var/log/arcconf.log" >> /var/spool/cron/crontabs/root

/var/spool/cron/crontabs/root

0 0 * * 0 rm /var/log/arcconf.log

Seems strange that such a robust product as ESX5 doesn’t protect itself against this situation.

Anyway, the moral of this post is that if ESXi is producing any error messages you can’t interpret, have a look in /var/log.

Simple form validation in Wicket

I used to add custom FormValidators to forms where multi-field validation was required (like checking if two copies of an entered password match).

However, this approach has some problems – I had to list the dependent components, and if some of them were hidden at validation time, the validator didn’t work.

Reading around, I saw the recommendation to use the onValidate() method of the form to do validation and this seems a lot more straightforward.

Here’s a sample wicket form with some validation logic to check if the current password matches and if the two copies of the new password are the same.

// currentPasswordField, newPasswordField and confirmNewPasswordField are
// assumed to be PasswordTextField components added to the form elsewhere.
final Form<User> form = new Form<User>("form", new CompoundPropertyModel<User>(user)) {
	private static final long serialVersionUID = 1L;

	@Override
	protected void onValidate() {
		super.onValidate();

		User user = getModelObject();
		String current_password_entered = currentPasswordField.getConvertedInput();
		String new_password_entered = newPasswordField.getConvertedInput();
		String confirm_new_password_entered = confirmNewPasswordField.getConvertedInput();

		// check the entered current password against the stored (encrypted) password
		if (current_password_entered != null
				&& !User.encryptPassword(current_password_entered).equals(user.getPassword()))
			error(getString("current_password_not_correct"));

		// the new password must differ from the current one and both copies must match
		if (new_password_entered != null && !new_password_entered.isEmpty()) {
			if (new_password_entered.equals(current_password_entered)) {
				error(getString("new_password_same_as_current_password"));
			} else {
				if (!new_password_entered.equals(confirm_new_password_entered))
					error(getString("new_passwords_dont_match"));
			}
		}
	}
};
add(form);

Fault-tolerant ESX datastores for free

Preamble: NFS works great for ESX datastores. It’s a whole lot easier to manage than iSCSI, and although iSCSI is generally considered to perform better, we find that the flexibility of using NFS more than makes up for the lost performance.

If you have sufficient budget, there are great solutions from Dell, HP etc. where you can get fault-tolerant ESX installations already set up in a rack, which not only provide data-store fault tolerance, but also VM failover and so on. But there are a lot of people and companies out there who have several servers running the free ESX5i hypervisor and who would still like to have some fault-tolerance. This article is for those people.

What’s not so great about shared storage (like NFS or iSCSI) is that you generally have a lot of extra boxes around and complicated network configuration. If you want fault tolerance, you’ll need two ESX servers, two NFS servers and two switches (and a bunch of cables to connect them all). Furthermore, configuring those (often proprietary) NFS boxes adds a lot of complexity. I recently configured a couple of Lefthand boxes for a customer and it was not trivial to set up.

So I figured there must be an easier way – after all, ESX is an amazing platform for reducing the number of boxes in the rack, so why would I want to start adding boxes again if I don’t have to?

The first important point is that ESX provides the VMXNET3 10Gb virtual ethernet adapter, so that even if your ESX server does not have 10Gb network cards, the VMs running on the server can communicate with each other at 10Gb and, more importantly for our purposes, ESX itself can communicate with its VMs at 10Gb speeds. So if we run an NFS server as a VM on the ESX server, and use it as an NFS datastore for the ESX server, then the server will see it as a 10Gbps NFS server.

OK, but what about the fault-tolerance? For that, we need to replicate the NFS server’s disk to another server. So, if we don’t have a real 10Gb network, that’ll have to be across 1Gb. Does that slow things down? Apparently not much – we’re using DRBD asynchronously which causes a minimal performance hit.

So what we do is to clone the NFS server VM to a second ESX server and set up DRBD replication between the two VMs. We’re using Ubuntu 11.10 server (you’ll need a reasonably recent Linux distribution to get the 10Gbps support with the VMXNET3 virtual network adapter).
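
The DRBD side is a standard two-node resource; a rough sketch (host names, device paths and addresses are made up for illustration) looks like this:

resource nfsdata {
  protocol A;                  # asynchronous replication, as described above
  device    /dev/drbd0;
  disk      /dev/sdb1;         # the virtual disk holding the NFS export
  meta-disk internal;
  on nfs-vm1 {
    address 192.168.1.11:7788;
  }
  on nfs-vm2 {
    address 192.168.1.12:7788;
  }
}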

Because you only get the 10Gbps datastore access to the NFS server when the NFS VM is hosted on the local ESX server, this is not really shared storage (or at least it’s shared only at 1Gbps with other ESX servers). However, for our fault-tolerance purposes that doesn’t matter much. In fact, from a scalability point of view, it makes sense to provide each ESX server with a locally-running NFS datastore, accessed at 10Gbps and replicated to another ESX server. This scheme also has the advantage that each ESX server is autonomous – ESX servers with remote datastores always make me a bit nervous – any problems on the network and the VMs are likely to freak out. This way, the ESX server is completely self-contained – it only needs another ESX server for fault-tolerance. Even if the network fails, the local NFS datastore will be unaffected (except that fault-tolerance is temporarily suspended) and when the network is available again, the DRBD secondary will simply catch up automatically, providing fault-tolerance again.

ESX Server 2 can do the same thing with another pair of NFS servers (a local one for fast access and a remote one on ESX Server 1 for fault tolerance). This idea can be scaled indefinitely – each ESX server having its own local NFS VM running its datastore and replicating to another ESX server. The major advantage of this approach is that it’s more scalable than a single fault-tolerant pair of shared NFS servers and you get 10Gbps access for free. Conversely, the price you pay for this is that you have a separate NFS server for each ESX server, which makes administration more complex than for a single shared datastore (but hey, you can’t have everything, at least not for free).

You could additionally configure the NFS servers to fail over a shared IP address to the secondary – we haven’t bothered to do this since, if the primary NFS server fails, it’s likely that the whole ESX server has failed. If that’s the case, we need to promote the DRBD secondary to primary manually, start the NFS server and register the VMs.

And how does it perform? Pretty well actually. The benchmarks below are made on an ESX5 host with a single i7 930 CPU, 24GB RAM, an Adaptec 5405z controller and 4x SATAII disks in RAID5.

Disk performance of a VM running directly on the ESX host (i.e. on the local datastore).

hdparm -tT /dev/sda

/dev/sda:
Timing cached reads:   12396 MB in  2.00 seconds = 6201.81 MB/sec
Timing buffered disk reads: 398 MB in  3.00 seconds = 132.59 MB/sec

dd if=/dev/zero of=ddfile bs=8k count=20000
20000+0 records in
20000+0 records out
163840000 bytes (164 MB) copied, 0.274105 s, 598 MB/s

And now the disk performance of a VM running on our fault-tolerant NFS datastore:

hdparm -tT /dev/sda

/dev/sda:
Timing cached reads:   12242 MB in  2.00 seconds = 6124.43 MB/sec
Timing buffered disk reads: 258 MB in  3.00 seconds =  85.89 MB/sec

dd if=/dev/zero of=ddfile bs=8k count=20000
20000+0 records in
20000+0 records out
163840000 bytes (164 MB) copied, 0.707547 s, 232 MB/s

As you can see from the above, the VM running on the fault-tolerant NFS datastore is not as fast as the VM running on the local datastore, but it’s sufficiently fast for the subset of your VMs which require more fault-tolerance than that provided by daily backups.

Speaking of backup, we’re using ghettoVCB to back up 150 VMs (>1TB) every night via NFS to a Netgear ReadyNAS Ultra 6 device. We then rsync them to a remote data-center for offsite, versioned storage.

Using mocking to keep development agile

We had built up a mocked service layer for our Wicket Tester unit tests (with Mockito). This enables our unit tests to start any page or panel in our app with reasonable test data. However, when it came to development, we were still using the real service layer – which meant that as our application got bigger, it took longer and longer to start and navigate to the page under development.

Why not use the mocked services during development too? Good idea, we thought. We added development entry points to our unit tests which allow us to start a minimal Jetty container to run the pages under development in a mocked service sandbox.
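
As a rough sketch (class names and the port are made up; we assume embedded Jetty 9 and Wicket’s WicketFilter), such an entry point looks something like this:

import java.util.EnumSet;
import javax.servlet.DispatcherType;
import org.apache.wicket.protocol.http.WicketFilter;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class DevStart {
	public static void main(String[] args) throws Exception {
		Server server = new Server(8080);
		ServletContextHandler context = new ServletContextHandler();
		context.setContextPath("/");

		// MockedApplication is a hypothetical WebApplication subclass which
		// installs the same Mockito service mocks used by the unit tests
		FilterHolder wicket = new FilterHolder(WicketFilter.class);
		wicket.setInitParameter("applicationClassName", "com.example.MockedApplication");
		wicket.setInitParameter(WicketFilter.FILTER_MAPPING_PARAM, "/*");
		context.addFilter(wicket, "/*", EnumSet.of(DispatcherType.REQUEST));
		// serve static resources for anything the Wicket filter doesn't handle
		context.addServlet(DefaultServlet.class, "/*");

		server.setHandler(context);
		server.start();
		server.join();
	}
}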

Turns out this works great – start time goes down from 10 seconds to 1 second and you have all the power of the mock scenarios already developed for unit tests during development.

This not only saves time during development, but encourages better unit tests (since you build better mock scenarios during development and can then reuse them for unit tests).

It also finally allows us to decouple application development from service development, meaning that the application developers can start immediately with application development and contribute to the definition of the service layers from a client perspective by creating mocks as needed.

Using Maven for production deployment

We use Maven to manage dependencies during development. This entails adding a pom.xml file to our Eclipse project which defines the jars on which the application depends. Maven then takes care of fetching the right version of the jars from a number of repositories (the central Maven repository, vendor-specific repositories, our own repository).

This works pretty well and it’s hard to imagine developing complex projects without this capability. However, when it comes to ensuring that an application is delivered to the production environment with all its dependencies, you’re pretty much on your own. You have to build either a war file or a jar-with-dependencies – both of which can be very tricky and lead to problems occurring in production which you never saw during development.

Additionally, our applications tend to have a lot of dependencies and the war files get huge.

So, we thought, why not just use Maven on the production servers to fetch applications and their dependencies?

To do this we maintain a pom.xml file on the production server with the application listed as a dependency. We use the maven goals “versions:use-latest-releases” and “versions:commit” to update the pom file automatically to the latest release version. We then use the “dependency:build-classpath” goal to build a class path from the repository and finally run the application.
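
On the production server that boils down to something like the following (the main class is a placeholder):

# update the application dependency in pom.xml to the latest released version
mvn versions:use-latest-releases versions:commit
# resolve the jars from the repository and write the classpath to a file
mvn dependency:build-classpath -Dmdep.outputFile=classpath.txt
# run the application using the resolved classpath
java -cp "$(cat classpath.txt)" com.example.MyApp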

The Palchinsky Principles

My holiday reading this year was “Adapt” by Tim Harford. One of the most interesting parts was about Peter Palchinsky, a great Russian engineer who was an advisor to both the Tsar and Stalin. Although his uncompromising honesty got him exiled to Siberia by the Tsar, then pardoned, and finally murdered by Stalin’s secret police, he nevertheless had time to formulate three principles for innovation:

  1. Try lots of things, expecting many to fail.
  2. Make sure the failures are survivable.
  3. Learn from the failures.

As Tim Harford points out, this is roughly how evolution works in nature and it seems applicable to software development. Since customers don’t generally appreciate failures on their projects, this trial and error cycle needs to be carried out outside of production projects – in the “20%” projects we do on our own time.

monit – trust is good, control better

monit is a very cool system for keeping your Linux servers working – highly recommended. With a few lines of configuration, you can have it check any aspect of your system and services and, when problems occur, have it alert you or take remedial action (like restarting services, cleaning up log or temporary files, etc.).

For example, we had the problem that we are running clustered web apps (using Terracotta) in VMware VMs. The cluster nodes were being suspended regularly for backups and this caused them to be evicted from the cluster. A simple solution was to use monit to monitor the apps (via the same health check port we were using for the availability check for the HAProxy load balancer) and to restart the services if the health check fails (as happens after the VM is unsuspended after the backup).

Here’s the line from the monitrc file we use to monitor the health check port (9000 in this case):

check host webapp with address localhost
if failed port 9000 protocol http with timeout 2 seconds then exec "/webapp/restart.sh"

The web app in this case is a Java Wicket app running under Jetty, which also runs a health check on port 9000. An HTTP query to this port causes the app to check its connection to its database and its cluster-node status. If either check fails, it returns an HTTP error status. This health check is used by HAProxy (which takes failed nodes out of the load-balanced pool) and by monit (which restarts the services). This combination provides us with 100% uptime for these apps.

Java logging in development and production

Logging is tricky. Even some major open-source projects don’t do it correctly, so if you use their libraries, you end up with log files you didn’t ask for cluttering your machine.

Current best practice is to use a logging facade like commons-logging or slf4j to avoid these kinds of problems by allowing libraries to conform to whatever logging strategy the application which uses the library is using. This means that if your app logs to myapp.log, then the library using slf4j will also log to myapp.log.

Here’s how we use slf4j in our projects:

Our libraries use slf4j-api – here’s the maven dependency you’ll need:

<dependency>
	<groupId>org.slf4j</groupId>
	<artifactId>slf4j-api</artifactId>
	<version>1.4.2</version>
</dependency>

Our applications use slf4j-log4j12 – here’s the maven dependency you’ll need:

<dependency>
	<groupId>org.slf4j</groupId>
	<artifactId>slf4j-log4j12</artifactId>
	<version>1.4.2</version>
</dependency>

The code to log looks like this (it’s the same in libraries and applications):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
...
public class MyClass {
	static Logger log = LoggerFactory.getLogger(MyClass.class);

	public MyClass() {
		log.info("some logging info");
	}
}

We want to have simple console logging during development. To do this we use a simple log4j.properties file containing only a console appender as shown below.

log4j.debug=false
log4j.rootLogger=INFO, Stdout
log4j.appender.Stdout=org.apache.log4j.ConsoleAppender
log4j.appender.Stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.Stdout.layout.ConversionPattern=%-5p - %-26.26c{1} - %m%n

We put this file in a Settings folder in our home directory.

During development we need to get log4j to load this file so that we get console logging. We do this by defining a system property “log4j.configuration” in the Eclipse Preferences/Installed JREs/Edit/Default VM Arguments (that way it applies to all development projects):

-Dlog4j.configuration=file:///Users/roger/Settings/log4j.properties

In production, we do the same thing, but this time we pass a log4j configuration with a rolling file appender, as shown below:

log4j.debug=false
log4j.rootLogger=INFO, R
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=myapp.log
log4j.appender.R.MaxFileSize=5000KB
log4j.appender.R.MaxBackupIndex=10
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d %p %t %c - %m%n

So the startup in production looks something like this:

java -Dlog4j.configuration=file:///home/administrator/log4j.properties -jar myapp.jar