Overview:
We recently carried out a short introductory Docker workshop, starting from scratch with a Docker installation and taking it through to the point where a software stack consisting of several linked containers is deployed using docker-compose. Here's what we covered.
Docker concepts:
Docker containers are easy-to-deploy units of software, analogous to the shipping containers used by the transport industry, which simplify the job of shipping diverse goods around the world.
Docker images are the templates for the containers. Every Docker container is started from an image. Images are defined by a Dockerfile, which contains instructions for building the image based on an existing image (for instance, a web-server image will be based on an OS image, simply adding a layer of web-server software to it).
A Docker registry is where images are stored. Every machine where Docker is installed keeps a local store (cache) of images. Additionally, Docker provides a central public registry, Docker Hub, from which images are fetched if they aren't available locally. And finally, you can host your own private registries.
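Once Docker is installed (see below), you can see this in action: "docker pull" fetches an image from Docker Hub into the local store, and "docker images" lists what is stored locally (the alpine image here is just an arbitrary, small example):

$ docker pull alpine
$ docker images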
Starting point:
A freshly installed Ubuntu 16.04 server, called docker-test.
Installing Docker:
We won’t use the standard Docker package available from the Ubuntu repositories because Docker is changing fast – instead we’ll add the Docker apt-repository and install the latest version from there.
$ ssh administrator@docker-test
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-get install -y docker-ce
$ sudo usermod -aG docker ${USER}
$ su - ${USER}
The last two commands ensure that we can run Docker commands without sudo.
Check that Docker is correctly installed with:
$ docker run hello-world
If everything is OK, Docker will download the hello-world image from the central Docker registry and run it.
Building a simple website container:
The purpose of Docker is to allow you to build and deploy something like a website without having to worry about details such as which OS to use or which web-server to install. What we want is an image called, say, "website" which takes some files we give it and publishes them via a web-server.
First create a directory “website” where we will work on creating our container.
$ mkdir website
$ cd website
Create a text file called “Dockerfile” containing the following:
FROM alpine
RUN apk update \
    && apk add lighttpd \
    && rm -rf /var/cache/apk/*
COPY ./index.html /var/www/localhost/htdocs
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]
This defines a new image, based on the existing image "alpine", a compact Linux OS image. A web-server, "lighttpd", is installed. Then our website content (index.html) is copied to the web-server content folder, and finally the web-server is started.
Next create a simple index.html as content for the website:
<html>
  <body>
    Hello World!
  </body>
</html>
Now build an image from the Dockerfile and call it “website”.
$ docker build . -t website
The “docker images” command will show us the newly built image.
$ docker images
REPOSITORY    TAG     IMAGE ID      CREATED         SIZE
website       latest  23853e62e631  34 seconds ago  11.4MB
hello-world   latest  725dcfab7d63  3 days ago      1.84kB
alpine        latest  053cde6e8953  3 days ago      3.97MB
We can now run a container based on our new image as follows:
$ docker run --name website -d -p 8088:80 website
This will bring up the website container based on our "website" image, publishing the web content on port 8088 (the -p parameter maps the host port 8088 to the standard web-server port 80 within the container).
The “docker ps” command will show us the running container:
$ docker ps
CONTAINER ID  IMAGE    COMMAND                 CREATED         STATUS         PORTS                 NAMES
9f5612aa7b81  website  "lighttpd -D -f /e..."  29 seconds ago  Up 29 seconds  0.0.0.0:8088->80/tcp  website
Point a browser at http://docker-test:8088 and you’ll see our simple “Hello World!” web-page, served by our new container.
Basic Docker commands:
# list running containers
$ docker ps

# list all containers (including stopped ones)
$ docker ps -a

# list images (in the local image store)
$ docker images

# create a container from an image
$ docker run -d --name <container name> <image>

# view logs (default stdout) of a container
$ docker logs <container name>

# provides shell access within a container
$ docker exec -it <container name> /bin/sh

# start a container
$ docker start <container name>

# stop a container
$ docker stop <container name>

# get a shell in a temporary container (to try something out)
# in this case, an ubuntu image.
# --rm will destroy the container again as soon as you exit.
$ docker run --rm -it ubuntu bash
Persistent data (stateful containers):
When a container is rebuilt from an image, it loses any changes which were made to it since it was last built. In order to preserve state (for instance, a database container will usually need to preserve its database contents, even if the container is rebuilt), this state must be maintained by the host and provided to the container by means of “volumes”.
To illustrate this, we’ll create a database container using the postgresql database server. First we create a data directory on the docker host, which will maintain the persistent state of the database.
$ cd
$ mkdir data
Then we create a database container based on the standard "postgres" image from the main Docker registry. The -v parameter instructs Docker to map the host directory (~/data) to the container path /var/lib/postgresql/data (where postgresql stores its database contents), and the POSTGRES_PASSWORD environment variable sets the password of the default "postgres" user (newer versions of the postgres image require it on first start).
$ docker run -d --name database -p 5432:5432 -e POSTGRES_PASSWORD=postgres -v ~/data:/var/lib/postgresql/data postgres
If you now look in the host directory ~/data, you will see that postgresql has created a set of database files there. Note: you’ll need to use sudo to list the files because postgresql has modified the file permissions.
$ sudo ls data
Now connect to the new database server (docker-test:5432, user: postgres, password: postgres) with a postgres client (e.g. pgadmin) and create a database called “test”.
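If you prefer the command line, the same thing can be done with the psql client (a minimal sketch, assuming psql is installed on your workstation; it will prompt for the password):

$ psql -h docker-test -p 5432 -U postgres -c "CREATE DATABASE test;"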
To demonstrate that the host is maintaining state for the container, we’ll now recreate the container and image from scratch.
$ docker rm -f database
$ docker rmi postgres
Then we'll run the container again (which will again fetch the image from the main Docker registry, because we removed it from the local image store with the "docker rmi" command).
$ docker run -d --name database -p 5432:5432 -e POSTGRES_PASSWORD=postgres -v ~/data:/var/lib/postgresql/data postgres
Reconnect with the postgres client – our new "test" database is still there, even though we rebuilt the container (because we specified a persistent volume).
docker-compose – deploying a whole stack:
The philosophy of a container is that it's supposed to do just one thing well – this reduces complexity and increases reusability. So you shouldn't use a single container to deploy several components. For instance, if you have a web application stack which consists of a database, a REST server, a client web application and a proxy server, then this stack should be deployed as four containers.
For this workshop, we’ll deploy a database and a REST server as a stack, using docker-compose to deploy the stack in a single operation.
We first need to install docker-compose (it's an add-on tool for Docker).
$ sudo apt-get install docker-compose
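You can check that the tool is available by printing the installed version:

$ docker-compose --version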
Now we’ll create a small REST server in python and deploy it in a container.
$ cd
$ mkdir rest
$ cd rest
$ nano test.py
Enter the following python code into test.py:
from flask import Flask

app = Flask(__name__)

@app.route("/hello")
def hello():
    return "Hello from the rest server!"

app.run(debug=True, host='0.0.0.0')
Our sample REST server will respond to a GET request to /hello with “Hello from the rest server!”
Next we’ll create a Dockerfile for the REST server image
$ nano Dockerfile
And enter and save the following content into Dockerfile:
FROM python
RUN pip install flask
COPY test.py .
CMD python test.py
Now we can build and run the container.
# build the image (and call it "rest")
$ docker build -t rest .

# run the container from the image (call the container "rest" as well)
$ docker run -d --name rest -p 5000:5000 rest
Point a browser at http://docker-test:5000/hello and the REST server should return “Hello from the rest server!”
So now we have a REST server running as a container. The next step is to hook the REST server up to the database so that, instead of always returning a fixed text string, it performs the more realistic task of returning the result of a query against the database.
Using the postgres client, connect to the "test" database we created earlier and create a table "test" with a single integer column "test". Then insert a few values which our REST server will sum:
create table test (test int);
insert into test values(5);
insert into test values(15);
select sum(test) from test;
Now we’ll update the code of our REST server to sum the values from the test table.
$ nano test.py
import psycopg2
from flask import Flask

app = Flask(__name__)

@app.route("/hello")
def hello():
    conn = psycopg2.connect("host='docker-test' dbname='test' user='postgres' password='postgres'")
    cursor = conn.cursor()
    cursor.execute("SELECT sum(test) FROM test")
    sum = cursor.fetchone()[0]
    conn.close()
    return str(sum)

app.run(debug=True, host="0.0.0.0")
We'll need to update the REST server image to install the psycopg2 database library:
$ nano Dockerfile
FROM python
RUN pip install flask psycopg2
COPY test.py .
CMD python test.py
# remove the container based on the current image, then rebuild the image
$ docker rm -f rest
$ docker build . -t rest
$ docker run -d --name rest -p 5000:5000 rest
Point a browser at http://docker-test:5000/hello and the REST server should now return “20” (the sum of the two values in the database).
So we now have two containers, one of which uses the other.
However, we are still starting both containers separately and in a specific order. We also currently have two ports open – 5000 (REST server HTTP) and 5432 (postgres) – and we have a hard-coded reference to "docker-test" in the REST server code. We could of course pass the database host in as a command-line argument or an environment variable, but docker-compose provides a better way: it links the containers, so that the database is started first, a private network is created between the two containers, and the REST container can reach the database container by name over that network.
Let's update our REST server code to reference the database as "database" instead of the host name "docker-test". We'll then use docker-compose to ensure that the host name "database" points to the database container.
$ nano test.py
import psycopg2
from flask import Flask

app = Flask(__name__)

@app.route("/hello")
def hello():
    conn = psycopg2.connect("host='database' dbname='test' user='postgres' password='postgres'")
    cursor = conn.cursor()
    cursor.execute("SELECT sum(test) FROM test")
    sum = cursor.fetchone()[0]
    conn.close()
    return str(sum)

app.run(debug=True, host="0.0.0.0")
Now we'll create a docker-compose.yml file which links the REST server to the database server by means of the host name "database" (defined by the name of the service in the docker-compose.yml file). We no longer need to expose the postgresql port (5432) on the host, since docker-compose provides a private network between the containers, allowing the REST server to access port 5432 inside the database container without it being exposed to the host.
The depends_on instruction ensures that the REST server container is started only after the database server container has been started.
$ nano docker-compose.yml
version: '2'
services:
  database:
    image: postgres
    restart: always
    volumes:
      - ~/data:/var/lib/postgresql/data
  rest:
    image: rest
    restart: always
    ports:
      - 5000:5000
    links:
      - database
    depends_on:
      - database
Remove the current "rest" and "database" containers, rebuild the rest server image (with the updated test.py) and bring them back up as a stack with docker-compose:
$ docker rm -f rest
$ docker rm -f database
$ docker build . -t rest
$ docker-compose up -d
Check it again by browsing to http://docker-test:5000/hello (the REST server should still return “20”).
If you run the “docker ps” command, you’ll see that the containers are no longer called “rest” and “database”, but that docker-compose has constructed names based on the service names in the docker-compose.yml file.
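A couple of related commands are useful at this point (run them in the directory containing docker-compose.yml):

# list the containers belonging to the stack
$ docker-compose ps

# stop and remove the whole stack again
$ docker-compose down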
You'll also notice that the database container is no longer publishing port 5432 – it's only available on the private network which docker-compose creates between the containers. This means that only port 5000 – the port published by the REST server – is now exposed to the outside world.
Note that we're doing everything (building the image and deploying the container) on a single machine. In the real world, Docker images are built during the development cycle on a developer workstation (or a continuous-integration server like Jenkins) and pushed to a remote registry (like Docker Hub or a private registry). During deployment, the images are pulled from the remote registry and the containers are started on the production server.
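As a rough sketch of that workflow (the account name "myaccount" is just a placeholder for your Docker Hub account or private registry address):

# on the developer workstation / CI server
$ docker tag rest myaccount/rest:1.0
$ docker login
$ docker push myaccount/rest:1.0

# on the production server
$ docker pull myaccount/rest:1.0
$ docker run -d --name rest -p 5000:5000 myaccount/rest:1.0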
Nesting docker containers:
What if you want to use a container from within another container? Rather than running a Docker host within a Docker container, you just give your container access to the host's Docker daemon, so that the container can run other containers on the host. You do this by mapping the host's /var/run/docker.sock into the container's /var/run/docker.sock as follows:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
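The equivalent with plain docker run looks like this (a minimal sketch using the official "docker" image, which ships with the docker CLI):

$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker sh
/ # docker ps    # inside the container, this lists the containers running on the host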
Setting the timezone of a docker container:
The default timezone of a Docker container is likely to be UTC (at least for most Linux images). There are several ways to set the timezone – for instance, you can set it to match the host. Five different methods are explained at https://bobcares.com/blog/change-time-in-docker-container. However, the simplest way is via the TZ environment variable (note: this needs to be supported by the underlying image, so check that the image is based on a Linux distribution which honours this environment variable).
environment:
  - TZ=Europe/Vienna
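The same can be done when starting a container directly (the image and container names here are just placeholders – the image must include timezone data for TZ to take effect):

$ docker run -d -e TZ=Europe/Vienna --name myapp myimage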
Docker log rotation:
By default, Docker's json-file logging driver keeps a container's entire log output, which can eventually fill up the disk. Log rotation can be enabled globally for all containers by configuring the logging options in /etc/docker/daemon.json:
$ sudo nano /etc/docker/daemon.json

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
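After saving the file, restart the Docker daemon. Note that the new log settings only apply to containers created after the change:

$ sudo systemctl restart docker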
Docker swarm:
Docker swarm is an extension to Docker which allows you to deploy your software stacks across a cluster of worker machines. It's easy to set up – one Docker host creates the swarm and becomes the swarm manager, and other hosts join the swarm. Using an extension to the docker-compose.yml format, software stacks can be deployed across the cluster with replicas for fault tolerance and load balancing. We didn't have time to cover Docker swarm in this workshop, but we'll cover it soon.
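As a taster, the basic commands look roughly like this (a minimal sketch – the manager IP is a placeholder, the join token is printed by "swarm init", and stack deployment requires a version 3 compose file):

# on the manager node: create the swarm
$ docker swarm init --advertise-addr <manager ip>

# on each worker node: join the swarm
$ docker swarm join --token <token> <manager ip>:2377

# on the manager: deploy a stack across the swarm
$ docker stack deploy -c docker-compose.yml mystack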
Contact us!
We've already gathered a lot of experience using Docker to help our customers efficiently deploy their software stacks. If you're interested in having us help your organisation get up and running with containers, get in touch with us at info@armstrongconsulting.com