To run tests against a set of cooperating Docker images, Docker-compose seems like a perfect fit, but it comes with some challenges. If you also need to extract files from containers and do other things, I recommend reading this article as well; this one will focus primarily on networking.

Author: Jan Rameš
Date: 18.2.2021, 12 min read

› Introduction

Running single containers in GitLab CI is fairly easy with services: the runner handles the life-cycle of these containers (well, almost; it works as long as you don’t terminate the job manually, and this may get fixed in the future) as well as the networking, so the runner container can see the service containers. Under the hood this is done through the legacy container links.
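For illustration, a single service in .gitlab-ci.yml might look like this (a minimal sketch; the nginx image and the web alias are just an example):

 job:
   image: alpine:latest
   services:
     # the runner starts this container and wires it up
     # so the job can reach it under the "web" alias
     - name: nginx:alpine
       alias: web
   script:
     - wget -O - http://web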

But running a set of services in a predefined configuration is another challenge.

In GitLab CI you have two options to run a container: either Docker-in-Docker using the docker:dind service, which adds a layer of protection, or a shared Docker socket, aka Docker-outside-of-Docker, DooD in short (which is what we use). Keep in mind that a shared Docker socket needs some special treatment: you have access to all the other containers running on the same Docker host as the runner itself, so you need to be aware of the security and performance implications.
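With DooD, the shared socket is typically mounted in the runner’s config.toml, roughly like this (a sketch of the relevant part):

 [[runners]]
   executor = "docker"
   [runners.docker]
     image = "docker:latest"
     # every job container gets the host's Docker socket
     volumes = ["/var/run/docker.sock:/var/run/docker.sock"]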

First, you have to do proper cleanup after each job. This is best handled in after_script, which runs for failed jobs as well, and every command in it should end with || true so that one failing command doesn’t stop the rest of the cleanup. (Manual job termination is out of your hands, though; that being said, do not manually terminate jobs that run other containers, because the after_script won’t be called and you’ll be left with dangling containers. This includes containers executed through GitLab’s own services.) Second, make sure the containers won’t interfere with containers executed in other jobs, so add CI_JOB_ID (or something similar) to the container names, as sketched below. Also, make sure not to expose ports on your Docker host: maybe you’re running in a swarm and it might be configured to expose those ports to the internet; one cannot really be sure, so it’s best not to do that at all.
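A sketch of what such a cleanup might look like (my-helper is a hypothetical container your job would have started, named with CI_JOB_ID as suggested above):

 after_script:
   # every command ends with || true so that one failure
   # does not stop the rest of the cleanup
   - docker stop my-helper-${CI_JOB_ID} || true
   - docker rm my-helper-${CI_JOB_ID} || true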

› The solution

Docker-compose will create a separate network for the containers it runs. The plan is to attach the runner container to this network and let Docker do the rest. Luckily, this works just fine: the runner container can reach the containers spun up by Docker-compose by their hostnames (far better than fetching IPs), so the solution looks pretty similar to how services work.

A failure during cleanup may leave some resources allocated, so keep an eye on it and clean up manually if that happens (it generally shouldn’t, but you want to be on the safe side). That is why it’s important to keep the cleanup going even after a failure: it frees as many resources as possible.

The most likely cause of an after_script failure is a problem with starting the containers or attaching to the network, which can easily happen while you’re still testing the whole process; it is important to always plan for failure. If you start the containers and the network attach fails, the subsequent network detach will fail as well, but the containers will still be stopped afterwards. I learned this the hard way, so big thanks to our DevOps guys who were kind enough to do the manual cleanup for me (I don’t have access to the runners, which is probably a good thing 😉).

› The code

First, set the project name to distinguish the current job from other jobs by setting the COMPOSE_PROJECT_NAME variable.
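One way is the variables block of .gitlab-ci.yml (a sketch; the ci- prefix is an arbitrary choice, CI_JOB_ID is what makes the name unique):

 variables:
   COMPOSE_PROJECT_NAME: ci-${CI_JOB_ID}

After that, we start the composed containers in detached mode (we don’t want to wait for container termination, right? 😉):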

 docker-compose up --detach

Then we need to get the id of the currently running container (the container the GitLab runner starts for us, i.e. docker/compose), so we can attach it to the network created by Docker-compose. After some googling I came up with this solution:

 # the last segment of the container's cgroup path is its id
 # (note: this relies on cgroup v1 and may not work on cgroup v2 hosts)
 CONTAINER_ID=$(basename $(cat /proc/1/cpuset))

We can now attach to the network created by Docker-compose. Thankfully its name follows a convention, so we can use COMPOSE_PROJECT_NAME with the _default suffix:

 docker network connect ${COMPOSE_PROJECT_NAME}_default $CONTAINER_ID

Now everything should be set up and we can do whatever we need the containers for; in this case we only test that the Nginx server is up and running at the correct hostname:

 wget -O - http://nginx

Once we’re done with the containers, it’s time to clean up. Remember to use || true so the cleanup continues even in case of a failure.

Detach the runner container from the network:

 docker network disconnect ${COMPOSE_PROJECT_NAME}_default $CONTAINER_ID || true

Finally, stop and remove the containers:

 docker-compose down || true

Now no dangling resources should be left on the Docker host and the job may finish.

› Proof of concept solution
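The original proof-of-concept repository isn’t reproduced here, but a minimal .gitlab-ci.yml assembled from the snippets above could look roughly like this (assumptions: the docker/compose:1.29.2 image tag, that the job image provides both the docker and docker-compose binaries, and an nginx service defined in docker-compose.yml):

 test-compose:
   image:
     name: docker/compose:1.29.2
     # the image's default entrypoint is docker-compose itself,
     # so clear it to let the runner start a shell
     entrypoint: [""]
   variables:
     COMPOSE_PROJECT_NAME: ci-${CI_JOB_ID}
   script:
     - docker-compose up --detach
     # our own container id, so we can join the compose network
     - CONTAINER_ID=$(basename $(cat /proc/1/cpuset))
     - docker network connect ${COMPOSE_PROJECT_NAME}_default $CONTAINER_ID
     - sleep 5s
     - wget -O - http://nginx
   after_script:
     # after_script runs in a fresh shell, so recompute the id;
     # || true keeps the cleanup going even if a step fails
     - docker-compose logs || true
     - docker network disconnect ${COMPOSE_PROJECT_NAME}_default $(basename $(cat /proc/1/cpuset)) || true
     - docker-compose down || true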

› Closing

This article demonstrated how to run, and communicate with, containers started by Docker-compose from GitLab CI. To distribute the docker-compose.yml you may use artifacts or Git submodules.

Make sure to remove exposed ports from docker-compose.yml; this can be done with configuration overrides, for example, as shown below.
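One way (a sketch; docker-compose.local.yml is a file name I’m assuming here): keep the ports out of the base docker-compose.yml entirely, so CI never exposes them, and add them only in an override file used for local development:

 # docker-compose.local.yml -- used only for local development
 version: "3"
 services:
   nginx:
     ports:
       - "8080:80"

Locally you then merge the override on top of the base file:

 docker-compose -f docker-compose.yml -f docker-compose.local.yml up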

It may also be useful to display logs. Do this in after_script using docker-compose logs; that way you won’t miss them even in case of failure.
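For example, as the first cleanup step:

 docker-compose logs || true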

Sometimes you may need to wait a while before the containers are up and running. A simple sleep 5s command may save you a lot of headaches (in our sample we would place it just before the wget command).
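If a fixed sleep feels too fragile, a small polling loop is a sturdier sketch of the same idea (the 30-second limit is an arbitrary choice):

 # wait up to 30 seconds for nginx to start answering
 for i in $(seq 1 30); do
   wget -q -O /dev/null http://nginx && break
   sleep 1
 done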
