Ever tried to spin up two Docker containers, one depending on the other, only to find that the ports of the first container aren't actually up and running yet, causing the second container to fail on startup?
For a while I stuck some sleep x statements in there to delay the second container, but that was, of course, a very flaky solution.
The solution is another Docker container whose sole purpose is to wait for the ports of the first container to become available. Once the wait container has exited successfully, the second container can safely be started.
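The idea behind such a wait container is simply to poll a port until a TCP connection succeeds. A minimal sketch in plain bash (wait_for_port is a hypothetical helper for illustration, not part of the wait image), using bash's /dev/tcp pseudo-device:

```shell
#!/bin/bash

# wait_for_port HOST PORT [TIMEOUT_SECONDS]
# Polls HOST:PORT once per second until a TCP connection succeeds
# or the timeout expires. Returns 0 on success, 1 on timeout.
wait_for_port() {
    local host=$1 port=$2 timeout=${3:-30}
    local i
    for ((i = 0; i < timeout; i++)); do
        # bash's /dev/tcp pseudo-device: the redirect only succeeds
        # once something is actually listening on the port
        if (echo > "/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        sleep 1
    done
    return 1
}
```

A wait container can run essentially this loop against the linked container, whose address and ports Docker exposes through the environment variables it sets for links.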
A simple test bash script:
#!/bin/bash

# Start a database container
docker run -d --name database -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql

# Wait for the mysql port to become available
if docker run --link database:database --rm martin/wait ; then
    echo "yay :) we can now start a container that uses the database"
else
    echo "oh no :( database is unavailable"
fi

# Just cleaning up
docker rm -f database
By default, the wait container checks all exposed ports in the linked container. You can limit which ports are checked, or even check entirely different URLs; see the GitHub page for more details. If you need to check ports in multiple containers, link them all in the wait command.
The same concept can be used in systemd by adding the wait command to ExecStartPre:
ExecStartPre=/usr/bin/docker run --link database:database --rm martin/wait
ExecStart=/usr/bin/docker run --rm app-that-uses-database
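Put together, a minimal unit file for such a service might look like the sketch below; the unit description and the app-that-uses-database image name are placeholders for illustration:

```ini
[Unit]
Description=App that uses the database
Requires=docker.service
After=docker.service

[Service]
# Block startup until the database ports are reachable;
# if the wait container exits non-zero, the service does not start
ExecStartPre=/usr/bin/docker run --link database:database --rm martin/wait
ExecStart=/usr/bin/docker run --rm app-that-uses-database
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Because systemd runs ExecStartPre commands to completion before ExecStart, the app container only launches once the wait container has exited successfully.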