Roll your own Docker registry with Docker Compose, Supervisor and Nginx

As soon as you use Docker to build proprietary or otherwise internal projects, you will need private repositories. Either you choose a paid service, or you run your own secure registry.

This article describes how to get your own registry running on Ubuntu with the recently introduced Docker Compose, which will tie the registry API together with a Redis cache, a persistent storage container and a frontend for browsing your image repositories. All of the above sits behind an Nginx web server, which handles authentication and SSL encryption. The container processes are controlled by Supervisor. If that sounds good to you, let's get to work...


Prerequisites

It is assumed that you are running Ubuntu 14.04 (other versions and distros will probably work as well with slight modifications). Since we run the main parts of the registry on Docker, you obviously need Docker running on your machine. Check the official installation instructions if you haven't done that already. In addition to Docker we need Docker Compose. Get the most recent release from the official site.

Docker Compose

After having Docker and Docker Compose running, continue with creating a docker-compose.yml file which defines and configures your containers. In case you haven't worked with Docker Compose or its predecessor fig, go have a quick look over there and see what it's all about.

First we create a directory to put the container config.

sudo mkdir -p /usr/share/docker-registry

Great, now let's add a Docker Compose config file and define the containers.

sudo vim /usr/share/docker-registry/docker-compose.yml

storage:
  image: busybox
  volumes:
    - /var/lib/docker/registry
  command: "true"

For this particular setup we chose to store the registry data on the local disk, making use of a data container. The defined volume will be hooked up with the backend later. The next step is adding a cache to speed up the registry.

cache:
  image: redis

That's all you really need as far as the cache goes. This container will also be hooked up to the backend later. If you want to access the Redis store from your host, you can expose port 6379 via the config. Let's get to the backend:
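For example, publishing the Redis port could look like the following sketch (assuming the service is named cache, as the backend's CACHE_REDIS_HOST setting suggests; binding to the loopback interface keeps Redis unreachable from outside the host):

```yaml
cache:
  image: redis
  ports:
    # host-only binding: Redis is reachable at localhost:6379, not publicly
    - "127.0.0.1:6379:6379"
```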

backend:
  image: registry
  ports:
    - "127.0.0.1:5000:5000"
  links:
    - cache
    - storage
  volumes_from:
    - storage
  environment:
    SETTINGS_FLAVOR: local
    STORAGE_PATH: /var/lib/docker/registry
    SEARCH_BACKEND: sqlalchemy
    CACHE_REDIS_HOST: cache

Note that the exposed port 5000 is bound to 127.0.0.1, so it can be accessed from the host only and is not public to the internet. We link the storage and cache containers and tell the backend container to use the volume from the previously defined storage container for persistence. The environment part adds some configuration to the registry app via environment variables, telling it about the cache and our settings flavour. If you want to store the data on S3, for example, you can easily adjust the settings. Have a look at the docker-registry documentation to see what's possible.
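As a sketch of the S3 variant, the environment section might instead look like this. The variable names follow the docker-registry S3 flavour; the bucket name and credentials are placeholders, and you should consult the docker-registry documentation for the authoritative list:

```yaml
  environment:
    SETTINGS_FLAVOR: s3
    AWS_BUCKET: my-registry-bucket   # placeholder bucket name
    STORAGE_PATH: /registry
    AWS_KEY: yourawskey              # placeholder credentials
    AWS_SECRET: yourawssecret        # placeholder credentials
    SEARCH_BACKEND: sqlalchemy
    CACHE_REDIS_HOST: cache
```

With this flavour, the local storage container is no longer needed for the image data itself.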

Thankfully, konradkleine has built a nice frontend for the registry, and we can swiftly add it to our stack. We will publish the container's port 80 to port 8081 on the host and pass the mandatory configuration.

frontend:
  image: konradkleine/docker-registry-frontend
  ports:
    - "127.0.0.1:8081:80"
  links:
    - backend
  environment:
    ENV_DOCKER_REGISTRY_HOST: backend
    ENV_DOCKER_REGISTRY_PORT: 5000

Now, let's see if all that is working properly. Go to /usr/share/docker-registry/ (or wherever you have placed your yml) and start up your containers with sudo docker-compose up. Now would be an excellent time to get up, do some stretching or jumping jacks while all the images are pulled. But you're too curious to do that on your first attempt, aren't you? After that's done you should have your containers running. To verify that all is working fine you can run curl -v http://localhost:5000/ and should see a blank JSON response. Likewise you can have a look at curl -v http://localhost:8081/ and you should get a response from your frontend. If that is working, exit docker-compose with CTRL-C.


Supervisor

To control the Compose process we make use of Supervisor. If you don't have it installed, do so with sudo apt-get install supervisor. Then create the configuration file with sudo vim /etc/supervisor/conf.d/docker-registry.conf and add the program section:

[program:docker-registry]
directory=/usr/share/docker-registry
command=docker-compose up

Now, load the config with sudo supervisorctl reload. The docker-registry program should then start automatically; you can check on it with sudo supervisorctl status. Supervisor will write stdout and stderr to logfiles in /var/log/supervisor/. Of course, you can use any other process control system as well.


Nginx

Since you surely want to use the registry remotely, the last step is to make it available from the outside. So, go ahead and set up Nginx to handle authentication. To install Nginx on the host, run sudo apt-get install nginx, then add a configuration file under /etc/nginx/sites-available/ with sudo vim. Alternatively, an Nginx container could be used as well.

upstream docker-backend {
  server 127.0.0.1:5000;
}

upstream docker-frontend {
  server 127.0.0.1:8081;
}

First, define the upstreams with the backend listening on port 5000 and the frontend listening on port 8081.

server { 

  listen 443 ssl; 
  ssl_certificate /etc/nginx/ssl/your.pem; 
  ssl_certificate_key /etc/nginx/ssl/your.key;
  ssl_ciphers 'AES256+EECDH:AES256+EDH:!EECDH+aRSA+RC4:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS';
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_session_cache shared:SSL:10m;
  ssl_stapling on;
  ssl_stapling_verify on;
  ssl_prefer_server_ciphers on;
  add_header Strict-Transport-Security max-age=31536000;
  add_header X-Frame-Options DENY;

  client_max_body_size 0; 

  proxy_set_header Host $host; 
  proxy_set_header X-Real-IP $remote_addr; 
  proxy_set_header Authorization "";

  location / {
    auth_basic "Docker Registry";
    auth_basic_user_file /etc/nginx/auth/registry_passwd;
    proxy_pass http://docker-backend;
  }

  location ~ ^/ui(/?)(.*)$ {
    auth_basic "Docker Registry";
    auth_basic_user_file /etc/nginx/auth/registry_passwd;
    proxy_pass http://docker-frontend/$2$is_args$args;
  }

  location /v1/_ping {
    proxy_pass http://docker-backend;
  }

  location /v1/users {
    proxy_pass http://docker-backend;
  }
}

Nginx will listen only for SSL-enabled connections. If you don't have an SSL certificate, you should really get one. The Docker daemon expects the registry to be available via HTTPS. If you really do not want to use SSL, you have to start the Docker daemons that want access with the --insecure-registry flag and set your Nginx up for port 80.
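On Ubuntu 14.04, that daemon flag would typically go into /etc/default/docker on each client machine. A minimal sketch, with a placeholder hostname:

```
# /etc/default/docker
# Allow plain-HTTP access to this registry (registry.example.com is a placeholder).
DOCKER_OPTS="--insecure-registry registry.example.com"
```

Restart the Docker daemon afterwards for the option to take effect.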

Next, we mount the backend at the root, mount the frontend at /ui/ and enable HTTP basic auth. To create your credentials you can use the Apache utils: get them with sudo apt-get install apache2-utils and run sudo mkdir -p /etc/nginx/auth/ && sudo htpasswd -c /etc/nginx/auth/registry_passwd username. The Docker client needs the /v1/_ping and /v1/users endpoints to be accessible without authentication, so those will not be protected.
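If you'd rather not install the Apache utils, an equivalent entry can be generated with openssl, which ships with Ubuntu. A minimal sketch, writing to a local file first; the username "alice" and password "s3cret" are placeholders:

```shell
# Generate an htpasswd-style entry using openssl's apr1 (Apache MD5) hash.
# "alice" and "s3cret" are placeholder credentials.
hash=$(openssl passwd -apr1 's3cret')
printf 'alice:%s\n' "$hash" > registry_passwd
cat registry_passwd
```

Append the resulting line to /etc/nginx/auth/registry_passwd (or move the file there) and Nginx will accept those credentials.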

Now, activate the config by symlinking it from /etc/nginx/sites-available/ into /etc/nginx/sites-enabled/, then check if the syntax is OK and restart Nginx: sudo nginx -t && sudo service nginx restart.

... and that was that.


Using the registry

To use your shiny new registry, log in from your machine with docker login https://your-registry-host (substituting your own hostname). Enter the HTTP auth credentials and ignore all references to email and activation. As a quick test you can do the following on your machine: pull an image, e.g. busybox, with docker pull busybox, tag it for your registry with docker tag busybox your-registry-host/busybox, and push it with docker push your-registry-host/busybox. Now you should have your first image repository available.


Backup

If you are using the registry in production, you probably want to have some kind of backup going. If you don't have a huge amount of data, a quick way is to use rsync within a special backup container. You could run the following command as a cron job:

sudo docker run -ti --volumes-from=dockerregistry_storage_1 -v $(pwd)/backup:/backup kfinteractive/backup-tools rsync -avz /var/lib/docker/registry/ /backup/

This command launches a container with rsync, picks up the volumes from the storage container, mounts a backup folder from the host and archives the data with rsync to the host. You can pick it up from there or sync directly to another backup storage.
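A nightly cron entry for this could look like the following sketch. The schedule, file name and backup path are assumptions; adjust them to your setup (note that --rm replaces the interactive -ti flags, since cron jobs have no terminal):

```
# /etc/cron.d/docker-registry-backup (hypothetical file name)
# Run the rsync backup container every night at 3:00 as root.
0 3 * * * root docker run --rm --volumes-from=dockerregistry_storage_1 -v /backup:/backup kfinteractive/backup-tools rsync -avz /var/lib/docker/registry/ /backup/
```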

If you have improvements, if you find errors or if you just want to make snarky remarks, get in touch.

About the author: Philipp Wintermantel

With great pleasure and persistence he dedicates himself to the most sinister depths of technology.