Roll your own Docker registry with Docker Compose, Supervisor and Nginx

As soon as you use Docker to build proprietary or otherwise internal projects, you will need private repositories. Either you choose a paid service, or you run your own secure registry.

This article describes how to get your own registry running on Ubuntu with the recently introduced Docker Compose, which ties the registry API together with a Redis cache, a persistent storage container and a frontend for browsing your image repositories. All of the above sits behind an Nginx web server which handles authentication and SSL encryption. The container processes are controlled by Supervisor. If that sounds good to you, let's get to work...

Preparations

It is assumed that you are running Ubuntu 14.04 (other versions and distros will probably work as well with slight modifications). Since we run the main parts of the registry on Docker, you obviously need to have Docker running on your machine. Check the official installation instructions if you haven't done that already. In addition to Docker we need Docker Compose. Get the most recent release from the official site.
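At the time of writing, installing Compose boils down to downloading a single binary from the GitHub releases page. A sketch of what that looks like (the version number 1.1.0 is just an example; check the releases page for the current one):

# download the docker-compose binary and make it executable
sudo sh -c 'curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose'
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version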

Docker Compose

After you have Docker and Docker Compose running, continue by creating a docker-compose.yml file which defines and configures your containers. In case you haven't worked with Docker Compose or its predecessor fig, go have a quick look over there and see what it's all about.

First, we create a directory to hold the container config.

sudo mkdir -p /usr/share/docker-registry

Great, now let's add a Docker Compose config file and define the containers.

sudo vim /usr/share/docker-registry/docker-compose.yml

storage:
  # data-only container: exits immediately but keeps the volume around
  image: busybox
  volumes:
    - /var/lib/docker/registry
  command: /bin/true

For this particular setup we chose to store the registry data on the local disk, making use of a data container. The defined volume will be hooked up to the backend later. The next step is adding a cache to speed up the registry.

cache: 
  image: redis 

That's all you really need as far as the cache goes. This container will also be hooked up to the backend later. Should you want to access the Redis store from your host, you can expose port 6379 via the config.
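A minimal sketch of what that would look like (binding to 127.0.0.1 keeps the port off the public internet):

cache:
  image: redis
  ports:
    - 127.0.0.1:6379:6379

Let's get to the backend: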

backend: 
  image: registry 
  ports: 
    - 127.0.0.1:5000:5000 
  links: 
    - cache 
    - storage 
  volumes_from: 
    - storage
  environment: 
    SETTINGS_FLAVOR: local 
    STORAGE_PATH: /var/lib/docker/registry 
    SEARCH_BACKEND: sqlalchemy 
    CACHE_REDIS_HOST: cache 
    CACHE_REDIS_PORT: 6379 
    CACHE_LRU_REDIS_HOST: cache 
    CACHE_LRU_REDIS_PORT: 6379

Note that the exposed port 5000 is bound to 127.0.0.1, so it can be accessed from the host only and is not public to the internet. We link the storage and cache containers and tell the backend container to use the volume from the previously defined storage container for persistence. The environment part adds some configuration to the registry app via environment variables, telling it about the cache and our settings flavor. If you want to store the data on S3, for example, you can easily adjust the settings. Have a look at the docker-registry documentation to see what's possible.
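As a rough sketch, an S3-backed setup would swap the environment section for something along these lines (variable names as per the docker-registry docs; the bucket and credentials are placeholders). The storage data container would then be unnecessary.

environment:
  SETTINGS_FLAVOR: s3
  AWS_BUCKET: my-registry-bucket
  AWS_KEY: YOUR_AWS_KEY
  AWS_SECRET: YOUR_AWS_SECRET
  STORAGE_PATH: /registry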

Thankfully, konradkleine has built a nice frontend for the registry which we can swiftly add to our stack. We will expose the container's port 80 as port 8081 on the host and pass the mandatory configuration.

frontend: 
  image: konradkleine/docker-registry-frontend 
  ports: 
    - 127.0.0.1:8081:80 
  environment: 
    ENV_DOCKER_REGISTRY_HOST: backend 
    ENV_DOCKER_REGISTRY_PORT: 5000

Now, let's see if all of that is working properly. Go to /usr/share/docker-registry/ or wherever you have placed your yml and start up your containers with sudo docker-compose up. Now would be an excellent time to get up, do some stretching or jumping jacks and wait until all the images are pulled. But you're too curious to do that on your first attempt, aren't you? After that's done you should have your containers running. To verify that all is working fine you can run curl -v 127.0.0.1:5000/v1/_ping and should see a blank JSON response. Likewise you can have a look at curl -v 127.0.0.1:8081 and you should get a response from your frontend. If that is working, exit docker-compose with CTRL-C.

Supervisor

To control the Compose process we make use of Supervisor. If you don't have it installed, do so with sudo apt-get install supervisor. Then create the configuration file with sudo vim /etc/supervisor/conf.d/docker-registry.conf and add the program section:

[program:docker-registry] 
command=docker-compose up 
directory=/usr/share/docker-registry/
redirect_stderr=true 
autostart=true 
autorestart=true 
priority=10 

Now, load the config with sudo supervisorctl reload. The docker-registry program should then start automatically. Supervisor writes stdout and stderr to logfiles in /var/log/supervisor/. Of course you can use any other process manager as well.
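To check on the process you can use supervisorctl, for example:

sudo supervisorctl status docker-registry
sudo supervisorctl tail -f docker-registry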

Nginx

Since you surely want to use the registry remotely, the last step is to make it available from the outside. So, go ahead and set up Nginx to handle authentication. To install Nginx on the host, run sudo apt-get install nginx and add a configuration file with sudo vim /etc/nginx/sites-available/reg.example.com. Alternatively, an Nginx container could be used as well.

upstream docker-backend {
  server 127.0.0.1:5000;
}

upstream docker-frontend {
  server 127.0.0.1:8081;
}

First, define the upstreams with the backend listening on port 5000 and the frontend listening on port 8081.

server { 
  server_name reg.example.com;

  listen 443 ssl; 
  ssl_certificate /etc/nginx/ssl/your.pem; 
  ssl_certificate_key /etc/nginx/ssl/your.key;
  ssl_ciphers 'AES256+EECDH:AES256+EDH:!EECDH+aRSA+RC4:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS';
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_session_cache shared:SSL:10m;
  ssl_stapling on;
  ssl_stapling_verify on;
  ssl_prefer_server_ciphers on;
  add_header Strict-Transport-Security max-age=31536000;
  add_header X-Frame-Options DENY;

  client_max_body_size 0; 

  proxy_set_header Host $host; 
  proxy_set_header X-Real-IP $remote_addr; 
  proxy_set_header Authorization "";

  location / { 
    auth_basic "Docker Registry"; 
    auth_basic_user_file /etc/nginx/auth/registry_passwd; 
    proxy_pass http://docker-backend; 
  }

  location ~ ^/ui(/?)(.*)$ { 
    auth_basic "Docker Registry"; 
    auth_basic_user_file /etc/nginx/auth/registry_passwd; 
    proxy_pass http://docker-frontend/$2$is_args$args; 
  }

  location /v1/_ping { 
    proxy_pass http://docker-backend; 
  }

  location /v1/users { 
    proxy_pass http://docker-backend; 
  }
}

Nginx will listen only for SSL-enabled connections. If you don't have an SSL certificate, you should really get one. The Docker daemon expects the registry to be available via https. If you really do not want to use SSL, you have to start every Docker daemon that wants access with --insecure-registry=reg.example.com and set your Nginx up for port 80.
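On Ubuntu 14.04 that flag would go into /etc/default/docker on each client machine, roughly like this (again, only for the no-SSL case, which is not recommended):

# /etc/default/docker
DOCKER_OPTS="--insecure-registry=reg.example.com"

Restart the daemon with sudo service docker restart afterwards.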

Next, we mount the backend at the root location, the frontend at /ui/ and enable HTTP basic auth for both. To create your credentials you can use the Apache utils. Get them with sudo apt-get install apache2-utils and run sudo mkdir -p /etc/nginx/auth/ && sudo htpasswd -c /etc/nginx/auth/registry_passwd username. The Docker client needs the /v1/_ping and /v1/users endpoints to be accessible without authentication, so those are left unprotected.
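To add further users later, run htpasswd without the -c flag, which would otherwise overwrite the existing file:

sudo htpasswd /etc/nginx/auth/registry_passwd anotheruser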

Now, activate the config with sudo ln -s /etc/nginx/sites-available/reg.example.com /etc/nginx/sites-enabled/reg.example.com, then check the syntax and restart Nginx with sudo nginx -t && sudo service nginx restart.

... and that was that.

Usage

To use your shiny, new registry, log in from your machine with docker login https://reg.example.com. Enter the HTTP auth credentials and ignore all references to email and activation. As a quick test you can do the following on your machine: pull an image, e.g. busybox, with docker pull busybox, tag it with docker tag busybox reg.example.com/busybox and push it with docker push reg.example.com/busybox. Now you should have your first image repository available.
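The whole round trip in one piece (the final pull is just an extra sanity check):

docker login https://reg.example.com
docker pull busybox
docker tag busybox reg.example.com/busybox
docker push reg.example.com/busybox
docker pull reg.example.com/busybox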

Backup

If you are using the registry in production you probably want to have some kind of backup going. If you don't have a huge amount of data, a quick way is to use rsync within a special backup container. You could run the following command as a cron job:

sudo docker run -ti --volumes-from=dockerregistry_storage_1 -v $(pwd)/backup:/backup kfinteractive/backup-tools rsync -avz /var/lib/docker/registry/ /backup/

This command launches a container with rsync, picks up the volumes from the storage container, mounts a backup folder from the host and archives the data with rsync to the host. You can pick it up from there or sync directly to another backup storage.
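As a sketch, a matching root crontab entry could look like the following. The -ti flags are dropped because cron has no TTY attached, --rm cleans up the finished container, /srv/backup is just an example target path on the host, and the container name dockerregistry_storage_1 depends on the directory your docker-compose.yml lives in:

0 3 * * * docker run --rm --volumes-from=dockerregistry_storage_1 -v /srv/backup:/backup kfinteractive/backup-tools rsync -avz /var/lib/docker/registry/ /backup/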

If you have improvements, if you find errors or if you just want to make snarky remarks, get in touch.

About the author: Philipp Wintermantel

With great pleasure and persistence he dedicates himself to the most sinister depths of technology.