
Setting Up a Private Docker Registry You Can Actually Trust

Running your own registry with Nginx, TLS, and authentication — why relying solely on Docker Hub for production images falls short.

March 5, 2026 · 5 min read
docker · registry · nginx · linux · security

Docker Hub is fine for open-source images. But the moment you're building proprietary services — especially for blockchain infrastructure where image integrity is critical — you need a private registry you control.

This post walks through a production setup: Docker Registry behind Nginx, with TLS termination and HTTP basic auth handled at the proxy.

Why Self-Host a Registry

Three reasons to stop relying exclusively on Docker Hub:

  • Rate limits — Docker Hub's pull rate limits have caused CI/CD pipeline failures during peak build times
  • Image integrity — teams need to know exactly where images are stored and who has access
  • Network latency — pulling from a local or same-region registry is significantly faster than pulling from Docker Hub on every deploy

The Compose Stack

docker-compose.yml
services:
  registry:
    image: registry:2
    restart: unless-stopped
    volumes:
      - registry-data:/var/lib/registry
      - ./auth:/auth:ro
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: "Private Registry"
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_STORAGE_DELETE_ENABLED: "true"
    networks:
      - internal
 
  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "443:443"
    volumes:
      - ./nginx/registry.conf:/etc/nginx/conf.d/default.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
    depends_on:
      - registry
    networks:
      - internal
 
volumes:
  registry-data:
 
networks:
  internal:
    driver: bridge

Key decisions:

  • Registry data on a named volume — survives container recreation, easy to back up
  • Auth directory mounted read-only — the registry can read credentials but can't modify them
  • Delete enabled — without this, you can never clean up old images
  • Registry not exposed to host — only Nginx is, on port 443

Authentication

Generate credentials with htpasswd:

setup-auth.sh
#!/bin/bash
mkdir -p auth
 
# Create the first user
htpasswd -Bc auth/htpasswd deployer
# -B uses bcrypt hashing (stronger than default)
# -c creates the file (only use -c for the first user)
 
# Add additional users without -c
htpasswd -B auth/htpasswd ci-bot

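Because `-B` is easy to forget on later invocations, it's worth verifying the file after edits. A small helper (hypothetical, not part of the setup above) that flags any entry not hashed with bcrypt — bcrypt hashes start with `$2a$`, `$2b$`, or `$2y$`:

```shell
#!/bin/bash
# check-auth.sh (hypothetical helper) — warn about htpasswd entries
# that are not bcrypt-hashed. A bcrypt entry looks like user:$2y$05$...
check_bcrypt() {
  local file="$1"
  # grep -v selects lines that do NOT match the bcrypt pattern
  if grep -qvE '^[^:]+:\$2[aby]\$' "$file"; then
    echo "WARNING: non-bcrypt entries in $file"
    return 1
  fi
  echo "all entries in $file use bcrypt"
}
```

Run `check_bcrypt auth/htpasswd` after adding each user; a warning means someone ran htpasswd without `-B`.
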
Nginx Configuration

nginx/registry.conf
upstream registry {
    server registry:5000;
}
 
server {
    listen 443 ssl;
    server_name registry.example.com;
 
    ssl_certificate     /etc/letsencrypt/live/registry.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/registry.example.com/privkey.pem;
 
    # Required for large image layer uploads
    client_max_body_size 0;
    chunked_transfer_encoding on;
 
    location / {
        # Required headers for Docker Registry V2 API
        proxy_pass http://registry;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
 
        proxy_read_timeout 900;
        proxy_send_timeout 900;
    }
}

Two settings that trip people up:

  • client_max_body_size 0 — disables the upload size limit. Docker image layers can be hundreds of megabytes. Without this, you'll get 413 Request Entity Too Large errors on push.
  • proxy_read_timeout 900 — large image pushes take time. The default 60-second timeout will cause failures on slow connections.
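
A quick back-of-the-envelope check for the timeout value — how long one large layer takes to upload at a given link speed (the numbers here are illustrative, not from the article's setup):

```shell
# Seconds to upload one layer: size / uplink throughput.
# Example: a 2 GiB layer over a 20 Mbit/s uplink.
layer_bytes=$((2 * 1024 * 1024 * 1024))   # 2 GiB in bytes
uplink_Bps=$((20 * 1000 * 1000 / 8))      # 20 Mbit/s in bytes/second
echo "$((layer_bytes / uplink_Bps)) seconds"   # → 858 seconds
```

At roughly 860 seconds, that upload only just fits inside the 900-second timeout; slower uplinks or larger layers would need a bigger value.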

TLS with Let's Encrypt

Docker requires HTTPS for any registry that isn't localhost. The only escape hatch is the daemon's insecure-registries setting, and that has no place in production.

setup-tls.sh
#!/bin/bash
# Install certbot
apt update && apt install -y certbot
 
# Get the certificate (stop Nginx first to free port 443)
docker compose stop nginx
certbot certonly --standalone -d registry.example.com
docker compose start nginx

Set up automatic renewal:

crontab
0 3 * * 1 certbot renew --quiet --pre-hook "docker compose -f /opt/registry/docker-compose.yml stop nginx" --post-hook "docker compose -f /opt/registry/docker-compose.yml start nginx"

Testing the Registry

test-registry.sh
#!/bin/bash
# Login
docker login registry.example.com
# Enter username and password when prompted
 
# Tag a local image for the private registry
docker tag myapp:latest registry.example.com/myapp:latest
 
# Push
docker push registry.example.com/myapp:latest
 
# Pull from another machine
docker pull registry.example.com/myapp:latest

If login fails with a 502 Bad Gateway, the issue is almost always Nginx not being able to reach the registry container. Check that both services are on the same Docker network.
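
For scripted health checks, the registry's `/v2/` endpoint is a convenient probe — `curl -s -o /dev/null -w '%{http_code}' https://registry.example.com/v2/` returns a status code you can act on. A small helper (hypothetical, for your own monitoring scripts) mapping the codes you'll actually see:

```shell
# diagnose_v2 <http-status> — interpret a status code from GET /v2/
diagnose_v2() {
  case "$1" in
    200) echo "ok: authenticated and reachable" ;;
    401) echo "auth enforced: reachable, but credentials missing or wrong" ;;
    502) echo "bad gateway: Nginx cannot reach the registry container" ;;
    *)   echo "unexpected status: $1" ;;
  esac
}
```

Note that an unauthenticated probe returning 401 is the healthy case here — it proves both that Nginx reaches the registry and that basic auth is enforced.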

Garbage Collection

Docker Registry doesn't automatically clean up deleted image layers. Without periodic garbage collection, disk usage grows indefinitely:

gc.sh
#!/bin/bash
# Run garbage collection on the registry.
# Do this during a quiet window (or with the registry in read-only
# mode): a push that lands mid-collection can end up missing layers.
docker compose exec registry bin/registry \
  garbage-collect /etc/docker/registry/config.yml \
  --delete-untagged
 
echo "Registry garbage collection complete"
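
Like certificate renewal, this can run on a schedule. A possible crontab entry (assuming the stack lives in /opt/registry, as in the renewal job above — adjust to your quietest hour):

```shell
# Weekly garbage collection, Sundays at 04:00
0 4 * * 0 docker compose -f /opt/registry/docker-compose.yml exec -T registry bin/registry garbage-collect /etc/docker/registry/config.yml --delete-untagged
```

The `-T` flag disables TTY allocation, which `docker compose exec` needs when running from cron.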

CI/CD Integration

In your CI pipeline, authenticate and push automatically:

.github/workflows/build.yml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
 
      - name: Login to private registry
        env:
          REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
        run: echo "$REGISTRY_PASSWORD" | docker login registry.example.com -u ci-bot --password-stdin
 
      - name: Build and push
        run: |
          docker build -t registry.example.com/myapp:${{ github.sha }} .
          docker push registry.example.com/myapp:${{ github.sha }}
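
What routes the push to the private registry is nothing more than the image name: Docker treats everything before the first `/` as the registry host. The reference the workflow builds can be sketched as (the sha below is an illustrative stand-in):

```shell
# Fully qualified reference: <registry-host>/<repository>:<tag>
registry="registry.example.com"
repo="myapp"
sha="3f9c2ab"   # stand-in for ${{ github.sha }}
image="$registry/$repo:$sha"
echo "$image"   # → registry.example.com/myapp:3f9c2ab
```

Tagging each build with the commit sha makes every deployed image traceable back to the exact source that produced it.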

Key Takeaways

  1. Never expose the registry directly — always put Nginx (or another reverse proxy) in front for TLS and auth
  2. Set client_max_body_size 0 — the single most common cause of registry push failures
  3. Use bcrypt for passwords — default MD5 is unacceptable for production
  4. Schedule garbage collection — disk usage will grow unbounded without it
  5. Back up your volume — losing your registry data means rebuilding every image from source
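
For that last point, one way to archive the named volume (a sketch — the actual volume name is prefixed with your Compose project name, so check `docker volume ls` first):

```shell
#!/bin/bash
# backup-registry.sh (sketch) — archive the registry volume to a tarball.
# Assumes the volume is named registry_registry-data; verify with
# `docker volume ls` before relying on this.
docker run --rm \
  -v registry_registry-data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf "/backup/registry-data-$(date +%F).tar.gz" -C /data .
```

Mounting the volume read-only means a backup can never corrupt live registry data, even if it runs mid-push.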