Docker Best Practices: Multi-Stage Builds and Security

Master Docker containerization with multi-stage builds, security hardening, optimization techniques, and production-ready container patterns.

Maria Garcia, Containerization Expert
Aug 30, 2025 · 5 min read


Docker has revolutionized application deployment, but following best practices is crucial for creating secure, efficient, and maintainable containers. This guide covers multi-stage builds, security hardening, and optimization techniques.

Multi-Stage Builds

Basic Multi-Stage Build

Separate build and runtime environments:

# Build stage
FROM node:18-alpine AS builder

WORKDIR /app

# Copy package files and install all dependencies
# (dev dependencies are needed for the build step)
COPY package*.json ./
RUN npm ci

# Copy source code
COPY . .

# Build the application, then drop dev dependencies from node_modules
RUN npm run build && npm prune --omit=dev

# Production stage
FROM node:18-alpine AS production

# Install dumb-init for proper signal handling
RUN apk add --no-cache dumb-init

# Create app user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

WORKDIR /app

# Copy built application from builder stage
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json

USER nextjs

EXPOSE 3000

# Use dumb-init to handle signals properly
ENTRYPOINT ["dumb-init", "--"]
CMD ["npm", "start"]
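With both stages defined, the image is built with an ordinary docker build; only the final stage ends up in the image. The tag `myapp` here is a placeholder:

```shell
# Build the production image (the last stage is built by default)
docker build -t myapp:latest .

# Compare sizes to confirm the multi-stage benefit
docker images myapp
```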

Advanced Multi-Stage Build

Multiple build stages for complex applications:

# Dependencies stage
FROM golang:1.19-alpine AS deps
WORKDIR /go/src/app
COPY go.mod go.sum ./
RUN go mod download

# Build stage
FROM deps AS build
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

# Test stage
FROM deps AS test
COPY . .
RUN go test -v ./...

# Security scan stage
FROM deps AS security
RUN apk add --no-cache curl
COPY . .
RUN curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
RUN trivy fs --exit-code 1 --no-progress --format json .

# Production stage (pin the base image version for reproducible builds)
FROM alpine:3.19
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=build /go/src/app/main .
CMD ["./main"]
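Named stages can also be built individually with `--target`, which is useful for running the test or security stages in CI without producing a production image. Using the stage names defined above:

```shell
# Build only up to the test stage (runs go test)
docker build --target test -t myapp:test .

# Build only up to the security scan stage (runs trivy)
docker build --target security -t myapp:scan .
```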

Security Best Practices

Non-Root User

Always run containers as a non-root user:

FROM ubuntu:20.04

# Create a non-root user
RUN groupadd -r appuser && useradd -r -g appuser appuser

# Set ownership of app directory
RUN mkdir /app && chown appuser:appuser /app

# Switch to non-root user
USER appuser

WORKDIR /app
COPY --chown=appuser:appuser . .

CMD ["./myapp"]
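A quick way to confirm the container is not running as root, assuming the image above is tagged `myapp`:

```shell
# Should print "appuser", not "root"
docker run --rm myapp whoami

# The UID should be non-zero
docker run --rm myapp id -u
```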

Minimal Base Images

Use minimal base images to reduce the attack surface:

# Instead of ubuntu:20.04
FROM ubuntu:20.04 AS base
RUN apt-get update && apt-get install -y \
    ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Use distroless images for Go applications
FROM gcr.io/distroless/static-debian11
COPY --from=build /app/myapp /
CMD ["/myapp"]

# Use scratch for truly minimal images
FROM scratch
COPY myapp /
CMD ["/myapp"]

Security Scanning

Integrate security scanning in your build process:

FROM node:18-alpine AS security-scan

WORKDIR /app
COPY package*.json ./
RUN npm audit --audit-level=high --omit=dev

# Use Trivy for container scanning
FROM aquasec/trivy:latest AS trivy
COPY --from=builder /app/myapp .
RUN trivy filesystem --exit-code 1 --no-progress .

Secret Management

Proper handling of secrets:

# Don't do this - secrets in image
# ENV API_KEY=secret123

# Build args are visible in `docker history`, so avoid them for
# long-lived credentials; prefer BuildKit secret mounts where possible
ARG GITHUB_TOKEN
RUN git config --global credential.helper store \
    && echo "https://oauth2:${GITHUB_TOKEN}@github.com" > ~/.git-credentials

# Use runtime secrets via environment or mounted files
# docker run -e API_KEY=$API_KEY myapp
# docker run -v /host/secrets:/secrets myapp
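With BuildKit (Docker 18.09+), a secret can be mounted only for the duration of a single RUN instruction, so it never lands in an image layer. A minimal sketch, assuming a secret id of `github_token` and a hypothetical `org/repo` repository:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.19
RUN apk add --no-cache git

# The secret is mounted at /run/secrets/github_token for this RUN only
# and is not persisted in any layer or in `docker history`
RUN --mount=type=secret,id=github_token \
    git clone "https://oauth2:$(cat /run/secrets/github_token)@github.com/org/repo.git"
```

The secret is supplied at build time: docker build --secret id=github_token,src=./token.txt .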

Image Optimization

Layer Caching

Order commands to maximize layer caching:

# Good - frequently changing files at the end
FROM node:18-alpine
WORKDIR /app

# Dependencies first (changes less frequently)
COPY package*.json ./
RUN npm ci

# Source code (changes more frequently)
COPY . .

# Build
RUN npm run build

# Bad - source code before dependencies
# (Dockerfile does not support trailing comments on instruction lines)
FROM node:18-alpine
WORKDIR /app
# Copying everything first invalidates the cache for all subsequent layers
COPY . .
# npm install reruns on every source change, even if package.json is unchanged
RUN npm install
RUN npm run build
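The difference is easy to observe: rebuild after editing only a source file, and the dependency layers in the well-ordered Dockerfile are reused from cache. Tag names here are placeholders:

```shell
# First build populates the layer cache
docker build -t myapp .

# After touching only source files (not package*.json), rebuild:
# the COPY package*.json and RUN npm ci steps report "CACHED"
docker build -t myapp .
```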

Multi-Stage for Smaller Images

Reduce final image size:

FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Before: ~900MB (includes dev dependencies and build tools)
# After: ~80MB (only production runtime)
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
CMD ["npm", "start"]

.dockerignore

Exclude unnecessary files:

# Dependencies
node_modules
npm-debug.log*

# Git
.git
.gitignore

# Docker
Dockerfile*
docker-compose*

# Documentation
README.md
docs/

# Environment files
.env
.env.local

# IDE
.vscode
.idea

# OS
.DS_Store
Thumbs.db

# Logs
logs
*.log

# Runtime data
pids
*.pid
*.seed
*.pid.lock

# Coverage directory used by tools like istanbul
coverage/

# Build outputs
dist/
build/
.next/
.nuxt/

Docker Compose for Development

Development Environment

version: "3.8"
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
    depends_on:
      - db
      - redis

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp_dev
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:

Production Environment

version: "3.8"
services:
  app:
    image: myapp:latest
    ports:
      - "80:3000"
    environment:
      - NODE_ENV=production
    depends_on:
      - db
      - redis
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp_prod
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    deploy:
      placement:
        constraints:
          - node.role == manager

  redis:
    image: redis:7-alpine
    deploy:
      replicas: 1

volumes:
  postgres_data:
    driver: local
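Note that the `deploy:` keys (replicas, placement, restart_policy) are honored by Docker Swarm, not by a plain `docker compose up`. Deploying this stack to a swarm looks like the following, where the compose file name and stack name are assumptions:

```shell
# Initialize a swarm once on the manager node
docker swarm init

# Deploy the stack; `deploy:` settings take effect here
docker stack deploy -c docker-compose.prod.yml myapp
```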

Health Checks

Container Health Checks

FROM nginx:alpine

# curl is not guaranteed to be present in the base image,
# so install it explicitly for the health check
RUN apk add --no-cache curl

# Add health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost/health || exit 1

COPY nginx.conf /etc/nginx/nginx.conf
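Once the container is running, Docker tracks its health state, which can be queried directly; the container name `web` is a placeholder:

```shell
# Prints starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' web

# docker ps also surfaces health in the STATUS column
docker ps --filter name=web
```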

Application Health Checks

// health.js
const express = require("express");
const app = express();

// `db`, `checkRedis`, and `checkExternalAPI` are application-specific
// helpers (a database client and service ping functions) assumed to be
// defined elsewhere in the app
app.get("/health", (req, res) => {
  // Check database connection
  db.ping((err) => {
    if (err) {
      return res.status(503).json({ status: "unhealthy", database: "down" });
    }

    // Check external services
    Promise.all([
      checkRedis(),
      checkExternalAPI(),
    ]).then(() => {
      res.json({ status: "healthy" });
    }).catch(() => {
      res.status(503).json({ status: "degraded" });
    });
  });
});

app.listen(3000);

Resource Management

Memory and CPU Limits

services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 256M

Docker Desktop Resource Allocation

Docker Desktop's global CPU and memory limits are configured in the Desktop app under Settings → Resources, not in ~/.docker/config.json. Per-container limits can also be applied at runtime:

# Limit a single container to 1 CPU and 512 MB of memory
docker run --cpus="1.0" --memory="512m" myapp:latest

Networking

Network Security

services:
  app:
    networks:
      - frontend
      - backend

  db:
    networks:
      - backend

  redis:
    networks:
      - backend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true # No external access

Service Discovery

// Use environment variables for service discovery
const dbHost = process.env.DB_HOST || "db";
const redisHost = process.env.REDIS_HOST || "redis";

// Or use Docker DNS (assumes the mysql2 client: npm install mysql2)
const mysql = require("mysql2");
const dbConnection = mysql.createConnection({
  host: "db", // Service name from docker-compose
  user: "user",
  password: "password",
  database: "myapp",
});

Monitoring and Logging

Centralized Logging

FROM node:18-alpine

# Install rsyslog for log forwarding
RUN apk add --no-cache rsyslog

# Configure logging
COPY rsyslog.conf /etc/rsyslog.conf

# Your application
COPY . /app
WORKDIR /app

# Run rsyslog in the foreground as the container's main process;
# in most setups, prefer Docker's built-in logging drivers and run
# the application itself as the main process
CMD ["rsyslogd", "-n", "-f", "/etc/rsyslog.conf"]

Container Metrics

services:
  app:
    image: myapp:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    deploy:
      labels:
        - "prometheus-job=app"
        - "prometheus-port=3000"

CI/CD Integration

GitHub Actions with Docker

name: Build and Push Docker Image

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: |
            ${{ secrets.DOCKER_USERNAME }}/myapp:latest
            ${{ secrets.DOCKER_USERNAME }}/myapp:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

Conclusion

Following Docker best practices ensures your containers are secure, efficient, and maintainable. Multi-stage builds reduce image size, proper security practices minimize attack surfaces, and optimization techniques improve performance. Regular security scanning and proper resource management are essential for production deployments.