Docker Tutorial for Beginners (2025): Complete Guide to Containers, Images, Dockerfiles, and Docker Compose

#devops #docker
Learn Docker from a developer who went from "why do we need containers?" to containerizing production systems at Microsoft and Amazon, with practical examples.

Docker: From Container Confusion to Deployment Confidence

Two years ago, a senior engineer at Amazon told me to "just containerize it" and I nodded knowingly while having absolutely no idea what that meant. Docker felt like this mysterious black box that somehow made deployment easier, but I couldn't wrap my head around why we needed it.

Then I spent a weekend trying to set up a development environment that matched production, installing different Python versions, managing conflicting dependencies, and nearly throwing my laptop out the window. That's when Docker suddenly made perfect sense.

The "It Works on My Machine" Problem

Here's the scenario that made me a Docker convert:

# My local machine (macOS)
$ python --version
Python 3.9.1

$ pip install -r requirements.txt
Successfully installed...

$ python app.py
Server running on http://localhost:5000

# Teammate's machine (Ubuntu)
$ python --version  
Python 3.8.5

$ pip install -r requirements.txt
ERROR: Could not find a version that satisfies the requirement...

# Production server (CentOS)
$ python --version
Python 2.7.5  # 😱

$ pip install -r requirements.txt
ImportError: No module named ssl...

Sound familiar? Different operating systems, different Python versions, different system libraries. Every deployment was like playing Russian roulette with dependencies.

Docker: The Universal Translator

Docker solves this by packaging your application with everything it needs to run:

# Dockerfile - like a recipe for your app environment
FROM python:3.9-slim

# Set working directory
WORKDIR /app

# Copy requirements first (for Docker layer caching)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose the port the app runs on
EXPOSE 5000

# Command to run the application
CMD ["python", "app.py"]

Now everyone gets the exact same environment:

# On any machine with Docker
$ docker build -t my-coffee-app .
$ docker run -p 5000:5000 my-coffee-app

# Works identically on macOS, Linux, Windows, production, your teammate's machine...
Server running on http://localhost:5000

Docker Concepts: The Coffee Shop Analogy

I finally understood Docker when I thought of it like a coffee shop franchise:

Image: The blueprint/recipe for making a specific coffee drink
Container: An actual cup of coffee made from that recipe
Dockerfile: The instruction manual for making the recipe
Registry: The cookbook where all recipes are stored (Docker Hub)

# Pull a recipe (image) from the cookbook (registry)
docker pull python:3.9-slim

# Make a coffee (container) from the recipe (image)
docker run python:3.9-slim

# See all the coffee cups (containers) you've made
docker ps

# See all the recipes (images) you have
docker images

Building Your First Container

Let's containerize a simple Flask coffee shop API:

The Application

# app.py
from flask import Flask, jsonify
import os
import socket

app = Flask(__name__)

@app.route('/')
def home():
    return jsonify({
        'message': 'Welcome to Maya\'s Containerized Coffee Shop!',
        'hostname': socket.gethostname(),
        'environment': os.environ.get('ENVIRONMENT', 'development')
    })

@app.route('/menu')
def menu():
    drinks = [
        {'name': 'Flat White', 'price': 4.50},
        {'name': 'Taro Bubble Tea', 'price': 5.50},
        {'name': 'Cortado', 'price': 4.00}
    ]
    return jsonify({'drinks': drinks})

@app.route('/health')
def health():
    return jsonify({'status': 'healthy'})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)

# requirements.txt
Flask==2.3.3
gunicorn==21.2.0

The Dockerfile

# Use official Python runtime as base image
FROM python:3.9-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Set work directory
WORKDIR /app

# Install system dependencies (curl is needed for the HEALTHCHECK below)
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc curl \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --upgrade pip \
    && pip install --no-cache-dir -r requirements.txt

# Copy project
COPY . .

# Create non-root user for security
RUN adduser --disabled-password --gecos '' appuser \
    && chown -R appuser:appuser /app
USER appuser

# Expose port
EXPOSE 5000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:5000/health || exit 1

# Run the application
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "2", "app:app"]

Building and Running

# Build the image
docker build -t coffee-shop-api:v1.0 .

# Run the container
docker run -d \
    --name coffee-shop \
    -p 8080:5000 \
    -e ENVIRONMENT=production \
    coffee-shop-api:v1.0

# Check if it's running
docker ps

# View logs
docker logs coffee-shop

# Test the API
curl http://localhost:8080/
curl http://localhost:8080/menu

# Stop and remove
docker stop coffee-shop
docker rm coffee-shop

Essential Docker Commands

Container Management

# Run containers
docker run nginx                    # Run in foreground
docker run -d nginx                 # Run in background (detached)
docker run -it ubuntu bash          # Interactive mode with terminal
docker run --rm nginx               # Remove container when it stops
docker run --name my-app nginx      # Give container a name

# Container lifecycle
docker start container_name         # Start stopped container
docker stop container_name          # Gracefully stop container
docker restart container_name       # Restart container
docker kill container_name          # Force stop container
docker rm container_name            # Remove stopped container
docker rm -f container_name         # Force remove running container

# Information and debugging
docker ps                          # List running containers
docker ps -a                       # List all containers (including stopped)
docker logs container_name         # View container logs
docker logs -f container_name      # Follow logs in real-time
docker exec -it container_name bash # Execute command in running container
docker inspect container_name      # Detailed container information

Image Management

# Image operations
docker images                      # List local images
docker pull ubuntu:20.04          # Download image from registry
docker build -t my-app:v1.0 .     # Build image from Dockerfile
docker tag my-app:v1.0 my-app:latest # Tag image
docker push my-app:v1.0           # Push image to registry
docker rmi image_name              # Remove image
docker system prune               # Clean up unused images/containers

Docker Compose: Multi-Container Magic

Real applications usually need multiple services. Docker Compose orchestrates them:

# docker-compose.yml
version: '3.8'

services:
  # Web application
  web:
    build: .
    ports:
      - "8080:5000"
    environment:
      - ENVIRONMENT=development
      - DATABASE_URL=postgresql://user:password@db:5432/coffee_shop
    depends_on:
      - db
      - redis
    volumes:
      - .:/app  # Mount source code for development
    restart: unless-stopped

  # PostgreSQL database
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: coffee_shop
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    restart: unless-stopped

  # Redis cache
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    restart: unless-stopped

  # Nginx reverse proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - web
    restart: unless-stopped

volumes:
  postgres_data:

# Compose commands (Compose V2 is "docker compose"; the legacy docker-compose binary works too)
docker compose up                  # Start all services
docker compose up -d               # Start in background
docker compose down                # Stop and remove all services
docker compose build               # Build/rebuild services
docker compose logs web            # View logs for specific service
docker compose exec web bash       # Execute command in service container
docker compose ps                  # List running services

Real-World Docker Patterns

Multi-Stage Builds (Smaller Images)

# Multi-stage build for production-ready images
# Stage 1: Build environment
FROM node:16 AS builder

WORKDIR /app
COPY package*.json ./
RUN npm ci  # install devDependencies too - the build step below needs them

COPY . .
RUN npm run build

# Stage 2: Production environment
FROM nginx:alpine AS production

# Copy built assets from builder stage
COPY --from=builder /app/dist /usr/share/nginx/html

# Copy nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Development vs Production Dockerfile

# Dockerfile.dev (for development)
FROM python:3.9-slim

WORKDIR /app

COPY requirements-dev.txt .
RUN pip install -r requirements-dev.txt

# Source code is mounted at runtime for hot reloading,
# e.g. docker run -v "$(pwd)":/app ...

EXPOSE 5000
CMD ["flask", "run", "--host=0.0.0.0", "--debug"]

# Dockerfile.prod (for production)
FROM python:3.9-slim

WORKDIR /app

# Install only production dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy source code
COPY . .

# Security: run as non-root user
RUN adduser --disabled-password appuser
USER appuser

EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
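With two Dockerfiles side by side, you pick one at build time with the -f flag. A quick sketch (the image tags here are just placeholders):

```shell
# Build the development image from Dockerfile.dev
docker build -f Dockerfile.dev -t coffee-shop:dev .

# Build the production image from Dockerfile.prod
docker build -f Dockerfile.prod -t coffee-shop:prod .

# Run the dev image with your source mounted for hot reload
docker run -p 5000:5000 -v "$(pwd)":/app coffee-shop:dev
```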

Docker Ignore File

# .dockerignore
# Prevent unnecessary files from being copied to image
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.nyc_output
coverage
.pytest_cache
__pycache__
*.pyc
*.pyo
*.pyd
.Python
*.so
.coverage
.coverage.*
.pytest_cache
htmlcov/
.tox/
.mypy_cache/
.DS_Store

Environment-Specific Configurations

Using Environment Variables

# app.py - Reading configuration from environment
import os

class Config:
    DATABASE_URL = os.environ.get('DATABASE_URL', 'sqlite:///local.db')
    REDIS_URL = os.environ.get('REDIS_URL', 'redis://localhost:6379')
    SECRET_KEY = os.environ.get('SECRET_KEY', 'dev-secret-key')
    DEBUG = os.environ.get('DEBUG', 'False').lower() == 'true'

# docker-compose.override.yml (for development)
version: '3.8'

services:
  web:
    environment:
      - DEBUG=true
      - LOG_LEVEL=debug
    volumes:
      - .:/app  # Mount source for hot reload
    command: ["flask", "run", "--host=0.0.0.0", "--debug"]

# docker-compose.prod.yml (for production)
version: '3.8'

services:
  web:
    environment:
      - DEBUG=false
      - LOG_LEVEL=info
    restart: always
    command: ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:app"]
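Compose merges these files for you: docker-compose.override.yml is picked up automatically in development, while the production file has to be passed explicitly. A minimal sketch:

```shell
# Development: docker-compose.yml + docker-compose.override.yml are merged automatically
docker compose up

# Production: list the files explicitly; later files override earlier ones
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```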

Secrets Management

# docker-compose.yml with secrets
version: '3.8'

services:
  web:
    build: .
    environment:
      - DATABASE_URL_FILE=/run/secrets/db_url
    secrets:
      - db_url

secrets:
  db_url:
    file: ./secrets/database_url.txt
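Note that Compose only mounts the secret as a file under /run/secrets - your application still has to read it. A minimal sketch of the *_FILE convention used above (the read_secret helper name is mine, not a Docker API):

```python
import os

def read_secret(name, default=None):
    """Return the contents of <NAME>_FILE if set, else the <NAME> env var, else a default."""
    file_path = os.environ.get(f'{name}_FILE')
    if file_path and os.path.exists(file_path):
        with open(file_path) as f:
            return f.read().strip()
    return os.environ.get(name, default)

# Falls back to a local SQLite URL when neither the secret file nor the env var exists
DATABASE_URL = read_secret('DATABASE_URL', 'sqlite:///local.db')
```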

Docker Security Best Practices

Secure Dockerfile

# Security-focused Dockerfile
FROM python:3.9-slim

# Don't run as root
RUN groupadd -r appgroup && useradd -r -g appgroup appuser

# Install security updates (curl is needed for the HEALTHCHECK below)
RUN apt-get update && apt-get upgrade -y \
    && apt-get install -y --no-install-recommends curl \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Copy and install requirements first (better caching)
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade pip \
    && pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY --chown=appuser:appgroup . .

# Switch to non-root user
USER appuser

# Use specific port
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Use exec form for better signal handling
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]

Container Security Scanning

# Scan images for vulnerabilities (docker scan was retired; Docker Scout replaces it)
docker scout cves my-app:latest

# Use official, minimal base images
FROM python:3.9-alpine  # Smaller attack surface
FROM nginx:alpine
FROM node:16-alpine

Monitoring and Logging

Container Monitoring

# docker-compose.yml with monitoring
version: '3.8'

services:
  web:
    build: .
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # Monitoring stack
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
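Once a healthcheck is defined, Docker tracks the result for you; you can query it directly instead of curling the endpoint yourself. A sketch (the container name is assumed):

```shell
# Current health status of a container: starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' web

# The last few probe results, including their output - handy when a check keeps failing
docker inspect --format '{{json .State.Health.Log}}' web
```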

Centralized Logging

# ELK Stack for logging
version: '3.8'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"

  logstash:
    image: docker.elastic.co/logstash/logstash:7.14.0
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf

  kibana:
    image: docker.elastic.co/kibana/kibana:7.14.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

  app:
    build: .
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:12201"

Docker in CI/CD

GitHub Actions with Docker

# .github/workflows/docker.yml
name: Docker Build and Deploy

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    
    steps:
    - uses: actions/checkout@v4
    
    - name: Build test image
      run: docker build -f Dockerfile.test -t coffee-shop:test .
    
    - name: Run tests
      run: |
        docker run --rm coffee-shop:test pytest
        docker run --rm coffee-shop:test flake8
    
  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    
    steps:
    - uses: actions/checkout@v4
    
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3
    
    - name: Login to DockerHub
      uses: docker/login-action@v3
      with:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}
    
    - name: Build and push
      uses: docker/build-push-action@v5
      with:
        context: .
        push: true
        tags: |
          mayachen/coffee-shop:latest
          mayachen/coffee-shop:${{ github.sha }}
        cache-from: type=gha
        cache-to: type=gha,mode=max

Common Docker Mistakes (I Made Them All)

Mistake 1: Huge Images

# Bad: Huge image with unnecessary tools
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y \
    python3 python3-pip curl wget vim git \
    build-essential sudo nano emacs
# Result: 800MB+ image

# Good: Minimal base image
FROM python:3.9-alpine
RUN apk add --no-cache gcc musl-dev
# Result: 50MB image
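Don't take size claims on faith - check them after each build. A quick sketch (the my-app tag is a placeholder):

```shell
# Compare the sizes of your image variants
docker images my-app

# See which Dockerfile instruction created each layer, and how big it is -
# great for spotting the one RUN line that added hundreds of megabytes
docker history my-app:latest
```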

Mistake 2: Running as Root

# Bad: Running as root (security risk)
FROM python:3.9
COPY . /app
WORKDIR /app
CMD ["python", "app.py"]

# Good: Non-root user
FROM python:3.9
RUN useradd -m appuser
COPY --chown=appuser:appuser . /app
USER appuser
WORKDIR /app
CMD ["python", "app.py"]

Mistake 3: Ignoring Layer Caching

# Bad: Changes to code invalidate dependency layer
FROM python:3.9
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]

# Good: Copy requirements first
FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]

Troubleshooting Docker

Common Issues and Solutions

# Container keeps restarting
docker logs container_name
docker exec -it container_name bash

# Can't connect to service
docker network ls
docker network inspect bridge

# Out of disk space
docker system df
docker system prune -a --volumes

# Permission issues
# On Linux, check user ID mapping
id
docker run --rm -it --user $(id -u):$(id -g) ubuntu bash

# Port already in use
netstat -tulpn | grep :8080   # or on newer systems: ss -tulpn | grep :8080
docker ps | grep 8080

Debugging Dockerfile

# Add debugging layers
FROM python:3.9-slim

# Debug: See what's in the container
RUN ls -la /

WORKDIR /app

# Debug: Check working directory
RUN pwd && ls -la

COPY requirements.txt .

# Debug: Verify file was copied
RUN ls -la && cat requirements.txt

RUN pip install -r requirements.txt

# Debug: Check installed packages
RUN pip list

COPY . .

# Debug: Final state
RUN ls -la && python --version

CMD ["python", "app.py"]

Final Thoughts: Docker as a Developer Superpower

That weekend of dependency hell that drove me to Docker? It was the best learning experience I ever had. Docker didn't just solve my deployment problems - it fundamentally changed how I think about applications and environments.

Containers aren't just about deployment consistency (though that's huge). They're about reproducible environments, scalable architectures, and the confidence to ship code knowing it will work the same way everywhere.

Whether you're building a simple Flask app or a complex microservices architecture, Docker gives you superpowers: instant environment setup, fearless deployments, easy scaling, and the ability to experiment without breaking your machine.

Start simple: containerize one application. Then add Docker Compose for multi-service setups. Before you know it, you'll wonder how you ever developed without containers.

Remember: Docker isn't magic - it's just Linux namespaces and cgroups wrapped in a friendly API. But sometimes, a little abstraction makes all the difference between "it works on my machine" and "it works everywhere."
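You don't even need Docker to see those building blocks - on any Linux machine, every process already belongs to a set of namespaces and a cgroup, visible under /proc:

```shell
# Every Linux process has a set of namespaces, exposed as symlinks
ls -l /proc/self/ns
# Shows entries like: mnt, pid, net, uts, ipc, user, cgroup

# And a cgroup membership that controls its resource limits
cat /proc/self/cgroup
```

A container is just a process whose namespace and cgroup entries differ from the host's defaults.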


Currently writing this from a coffee shop in Fremont, where I'm containerizing a side project while debugging a Docker networking issue. The irony isn't lost on me. Share your Docker journey @maya_codes_pnw - from container confusion to orchestration mastery! 🐳☕
