πŸ“˜ Overview

Docker has revolutionized how developers build, ship, and run applications.
This guide will walk you through everything you need to learn Docker — from basic concepts to advanced DevOps usage — across Windows, Linux, and macOS.

You’ll learn:

  • What Docker is and why it matters in DevOps.
  • How it enables microservices.
  • Installation, Dockerfile creation, sample app deployments, persistent storage, and networking.
  • Advanced Docker commands, Docker Compose, and AI integration.

1️⃣ What is Docker?

Docker is an open-source containerization platform that automates the deployment of applications inside containers β€” lightweight, self-contained environments that include everything an application needs to run.

Instead of using full-fledged Virtual Machines, Docker containers share the host operating system kernel, making them:

  • Faster to start and stop (in seconds)
  • Smaller in size
  • More efficient in resource usage
  • Portable across environments (dev, staging, prod)

Example analogy:

Think of Docker containers as shipping containers for software β€” they isolate your application and its dependencies so you can run it anywhere seamlessly.


2️⃣ Docker Architecture & Core Components

Understanding Docker’s architecture is crucial for effective containerization. Docker follows a client-server architecture with several key components working together.

πŸ—οΈ Docker Architecture Overview

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Docker CLI    │────│   Docker Daemon  │────│   Container     β”‚
β”‚   (Client)      β”‚    β”‚   (dockerd)      β”‚    β”‚   Runtime       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         β”‚                       β”‚                       β”‚
         β”‚                       β”‚                       β”‚
    β”Œβ”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”            β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”           β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”
    β”‚ Docker  β”‚            β”‚  Images   β”‚           β”‚Containers β”‚
    β”‚Commands β”‚            β”‚Repository β”‚           β”‚(Running)  β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜            β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜           β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

πŸ”§ Core Components Explained

1. Docker Client (docker CLI)

  • Primary interface for users to interact with Docker
  • Sends commands to Docker daemon via REST API
  • Can connect to remote Docker daemons

2. Docker Daemon (dockerd)

  • Background service that manages Docker objects
  • Listens for Docker API requests
  • Manages containers, images, networks, and volumes
  • Can communicate with other daemons

3. Docker Images

  • Read-only templates used to create containers
  • Built from Dockerfiles using a layered file system
  • Stored in registries (Docker Hub, AWS ECR, etc.)
  • Versioned using tags

4. Docker Containers

  • Running instances of Docker images
  • Isolated processes with their own file system
  • Share the host OS kernel
  • Can be started, stopped, moved, and deleted

5. Docker Registry

  • Centralized location for storing and distributing images
  • Docker Hub is the default public registry
  • Private registries for enterprise use

6. Docker Volumes

  • Persistent data storage mechanism
  • Persist beyond the container lifecycle
  • Can be shared between containers

7. Docker Networks

  • Enable communication between containers
  • Provide isolation and security
  • Support multiple network drivers

πŸ”„ Docker Workflow

  1. Build: Create Docker image from Dockerfile
  2. Ship: Push image to registry (Docker Hub, ECR, etc.)
  3. Run: Pull image and create containers

Example:

# Build phase
docker build -t myapp:v1.0 .

# Ship phase
docker push myregistry/myapp:v1.0

# Run phase
docker pull myregistry/myapp:v1.0
docker run -d -p 8080:80 myregistry/myapp:v1.0

🏷️ Docker vs Virtual Machines

Aspect         | Docker Containers            | Virtual Machines
Resource Usage | Lightweight, shares host OS  | Heavy, full OS per VM
Startup Time   | Seconds                      | Minutes
Isolation      | Process-level                | Hardware-level
Portability    | High (same OS kernel)        | Medium (hypervisor dependent)
Security       | Good (process isolation)     | Excellent (full isolation)
Scalability    | Excellent                    | Good

🌐 Container Runtime Environment

Container Runtime Hierarchy:

Docker CLI
    ↓
Docker Engine (dockerd)
    ↓
containerd (High-level container runtime)
    ↓
runc (Low-level container runtime)
    ↓
Linux Kernel (namespaces, cgroups)

Key Technologies:

  • Namespaces: Provide isolation (PID, Network, Mount, User, IPC, UTS)
  • Control Groups (cgroups): Resource limiting and monitoring
  • Union File Systems: Efficient layered file system
  • libcontainer: Low-level interface to kernel features

3️⃣ Why Docker is Important to Learn in DevOps

DevOps is about bridging the gap between developers and IT operations through automation, consistency, and collaboration.

Docker plays a key role by enabling:

  • Microservice Adoption: Organizations increasingly build new applications as microservices, and Docker containers are the standard way to run them.
  • Environment Consistency: Applications run identically on every machine.
  • Faster CI/CD Pipelines: Build once, deploy anywhere.
  • Isolation: Each app or service runs in its own environment.
  • Scalability: Run multiple container instances for load balancing.
  • Immutability: Versioned images allow safe rollbacks.

In a modern CI/CD pipeline:

  1. Developers build and test containers locally.
  2. Jenkins/GitHub Actions push the image to a container registry.
  3. Operations deploy the container to production (e.g., Kubernetes).
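As a hedged sketch, steps 2–3 of this pipeline might look like the following GitHub Actions workflow. The image name, secret names, and registry are illustrative assumptions, not part of this guide:

```yaml
# Illustrative CI workflow: build and push an image on every push to main.
# myuser/myapp and the DOCKERHUB_* secrets are hypothetical placeholders.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
      - name: Build image
        run: docker build -t myuser/myapp:${{ github.sha }} .
      - name: Push image
        run: docker push myuser/myapp:${{ github.sha }}
```

Tagging the image with the commit SHA keeps every build traceable back to its source revision.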

That’s why Docker is a must-learn skill for any DevOps engineer.


4️⃣ What is Microservice Architecture & How Docker Helps

Microservices Architecture is a design pattern where a large application is split into smaller, independent services β€” each responsible for one function (e.g., user authentication, payments, notifications).

Each microservice:

  • Runs in its own process (or container)
  • Communicates over APIs (usually REST or gRPC)
  • Can be updated or scaled independently

πŸš€ How Docker Helps

  • Each microservice can be packaged as a separate Docker image.
  • Simplifies versioning and scaling.
  • Makes local development easier with Docker Compose.
  • Enables smooth deployment to orchestration tools like Kubernetes.

Example:
In an e-commerce app:

  • auth-service (handles login) β†’ 1 container
  • cart-service (handles cart operations) β†’ 1 container
  • order-service (handles payments) β†’ 1 container

All running together on the same laptop!
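The three services above can be wired together with Docker Compose. A minimal sketch, assuming each service has its own directory and Dockerfile (the paths are illustrative):

```yaml
version: "3"
services:
  auth-service:
    build: ./auth-service    # handles login
  cart-service:
    build: ./cart-service    # handles cart operations
  order-service:
    build: ./order-service   # handles payments
```

Each service builds into its own image and container, yet all three start together with a single `docker-compose up`.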


5️⃣ How to Set Up Docker on Windows, Linux, or macOS

πŸͺŸ Windows Setup

  1. Install Docker Desktop for Windows:
    πŸ‘‰ https://www.docker.com/products/docker-desktop
  2. Enable WSL 2 (Windows Subsystem for Linux).
  3. Open PowerShell and verify installation:
    # Check Docker version
    docker --version
    
    # Check Docker Compose version
    docker-compose --version
    
    # Test Docker installation
    docker run hello-world
    
    # Check Docker daemon status
    docker info
    

Expected Output:

Hello from Docker!
This message shows that your installation appears to be working correctly.

Common Windows Issues & Solutions:

  • Hyper-V Error: Enable Hyper-V in Windows Features
  • WSL 2 Not Found: Update Windows to version 2004 or higher
  • Permission Denied: Run PowerShell as Administrator

🐧 Linux (Ubuntu Example)

# Update package index
sudo apt update

# Install Docker
sudo apt install docker.io -y

# Start and enable Docker service
sudo systemctl start docker
sudo systemctl enable docker

# Verify installation
docker --version
sudo docker run hello-world

Post-Installation Setup:

# Add your user to docker group (recommended)
sudo usermod -aG docker $USER

# Apply group changes (logout/login or run)
newgrp docker

# Test without sudo
docker run hello-world

# Check Docker service status
sudo systemctl status docker

Alternative Installation (Official Docker Repository):

# Remove old versions
sudo apt-get remove docker docker-engine docker.io containerd runc

# Install dependencies
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Set up repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

🍎 macOS

  1. Install via Homebrew:
    brew install --cask docker
    
  2. Launch Docker Desktop.
  3. Verify:
    docker run hello-world
    

6️⃣ Dockerfile — Your Blueprint for Images

A Dockerfile defines how to build a Docker image β€” think of it as a recipe.

Basic Dockerfile Example

FROM nginx:latest
COPY ./index.html /usr/share/nginx/html
EXPOSE 80

Comprehensive Dockerfile Commands

Command     | Description                                     | Example
FROM        | Base image (e.g., ubuntu, node, nginx)          | FROM node:16-alpine
WORKDIR     | Set working directory inside container          | WORKDIR /app
COPY        | Copy files/directories from host to image       | COPY . /app
ADD         | Copy files + extract archives & download URLs   | ADD app.tar.gz /app/
RUN         | Execute shell commands during build             | RUN npm install
ENV         | Define environment variables                    | ENV NODE_ENV=production
ARG         | Build-time variables                            | ARG VERSION=1.0
EXPOSE      | Declare port number                             | EXPOSE 3000
VOLUME      | Create mount point for volumes                  | VOLUME ["/data"]
USER        | Set user for subsequent instructions            | USER node
CMD         | Default command to run (can be overridden)      | CMD ["npm", "start"]
ENTRYPOINT  | Main container process (override with --entrypoint) | ENTRYPOINT ["./entrypoint.sh"]
HEALTHCHECK | Define container health check                   | HEALTHCHECK CMD curl -f http://localhost/ || exit 1
LABEL       | Add metadata to image                           | LABEL version="1.0" maintainer="user@example.com"
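CMD and ENTRYPOINT can also be combined: ENTRYPOINT fixes the executable, while CMD supplies default arguments that `docker run` can replace. A minimal sketch:

```dockerfile
FROM alpine:3.18
# ENTRYPOINT is the fixed executable; CMD provides default arguments.
ENTRYPOINT ["ping", "-c", "3"]
CMD ["localhost"]
# docker run <image>          runs: ping -c 3 localhost
# docker run <image> 8.8.8.8  runs: ping -c 3 8.8.8.8
```

This pattern is common for images that wrap a single tool but let the user change its target at run time.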

Real-World Node.js Application Dockerfile

# Use official Node.js runtime as base image
FROM node:16-alpine

# Set metadata
LABEL maintainer="your-email@example.com"
LABEL version="1.0"
LABEL description="Sample Node.js application"

# Create app directory
WORKDIR /usr/src/app

# Copy package files first (for better caching)
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production && npm cache clean --force

# Create non-root user for security
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nodeuser -u 1001

# Copy application code
COPY --chown=nodeuser:nodejs . .

# Switch to non-root user
USER nodeuser

# Expose port
EXPOSE 3000

# Add health check (node:16-alpine has no curl; use busybox wget instead)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD wget -qO- http://localhost:3000/health || exit 1

# Define startup command
CMD ["node", "server.js"]

Python Flask Application Dockerfile

# Use official Python runtime
FROM python:3.9-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Set work directory
WORKDIR /app

# Install system dependencies
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy project files
COPY . .

# Create non-root user
RUN useradd --create-home --shell /bin/bash app
RUN chown -R app:app /app
USER app

# Expose port
EXPOSE 5000

# Run the application
CMD ["python", "app.py"]

Dockerfile Best Practices

🎯 Optimization Tips:

  • Use specific tags instead of latest: FROM node:16-alpine
  • Leverage multi-stage builds for smaller images
  • Order instructions by change frequency (least to most)
  • Use .dockerignore to exclude unnecessary files
  • Combine RUN commands to reduce layers
  • Use COPY instead of ADD when possible
  • Run as non-root user for security

πŸ“ Multi-Stage Build Example:

# Build stage
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

🚫 .dockerignore Example:

node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.nyc_output
coverage
.tmp

7️⃣ Deploying a Sample App with Docker

Let’s deploy a simple NGINX website.

Step 1: Create index.html

<h1>Hello from Dockerized NGINX!</h1>

Step 2: Create a Dockerfile

FROM nginx:latest
COPY ./index.html /usr/share/nginx/html
EXPOSE 80

Step 3: Build & Run

docker build -t my-nginx .
docker run -d -p 8080:80 my-nginx

Open http://localhost:8080 in your browser βœ…


8️⃣ Essential Docker Commands with Examples

πŸƒβ€β™‚οΈ Container Lifecycle Commands

Build an Image:

# Build image from current directory
docker build -t myapp:v1.0 .

# Build with custom Dockerfile
docker build -f Dockerfile.prod -t myapp:prod .

# Build without cache
docker build --no-cache -t myapp:latest .

Run Containers:

# Run container in background
docker run -d --name webserver -p 8080:80 nginx

# Run with environment variables
docker run -e NODE_ENV=production -e PORT=3000 myapp

# Run with volume mount
docker run -v /host/path:/container/path myapp

# Run interactive container
docker run -it ubuntu:20.04 /bin/bash

# Run and remove after exit
docker run --rm -it python:3.9 python

Container Management:

# List all containers (running + stopped)
docker ps -a

# List only running containers
docker ps

# Start/stop containers
docker start container_name
docker stop container_name
docker restart container_name

# Pause/unpause containers
docker pause container_name
docker unpause container_name

# Remove containers
docker rm container_name
docker rm -f running_container  # Force remove

πŸ–ΌοΈ Image Management Commands

# List all images
docker images

# Remove images
docker rmi image_name:tag
docker rmi $(docker images -q)  # Remove all images

# Pull images from registry
docker pull ubuntu:20.04
docker pull nginx:alpine

# Tag images
docker tag myapp:latest myapp:v1.0

# Push to registry
docker push username/myapp:v1.0

# Save/load images
docker save myapp:latest > myapp.tar
docker load < myapp.tar

πŸ” Monitoring and Debugging Commands

# View container logs
docker logs container_name
docker logs -f container_name    # Follow logs
docker logs --tail 50 container_name  # Last 50 lines

# Execute commands in running container
docker exec -it container_name bash
docker exec container_name ls -la /app

# Inspect containers/images
docker inspect container_name
docker inspect image_name

# View resource usage
docker stats
docker stats container_name

# View container processes
docker top container_name

# Copy files between host and container
docker cp file.txt container_name:/app/
docker cp container_name:/app/logs/ ./local_logs/

🧹 Cleanup Commands

# Remove stopped containers
docker container prune

# Remove unused images
docker image prune
docker image prune -a  # Remove all unused images

# Remove unused volumes
docker volume prune

# Remove unused networks
docker network prune

# Remove all unused objects
docker system prune
docker system prune -a --volumes  # Aggressive cleanup

# Show disk usage
docker system df

🌐 Network Management

# List networks
docker network ls

# Create custom network
docker network create my-network
docker network create --driver bridge my-bridge-network

# Connect container to network
docker network connect my-network container_name

# Disconnect container from network
docker network disconnect my-network container_name

# Inspect network
docker network inspect my-network

# Remove network
docker network rm my-network

πŸ’Ύ Volume Management

# List volumes
docker volume ls

# Create volume
docker volume create my-volume

# Inspect volume
docker volume inspect my-volume

# Remove volume
docker volume rm my-volume

# Run container with volume
docker run -v my-volume:/data nginx

πŸ”§ Quick Command Reference

Command       | Purpose                              | Example
docker ps     | List running containers              | docker ps -a
docker images | List local images                    | docker images --filter dangling=true
docker build  | Build image from Dockerfile          | docker build -t myapp .
docker run    | Create and run container             | docker run -d -p 80:80 nginx
docker exec   | Execute command in running container | docker exec -it myapp bash
docker logs   | View container logs                  | docker logs -f myapp
docker stop   | Stop running container               | docker stop myapp
docker rm     | Remove container                     | docker rm myapp
docker rmi    | Remove image                         | docker rmi myapp:latest

9️⃣ Persistent Volumes & Database Deployment

Containers are ephemeral — a container's writable layer is discarded when the container is removed.
Use volumes to persist data across container removal and recreation.

Example: Running MySQL with a persistent volume:

docker volume create mysql_data
docker run -d \
  --name mysql-db \
  -e MYSQL_ROOT_PASSWORD=admin \
  -v mysql_data:/var/lib/mysql \
  mysql:latest

To view volumes:

docker volume ls

This ensures database data survives container removal, recreation, or upgrades.


πŸ”Ÿ Networking & DNS in Docker

Docker provides virtual networks that allow containers to communicate securely.

Types of Docker Networks

Network Type | Description
bridge       | Default, private network on a single host
host         | Shares host network namespace
none         | No networking
overlay      | Multi-host networking for Swarm/Kubernetes

Example:

docker network create my-network
docker run -d --network my-network --name web nginx
docker run -d --network my-network --name db mysql

Docker provides automatic DNS resolution β€” containers can reach each other by name (web, db, etc.) without needing IPs.
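This built-in DNS is what lets application configuration refer to plain service names. In a Compose file, for instance, the web service can reach MySQL simply as `db` — a hedged sketch, where the `DB_HOST` variable name is an assumption about the app, not a Docker convention:

```yaml
version: "3"
services:
  web:
    image: nginx
    environment:
      DB_HOST: db   # resolved by Docker's embedded DNS; no IP address needed
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: admin
```

If a container is recreated and receives a new IP, the name still resolves correctly, which is why hard-coding container IPs is discouraged.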


1️⃣1️⃣ Docker Compose — Simplify Multi-Container Apps

Docker Compose allows you to define a complete multi-container setup in a single YAML file.

Example docker-compose.yml:

version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: admin

Run all containers together:

docker-compose up -d

Now your NGINX frontend and MySQL database run seamlessly together.


1️⃣2️⃣ Three-Tier Microservice App Deployment

Example: Frontend + Backend API + Database

docker-compose.yml

version: "3"
services:
  frontend:
    image: nginx
    ports:
      - "8080:80"
  api:
    build: ./api
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example

Each service runs in its own container but communicates internally through Docker networking.
This mirrors real-world enterprise architectures.


1️⃣3️⃣ Advanced Docker Commands

Command                | Description
docker inspect <id>    | View detailed metadata
docker stats           | Monitor real-time resource usage
docker system prune    | Clean unused images/containers
docker save / load     | Export/import images
docker export / import | Export/import containers
docker update          | Change resource limits
docker cp              | Copy files between host and container
docker commit          | Create an image from a container's state

1️⃣4️⃣ Role of AI in Docker

Docker is crucial in AI/ML development and deployment:

  • Model Packaging: Package models and dependencies in containers for reproducibility.
  • MLOps Pipelines: Tools like Kubeflow and MLflow rely on Docker for experiment tracking and deployment.
  • Edge AI: Deploy AI models in lightweight containers on IoT or edge devices.
  • AI-Driven DevOps: AI tools now analyze container metrics to predict scaling needs and detect anomalies automatically.

Example:
A TensorFlow model can be containerized as:

FROM tensorflow/serving
COPY ./model /models/my_model
ENV MODEL_NAME=my_model

Then deployed via Docker or Kubernetes for real-time inference.
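TensorFlow Serving exposes a REST API on port 8501 and a gRPC endpoint on port 8500 by default, so a hedged Compose fragment for the image above might be:

```yaml
services:
  tf-serving:
    build: .            # the TensorFlow Serving Dockerfile above
    ports:
      - "8501:8501"     # REST API for inference requests
      - "8500:8500"     # gRPC endpoint
```

Clients can then POST prediction requests to http://localhost:8501/v1/models/my_model:predict.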


1️⃣5️⃣ Real-World Docker Examples

πŸš€ Complete LAMP Stack with Docker Compose

Project Structure:

lamp-stack/
β”œβ”€β”€ docker-compose.yml
β”œβ”€β”€ apache/
β”‚   └── Dockerfile
β”œβ”€β”€ php/
β”‚   └── Dockerfile
└── www/
    └── index.php

docker-compose.yml:

version: '3.8'

services:
  apache:
    build: ./apache
    ports:
      - "80:80"
    volumes:
      - ./www:/var/www/html
    depends_on:
      - mysql
      - php
    networks:
      - lamp-network

  php:
    build: ./php
    volumes:
      - ./www:/var/www/html
    networks:
      - lamp-network

  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: myapp
      MYSQL_USER: appuser
      MYSQL_PASSWORD: apppassword
    ports:
      - "3306:3306"
    volumes:
      - mysql_data:/var/lib/mysql
    networks:
      - lamp-network

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: mysql
      PMA_USER: root
      PMA_PASSWORD: rootpassword
    ports:
      - "8080:80"
    depends_on:
      - mysql
    networks:
      - lamp-network

volumes:
  mysql_data:

networks:
  lamp-network:
    driver: bridge
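The apache/Dockerfile and php/Dockerfile referenced in the project structure are not shown. As one hedged possibility, the PHP image could be built like this (the base image, extension list, and paths are assumptions), with the Apache image built similarly from httpd:2.4:

```dockerfile
# php/Dockerfile (hypothetical sketch): PHP-FPM with MySQL extensions,
# matching the mysql service defined in the Compose file above
FROM php:8.1-fpm
RUN docker-php-ext-install mysqli pdo_mysql
WORKDIR /var/www/html
```

The official php images ship the docker-php-ext-install helper, which compiles and enables extensions at build time.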

🐍 Python Development Environment

Directory Structure:

flask-app/
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ docker-compose.yml
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ app.py
└── .env

Dockerfile:

FROM python:3.9-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    default-libmysqlclient-dev \
    && rm -rf /var/lib/apt/lists/*

# Copy and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser

EXPOSE 5000

CMD ["python", "app.py"]

docker-compose.yml:

version: '3.8'

services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - FLASK_ENV=development
      - DATABASE_URL=postgresql://user:password@db:5432/myapp
    volumes:
      - .:/app
    depends_on:
      - db
      - redis

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:

🌐 Full-Stack Application (React + Node.js + MongoDB)

docker-compose.yml:

version: '3.8'

services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - REACT_APP_API_URL=http://localhost:5000/api
    volumes:
      - ./frontend:/app
      - /app/node_modules
    stdin_open: true
    tty: true

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    environment:
      - NODE_ENV=development
      - MONGODB_URI=mongodb://mongodb:27017/myapp
      - JWT_SECRET=your-secret-key
    volumes:
      - ./backend:/app
      - /app/node_modules
    depends_on:
      - mongodb

  mongodb:
    image: mongo:4.4
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
      - MONGO_INITDB_DATABASE=myapp
    volumes:
      - mongodb_data:/data/db

volumes:
  mongodb_data:

πŸ”§ Development vs Production Configurations

docker-compose.dev.yml:

version: '3.8'

services:
  app:
    build:
      target: development
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    command: npm run dev

docker-compose.prod.yml:

version: '3.8'

services:
  app:
    build:
      target: production
    environment:
      - NODE_ENV=production
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M

Usage:

# Development
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
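Both override files assume a multi-stage Dockerfile that defines development and production build targets. A hedged sketch for a Node.js app (file names and scripts are assumptions):

```dockerfile
# Hypothetical multi-stage Dockerfile providing the named targets used by
# docker-compose.dev.yml and docker-compose.prod.yml above.
FROM node:16-alpine AS base
WORKDIR /app
COPY package*.json ./

FROM base AS development
RUN npm install               # includes devDependencies
COPY . .
CMD ["npm", "run", "dev"]

FROM base AS production
RUN npm ci --only=production  # production dependencies only
COPY . .
CMD ["node", "server.js"]
```

`build.target` in each Compose file selects which stage becomes the final image.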

πŸ“Š Monitoring Stack (Prometheus + Grafana)

version: '3.8'

services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana_data:/var/lib/grafana
    depends_on:
      - prometheus

  node-exporter:
    image: prom/node-exporter
    ports:
      - "9100:9100"
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'

volumes:
  prometheus_data:
  grafana_data:
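The ./prometheus.yml mounted above is not shown; a minimal hedged configuration that scrapes the node-exporter service by its Compose service name might be:

```yaml
# Minimal prometheus.yml (illustrative): scrape node-exporter every 15s
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]
```

Because all three services share the default Compose network, Prometheus can reach the exporter by service name rather than IP.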

1️⃣6️⃣ Docker Security Best Practices

πŸ”’ Container Security Guidelines

1. Use Non-Root Users:

# Create and use non-root user
RUN adduser -D -s /bin/sh appuser
USER appuser

2. Scan Images for Vulnerabilities:

# Using Docker Scout (successor to the deprecated `docker scan`)
docker scout cves myapp:latest

# Using Trivy
trivy image myapp:latest

# Using Snyk
snyk container test myapp:latest

3. Limit Resources:

# Limit memory and CPU
docker run -m 512m --cpus="1.5" myapp

# In Docker Compose
services:
  app:
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.5'

4. Use Read-Only File Systems:

docker run --read-only --tmpfs /tmp myapp

5. Secure Secrets Management:

# Using Docker secrets
services:
  app:
    image: myapp
    secrets:
      - db_password
      
secrets:
  db_password:
    external: true

1️⃣7️⃣ Troubleshooting Common Docker Issues

πŸ› Common Problems and Solutions

Problem: Container Exits Immediately

# Check exit code and logs
docker ps -a
docker logs container_name

# Run interactively to debug
docker run -it myapp /bin/sh

Problem: Port Already in Use

# Find process using port
lsof -i :8080

# Kill process
kill -9 <PID>

# Or use different port
docker run -p 8081:80 nginx

Problem: Out of Disk Space

# Check disk usage
docker system df

# Clean up
docker system prune -a --volumes

# Remove specific items
docker container prune
docker image prune -a
docker volume prune

Problem: Slow Build Times

# Use .dockerignore
node_modules
.git
*.log

# Optimize layer order
COPY package*.json ./
RUN npm install
COPY . .

# Use multi-stage builds
FROM node:16 AS builder
# Build steps...

FROM nginx:alpine AS production
COPY --from=builder /app/dist /usr/share/nginx/html


1️⃣8️⃣ Conclusion

Docker is the backbone of modern DevOps and microservice architectures.
It provides consistency, scalability, and automation across all stages of development and deployment.

By mastering Docker, you can:

  • Build portable applications
  • Deploy at scale in cloud environments
  • Transition easily to orchestration tools like Kubernetes
  • Integrate AI workflows into your DevOps lifecycle
  • Implement secure, scalable microservices architectures

🎯 Next Steps

  1. Practice: Set up the examples in this guide
  2. Learn Kubernetes: Scale your containers with orchestration
  3. CI/CD Integration: Automate builds with GitHub Actions/Jenkins
  4. Production Deployment: Deploy to AWS ECS, Google Cloud Run, or Azure Container Instances
  5. Monitoring: Implement logging and monitoring for containerized applications

πŸš€ Learn, experiment, and containerize your next big idea with Docker!