Overview
Docker has revolutionized how developers build, ship, and run applications.
This guide walks you through everything you need to learn Docker, from basic concepts to advanced DevOps usage, across Windows, Linux, and macOS.
You'll learn:
- What Docker is and why it matters in DevOps.
- How it enables microservices.
- Installation, Dockerfile creation, sample app deployments, persistent storage, and networking.
- Advanced Docker commands, Docker Compose, and AI integration.
1️⃣ What is Docker?
Docker is an open-source containerization platform that automates the deployment of applications inside containers: lightweight, self-contained environments that include everything an application needs to run.
Instead of full-fledged virtual machines, Docker containers share the host operating system kernel, making them:
- Faster to start and stop (in seconds)
- Smaller in size
- More efficient in resource usage
- Portable across environments (dev, staging, prod)
Example analogy:
Think of Docker containers as shipping containers for software: they isolate your application and its dependencies so you can run it anywhere, seamlessly.
2οΈβ£ Docker Architecture & Core Components
Understanding Docker’s architecture is crucial for effective containerization. Docker follows a client-server architecture with several key components working together.
Docker Architecture Overview
+-----------------+      +------------------+      +-----------------+
|   Docker CLI    |----->|  Docker Daemon   |----->|   Container     |
|    (Client)     |      |    (dockerd)     |      |    Runtime      |
+-----------------+      +------------------+      +-----------------+
        |                        |                        |
+-------v-------+       +--------v-------+       +--------v-------+
|    Docker     |       |     Images     |       |   Containers   |
|   Commands    |       |   Repository   |       |    (Running)   |
+---------------+       +----------------+       +----------------+
Core Components Explained
1. Docker Client (docker CLI)
- Primary interface for users to interact with Docker
- Sends commands to Docker daemon via REST API
- Can connect to remote Docker daemons
2. Docker Daemon (dockerd)
- Background service that manages Docker objects
- Listens for Docker API requests
- Manages containers, images, networks, and volumes
- Can communicate with other daemons
3. Docker Images
- Read-only templates used to create containers
- Built from Dockerfiles with layered file system
- Stored in registries (Docker Hub, AWS ECR, etc.)
- Versioned using tags
4. Docker Containers
- Running instances of Docker images
- Isolated processes with their own file system
- Share the host OS kernel
- Can be started, stopped, moved, and deleted
5. Docker Registry
- Centralized location for storing and distributing images
- Docker Hub is the default public registry
- Private registries for enterprise use
6. Docker Volumes
- Persistent data storage mechanism
- Survive container lifecycle
- Can be shared between containers
7. Docker Networks
- Enable communication between containers
- Provide isolation and security
- Support multiple network drivers
Docker Workflow
- Build: Create Docker image from Dockerfile
- Ship: Push image to registry (Docker Hub, ECR, etc.)
- Run: Pull image and create containers
# Build phase
docker build -t myapp:v1.0 .
# Ship phase
docker push myregistry/myapp:v1.0
# Run phase
docker pull myregistry/myapp:v1.0
docker run -d -p 8080:80 myregistry/myapp:v1.0
Docker vs Virtual Machines
| Aspect | Docker Containers | Virtual Machines |
|---|---|---|
| Resource Usage | Lightweight, shares host OS | Heavy, full OS per VM |
| Startup Time | Seconds | Minutes |
| Isolation | Process-level | Hardware-level |
| Portability | High (same OS kernel) | Medium (hypervisor dependent) |
| Security | Good (process isolation) | Excellent (full isolation) |
| Scalability | Excellent | Good |
Container Runtime Environment
Container Runtime Hierarchy:
Docker CLI
  ↓
Docker Engine (dockerd)
  ↓
containerd (high-level container runtime)
  ↓
runc (low-level container runtime)
  ↓
Linux Kernel (namespaces, cgroups)
Key Technologies:
- Namespaces: Provide isolation (PID, Network, Mount, User, IPC, UTS)
- Control Groups (cgroups): Resource limiting and monitoring
- Union File Systems: Efficient layered file system
- libcontainer: Low-level interface to kernel features
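These kernel features map directly onto docker run flags. A quick illustration (requires a running Docker daemon; alpine is used here only as a small throwaway image):

```shell
# cgroups in action: -m and --cpus set memory/CPU limits via cgroups
docker run --rm -m 256m --cpus="0.5" alpine echo "resource-limited container"

# PID namespace in action: the container sees only its own processes,
# not the host's
docker run --rm alpine ps aux
```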
3️⃣ Why Docker is Important to Learn in DevOps
DevOps is about bridging the gap between developers and IT operations through automation, consistency, and collaboration.
Docker plays a key role by enabling:
- Microservice Adoption: Organizations increasingly build new applications as microservices, and Docker containers are the standard way to run them.
- Environment Consistency: Applications run identically on every machine.
- Faster CI/CD Pipelines: Build once, deploy anywhere.
- Isolation: Each app or service runs in its own environment.
- Scalability: Run multiple container instances for load balancing.
- Immutability: Versioned images allow safe rollbacks.
In a modern CI/CD pipeline:
- Developers build and test containers locally.
- Jenkins/GitHub Actions push the image to a container registry.
- Operations deploy the container to production (e.g., Kubernetes).
That's why Docker is a must-learn skill for any DevOps engineer.
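The pipeline stages above can be sketched as a minimal GitHub Actions workflow. This is an illustrative sketch only: the image name myregistry/myapp and the DOCKER_USERNAME / DOCKER_PASSWORD secrets are placeholders for your own registry and credentials.

```yaml
# .github/workflows/docker.yml (illustrative sketch)
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to the registry
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
      - name: Build the image
        run: docker build -t myregistry/myapp:${{ github.sha }} .
      - name: Push the image
        run: docker push myregistry/myapp:${{ github.sha }}
```

From there, a deployment job (or an operator) pulls the tagged image into the target environment.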
4️⃣ What is Microservice Architecture & How Docker Helps
Microservices Architecture is a design pattern where a large application is split into smaller, independent services, each responsible for one function (e.g., user authentication, payments, notifications).
Each microservice:
- Runs in its own process (or container)
- Communicates over APIs (usually REST or gRPC)
- Can be updated or scaled independently
How Docker Helps
- Each microservice can be packaged as a separate Docker image.
- Simplifies versioning and scaling.
- Makes local development easier with Docker Compose.
- Enables smooth deployment to orchestration tools like Kubernetes.
Example:
In an e-commerce app:
- auth-service (handles login) → 1 container
- cart-service (handles cart operations) → 1 container
- order-service (handles payments) → 1 container
All running together on the same laptop!
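A sketch of how these three services could be wired together with Docker Compose. The build paths and host ports are hypothetical; each service would have its own Dockerfile:

```yaml
version: "3"
services:
  auth-service:
    build: ./auth-service   # hypothetical directory with its own Dockerfile
    ports:
      - "3001:3000"
  cart-service:
    build: ./cart-service
    ports:
      - "3002:3000"
  order-service:
    build: ./order-service
    ports:
      - "3003:3000"
```

All three containers share a default network, so they can call each other by service name.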
5️⃣ How to Set Up Docker on Windows, Linux, or macOS
Windows Setup
- Install Docker Desktop for Windows: https://www.docker.com/products/docker-desktop
- Enable WSL 2 (Windows Subsystem for Linux).
- Open PowerShell and verify the installation:
# Check Docker version
docker --version
# Check Docker Compose version
docker-compose --version
# Test Docker installation
docker run hello-world
# Check Docker daemon status
docker info
Expected Output:
Hello from Docker!
This message shows that your installation appears to be working correctly.
Common Windows Issues & Solutions:
- Hyper-V Error: Enable Hyper-V in Windows Features
- WSL 2 Not Found: Update Windows to version 2004 or higher
- Permission Denied: Run PowerShell as Administrator
Linux (Ubuntu Example)
# Update package index
sudo apt update
# Install Docker
sudo apt install docker.io -y
# Start and enable Docker service
sudo systemctl start docker
sudo systemctl enable docker
# Verify installation
docker --version
sudo docker run hello-world
Post-Installation Setup:
# Add your user to docker group (recommended)
sudo usermod -aG docker $USER
# Apply group changes (logout/login or run)
newgrp docker
# Test without sudo
docker run hello-world
# Check Docker service status
sudo systemctl status docker
Alternative Installation (Official Docker Repository):
# Remove old versions
sudo apt-get remove docker docker-engine docker.io containerd runc
# Install dependencies
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
macOS
- Install via Homebrew:
brew install --cask docker
- Launch Docker Desktop.
- Verify the installation:
docker run hello-world
6️⃣ Dockerfile: Your Blueprint for Images
A Dockerfile defines how to build a Docker image; think of it as a recipe.
Basic Dockerfile Example
FROM nginx:latest
COPY ./index.html /usr/share/nginx/html
EXPOSE 80
Comprehensive Dockerfile Commands
| Command | Description | Example |
|---|---|---|
| FROM | Base image (e.g., ubuntu, node, nginx) | FROM node:16-alpine |
| WORKDIR | Set working directory inside container | WORKDIR /app |
| COPY | Copy files/directories from host to image | COPY . /app |
| ADD | Copy files, plus extract archives and fetch URLs | ADD app.tar.gz /app/ |
| RUN | Execute shell commands during build | RUN npm install |
| ENV | Define environment variables | ENV NODE_ENV=production |
| ARG | Build-time variables | ARG VERSION=1.0 |
| EXPOSE | Declare the port the app listens on | EXPOSE 3000 |
| VOLUME | Create mount point for volumes | VOLUME ["/data"] |
| USER | Set user for subsequent instructions | USER node |
| CMD | Default command to run (easily overridden at docker run) | CMD ["npm", "start"] |
| ENTRYPOINT | Set the entry process (overridden only with --entrypoint) | ENTRYPOINT ["./entrypoint.sh"] |
| HEALTHCHECK | Define container health check | HEALTHCHECK CMD curl -f http://localhost/ \|\| exit 1 |
| LABEL | Add metadata to image | LABEL version="1.0" maintainer="user@example.com" |
Real-World Node.js Application Dockerfile
# Use official Node.js runtime as base image
FROM node:16-alpine
# Set metadata
LABEL maintainer="your-email@example.com"
LABEL version="1.0"
LABEL description="Sample Node.js application"
# Create app directory
WORKDIR /usr/src/app
# Copy package files first (for better caching)
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production && npm cache clean --force
# Create non-root user for security
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nodeuser -u 1001
# Copy application code
COPY --chown=nodeuser:nodejs . .
# Switch to non-root user
USER nodeuser
# Expose port
EXPOSE 3000
# Add health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:3000/health || exit 1
# Define startup command
CMD ["node", "server.js"]
Python Flask Application Dockerfile
# Use official Python runtime
FROM python:3.9-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Set work directory
WORKDIR /app
# Install system dependencies
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy project files
COPY . .
# Create non-root user
RUN useradd --create-home --shell /bin/bash app
RUN chown -R app:app /app
USER app
# Expose port
EXPOSE 5000
# Run the application
CMD ["python", "app.py"]
Dockerfile Best Practices
Optimization Tips:
- Use specific tags instead of latest: FROM node:16-alpine
- Leverage multi-stage builds for smaller images
- Order instructions by change frequency (least to most frequently changed)
- Use .dockerignore to exclude unnecessary files
- Combine RUN commands to reduce layers
- Use COPY instead of ADD when possible
- Run as a non-root user for security
Multi-Stage Build Example:
# Build stage
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
.dockerignore Example:
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.nyc_output
coverage
.tmp
7️⃣ Deploying a Sample App with Docker
Let's deploy a simple NGINX website.
Step 1: Create index.html
<h1>Hello from Dockerized NGINX!</h1>
Step 2: Create a Dockerfile
FROM nginx:latest
COPY ./index.html /usr/share/nginx/html
EXPOSE 80
Step 3: Build & Run
docker build -t my-nginx .
docker run -d -p 8080:80 my-nginx
Open http://localhost:8080 in your browser to see the page.
8️⃣ Essential Docker Commands with Examples
Container Lifecycle Commands
Build an Image:
# Build image from current directory
docker build -t myapp:v1.0 .
# Build with custom Dockerfile
docker build -f Dockerfile.prod -t myapp:prod .
# Build without cache
docker build --no-cache -t myapp:latest .
Run Containers:
# Run container in background
docker run -d --name webserver -p 8080:80 nginx
# Run with environment variables
docker run -e NODE_ENV=production -e PORT=3000 myapp
# Run with volume mount
docker run -v /host/path:/container/path myapp
# Run interactive container
docker run -it ubuntu:20.04 /bin/bash
# Run and remove after exit
docker run --rm -it python:3.9 python
Container Management:
# List all containers (running + stopped)
docker ps -a
# List only running containers
docker ps
# Start/stop containers
docker start container_name
docker stop container_name
docker restart container_name
# Pause/unpause containers
docker pause container_name
docker unpause container_name
# Remove containers
docker rm container_name
docker rm -f running_container # Force remove
Image Management Commands
# List all images
docker images
# Remove images
docker rmi image_name:tag
docker rmi $(docker images -q) # Remove all images
# Pull images from registry
docker pull ubuntu:20.04
docker pull nginx:alpine
# Tag images
docker tag myapp:latest myapp:v1.0
# Push to registry
docker push username/myapp:v1.0
# Save/load images
docker save myapp:latest > myapp.tar
docker load < myapp.tar
Monitoring and Debugging Commands
# View container logs
docker logs container_name
docker logs -f container_name # Follow logs
docker logs --tail 50 container_name # Last 50 lines
# Execute commands in running container
docker exec -it container_name bash
docker exec container_name ls -la /app
# Inspect containers/images
docker inspect container_name
docker inspect image_name
# View resource usage
docker stats
docker stats container_name
# View container processes
docker top container_name
# Copy files between host and container
docker cp file.txt container_name:/app/
docker cp container_name:/app/logs/ ./local_logs/
Cleanup Commands
# Remove stopped containers
docker container prune
# Remove unused images
docker image prune
docker image prune -a # Remove all unused images
# Remove unused volumes
docker volume prune
# Remove unused networks
docker network prune
# Remove all unused objects
docker system prune
docker system prune -a --volumes # Aggressive cleanup
# Show disk usage
docker system df
Network Management
# List networks
docker network ls
# Create custom network
docker network create my-network
docker network create --driver bridge my-bridge-network
# Connect container to network
docker network connect my-network container_name
# Disconnect container from network
docker network disconnect my-network container_name
# Inspect network
docker network inspect my-network
# Remove network
docker network rm my-network
Volume Management
# List volumes
docker volume ls
# Create volume
docker volume create my-volume
# Inspect volume
docker volume inspect my-volume
# Remove volume
docker volume rm my-volume
# Run container with volume
docker run -v my-volume:/data nginx
Quick Command Reference
| Command | Purpose | Example |
|---|---|---|
| docker ps | List running containers | docker ps -a |
| docker images | List local images | docker images --filter dangling=true |
| docker build | Build image from Dockerfile | docker build -t myapp . |
| docker run | Create and run container | docker run -d -p 80:80 nginx |
| docker exec | Execute command in running container | docker exec -it myapp bash |
| docker logs | View container logs | docker logs -f myapp |
| docker stop | Stop running container | docker stop myapp |
| docker rm | Remove container | docker rm myapp |
| docker rmi | Remove image | docker rmi myapp:latest |
9️⃣ Persistent Volumes & Database Deployment
Containers are ephemeral: data written inside a container is lost when the container is removed.
Use volumes to persist data across container restarts and re-creation.
Example: Running MySQL with a persistent volume:
docker volume create mysql_data
docker run -d --name mysql-db -e MYSQL_ROOT_PASSWORD=admin -v mysql_data:/var/lib/mysql mysql:latest
To view volumes:
docker volume ls
This ensures database data survives container restarts or upgrades.
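To see persistence in action, you can delete the container and attach a fresh one to the same volume. This is an illustrative session (requires a running Docker daemon) continuing the mysql-db example above:

```shell
# Remove the container; the named volume is NOT removed with it
docker rm -f mysql-db

# Start a new container reusing the same volume: the data is still there
docker run -d --name mysql-db2 -e MYSQL_ROOT_PASSWORD=admin \
  -v mysql_data:/var/lib/mysql mysql:latest
```

The volume only disappears when you explicitly run docker volume rm mysql_data (or prune unused volumes).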
🔟 Networking & DNS in Docker
Docker provides virtual networks that allow containers to communicate securely.
Types of Docker Networks
| Network Type | Description |
|---|---|
| bridge | Default, private network on a single host |
| host | Shares the host network namespace |
| none | No networking |
| overlay | Multi-host networking for Swarm/Kubernetes |
Example:
docker network create my-network
docker run -d --network my-network --name web nginx
docker run -d --network my-network --name db mysql
On user-defined networks, Docker provides automatic DNS resolution: containers can reach each other by name (web, db, etc.) without hard-coding IP addresses.
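You can verify name resolution from inside a container. An illustrative check, assuming the web and db containers from the example above are running:

```shell
# Resolve the db container by name from inside the web container
docker exec web getent hosts db
```

This prints the internal IP that Docker's embedded DNS server assigned to db on my-network.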
1️⃣1️⃣ Docker Compose: Simplify Multi-Container Apps
Docker Compose allows you to define a complete multi-container setup in a single YAML file.
Example docker-compose.yml:
version: "3"
services:
web:
image: nginx
ports:
- "8080:80"
db:
image: mysql
environment:
MYSQL_ROOT_PASSWORD: admin
Run all containers together:
docker-compose up -d
Now your NGINX frontend and MySQL database run seamlessly together.
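A few companion commands cover the rest of the Compose lifecycle:

```shell
docker-compose ps           # list the services and their state
docker-compose logs -f web  # follow logs for one service
docker-compose stop         # stop services but keep containers
docker-compose down         # stop and remove containers and networks
docker-compose down -v      # ...and also remove named volumes
```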
1️⃣2️⃣ Three-Tier Microservice App Deployment
Example: Frontend + Backend API + Database
docker-compose.yml
version: "3"
services:
frontend:
image: nginx
ports:
- "8080:80"
api:
build: ./api
depends_on:
- db
db:
image: postgres
environment:
POSTGRES_PASSWORD: example
Each service runs in its own container but communicates internally through Docker networking.
This mirrors real-world enterprise architectures.
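The api service above is built from ./api, so that directory needs its own Dockerfile. A minimal sketch, assuming a Node.js API (the file names and port are placeholders):

```dockerfile
# ./api/Dockerfile (illustrative sketch)
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 5000
CMD ["node", "index.js"]
```

Note that depends_on only controls start order, not readiness: the API should still retry its database connection while Postgres finishes initializing.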
1️⃣3️⃣ Advanced Docker Commands
| Command | Description |
|---|---|
| docker inspect <id> | View detailed metadata |
| docker stats | Monitor real-time resource usage |
| docker system prune | Clean unused images/containers |
| docker save / docker load | Export/import images |
| docker export / docker import | Export/import containers |
| docker update | Change resource limits |
| docker cp | Copy files between host and container |
| docker commit | Create an image from a container's state |
1️⃣4️⃣ Role of AI in Docker
Docker is crucial in AI/ML development and deployment:
- Model Packaging: Package models and dependencies in containers for reproducibility.
- MLOps Pipelines: Tools like Kubeflow and MLflow rely on Docker for experiment tracking and deployment.
- Edge AI: Deploy AI models in lightweight containers on IoT or edge devices.
- AI-Driven DevOps: AI tools now analyze container metrics to predict scaling needs and detect anomalies automatically.
Example:
A TensorFlow model can be containerized as:
FROM tensorflow/serving
COPY ./model /models/my_model
ENV MODEL_NAME=my_model
Then deployed via Docker or Kubernetes for real-time inference.
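After building that image (tagged my-tf-model here, a placeholder name), you can start it and query TensorFlow Serving's REST API. The input shape depends entirely on your model, so the payload below is purely illustrative:

```shell
docker build -t my-tf-model .
docker run -d -p 8501:8501 my-tf-model

# TensorFlow Serving exposes predictions at /v1/models/<MODEL_NAME>:predict
curl -X POST http://localhost:8501/v1/models/my_model:predict \
  -d '{"instances": [[1.0, 2.0, 3.0]]}'
```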
1️⃣5️⃣ Real-World Docker Examples
Complete LAMP Stack with Docker Compose
Project Structure:
lamp-stack/
├── docker-compose.yml
├── apache/
│   └── Dockerfile
├── php/
│   └── Dockerfile
└── www/
    └── index.php
docker-compose.yml:
version: '3.8'
services:
apache:
build: ./apache
ports:
- "80:80"
volumes:
- ./www:/var/www/html
depends_on:
- mysql
- php
networks:
- lamp-network
php:
build: ./php
volumes:
- ./www:/var/www/html
networks:
- lamp-network
mysql:
image: mysql:8.0
environment:
MYSQL_ROOT_PASSWORD: rootpassword
MYSQL_DATABASE: myapp
MYSQL_USER: appuser
MYSQL_PASSWORD: apppassword
ports:
- "3306:3306"
volumes:
- mysql_data:/var/lib/mysql
networks:
- lamp-network
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
PMA_HOST: mysql
PMA_USER: root
PMA_PASSWORD: rootpassword
ports:
- "8080:80"
depends_on:
- mysql
networks:
- lamp-network
volumes:
mysql_data:
networks:
lamp-network:
driver: bridge
Python Development Environment
Directory Structure:
flask-app/
├── Dockerfile
├── docker-compose.yml
├── requirements.txt
├── app.py
└── .env
Dockerfile:
FROM python:3.9-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
default-libmysqlclient-dev \
&& rm -rf /var/lib/apt/lists/*
# Copy and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Create non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser
EXPOSE 5000
CMD ["python", "app.py"]
docker-compose.yml:
version: '3.8'
services:
web:
build: .
ports:
- "5000:5000"
environment:
- FLASK_ENV=development
- DATABASE_URL=postgresql://user:password@db:5432/myapp
volumes:
- .:/app
depends_on:
- db
- redis
db:
image: postgres:13
environment:
POSTGRES_DB: myapp
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- postgres_data:/var/lib/postgresql/data
ports:
- "5432:5432"
redis:
image: redis:alpine
ports:
- "6379:6379"
volumes:
postgres_data:
Full-Stack Application (React + Node.js + MongoDB)
docker-compose.yml:
version: '3.8'
services:
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
ports:
- "3000:3000"
environment:
- REACT_APP_API_URL=http://localhost:5000/api
volumes:
- ./frontend:/app
- /app/node_modules
stdin_open: true
tty: true
backend:
build:
context: ./backend
dockerfile: Dockerfile
ports:
- "5000:5000"
environment:
- NODE_ENV=development
- MONGODB_URI=mongodb://mongodb:27017/myapp
- JWT_SECRET=your-secret-key
volumes:
- ./backend:/app
- /app/node_modules
depends_on:
- mongodb
mongodb:
image: mongo:4.4
ports:
- "27017:27017"
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=password
- MONGO_INITDB_DATABASE=myapp
volumes:
- mongodb_data:/data/db
volumes:
mongodb_data:
Development vs Production Configurations
docker-compose.dev.yml:
version: '3.8'
services:
app:
build:
target: development
volumes:
- .:/app
- /app/node_modules
environment:
- NODE_ENV=development
command: npm run dev
docker-compose.prod.yml:
version: '3.8'
services:
app:
build:
target: production
environment:
- NODE_ENV=production
restart: unless-stopped
deploy:
resources:
limits:
memory: 512M
reservations:
memory: 256M
Usage:
# Development
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Monitoring Stack (Prometheus + Grafana)
version: '3.8'
services:
prometheus:
image: prom/prometheus
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
grafana:
image: grafana/grafana
ports:
- "3000:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
volumes:
- grafana_data:/var/lib/grafana
depends_on:
- prometheus
node-exporter:
image: prom/node-exporter
ports:
- "9100:9100"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
volumes:
prometheus_data:
grafana_data:
1️⃣6️⃣ Docker Security Best Practices
Container Security Guidelines
1. Use Non-Root Users:
# Create and use non-root user
RUN adduser -D -s /bin/sh appuser
USER appuser
2. Scan Images for Vulnerabilities:
# Using Docker scan
docker scan myapp:latest
# Using Trivy
trivy image myapp:latest
# Using Snyk
snyk container test myapp:latest
3. Limit Resources:
# Limit memory and CPU
docker run -m 512m --cpus="1.5" myapp
# In Docker Compose
services:
app:
deploy:
resources:
limits:
memory: 512M
cpus: '1.5'
4. Use Read-Only File Systems:
docker run --read-only --tmpfs /tmp myapp
5. Secure Secrets Management:
# Using Docker secrets
services:
app:
image: myapp
secrets:
- db_password
secrets:
db_password:
external: true
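Inside the container, each secret is mounted as a file rather than exposed as an environment variable, so the application reads it from /run/secrets/. Note that external secrets like the one above require Swarm mode; with plain Docker Compose you can instead point a secret at a local file (file: ./db_password.txt).

```shell
# Inside the running container: secrets are plain files under /run/secrets/
cat /run/secrets/db_password
```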
1️⃣7️⃣ Troubleshooting Common Docker Issues
Common Problems and Solutions
Problem: Container Exits Immediately
# Check exit code and logs
docker ps -a
docker logs container_name
# Run interactively to debug
docker run -it myapp /bin/sh
Problem: Port Already in Use
# Find process using port
lsof -i :8080
# Kill process
kill -9 <PID>
# Or use different port
docker run -p 8081:80 nginx
Problem: Out of Disk Space
# Check disk usage
docker system df
# Clean up
docker system prune -a --volumes
# Remove specific items
docker container prune
docker image prune -a
docker volume prune
Problem: Slow Build Times
# Use .dockerignore
node_modules
.git
*.log
# Optimize layer order
COPY package*.json ./
RUN npm install
COPY . .
# Use multi-stage builds
FROM node:16 AS builder
# Build steps...
FROM nginx:alpine AS production
COPY --from=builder /app/dist /usr/share/nginx/html
1️⃣8️⃣ Useful Official Resources & Links
Official Documentation & Learning Resources
Docker Official Documentation:
- Docker Documentation - Complete official documentation
- Docker Get Started Guide - Step-by-step tutorial
- Docker Training - Interactive online training
- Play with Docker - Browser-based Docker playground
Docker Hub & Registry:
- Docker Hub - Official container registry
- Docker Hub Official Images - Curated official images
- Docker Hub Repositories - Community images
Docker Desktop & Tools:
- Docker Desktop - Desktop application for Windows/Mac
- Docker Compose - Multi-container applications
- Docker Extensions - Extend Docker Desktop functionality
Architecture & Advanced Topics
Container Technology:
- Docker Architecture - System architecture overview
- Docker Security - Security best practices
- Docker Networking - Container networking guide
- Docker Storage - Data persistence and volumes
Production & Enterprise:
- Docker Enterprise - Enterprise container platform
- Docker Swarm - Container orchestration
- Docker Logging - Log management
- Docker Monitoring - Metrics and monitoring
Development & CI/CD Resources
Development Workflow:
- Docker Build - Image building reference
- Dockerfile Best Practices - Writing efficient Dockerfiles
- Multi-stage Builds - Optimized builds
- Docker BuildKit - Advanced build features
CI/CD Integration:
- GitHub Actions - Docker in GitHub workflows
- Jenkins Docker - Jenkins with Docker
- AWS CodeBuild - Docker builds in AWS
- Azure DevOps - Docker in Azure Pipelines
Advanced Learning & Certification
Certification Programs:
- Docker Certified Associate (DCA) - Official Docker certification
- DCA Study Guide - Exam preparation guide
- Docker Training Courses - Official training programs
Community & Support:
- Docker Community Forums - Community discussions
- Docker Blog - Latest updates and tutorials
- Docker on Twitter - News and announcements
- Docker YouTube Channel - Video tutorials
- Docker GitHub - Open source repositories
Kubernetes Integration:
- Kubernetes Documentation - Container concepts in K8s
- Docker Desktop Kubernetes - Local Kubernetes cluster
- Helm Charts - Kubernetes package manager
Tools & Ecosystem
Container Security:
- Snyk Container Security - Vulnerability scanning
- Trivy - Vulnerability scanner
- Docker Bench Security - Security assessment
Monitoring & Observability:
- Prometheus Docker - Monitoring Docker Swarm
- Grafana Docker - Docker metrics visualization
- cAdvisor - Container resource monitoring
Registry Alternatives:
- Amazon ECR - AWS container registry
- Azure Container Registry - Azure registry
- Google Container Registry - GCP registry
- Harbor - Open source registry
1️⃣9️⃣ Conclusion
Docker is the backbone of modern DevOps and microservice architectures.
It provides consistency, scalability, and automation across all stages of development and deployment.
By mastering Docker, you can:
- Build portable applications
- Deploy at scale in cloud environments
- Transition easily to orchestration tools like Kubernetes
- Integrate AI workflows into your DevOps lifecycle
- Implement secure, scalable microservices architectures
Next Steps
- Practice: Set up the examples in this guide
- Learn Kubernetes: Scale your containers with orchestration
- CI/CD Integration: Automate builds with GitHub Actions/Jenkins
- Production Deployment: Deploy to AWS ECS, Google Cloud Run, or Azure Container Instances
- Monitoring: Implement logging and monitoring for containerized applications
Learn, experiment, and containerize your next big idea with Docker!