
M45A - Containers Intro: Docker Basics

Understand the fundamental difference between Virtual Machines and Containers, and use the Docker CLI to run, manage, and network isolated applications.



50 min · ADVANCED · Curriculum-reviewed
What you should be able to do after this
  • Understand the fundamental difference between Virtual Machines and Containers, and use the Docker CLI to run, manage, and network isolated applications.

The Dependency Nightmare

In the Software section, we learned that Package Managers (apt, winget) solve the problem of downloading dependencies.

But what if you are a developer, and App A requires Python 3.8, but App B requires Python 3.12? You cannot install both globally without breaking the OS.

Historically, the solution was Virtual Machines (VMs). You installed a completely separate, heavy, gigabyte-sized Operating System just to run App B in isolation. Running 10 VMs meant paying the overhead of 10 full operating systems, which demanded a massively powerful server.

The modern solution is the Container.


1. Virtual Machines vs Containers

A Container is an isolated “box” that holds an application and only its specific dependencies (like Python 3.12).

Unlike a VM, a Container does not ship its own Operating System kernel.

Instead, it shares the Host OS (your laptop’s Kernel), which does all the heavy lifting (CPU, Memory, Disk I/O). Linux namespaces isolate what the application can see (its own processes, network, and filesystem), while Control Groups (cgroups) limit what it can consume, so the application believes it is running on its own private machine.

  • A VM takes minutes to boot. A Container starts in milliseconds.
  • An idle VM can consume 2GB of RAM just sitting empty; an idle Container might consume 5MB.
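On Linux you can peek at these kernel features directly: every ordinary process already belongs to a cgroup and a set of namespaces; a container simply gets its own private ones. A quick look (Linux only):

```shell
# Which control group is this shell running in?
cat /proc/self/cgroup

# Which namespaces (pid, net, mnt, ...) is it inside?
ls /proc/self/ns
```

Inside a running container, these same files exist, but they point at the container's own isolated cgroup and namespaces rather than the host's.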

Docker completely revolutionized the software industry by standardizing how we build and run these containers.


2. Docker: Images vs Containers

Just like the “Program vs Process” model from M18, Docker has two states.

  1. The Image (The Blueprint): A dead, inert file resting on your hard drive. It contains the exact code, libraries, and settings needed to run the app. You define Images using a text file called a Dockerfile.
  2. The Container (The Running Instance): When you tell Docker to execute an Image, it spins up a Container in RAM.
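As a sketch, a minimal Dockerfile for a tiny Python app might look like this (the file name app.py and the base image tag are illustrative, not from this lesson):

```dockerfile
# Start from an official Python base image (illustrative tag)
FROM python:3.12-slim

# Copy the application code into the image
COPY app.py /app/app.py

# The command the Container runs when it starts
CMD ["python", "/app/app.py"]
```

Running docker build -t my-app . in the folder containing this Dockerfile produces the Image (the blueprint); docker run my-app spins it up as a Container (the running instance).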

Docker Desktop

Because Containers rely on the Linux Kernel (cgroups/namespaces), Linux containers cannot run natively on Windows. When you install Docker Desktop for Windows, it transparently spins up a highly optimized Linux Virtual Machine in the background (using WSL2) to host the containers.

The CLI Docker Workflow

1. Pull the official Nginx Web Server image from Docker Hub (The Internet Repository)

sudo docker pull nginx

2. Run the image, creating a living Container in the background (-d for detached)

sudo docker run -d --name my-web-site nginx

3. View all running containers (Notice it looks like the ‘ps’ command!)

sudo docker ps

4. Stop the container

sudo docker stop my-web-site


3. The 3 Pillars of Container Infrastructure

An isolated web server container isn’t very useful unless it can talk to the internet, save its files permanently, and talk to a database container.

A. Networking (docker network)

Containers have their own private IP addresses. If your Nginx container is listening on port 80 internally, your web browser cannot reach it. You must punch a hole through the container wall.

Port Forwarding

Forward Port 8080 on your laptop directly into Port 80 on the container

docker run -d -p 8080:80 nginx

B. Volumes (docker volume)

Because a Container is ephemeral by design, if you delete the container, all files inside it are destroyed instantly. To save a database permanently, you map a folder on your physical hard drive into the container using a Volume.

Persistent Storage

Map /opt/my_data on your hard drive to /var/lib/mysql inside the container (the official mysql image also requires a root password via an environment variable)

docker run -d -e MYSQL_ROOT_PASSWORD=secret -v /opt/my_data:/var/lib/mysql mysql

C. Orchestration (docker compose)

Typing massively long docker run commands with 10 different -v and -p flags is exhausting and error-prone.

Instead, we write a text file called docker-compose.yml. It defines the Web Server, the Database, the Volumes, and the Networks structurally.
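As a sketch, a docker-compose.yml tying together the Nginx and MySQL examples above might look like this (the service names, the password, and the volume name are illustrative):

```yaml
services:
  web:
    image: nginx
    ports:
      - "8080:80"                # same port forwarding as the -p flag

  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - my_data:/var/lib/mysql   # same persistence as the -v flag

volumes:
  my_data:
```

One file now describes the whole system: both containers, the port mapping, and the named volume, instead of two long docker run commands.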

Docker Compose

Read the yaml file, pull the images, create the network, map the volumes, and start the system in the background

docker compose up -d

Shut down the entire infrastructure instantly

docker compose down

🔒 Security Note: Rootless Podman

The standard Docker daemon runs as the root user on Linux. If a hacker breaks out of a container, they can immediately gain root on the entire physical server. Podman is a modern drop-in replacement for Docker that allows standard, non-administrator users to run containers (Rootless Containers), dramatically improving enterprise security.


What You Just Learned

  • Virtual Machines emulate hardware and run full Operating Systems (Heavy). Containers share the Host OS Kernel and only contain the app (Lightweight).
  • An Image is the blueprint on disk. A Container is the running process in RAM.
  • Use docker run -p to forward ports and connect the container to the world.
  • Use docker run -v to mount permanent hard drive folders inside the ephemeral container.
  • Use docker compose up -d to orchestrate massive multi-container applications (like a web server + database) using a single YAML file.
  • Podman is the secure, rootless alternative to Docker on Linux.

With scripting, pipelines, and containers mastered, you hold incredible power over the Operating System. In our final section, we learn how to administer the OS itself: System Administration.