
Docker Compose for Beginners: Organizing Your Home Server the Right Way

How to write Docker Compose files, organize your homelab services, manage environment variables, and keep everything maintainable long-term.

Budget Homelab · docker · containers · self-hosting

docker run commands work fine for testing. They’re a mess for anything permanent. A long docker run command with 15 flags is unreadable, hard to reproduce, and impossible to hand off to someone else.

Docker Compose solves this by putting your container configuration in a YAML file. One file per service (or group of related services), version controlled, readable, and reproducible. Running a service is docker compose up -d. Taking it down is docker compose down.

This guide covers the Compose fundamentals you need for a well-organized homelab.

The basic structure

Every Compose file lives in its own directory and follows this structure:

services:
  service-name:
    image: image-name:tag
    container_name: human-readable-name
    restart: unless-stopped
    ports:
      - "host-port:container-port"
    volumes:
      - ./local-path:/container-path
    environment:
      VARIABLE_NAME: value

Save this as docker-compose.yml and run it with docker compose up -d.

Folder structure

The most important organizational decision: one directory per service (or per stack of related services).

~/docker/
├── nginx-proxy-manager/
│   └── docker-compose.yml
├── paperless-ngx/
│   ├── docker-compose.yml
│   ├── consume/
│   └── media/
├── syncthing/
│   ├── docker-compose.yml
│   └── sync/
├── technitium/
│   ├── docker-compose.yml
│   └── config/
└── mealie/
    ├── docker-compose.yml
    └── data/

Each service gets its own directory. Data volumes (except named Docker volumes) use relative paths within that directory. This means backing up a service is just copying its directory, moving it to another machine is copying the directory and running docker compose up -d there, and you can update or remove one service without touching the rest.

Avoid the temptation to put everything in one giant Compose file. It seems convenient until you want to update one service without touching the others.
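As a sketch, here's how you might scaffold this layout from a shell. The service names and base path are examples, not requirements:

```shell
# Scaffold the one-directory-per-stack layout.
# BASE and the service names below are examples — adjust to taste.
BASE="$HOME/docker"

for svc in nginx-proxy-manager syncthing mealie; do
  mkdir -p "$BASE/$svc"
  touch "$BASE/$svc/docker-compose.yml"   # fill in each file per service
done
```

From then on, every per-service operation is a cd into that service's directory followed by a docker compose command.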

Restart policies

Almost every homelab container should have restart: unless-stopped. This means Docker restarts the container automatically after a crash or a host reboot, but leaves it down if you stopped it yourself.

The alternatives: no (the default) never restarts automatically; always restarts even after a manual stop once the Docker daemon comes back up; on-failure restarts only when the container exits with a non-zero code.

For homelab services: unless-stopped.
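Spelled out as a compose fragment, with each policy's behavior summarized in comments (the service name is a placeholder):

```yaml
services:
  app:
    restart: unless-stopped   # restart on crash and on boot; stay down after a manual stop
    # Other accepted values:
    #   restart: "no"         # default — never restart automatically
    #   restart: always       # restart even after a manual stop, once the daemon restarts
    #   restart: on-failure   # restart only on a non-zero exit code
```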

Environment variables

Configuration that varies between environments (passwords, API keys, hostnames) belongs in environment variables, not hardcoded in your compose files.

Option 1: Inline in compose file

environment:
  POSTGRES_PASSWORD: mypassword
  TZ: America/New_York

This works but means your compose file contains secrets. Don’t commit this to a public Git repository.

Option 2: .env file (preferred)

Create a .env file in the same directory as your compose file:

# ~/docker/paperless-ngx/.env
POSTGRES_PASSWORD=mypassword
PAPERLESS_SECRET_KEY=long-random-string-here
ADMIN_PASSWORD=anotherpassword

Reference the variables in your compose file:

environment:
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  PAPERLESS_SECRET_KEY: ${PAPERLESS_SECRET_KEY}
  PAPERLESS_ADMIN_PASSWORD: ${ADMIN_PASSWORD}

Docker Compose automatically loads .env from the same directory. Add .env to your .gitignore if you’re using git.
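Rather than inventing passwords by hand, you can generate them. A sketch: the path and variable names follow the Paperless example above, and openssl is assumed to be installed (hex output avoids special characters that can break database connection strings):

```shell
# Write a .env file with randomly generated secrets.
# Path and variable names follow the Paperless example above.
mkdir -p ~/docker/paperless-ngx && cd ~/docker/paperless-ngx

cat > .env <<EOF
POSTGRES_PASSWORD=$(openssl rand -hex 24)
PAPERLESS_SECRET_KEY=$(openssl rand -hex 48)
ADMIN_PASSWORD=$(openssl rand -hex 24)
EOF

chmod 600 .env   # secrets: owner read/write only
```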

Option 3: env_file directive

env_file:
  - .env

This passes all variables from .env directly to the container without declaring them individually. Simpler, but less explicit.

Volumes explained

Two types of volumes in Compose: bind mounts and named volumes.

Bind mounts map a host directory to a container path:

volumes:
  - ./data:/app/data

The ./data is relative to your compose file directory. The container writes to /app/data and you can access it at ~/docker/myservice/data. Easy to back up, easy to inspect.

Named volumes are managed by Docker. The volume is referenced in the service and also declared in a top-level volumes: block:

services:
  myservice:
    volumes:
      - myservice_data:/app/data

volumes:
  myservice_data:

Docker manages where the data lives (usually /var/lib/docker/volumes/). You can’t easily browse it. Useful for databases and other services where you don’t need to access the data directly.

For homelab use, prefer bind mounts for service data you care about. Named volumes are fine for database storage if you’re backing it up via database dumps.
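One payoff of bind mounts is that backups are ordinary file operations. A sketch, with paths following the ~/docker layout used in this guide (in practice, stop the stack first so files aren't changing mid-archive):

```shell
# Back up one service's directory (compose file + bind-mounted data) with tar.
SERVICE=mealie                   # example service name
SRC="$HOME/docker/$SERVICE"
DEST="$HOME/backups"

mkdir -p "$SRC/data" "$DEST"     # ensure paths exist for this sketch
tar -czf "$DEST/$SERVICE-$(date +%F).tar.gz" -C "$HOME/docker" "$SERVICE"
```

Restoring on another machine is just extracting the archive into ~/docker/ and running docker compose up -d.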

Networks

By default, all services in the same compose file are on a shared network and can reach each other by service name. Services in different compose files can’t reach each other by default.

If NPM needs to reach your Paperless container (which is in a different compose stack), you need a shared network:

In the NPM compose file:

networks:
  proxy:
    external: true

In the Paperless compose file:

services:
  webserver:
    networks:
      - default
      - proxy

networks:
  proxy:
    external: true

Create the shared network once:

docker network create proxy

Now NPM can reach the Paperless container using its container_name as the hostname.

The alternative (what I do for simplicity): forward by IP address instead of container name. NPM’s proxy hosts use 192.168.x.x:port — less elegant but zero network configuration required.

Useful commands

# Start a stack
docker compose up -d

# Stop a stack (keeps data)
docker compose down

# Stop and remove volumes (destroys data — be careful)
docker compose down -v

# View logs for a stack
docker compose logs -f

# View logs for one service
docker compose logs -f service-name

# Pull latest images
docker compose pull

# Restart after pulling
docker compose up -d

# Open a shell in a running container
docker compose exec service-name bash

# Check what's running
docker compose ps

A real example: Mealie

Here’s a full, working Mealie compose file as an example of these patterns in use:

services:
  mealie:
    image: ghcr.io/mealie-recipes/mealie:latest
    container_name: mealie
    restart: unless-stopped
    ports:
      - "9925:9000"
    volumes:
      - ./data:/app/data
    environment:
      TZ: America/New_York
      BASE_URL: https://mealie.yourdomain.com
      ALLOW_SIGNUP: "false"
      DB_ENGINE: sqlite

Save as ~/docker/mealie/docker-compose.yml, run docker compose up -d, and Mealie is running at http://your-server-ip:9925.

This is the pattern used throughout every guide on this site. One directory, one compose file, bind mounts for data, environment variables for configuration.

For getting Docker installed first, see the Getting Started guide. For adding HTTPS to any service, see the Nginx Proxy Manager guide.