Diagnosing Slow Docker Builds and Cutting Them Down Significantly

May 14, 2026

Your Docker build was fast last week. Now it takes four minutes and nobody touched the Dockerfile. Or maybe it has always been slow and you have just been quietly accepting it. Either way, you are losing real time every time you iterate, and the fix is almost always in the same handful of places.

This guide gives you a repeatable process for finding where build time is going and a set of concrete changes that actually move the needle.

What you'll learn

  • How to read build output and identify slow layers
  • Why cache invalidation is the most common culprit and how to stop triggering it accidentally
  • How to order Dockerfile instructions for maximum cache reuse
  • When and how to use multi-stage builds to shrink both build time and image size
  • Quick wins from .dockerignore and BuildKit

Prerequisites

You need Docker installed; version 23.0 or later uses the BuildKit backend by default, and older versions can enable it manually (shown later in this guide). The examples use a Python web app, but the principles apply to any language. Basic familiarity with writing a Dockerfile is assumed.

Start With a Baseline Measurement

Before changing anything, measure. Run your build with the --progress=plain flag so Docker prints every step with its duration instead of collapsing them into a progress bar.

docker build --no-cache --progress=plain -t myapp:bench . 2>&1 | tee build.log

The --no-cache flag gives you a cold-cache baseline, the worst-case scenario your CI environment often faces. Once you have the log, search for the step-completion lines, which look like #12 DONE 30.4s: a step ID followed by its duration in seconds. Sort them mentally or pipe through grep to find the expensive ones.

grep -E "^#[0-9]+ DONE [0-9]+\.[0-9]+s" build.log | sort -t' ' -k3 -rn | head -10

Now you have a ranked list of slow layers. This is where the actual work starts.
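If the grep-and-sort one-liner feels fiddly, a short awk pass over the same log does the ranking in one step. This is a sketch that assumes BuildKit's plain-progress completion format (`#N DONE Xs`):

```shell
# Rank build steps by duration from a BuildKit --progress=plain log.
# Assumes completion lines of the form "#12 DONE 30.4s".
awk '/^#[0-9]+ DONE [0-9.]+s$/ {
    gsub(/s$/, "", $3)   # strip the trailing "s" from the duration
    print $3, $1         # seconds first, so numeric sort works on it
}' build.log | sort -rn | head -10
```

Each output line is a duration in seconds followed by the step ID, slowest first, so you can jump straight from the ID back to the instruction in the log.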

Understand the Layer Cache Before Touching Anything

Docker builds images as a stack of layers. When a layer changes, every layer below it in the file is rebuilt from scratch. This is the single most important fact about Docker build performance.

A common mistake looks like this:

COPY . /app
RUN pip install -r requirements.txt

Every time any source file changes, even a README edit or a comment tweak, Docker invalidates the COPY layer and then re-runs pip install, which might take two minutes. Flip the order:

COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
COPY . /app

Now pip install only re-runs when requirements.txt actually changes. Your source code changes hit a cached dependency layer and the build completes in seconds.
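The same ordering trick works for any package manager. A Node equivalent might look like this (a sketch; npm ci assumes a committed package-lock.json, and the /app layout is an assumption to adapt):

```dockerfile
WORKDIR /app
# Copy only the dependency manifests first so npm ci caches independently
COPY package.json package-lock.json ./
RUN npm ci
# Source changes after this point do not invalidate the npm ci layer
COPY . .
```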

Fix Your .dockerignore File

The COPY . /app instruction sends your entire build context to the Docker daemon before a single layer runs. If your project root contains node_modules, a .git directory, test fixtures, or local data files, you are transferring hundreds of megabytes on every build and invalidating cache entries unnecessarily.

Create a .dockerignore file in the same directory as your Dockerfile and be explicit about what to exclude:

.git
.gitignore
__pycache__
*.pyc
*.pyo
.pytest_cache
.env
*.md
docs/
tests/
node_modules/
dist/
.DS_Store

After adding this, watch the context transfer line in your build output shrink. On large projects this alone can cut several seconds off every build, and it stops your COPY . layer from being invalidated by irrelevant file changes.
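Before writing the ignore file, it helps to see what is actually heavy. A quick disk-usage pass over the project root points at the directories worth excluding (GNU coreutils is assumed for sort -h):

```shell
# List the ten largest entries in the build context, human-readable.
# Run from the directory that contains the Dockerfile.
du -sh ./* ./.[!.]* 2>/dev/null | sort -rh | head -10
```

Anything large in that list that the running application does not need belongs in .dockerignore.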

Use Multi-Stage Builds to Separate Concerns

Multi-stage builds let you use one image to compile or install dependencies and a second, leaner image to run the application. The build cache for the heavy first stage is still preserved between runs, but your final image does not carry the compiler, test tools, or build-time packages.

Here is a realistic example for a Python application:

# Stage 1: dependency builder
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: runtime image
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
CMD ["python", "main.py"]

The builder stage caches the installed packages. On subsequent builds, if requirements.txt has not changed, Docker reuses that cached layer and jumps straight to copying your source. The final image contains only what the running application needs.

For compiled languages like Go or Rust, multi-stage builds also drop compilers and build tools from the final image, often shrinking image size by an order of magnitude.
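For Go, the shape is the same. Here is a sketch; the module layout, binary name, and distroless base image are assumptions to adapt to your project:

```dockerfile
# Stage 1: build the binary with the full Go toolchain
FROM golang:1.22 AS build
WORKDIR /src
# Dependency manifests first, so "go mod download" caches independently
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: minimal runtime with no compiler, package manager, or shell
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

The final image here is typically a few tens of megabytes instead of the gigabyte-plus golang toolchain image.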

Enable and Use BuildKit Properly

BuildKit is Docker's modern build backend. It runs independent build steps in parallel, skips unused stages, and gives you better cache semantics. On Docker 23 and later it is the default; on older versions you can enable it with an environment variable:

DOCKER_BUILDKIT=1 docker build -t myapp .

BuildKit also unlocks the --mount=type=cache instruction, which is worth knowing about for package managers that write large caches to disk during installation.

RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

This keeps the pip download cache between builds without baking it into the image layer. When you add a new dependency, pip fetches only the new package instead of downloading everything from scratch. The same pattern works for npm (target=/root/.npm), apt (target=/var/cache/apt), and most other package managers.
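One caveat for the apt case: official Debian-based images ship an apt hook (/etc/apt/apt.conf.d/docker-clean) that deletes downloaded packages immediately, so a plain apt cache mount stays empty. A common workaround, sketched below with gcc as a stand-in package, is to disable the hook inside the cached RUN:

```dockerfile
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    rm -f /etc/apt/apt.conf.d/docker-clean && \
    apt-get update && apt-get install -y --no-install-recommends gcc
```

The sharing=locked option serializes concurrent builds that touch the same cache, which matters because apt is not safe to run in parallel against one package directory.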

Audit Expensive RUN Instructions

Package installation is not always the slow layer. Sometimes the culprit is an apt-get update && apt-get install block that pulls in a dependency chain, a build step that compiles a native extension, or a curl that fetches a large binary.

For apt, combine update and install in one instruction and clean up afterward to avoid bloating the layer:

RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq-dev \
    gcc \
    && rm -rf /var/lib/apt/lists/*

The --no-install-recommends flag tells apt to skip suggested packages, which can cut both download time and image size noticeably. Splitting apt-get update and apt-get install into separate RUN instructions is a common mistake: if the install list changes, you want apt to re-fetch the package index as part of the same cached unit.

Use a Smaller or More Specific Base Image

Starting from ubuntu:latest or python:3.12 (the full Debian-based variant) means your build pulls a large base image on first run and has more packages to update. Switching to a slim or alpine variant reduces both pull time and the attack surface.

Compare the approximate compressed sizes:

Base Image           Approx. Compressed Size
python:3.12          ~350 MB
python:3.12-slim     ~45 MB
python:3.12-alpine   ~18 MB

Alpine uses musl libc instead of glibc, which occasionally causes compatibility problems with packages that have native extensions. The slim variant is usually a safe middle ground: dramatically smaller than the full image with no compatibility surprises.

Pin your base image to a specific digest or minor version tag in production so a base image update does not silently invalidate your entire cache between CI runs.
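In a Dockerfile the pin is one of two forms, sketched below. The digest shown is a placeholder; resolve the real one with docker buildx imagetools inspect python:3.12-slim:

```dockerfile
# Option 1 - minor-version tag: tracks patch releases of 3.12-slim
FROM python:3.12-slim

# Option 2 - exact digest: fully reproducible (placeholder value shown)
FROM python:3.12-slim@sha256:<digest-from-imagetools-inspect>
```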

Common Pitfalls That Undo Your Work

Copying secrets into layers. Do not COPY API keys, .env files, or credentials into any layer, even an intermediate one. Use build secrets with --secret or pass them in at runtime via environment variables.

Running apt-get update in isolation. If you put RUN apt-get update in one layer and RUN apt-get install ... in the next, Docker may cache the update step and serve stale package lists when you change the install list. Always combine them.

Expecting cache to survive across machines. The layer cache is local to the daemon. In CI, every job gets a fresh runner unless you explicitly push and pull a cache registry or use a service like Docker layer caching. Check your CI provider's documentation for how to persist the BuildKit cache between pipeline runs.
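With Buildx, one common way to persist the cache is exporting it to a registry between runs. This is a sketch; the registry host and cache ref are assumptions to replace with your own:

```shell
docker buildx build \
  --cache-to type=registry,ref=registry.example.com/myapp:buildcache,mode=max \
  --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
  -t myapp:latest .
```

mode=max exports cache for intermediate stages too, not just the layers in the final image, which matters for multi-stage builds.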

Using ADD when COPY is enough. ADD has extra behavior (fetching remote URLs, auto-extracting tarballs) that can produce unexpected results. Use COPY unless you specifically need those features.

Ignoring layer count at the wrong level. Merging every RUN into one giant instruction used to be common advice to reduce layer count. With modern Docker this matters far less, and over-merging hurts cache granularity. Prioritize cache correctness and readability; merge only when it genuinely removes redundant work, like multiple apt index fetches, or keeps temporary files out of a shipped layer.

Wrapping Up

Slow Docker builds almost always trace back to a handful of root causes: poor layer ordering that busts the cache on every change, a bloated build context from a missing .dockerignore, and package installations that re-download everything from scratch.

Here are the concrete actions to take right now:

  1. Run a cold-cache build with --progress=plain and identify your three slowest layers before touching anything else.
  2. Add or clean up your .dockerignore file and verify the context size drops in the build output.
  3. Move dependency installation (pip install, npm ci, apt-get install) above your COPY . . instruction so it caches independently of source changes.
  4. Switch to BuildKit and add --mount=type=cache to your package manager RUN step.
  5. Evaluate whether a multi-stage build lets you separate the heavy install stage from the lightweight runtime image.

Make one change at a time and re-measure after each. That way you know exactly what moved the needle and you have a story to tell your team about why the build is faster.
