
Docker's logo is a friendly whale. It's approachable. It's playful. But beneath the surface of most Dockerfiles, something far less friendly is lurking: security risks that compound silently over time, growing into leviathan-sized problems that teams don't notice until it's too late.

Most Dockerfiles are written with one goal: make the build pass. They get committed, promoted through environments, and eventually run in production. Nobody revisits them. Nobody audits them. Meanwhile, the attack surface grows with every layer, every unvetted dependency, every shortcut that "works fine" in development.

Containers are often perceived as isolated and therefore safe. This false sense of security is exactly what makes Docker security issues so dangerous. They hide in plain sight, buried in layers that no one inspects, running with privileges no one questioned.

In this article, we'll surface the most common and most dangerous security risks hiding in your Dockerfiles. These aren't edge cases. They aren't theoretical. They are the default behavior when you don't explicitly design against them. And if you've never taken a hard look at how your images are built, there's a good chance every single one of these is lurking in your infrastructure right now.

Running Containers as Root: The Default You Should Never Accept

By default, Docker containers run as root. Most teams never change this because nothing visibly breaks. The application starts. The health checks pass. Everything looks fine. But underneath, you've handed the keys to the kingdom to every process running inside that container.

If an attacker gains access to a container running as root, the blast radius is significantly larger. They can modify any file in the container's filesystem, install tools, and probe the network. In environments with misconfigured volume mounts or known container escape vulnerabilities, root inside the container can become root on the host. That's not a theoretical risk. It's a documented attack path.

What makes this worse is how common it is. Most official images and tutorial Dockerfiles don't include a USER instruction. If you've never explicitly added one, your containers are running as root right now.
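You can check this for yourself with docker inspect; an empty result means no USER was ever set and the container runs as root. (The image and container names below are placeholders.)

```shell
# Print the user an image is configured to run as.
# An empty result means no USER instruction was set, i.e. the container runs as root.
docker inspect --format '{{.Config.User}}' your-image:latest

# Or check a running container directly:
docker exec your-container whoami
```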

The fix is straightforward. Create a dedicated user and group in your Dockerfile and switch to it before the CMD or ENTRYPOINT instruction.

# Before: no USER instruction, runs as root by default
FROM node:18-slim
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "server.js"]

# After: runs as a non-root user
FROM node:18-slim
WORKDIR /app
COPY . .
RUN npm install
RUN groupadd -r appgroup && useradd -r -g appgroup appuser
RUN chown -R appuser:appgroup /app
USER appuser
CMD ["node", "server.js"]

This is one of the simplest fixes you can make, yet it remains one of the most commonly skipped. It's not a nice-to-have. It's a baseline expectation in any production environment.

Secrets Baked Into Image Layers: The Risk You Can't Delete

Developers frequently copy or embed secrets into images during the build process. API keys get passed through ENV instructions. SSH keys get copied in to pull private repositories. Config files with database credentials get added with COPY. And then, in an attempt to clean up, developers delete them in a subsequent RUN instruction and assume the problem is solved.

It's not.

Docker images are composed of immutable layers. Every instruction in your Dockerfile creates a new layer, and previous layers are preserved in the final image. That means even if you delete a secret in a later step, the layer where it was added still contains it. Anyone with access to your image can extract it.

This isn't difficult to do. Tools like docker history and dive make it trivial to inspect individual layers and see exactly what was added. If your image is pushed to a shared registry, every person with pull access can see those secrets.
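For example, given an image whose build once copied in and later deleted a credentials file (the image name here is illustrative), the earlier layer still carries it:

```shell
# List every layer and the instruction that created it.
# A secret deleted in a later RUN step still appears in the layer where it was added.
docker history --no-trunc leaky-image:latest

# Export the image and inspect the individual layer archives directly:
docker save leaky-image:latest -o image.tar
tar -tf image.tar
```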

The modern solution is BuildKit secret mounts. By using --mount=type=secret in your RUN instructions, secrets are made available at build time but are never written to any image layer. They exist only in memory during that specific build step and leave no trace in the final artifact.

# syntax=docker/dockerfile:1
FROM alpine
# The secret is mounted only for this RUN step and is never written to a layer
RUN --mount=type=secret,id=my_secret \
    cat /run/secrets/my_secret
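At build time, the secret is supplied from outside the Dockerfile, for example from a local file or an environment variable (the file and id names here are placeholders; sourcing from an environment variable requires a recent BuildKit):

```shell
# Supply the secret from a local file; it is mounted at
# /run/secrets/my_secret during the RUN step and never stored in a layer.
docker build --secret id=my_secret,src=./my_secret.txt .

# Or source it from an environment variable:
MY_SECRET=value docker build --secret id=my_secret,env=MY_SECRET .
```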

The important thing to understand is that this problem is invisible until someone looks. And by then, your credentials may have been exposed for months. This is not a risk you can afford to leave unaddressed.

Unvetted and Unpinned Base Images: Building on Unstable Ground

Every Dockerfile starts with a FROM instruction, and that single line determines the foundation your entire image is built on. When that line reads FROM node:latest or FROM python:3, you're pulling whatever image happens to be associated with that tag at the moment your build runs.

Tags are mutable. They can be updated, overwritten, or point to entirely different image contents from one day to the next. That means your build is not reproducible, and you have no guarantee that the image you tested last week is the same one you're deploying today.

Even pinning to a specific version tag like node:18.17.0 doesn't fully solve the problem. Tags can still be overwritten. The only way to guarantee immutability is to pin by digest.

FROM node:18.17.0@sha256:abc123...
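You can look up a tag's current digest before pinning it (the image name is an example; the digest you see will differ):

```shell
# Resolve the digest for a tag without pulling the full image:
docker buildx imagetools inspect node:18.17.0

# Or, after pulling, read the digest Docker recorded locally:
docker pull node:18.17.0
docker inspect --format '{{index .RepoDigests 0}}' node:18.17.0
```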

Beyond reproducibility, there's a supply chain risk. When you pull images from public registries, you're trusting every maintainer and every dependency in that image's lineage. One compromised package or one malicious layer is all it takes to introduce a backdoor into your infrastructure.

Pin by digest for critical production images. Vet your base images. Understand what's inside them before you build on top of them.

No Image Scanning in CI: Flying Blind Into Production

Most teams build Docker images and push them to a registry without ever scanning them for known vulnerabilities. The assumption is simple: the build passed, the tests passed, so the image must be fine.

That assumption is wrong.

Base images and installed packages carry CVEs. New vulnerabilities are disclosed constantly. An image that was clean when it was first built might accumulate critical vulnerabilities within weeks as new disclosures emerge. Without automated scanning, you have zero visibility into what you're shipping.

The gap here is not a lack of available tooling. Tools like Trivy, Grype, and Snyk can scan container images in seconds and surface critical and high-severity CVEs before an image ever leaves your pipeline. Integrating one of these into your CI process is not a heavy lift.

trivy image your-image:latest --severity HIGH,CRITICAL
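To make the scan a real gate rather than an FYI, have it fail the pipeline. Trivy supports this directly with its --exit-code flag (a sketch of the command, not a full pipeline definition):

```shell
# Fail the build (exit code 1) if any HIGH or CRITICAL CVE is found.
# --ignore-unfixed skips vulnerabilities that have no patched version yet.
trivy image --exit-code 1 --severity HIGH,CRITICAL --ignore-unfixed your-image:latest
```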

Yet most teams skip this step entirely. They scan application dependencies but ignore the container those dependencies run inside. That's a massive blind spot, and it's one that grows more dangerous over time as images age and new CVEs are published.

Image scanning in CI is not an advanced practice. It's the bare minimum. If your pipeline doesn't include it, you're flying blind.

Bloated Images Mean a Bigger Attack Surface

Here's a connection most teams miss: image size isn't just a performance or cost concern. It's a security concern. Every unnecessary package, binary, shell, and utility in your final image is a potential tool for an attacker.

Think about what ends up in a typical image that was built without any optimization. Build tools like gcc and make. Package manager caches. Entire SDKs that are only needed at compile time. Debug utilities that have no business being in a production environment. All of it ships. All of it is available to anyone who gains access to the running container.

The principle is simple: the less that's in your final image, the fewer things an attacker can exploit. This is the philosophy behind distroless images, which strip away everything except the application and its runtime dependencies. No shell. No package manager. Nothing to leverage.

Multi-stage builds are one of the most effective tools here. By separating your build environment from your runtime environment, you can keep all the heavy tooling in an earlier stage and copy only the compiled artifact into a minimal final image. The result is often a 50 to 80 percent reduction in image size and a dramatically smaller attack surface.

Size is not just about speed. It's about how much opportunity you're giving an attacker.

The Compounding Effect

These risks don't exist in isolation. A container running as root, with secrets baked into its layers, built on an unscanned and unpinned base image, stuffed with tools that have no business being in production. That's not five small problems. That's one massive, compounding vulnerability.

Each risk is a tentacle. Alone, perhaps manageable. Together, they form something monstrous. And like the leviathan of myth, they thrive in the dark, in the Dockerfiles nobody reviews and the images nobody scans.

Surfacing What's Below

The five risks we covered here are not exotic. They are the defaults. Running as root is the default. Secrets persisting in layers is the default. Mutable tags are the default. No scanning is the default. Bloated images are the default. If you haven't explicitly designed against each of these, they are almost certainly present in your infrastructure.

Writing secure Dockerfiles isn't about paranoia. It's about intentionality. The same way you wouldn't ship application code without tests or review, you shouldn't ship container images without deliberate security decisions baked into every layer.

You now know what's lurking beneath the surface. The question is: do you know how to fix all of it?

Go From Exposed to Production Ready

This article surfaced the risks. My eBook gives you the systematic, chapter-by-chapter process to eliminate them.

Creating Production Ready Dockerfiles covers security alongside size, speed, and maintainability: the four pillars of a production-grade image. From non-root users to BuildKit secret mounts, from base image selection and digest pinning to CI scanning pipelines, every recommendation is practical, applied, and part of a running example you follow from start to finish.

You'll also get a production-ready Dockerfile review checklist you can use for self-review, pull requests, or as the foundation for your team's container standards.

Stop guessing. Start building images that are as intentional as the code inside them.
