Understand why the RUN instruction runs commands at build time in Dockerfiles and how it differs from CMD.

Learn how the RUN instruction runs commands during the image build, creating layers that shape the final container. It differs from CMD, which runs at startup. See practical examples, such as updating packages or installing software, that help streamline Docker workflows and keep images lean for faster builds.

Title: The heart of a Dockerfile: what runs your commands and why it matters

If you’ve ever built a Docker image, you’ve likely met a handful of tiny but mighty phrases that shape the whole container you’re creating. Among them, one question keeps showing up: which Dockerfile instruction actually runs commands during the build? The quick answer is RUN. But there’s more to the story—because understanding RUN really unlocks how Docker images are assembled, how they behave at runtime, and how to keep things clean, fast, and repeatable.

Let me explain it in plain terms, with a few practical notes you can put to use right away.

RUN: the build-time workhorse

  • What RUN does: RUN executes shell commands while the image is being built. Think of it as the moment you install software, configure the system, or fetch dependencies inside the image itself. When you write a line like RUN apt-get update && apt-get install -y curl, you’re telling Docker to perform those actions and bake the results into a new layer of the image.

  • Why it matters: because everything you install or set up with RUN becomes part of the image. Each RUN creates a new layer, and that layering is at the core of how Docker images are cached and reused. If you change something later, Docker can reuse the untouched layers to speed up rebuilds. That caching is a big win in real-world development when you’re iterating on an image.

  • A practical pattern: keep RUN lines focused and minimize the number of layers. A common approach is to chain related commands in a single RUN with a shell-friendly flow, then clean up after installation to avoid leaving behind unnecessary files. For example, after installing packages, you can remove temporary files and clear caches within the same RUN statement. This keeps the final image lean and efficient.
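
A minimal sketch of that pattern, assuming a Debian- or Ubuntu-based image (so apt-get is available) and curl as the package being installed:

    FROM ubuntu:22.04

    # Update the index, install what's needed, and clean up in a single RUN,
    # so the temporary package lists never get baked into a layer
    RUN apt-get update && \
        apt-get install -y --no-install-recommends curl && \
        rm -rf /var/lib/apt/lists/*

Because the cleanup happens inside the same RUN statement, the downloaded index files are gone before the layer is committed; running rm -rf in a separate RUN afterward would not shrink the earlier layer at all.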

CMD: the default command, at run time

  • What CMD does: CMD specifies the default command that runs when a container starts from the image. It’s not executed during the build; it’s a runtime instruction. If you don’t override it, the container will launch with the command you’ve declared in CMD.

  • Why it matters: CMD defines the container’s behavior after it’s launched. It’s the difference between a reusable base image and a ready-made service that starts up with a single command. You often see CMD used to run a server, a script, or a main application entry point.

  • A quick comparison: RUN is about building the image; CMD is about what happens when you run the container. They can be used together, but they play different roles in the lifecycle of your Docker workflow.
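
A small sketch of that split (the image name and commands are placeholders): CMD bakes a default into the image, and docker run can still replace it without rebuilding anything.

    # In the Dockerfile: the default process, written in exec form
    CMD ["curl", "--version"]

    # At run time: use the default...
    docker run my-image

    # ...or override it for a one-off shell session
    docker run -it my-image /bin/sh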

A tiny example to anchor the idea

Imagine you’re building an image that needs curl and a simple startup script. A complete Dockerfile for that could look like this (the base image is just an example; any Debian- or Ubuntu-based image will do, since the commands use apt-get):

    FROM ubuntu:22.04

    RUN apt-get update && apt-get install -y curl

    COPY startup.sh /usr/local/bin/startup.sh
    RUN chmod +x /usr/local/bin/startup.sh

    CMD ["/usr/local/bin/startup.sh"]

In short: the first RUN line updates the package index and installs curl during the build. The COPY and second RUN set up the startup script, and CMD tells Docker what to run when the container comes to life. The distinction between what happens at build time (RUN) and what happens at run time (CMD) is not just academic. It affects how your images operate in development, CI systems, and production.

From build to layers: what really happens under the hood

  • Image layers are more than cosmetic. Each RUN you write creates a new layer that records the filesystem state after those commands execute. This layering is what enables Docker to reuse parts of the image across builds: change an instruction and only that layer and the ones after it are rebuilt, while the earlier layers come straight from the cache. It’s a kind of smart, granular caching that can save you hours when you’re tweaking a Dockerfile.

  • Caching can surprise you. If you tweak something early in the file, Docker invalidates the cache for that instruction and everything after it, so even a long RUN command further down gets rebuilt from scratch. That’s why many seasoned folks group related tasks into fewer, more meaningful RUN steps and order the Dockerfile so the instructions that change most often sit near the bottom (there’s a sketch of that ordering after this list).

  • Minimality pays off. The more you trim what ends up in the final image, the smaller it tends to be and the quicker it transfers. This is why you’ll often see multi-stage builds or clean-up steps inside RUN blocks—to haul in required tooling for the build, then throw it away when you’re done.
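
Here’s one sketch of that caching-friendly ordering, using a hypothetical Node.js app (the file names are only illustrative): the dependency manifests are copied and installed before the rest of the source, so everyday code edits reuse the cached install layer instead of repeating it.

    FROM node:20-slim
    WORKDIR /app

    # These files change rarely, so this layer and the install below are usually cache hits
    COPY package.json package-lock.json ./
    RUN npm ci

    # The application source changes often; only the layers from here down get rebuilt
    COPY . .

    CMD ["node", "server.js"]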

Why this matters for Docker literacy and real-world use

  • It’s about reproducibility. If a teammate builds the image on a different machine, the commands in RUN are what reproduce the software environment. Consistency is your friend here. When you review a Dockerfile, you’re basically reading the image’s recipe for what’s inside.

  • It influences security and maintenance. Packages installed with RUN are part of the image’s surface. If you keep your RUN steps tidy and up to date, you reduce risk and simplify future maintenance.

  • It shapes performance. A well-structured Dockerfile with efficient RUN commands helps with build speed and image size. That matters in workflows where images are built often, pushed to registries, then deployed to fleets of containers.

A few friendly caveats and common patterns

  • CMD vs ENTRYPOINT: CMD and ENTRYPOINT both describe what runs when the container starts, but they’re used a bit differently. CMD provides defaults that can be overridden at runtime, while ENTRYPOINT defines a fixed executable. Many folks pair them to establish a primary program, with CMD offering optional arguments; a small sketch of that pairing follows this list. It’s a nuance that often crops up in Docker discussions, so it’s worth keeping straight.

  • Multi-stage builds: this is a favorite topic in modern Docker usage. You use one stage to build or compile, then copy just the final artifacts into a lean runtime image. It keeps the final image small and focused, which is great for deployment pipelines and resource efficiency.

  • Cleaning up inside RUN: after you install packages, it’s common to clean up caches to reduce image size. Something like rm -rf /var/lib/apt/lists/* is a traditional cleanup move in Debian/Ubuntu-based images. It’s a small touch, but it pays off when images are pulled across networks or stored in registries.
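
A minimal sketch of the ENTRYPOINT and CMD pairing mentioned above (the program here is just an example): ENTRYPOINT pins the executable, CMD supplies default arguments that docker run can swap out.

    FROM alpine:3.20

    # The fixed program every container from this image will run
    ENTRYPOINT ["ping", "-c", "3"]

    # Default argument; "docker run my-image example.org" replaces it
    CMD ["localhost"]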

A quick mental model you can carry around

  • Build time (RUN): you’re assembling the container’s internal world. It’s all the software and configurations that become part of the image itself.

  • Run time (CMD/ENTRYPOINT): you’re deciding how the container behaves when it’s started in a run environment. It’s the actual process that gets spawned.

  • Layers are the bookkeeping of all the above. They’re what let Docker reuse, cache, and optimize your workflow. Treat them as a ledger of changes you’ve made to the image over time.
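
You can read that ledger directly with Docker’s own tooling (the image name is a placeholder):

    # One row per layer: the instruction that created it and the space it adds
    docker history my-image:latest

    # Full image metadata, including the ordered list of layer digests
    docker image inspect my-image:latest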

A tiny, practical digression you might find relatable

Think of building a Docker image like packing for a trip. RUN is packing the essentials into your suitcase—the clothes, the charger, the little toolkit you might need along the way. You want to pack smartly: not too much, but enough to handle common situations. CMD is like the plan for the day when you arrive. “We’ll start with coffee, then head to the museum.” The plan can change on the fly, but the items you packed stay with you in the bag. If you packed a lot of extra gear you don’t actually need, you’ll carry extra weight for no good reason. The same idea applies to Docker images: keep build-time actions lean and purposeful, and reserve runtime decisions for what the container should do when it’s alive.

Putting these ideas into everyday Docker work

  • When you’re writing a Dockerfile, start by stating the base image, then layer in the necessary software with one or two well-thought-out RUN steps. If you need several packages, see if you can combine their installation into a single RUN to control the number of layers.

  • Use CMD to declare the default behavior for standard tasks, but don’t be afraid to override it when you run a container in different contexts. It’s part of the flexibility that helps you adapt quickly in real projects.

  • If your application has a build step (like compiling code), consider a multi-stage pattern. Build in one stage, then copy only the results into a clean, minimal runtime stage. It’s a straightforward way to keep images nimble; there’s a short sketch of the pattern after this list.

  • Always verify the final image size and contents by inspecting with local tooling such as docker history or docker image inspect. A quick look at what’s inside helps you spot unnecessary packages or misplaced files, which is easier to address early in the development cycle.
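
As a rough illustration of the multi-stage pattern from the list above, here’s a sketch for a small Go program (the module layout and names are assumed): the first stage carries the whole compiler toolchain, while the final image ships only the compiled binary.

    # Build stage: needs the full Go toolchain, but none of it survives into the final image
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN go build -o /out/app .

    # Runtime stage: a slim base plus the single artifact we actually need
    FROM debian:bookworm-slim
    COPY --from=build /out/app /usr/local/bin/app
    CMD ["/usr/local/bin/app"]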

A quick recap you can nod to while you’re at your desk

  • RUN is the Dockerfile instruction that executes commands during the image build. It creates a new layer and captures the results of those commands.

  • CMD sets the default command to run when the container starts, at runtime.

  • The two work hand in hand, shaping both the image’s internal state and its behavior in production-like environments.

  • For efficient, maintainable images, think about grouping related tasks, cleaning up after installs, and using multi-stage builds when appropriate.

If you’re stepping into Docker for the first time or widening your toolkit, the RUN instruction is a fundamental building block you’ll return to again and again. It’s the engine behind the setup you want baked into every container you deploy. And while CMD handles the show that happens after birth, it’s the build-time work of RUN that makes that show possible in the first place.

For anyone who loves a good analogy, picture RUN as the part of the recipe that actually creates the dish in the oven. CMD is what you serve at the table, when your guests (containers) arrive. The dinner is prepared, the table is set, and the night can begin.

In the end, mastery of these small commands translates into clarity and confidence. The Dockerfile becomes not just a script, but a precise map of how your software would like to live inside a container. That quality—clear, practical, and repeatable—does a lot of quiet work in the background, making deployments smoother and teams more aligned. And that’s a win you’ll feel every time you spin up a new container with a familiar, well-ordered image.
