How to set environment variables in a Docker container with the --env option when you run docker

Discover how to pass runtime configuration to a Docker container using --env or -e. This keeps images flexible, with real-world examples like database connection strings, and shows why runtime variables beat hard-coded values. Compare with Dockerfile defaults and host exports for clean deployments.

If you’re poking around Docker as part of your learning journey for the DCA topics, there’s a small but mighty concept that keeps popping up: how to give a container the data it needs to run differently in different environments. Think of environment variables as the knobs you twist to change behavior without changing the code or rebuilding the image. It’s one of those things that’s simple in theory but, handled well, saves you a ton of headaches in real projects.

The easy knob: -e / --env

Here’s the thing: the most common way to set environment variables when you start a container is with the -e flag (short) or the --env flag (long form) in the docker run command. This is the runtime approach, meaning you tell Docker what to pass into the container each time you launch it. It’s especially handy when you want the container to know where to find a database, or which API endpoint to hit, without baking those values into the image itself.

For example, picture an app inside a container that needs a database connection string. You can run it like this:

  • docker run -e DB_HOST=db.example.com -e DB_USER=admin -e DB_PASS=secret myapp

Or, if you prefer the longer form:

  • docker run --env DB_HOST=db.example.com --env DB_USER=admin --env DB_PASS=secret myapp

That flexibility is a big win when you’re juggling multiple deployments—development, staging, production—each with its own stack of configuration. It’s the kind of thing you can change on the fly, without rebuilding the image or touching the Dockerfile.

Build-time vs runtime: ENV in the Dockerfile is different

There’s another way people sometimes talk about environment data: the ENV instruction inside a Dockerfile. That sets default values at build time. It’s like setting a baseline that every container started from that image will inherit unless you override it later. It’s useful for defaults, but it isn’t as flexible if you expect values to change often or vary by deployment.

Think of it like this: ENV in a Dockerfile is a sensible default, but -e or --env lets you override those defaults when you actually run the container. If you’re shipping an app that should connect to a test database most of the time but needs a production database in production, you don’t want to rebuild the image just to switch databases. You want one image, many running configurations.
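To make the baseline-plus-override idea concrete, here's a minimal sketch of a Dockerfile (the base image and variable name are illustrative, not from any real project) that bakes in a default:

```dockerfile
# Bake a sensible default at build time
FROM alpine:3.19
ENV APP_ENV=development
# Echo the value so the effect of an override is visible
CMD ["sh", "-c", "echo running in $APP_ENV mode"]
```

Built and run as-is, every container from this image starts with APP_ENV=development. Start it with docker run -e APP_ENV=production instead, and the runtime flag wins over the baked-in default—no rebuild required.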

Swarm, configs, and where docker config fits in

If you’re dabbling with Docker Swarm, you’ll hear about docker config. It’s a tool for managing configuration data within a swarm, but it isn’t a universal substitute for environment variables in standalone containers. For single-host setups or simple deployments, -e or --env remains the go-to. When you scale with Swarm, you can mix environment variables with secrets and configs to keep sensitive data safer and to manage configuration in a structured way. The key takeaway: environment variables are runtime knobs; Swarm configs are a higher-level mechanism for orchestrated environments. They’re related, but they serve different purposes.

A quick, practical example that sticks

Let’s make this concrete. Imagine a microservice that talks to a database and needs a couple of strings to do its job. You want different values in development versus production. You can pass those values at startup and avoid any image churn.

  • Start the container with direct values:

docker run -e APP_ENV=production -e DB_HOST=db-prod.example.com -e DB_PORT=5432 myservice

  • Or keep things organized with an env-file. Create a file named .env:

      APP_ENV=production
      DB_HOST=db-prod.example.com
      DB_PORT=5432

    Then point docker run at it:

      docker run --env-file .env myservice

An env-file is just a simple text file with KEY=value lines. It’s handy when you have a lot of variables or you want to share a standard set of defaults across teams—just keep sensitive data out of version control and use secrets in the places that support them.
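For reference, here's a sketch of the format Docker expects in an env-file (the variable names are just examples): plain KEY=value lines, with blank lines ignored and lines starting with # treated as comments. One caveat worth knowing: Docker does not strip quotes, so they become part of the value.

```
# Comments and blank lines are ignored
APP_ENV=production

DB_HOST=db-prod.example.com
DB_PORT=5432

# Quotes are NOT stripped: this value includes the quote characters
GREETING="hello"
```

If your app receives a value like "hello" with literal quotes, check the env-file before suspecting the application code.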

Why not rely on exporting on the host?

You might wonder whether exporting variables on the host means they “make it into” the container. In practice, that doesn’t happen automatically. The host’s environment is separate from the container’s environment; the container is meant to be an isolated runtime. Unless you explicitly pass values with -e, --env, or an env-file, the container won’t see them. The one shortcut Docker offers is passing -e with a name but no value (for example, -e DB_HOST), which copies that variable’s current value from the host shell into the container. It’s a good reminder of the isolation Docker enforces—that’s a feature, not a quirk.
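The same separation holds for any child process: it sees only the environment you explicitly hand it. As a loose analogy for what docker run -e does, here's a small Python sketch (the variable name is made up for illustration):

```python
import subprocess
import sys

# Launch a child process with an explicit, minimal environment.
# Anything not listed in env= simply doesn't exist for the child,
# much like a container only sees what you pass with -e/--env.
result = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('DB_HOST', 'unset'))"],
    env={"DB_HOST": "db.example.com"},  # only this variable is handed over
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # -> db.example.com
```

Drop the env= argument and the child inherits everything from the parent—which is exactly the implicit leakage container isolation is designed to prevent.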

A few practical tips you’ll appreciate in real work

  • Default values are smart: Put sensible defaults in your Dockerfile with ENV to cover common scenarios, but override them at runtime when needed.

  • Don’t put secrets in plain text in images: If the values are sensitive (passwords, API keys), prefer runtime overrides and steer toward Docker secrets or a secret management pattern, especially in production.

  • Use a consistent naming convention: Treat environment variables like a little contract. Decide on a naming scheme (uppercase with underscores, for example) and stick with it across services to avoid confusion.

  • Remember that environment variables are strings: Docker passes everything as text, so if your application expects numbers or booleans, convert the values in code after reading them.

  • Combine with config files when it makes sense: Some apps benefit from a configuration file alongside environment variables. Keep the parts that are truly configuration-driven in env vars and place other options in files if they’re more complex.
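Because everything arrives as a string, the application side usually needs a little parsing. Here's a minimal Python sketch of that pattern (the helper names and variables are assumptions for illustration, not a standard API):

```python
import os

def env_int(name: str, default: int) -> int:
    """Read an env var and cast it to int, falling back to a default."""
    raw = os.environ.get(name)
    return int(raw) if raw is not None else default

def env_bool(name: str, default: bool = False) -> bool:
    """Treat common truthy spellings as True; anything else as False."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes", "on"}

# Simulate values as they might arrive from:
#   docker run -e DB_PORT=5432 -e DEBUG=true myservice
os.environ["DB_PORT"] = "5432"
os.environ["DEBUG"] = "true"

port = env_int("DB_PORT", 5432)    # -> 5432 (an int, not the string "5432")
debug = env_bool("DEBUG")          # -> True
missing = env_int("CACHE_TTL", 60) # unset, so the default 60 is used
print(port, debug, missing)
```

Centralizing this parsing in small helpers keeps the rest of the codebase free of scattered int() and string-comparison calls.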

A gentle digression that lands back on the point

You’ve probably seen a dozen ways people manage configuration in containerized apps. It’s tempting to chase the latest pattern, but the real trick is clarity and repeatability. The -e flag gives you both: a clear, explicit way to adjust how a container behaves at launch, without cracking open and rebuilding an image. It’s the kind of simplicity that scales, especially when you’re dealing with multiple environments. It’s also the kind of detail that tends to appear in structured coursework and hands-on modules because it teaches you to separate code from configuration—one of those fundamental discipline shifts that make containers so powerful.

A useful wrap-up you can carry forward

  • The correct, straightforward method to set environment data in a container is the -e or --env flag in docker run. It’s a runtime-level knob you can turn as you launch the container.

  • You can also bake defaults with ENV in a Dockerfile, but this is build-time behavior and less flexible for frequent changes.

  • For larger setups, especially in Swarm, explore docker config and secrets to manage configurations and sensitive data in a more controlled way.

  • If you’re organizing a lot of variables, an env-file is a neat way to keep things tidy and repeatable.

Key takeaways to memorize (without turning this into a checklist)

  • Runtime flexibility wins when you need to adapt a container’s behavior across environments.

  • Isolation is a feature: the container’s environment is separate from the host unless you explicitly map values.

  • Defaults are fine, but you should override them at run time when configurations are not universal.

A final thought as you continue your journey

Every time you spin up a container, ask yourself: what changes between environments, and how can I express that without rebuilding the image? The answer almost always involves environment variables, and that makes this topic feel small but surprisingly powerful. Once you get the hang of passing values at runtime, you’ll see how it ties into bigger patterns—like how you manage configurations in microservices, how you structure DevOps pipelines, and how you approach learning topics covered in the DCA material.

If you enjoyed thinking through the knobs and got a little curious about the practical side, try a quick exercise: run a simple container twice—once with a minimal set of environment variables, once with a more complete set. Compare the behavior you observe and note what changes. It’s a little experiment that cements the concept and ties neatly back to everyday Docker workflows.

In the end, these tiny choices—how you pass data into a container—add up. They shape the reliability and clarity of your deployments, and they’re exactly the kind of detail that shows up in real-world systems, not just in theory. And that’s the heart of understanding Docker more deeply, especially as you navigate through the topics that the DCA scope emphasizes.
