Complete Guide to the docker run Command
In this article, we’ll walk you through everything you need to know about the Docker run command. We’ll cover the basics, dive into the syntax, and explore all the common options you’ll actually use. Whether you’re just getting started or dealing with more complex setups, you’ll pick up the best practices for starting and managing containers.
What does docker run do?
Core Function: The Command That Starts Containers
`docker run` is the most important and fundamental command in Docker. Its main job is to start an application inside an isolated environment called a “container.”
Whether you’re a developer or a system administrator, learning `docker run` is the first step to using containers effectively. The command’s core job is to create a new running container from something called an “image.” It acts as a bridge between packaged software and running software.
Simply put, `docker run` demonstrates Docker’s core value: package an application with everything it needs, then run it the same way anywhere. It takes a read-only template called an “image” and turns it into a running, modifiable “container” where the program can execute. Whether you’re running a simple “Hello, World!” program or deploying a complex application, it all starts with `docker run`.
From Image to Container: From “Installer” to “Running Program”
To use `docker run` effectively, you first need to understand the difference between “images” and “containers.” This distinction is fundamental to container technology.
Images are like software installers or blueprints. They package everything needed to run a program: code, runtime environment, tools, and settings. Images are defined using a file called Dockerfile and consist of layered file systems stacked on top of each other. Images are read-only and never change once created.
Containers are running instances of images. When you execute `docker run`, Docker creates a container from that image. It adds a writable “container layer” on top of the read-only image. Any new files or changes the container makes while running, like logs, are saved in this writable layer.
The benefit is that the original image stays unchanged. So every time you start a container from the same image, you get an identical, clean environment. This ensures that if a program runs on your development machine, it will run exactly the same way on the server.
What Happens Behind docker run
The `docker run` command looks simple, but Docker does a lot of work behind the scenes. Understanding what it actually does helps you troubleshoot problems.
When you type `docker run nginx`, Docker follows these steps:
Step 1: Find the Image
Docker first looks for the nginx image on your computer. If it can’t find it, it downloads it from Docker Hub (the default image registry). You can use the `--pull` option to control this behavior: force a download every time (`always`), only download when the image isn’t found locally (`missing`, the default), or never download (`never`).
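As a quick sketch, the three `--pull` modes look like this (the `nginx` image is just an example):

```shell
# Always check the registry for a newer image before running
docker run --pull=always nginx

# Default behavior: pull only if the image is missing locally
docker run --pull=missing nginx

# Fail instead of downloading if the image is not present locally
docker run --pull=never nginx
```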
Step 2: Create the Container
After finding the image, Docker uses it to create a new container. The container gets its own ID and writable file layer. If you don’t use `--name` to give it a name, Docker assigns a random one.
Step 3: Allocate Resources and Configure
Next, Docker configures the container based on the options you provided. For example:
- Network: Connect the container to the network
- Port mapping: Connect your computer’s ports to the container’s ports, so external access to container services is possible
- Storage: Mount folders from your computer or Docker volumes into the container for data storage
- Resource limits: Limit how much CPU and memory the container can use
This prepares an isolated runtime environment.
Step 4: Run the Program
Docker starts the container and runs the program predefined in the image (by ENTRYPOINT and CMD). You can also specify your own program to run by appending it to the `docker run` command.
Step 5: Connect Terminal
By default, the container’s output (like logs) displays directly in your terminal window, and you can interact with it. If you want it to run in the background, use the `-d` option.
The entire process boils down to this: you tell Docker what you want (say, a container running Nginx), and `docker run` handles all the complex details for you.
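The steps above can be sketched as the individual commands that `docker run` bundles together (the image and container name here are illustrative):

```shell
# Step 1: fetch the image if it is not present locally
docker pull nginx

# Step 2: create (but do not start) a container from it
docker create --name my-nginx nginx

# Steps 3-5: start the container; -a attaches your terminal to its output
docker start -a my-nginx
```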
docker run Command Syntax
Basic Format
The `docker run` command follows a fixed format:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
- `docker run`: Fixed beginning
- `[OPTIONS]`: Optional flags that control container behavior such as networking and storage
- `IMAGE`: Required; tells Docker which image to use for creating the container
- `[COMMAND]`: Optional; if provided, it overrides the default command in the image
- `[ARG...]`: Optional arguments for the command above
The simplest usage is `docker run hello-world`, which runs the container with all default settings.
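A hedged illustration of how the pieces line up (the options, image, command, and arguments are all examples):

```shell
#          [OPTIONS]  IMAGE         [COMMAND] [ARG...]
docker run --rm  -it  ubuntu:22.04  ls        -l /
```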
Image: Specifying Which “Installer” to Use
The image tells Docker which template to use for creating the container. There are several ways to specify it:
Image name: Like `ubuntu`, `nginx`, or `your-username/your-app-name`.
Tag: Used to distinguish versions; it follows the image name after a colon, like `ubuntu:22.04`. If you don’t specify a tag, Docker defaults to `latest`. In production environments, it’s better to pin an explicit version, because `latest` can change at any time.
Digest: This is the most precise method, using a long hash value to pin a unique image version, formatted as `@sha256:<hash-value>`.
For example:
docker run nginx@sha256:f8f4ffc1f02179a3c7518527891a668132fe487f4a37b03657ba5f4b83041f23
Using digests ensures you run exactly the same image every time.
Customizing the Running Program
Every image has a default command that runs when it starts. `docker run` lets you override it at runtime.
Override the default command (CMD): Add the command and arguments you want to run after the image name to replace the image’s default CMD. For example, the ubuntu image defaults to running bash, but if you run `docker run ubuntu ls -l /`, it executes `ls -l /`, lists the root directory’s files, then exits.
Override the entry point (ENTRYPOINT): ENTRYPOINT defines the main program the container runs. To override it, use the `--entrypoint` option. For example, if an image’s ENTRYPOINT is python and you want to use bash for debugging, you can write:
docker run --entrypoint /bin/bash my-image
Common docker run Options
`docker run` has many options that allow fine-grained control over container behavior.
Specifying Runtime and Interaction Methods
These options determine how the container runs and how you interact with it.
Foreground vs. Background Running:
- The default is foreground running, where the container occupies your terminal window until the program ends
- Use `-d` or `--detach` to run the container in the background. This is essential for long-running services (like web servers); after the command executes, you immediately get terminal control back
Interactive Mode:
`-it` is a commonly used combination for entering the container’s command-line interface. It’s actually two options combined:
- `-i` or `--interactive`: Keeps standard input open so you can type commands into the container
- `-t` or `--tty`: Allocates a proper terminal interface
`docker run -it ubuntu bash` is the classic way to enter an Ubuntu container’s command line.
Container Naming and Cleanup:
- `--name`: Give the container a memorable name, like `--name my-web-server`. This makes management much easier later, e.g. `docker stop my-web-server`
- `--rm`: Automatically delete the container when it exits. This is useful for temporary tasks (like testing) and prevents your system from accumulating useless old containers
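A minimal sketch of a throwaway container using `--rm` (the image and command are just examples):

```shell
# Run a one-off command; the container is removed as soon as it exits
docker run --rm ubuntu:22.04 cat /etc/os-release

# Afterwards, "docker ps -a" shows no leftover container from this run
```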
Restart Policy:
`--restart`: Defines what happens when the container exits. This is important for ensuring service stability.
- `no`: Default value; never restart
- `on-failure`: Only restart when the program exits with an error
- `unless-stopped`: Always restart, unless you manually `docker stop` it
- `always`: Always restart, regardless of exit reason

For background services, `--restart unless-stopped` is a good choice.
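For example, a hedged sketch (the container name and image are illustrative):

```shell
# Restart the web server after crashes and host reboots,
# but stay down if an operator stops it deliberately
docker run -d --name web --restart unless-stopped nginx
```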
Specifying Network and Ports
These options expose the container’s services to the outside world.
Port Mapping:
`-p <host-port>:<container-port>`: This is one of the most commonly used options. It connects a port on your computer (the host) to a port inside the container. For example, `docker run -p 8080:80 nginx` maps the host’s port 8080 to the Nginx container’s port 80. Accessing `localhost:8080` on the host then reaches the service inside the container.
`-P`: Automatically maps all ports that the image declares for exposure to random ports on the host. It’s convenient, but the port numbers aren’t fixed.
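A quick way to find out where `-P` landed is `docker port` (the container name is an example):

```shell
# Publish all EXPOSEd ports to random host ports
docker run -d --name web -P nginx

# Show which host port was chosen for container port 80
docker port web 80
```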
Network Connection:
`--network`: Connect the container to a specified network. The default is the bridge network. In real applications, you typically create your own network (with `docker network create`), then attach every container that needs to communicate to it. The containers can then reach each other directly by container name.
`--network-alias`: Give the container an alias within the network. For example, a container named `database-v1.2` can be given the alias `db`, and other containers can connect to it simply as `db`.
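A hedged sketch of the alias idea (the network name, container names, and password are placeholders):

```shell
docker network create app-net

# The versioned container is reachable by the stable alias "db"
docker run -d --name database-v1.2 --network app-net --network-alias db \
  -e MYSQL_ROOT_PASSWORD=example mysql

# Any container on the same network can resolve the alias
docker run --rm --network app-net busybox ping -c 1 db
```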
Data Storage and Persistence
Containers are “disposable” by default - when deleted, the data inside is gone. But applications like databases need to permanently save data. Docker provides two main methods: volumes and bind mounts.
Named Volumes (`-v <volume-name>:<container-path>`): This is the officially recommended and most reliable method for data persistence.
- How it works: Named volumes are managed by Docker, with the data actually stored in a specific directory on the host that you typically don’t touch directly. The key point is that the volume’s lifecycle is separate from the container’s. Even if the container is deleted, the volume and its data remain. The next time you create a container and mount the same volume, the data comes back.
- Example: `docker run -v mysql-data:/var/lib/mysql mysql` creates a volume named `mysql-data` and mounts it at the container’s `/var/lib/mysql` directory, which is where MySQL stores its data.
Bind Mounts (`-v /host-path:/container-path`): Directly mount a file or directory from the host into the container.
- How it works: The container can directly read and write to this directory on the host, as if it were part of itself.
- Use cases: Particularly useful during development. For example, you can mount your project’s code directory into the container, edit the code on the host with your favorite editor, and see the changes reflected in the container immediately.
- Note: This method isn’t portable because it depends on specific file paths on the host. It may also have performance issues on Windows or macOS.
Read-only filesystem (`--read-only`): For better security, use this option to make the container’s filesystem read-only. Even if the program is compromised, it can’t write arbitrary files inside the container. If the program genuinely needs to write somewhere (like logs or temporary files), you can mount writable volumes for just those directories.
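A hedged sketch of that pattern (`my-image` and the paths are illustrative):

```shell
# Root filesystem is read-only; only /tmp and the log volume are writable
docker run -d --read-only \
  --tmpfs /tmp \
  -v app-logs:/var/log/app \
  my-image
```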
Here’s a table summarizing the differences between these storage methods:
Feature | Named Volumes | Bind Mounts | Anonymous Volumes |
---|---|---|---|
Main Use | Save production data (like databases) | Share code/config files during development | Temporary data for a single container |
Host Location | Docker-managed directory | User-specified arbitrary path | Docker-managed directory with a random name |
Lifecycle | Independent of the container; requires manual deletion | Same as the files on the host | Can be deleted with the container (`docker rm -v`) |
Portability | High, works anywhere | Low, depends on host paths | Low, not suitable for sharing |
Performance | Good on Linux | Good on Linux, possibly slower on other systems | Same as named volumes |
docker run syntax | `-v my-volume:/app` | `-v /host/path:/app` | `-v /app` (no host path prefix) |
Specifying Environment Variables and Configuration
These options let you pass configuration information when starting the container.
`-e <key>=<value>` or `--env`: Set an environment variable inside the container. This is the most common way to pass configuration such as database passwords and API keys.
`--env-file`: If you have many environment variables, you can write them in a file and pass them all at once with this option. This is cleaner and keeps secrets out of your command line.
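A hedged sketch of the env-file pattern (the file name, variables, and `my-image` are placeholders):

```shell
# Create an environment file, one KEY=value pair per line
cat > app.env <<'EOF'
DB_HOST=db
DB_PASSWORD=example
EOF

# Pass every variable in the file to the container at once
docker run -d --env-file app.env my-image
```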
`-w` or `--workdir`: Set the working directory inside the container. Commands executed in the container will run from this directory.
`--entrypoint`: As mentioned earlier, this option completely replaces the image’s default startup program; it’s typically used for debugging.
Specifying Resource Limits
In environments where multiple applications share resources, limiting what each container can use is important to prevent one container from consuming all resources.
`--cpus` & `--memory` (`-m`): These are the two most basic resource limit options.
- `--cpus="1.5"` limits the container to at most 1.5 CPU cores
- `--memory="512m"` or `-m 512m` limits the container to at most 512 MB of memory

Setting these limits is very important for maintaining system stability.
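Combined, a hedged sketch (the image is an example):

```shell
# Cap the container at 1.5 cores and 512 MB of RAM
docker run -d --cpus="1.5" --memory="512m" nginx

# "docker stats" shows the limits being enforced at runtime
```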
`--oom-kill-disable`: By default, if a container uses too much memory, the system kills it. This option disables that behavior, but it’s very dangerous: a program with a memory leak could consume all the host’s memory and crash the system.
Security and Permission Settings
Docker provides good isolation by default, but these options let you further adjust security settings.
`--privileged`: A very powerful and dangerous option. It removes almost all security isolation between the container and the host, allowing the container to access host devices directly. Never use it outside special circumstances (like running Docker inside a container).
`--cap-add` & `--cap-drop`: A more fine-grained and safer choice than `--privileged`. Use them to add or remove precisely the system capabilities the container needs, rather than granting everything at once.
`--security-opt`: Sets more advanced security options, like AppArmor or seccomp profiles, which limit the system calls the container can make and further harden it.
`-u` or `--user`: Many images run programs as the root user by default, which poses security risks. Use this option to run the program as a regular user instead, like `-u myuser` or `-u 1001`. This is a very important security practice.
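A hedged sketch combining several of these hardening options (`my-image` and the UID are illustrative):

```shell
# Drop all capabilities, add back only what the app needs,
# and run as an unprivileged UID
docker run -d \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  -u 1001 \
  my-image
```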
Differences and Relationships Between docker run and Other Container Lifecycle Management Commands
`docker run` starts a container’s life, but it’s just one of many commands for managing container lifecycles. Understanding its relationship with `create`, `start`, `exec`, and other commands is important for efficient container management.
Comparing Three Commands: run vs. create vs. start
The relationship between these three commands can be seen as a process from preparation to execution.
`docker create [image]`: This command performs only the first step. It creates a container from an image but doesn’t start it. The container is in the “created” state, taking disk space but consuming no CPU or memory. This is useful when you need to configure the container before starting it.
`docker start`: This command starts an existing (created or stopped) container. It doesn’t create new containers; it gets an existing one running. It operates on containers, not images.
`docker run [image]`: As mentioned, this is a convenient combination, equivalent to `docker create` plus `docker start`. It creates a new container from an image and immediately starts it. You can’t use `docker run` on an existing container, because its purpose is always to create a new one.
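The equivalence can be sketched like this (the container name is an example):

```shell
# These two steps...
docker create --name web nginx
docker start web

# ...are roughly equivalent to this one:
docker run -d --name web nginx
```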
Here’s a table summarizing the differences between these commands, including another commonly used command, `docker exec`:
Feature | docker run | docker start | docker create | docker exec |
---|---|---|---|---|
Main Purpose | Create a new container and start it | Start an existing container | Create a new container but don’t start it | Execute a command in a running container |
Operates On | Image | Container (ID or name) | Image | Container (ID or name) |
Container State | Always starts from image’s fresh state | Continues from last stopped state, preserving data | Creates a container in “created” state | Operates on a “running” container |
Typical Use | First-time deployment of a service | Restart a service that needs to preserve data (like databases) | Prepare a container in advance for later startup | Debug, inspect, or manage a running service |
docker run vs. docker start: Data Preservation
The difference between `docker run` and `docker start` greatly affects data management.
`docker run` always creates a brand-new container from the image. This is ideal for applications that don’t need to save state, like tests where you want a clean environment every time.
`docker start` is for preserving state. It restores a stopped container, and all previous changes in the container’s filesystem are still there (unless it was originally run with `--rm`). This is crucial for stateful applications like databases. When you `docker stop` a database container and later `docker start` it again, all data remains intact.
Other Phases of Container Lifecycle
After starting a container with `docker run`, other commands manage the rest of its life.
`docker stop`: The standard way to stop a container. It first politely asks the program inside to shut down (by sending SIGTERM), giving it some time (10 seconds by default) to save data and exit gracefully. If it doesn’t exit in time, Docker kills it forcibly.
`docker kill`: No warning; it force-kills the program inside the container immediately. Use it when `docker stop` doesn’t work.
`docker pause` / `docker unpause`: Pause and resume all processes inside the container. While paused, the container still occupies memory but consumes no CPU. This can temporarily free up CPU resources.
`docker rm`: Permanently delete a stopped container. With the `-v` option (`docker rm -v`), its associated anonymous volumes are deleted as well.
Real-world Usage Scenarios for docker run
Let’s look at how to use `docker run` in several common scenarios.
Scenario 1: Deploy a Web Server (Nginx)
The goal is to deploy a simple Nginx web server that runs stably in the background.
Command:
docker run --name my-nginx-server -d -p 8080:80 --restart unless-stopped nginx:latest
Command Explanation:
- `--name my-nginx-server`: Give the container the name my-nginx-server for easier management later
- `-d`: Run the container in the background
- `-p 8080:80`: Map the host’s port 8080 to the container’s port 80, so you can access the website via `http://<your-IP>:8080`
- `--restart unless-stopped`: Set the restart policy. If the container crashes unexpectedly or the host reboots, Docker will automatically restart it
- `nginx:latest`: Use the latest Nginx image
Scenario 2: Set Up a Temporary Development Environment (Ubuntu)
The goal is to create an isolated development environment where code is edited on the host but compiled and run in the container.
Command:
docker run --name dev-env -it --rm -v "$(pwd)"/app:/app -w /app ubuntu:latest bash
Command Explanation:
- `--name dev-env`: Give this development environment a name
- `-it`: Enter the container’s command line in interactive mode
- `--rm`: Automatically delete the container when you exit the shell, keeping the system clean
- `-v "$(pwd)"/app:/app`: A bind mount. It mounts the app folder in your host’s current directory to the /app directory inside the container, so code changes on the host are visible in the container immediately
- `-w /app`: Set the working directory inside the container to /app, so you land directly in the project directory
- `ubuntu:latest bash`: Use the latest Ubuntu image and run bash to get a command line
Scenario 3: Run a Database That Needs Data Persistence (MySQL)
This is a very important real-world scenario demonstrating how to properly run a database while ensuring data isn’t lost.
Step 1: Create a Dedicated Network
Creating an independent network for services that need to communicate with each other is good practice.
docker network create mysql-net
Step 2: Create a Named Volume to Store Data
To make the database’s data persist even after the container is deleted, create a named volume.
docker volume create mysql-db-data
Step 3: Run the MySQL Container
Use the `docker run` command to combine all the needed options and start the database.
docker run --name my-database -d \
--network mysql-net \
-e MYSQL_ROOT_PASSWORD=my-secret-pw \
-v mysql-db-data:/var/lib/mysql \
mysql:latest
Command Explanation:
- `--name my-database`: Give the database container a fixed name
- `-d`: Run it in the background
- `--network mysql-net`: Connect it to the mysql-net network we just created
- `-e MYSQL_ROOT_PASSWORD=my-secret-pw`: Set the MySQL root password via an environment variable
- `-v mysql-db-data:/var/lib/mysql`: The most crucial step. It mounts the mysql-db-data volume we created at the container’s `/var/lib/mysql` directory, where MySQL stores its data. All database files live in this independent volume, separate from the container’s lifecycle
Verification: Try stopping and deleting this container (`docker stop my-database`, then `docker rm my-database`), then start a new one with the exact same `docker run` command. Because the new container mounts the same volume, all previously created databases and data are still there.
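Spelled out, the verification sequence is:

```shell
docker stop my-database
docker rm my-database

# Recreate the container; the volume (and therefore the data) survives
docker run --name my-database -d \
  --network mysql-net \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  -v mysql-db-data:/var/lib/mysql \
  mysql:latest
```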
Scenario 4: Multiple Containers Communicating Through Network
This scenario demonstrates how different services (containers) can access each other through Docker networks. Let’s run a database management tool (Adminer) and connect it to the MySQL container created in the previous step.
Steps 1 & 2: Ensure the mysql-net network and my-database container from the previous scenario are still running.
Step 3: Run the Adminer container on the same network
docker run --name my-adminer -d \
--network mysql-net \
-p 8888:8080 \
adminer
Command Explanation:
- `--name my-adminer`: Give the management tool a name
- `-d`: Run in the background
- `--network mysql-net`: The key point! Connect the Adminer container to the mysql-net network too
- `-p 8888:8080`: Expose Adminer’s web interface on the host’s port 8888
How to Connect: Now open `http://<your-IP>:8888` in your browser to see Adminer’s login page. In the “Server” or “Host” field, you don’t need an IP address; just type `my-database`. Because the two containers share a network, Docker’s built-in DNS resolves the `my-database` name to the MySQL container’s internal IP address.
This demonstrates a basic method for building microservices: services discover and communicate with each other through names.
Summary
`docker run` is the cornerstone of Docker container technology. Its main job is turning a static image into a living, isolated container, while providing rich options for precise control over every aspect of that container.
Through `docker run`’s various options, we can configure container runtime behavior, networking, data storage, resource usage, and security. Whether you need a temporary command-line environment during development, a stable auto-restarting service in production, or a database with permanent data storage, `docker run` covers it.
Understanding the differences between `docker run`, `docker create`, and `docker start` is key to mastering container lifecycle management. `create` and `start` provide more fine-grained control, while `run` is a convenient combination that completes the entire process, from finding the image to running the program, in one step.
Thoroughly understanding its usage and position in the container lifecycle is the mark of a competent Docker user.