Docker Exec Command with Examples

In this article, we will talk about the Docker Exec Command. At the end of this article, you will learn:

  • What Docker Exec Command is
  • What it is used for
  • The syntax for this command

1. Introduction

Let us say you have a Docker container that is currently running. Quite often you will want to run commands inside such a running container. Say you want to list all the files and directories inside the container, so you want to run the ls command in it. This is where the docker exec command comes in handy.

1.1 Syntax

$ docker exec <options> <container_name> <command_to_execute>

Let’s break this down into smaller components:

  • docker exec : This is the actual command
  • container_name : This is the name of the container on which you want to run commands
  • command_to_execute : This is the actual command you want to run on the container. In our example, the ls command.
  • options : The docker exec command supports several options that modify its behavior:
  • -i – keeps STDIN open (interactive mode)
  • -t – allocates a pseudo-TTY
  • -u – runs the command as the specified username or UID
  • -w – sets the working directory inside the container
  • --privileged – gives extended privileges to the command
  • -d – runs the command in detached mode (in the background)
  • -e – sets environment variables
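
These options can be combined. As a sketch, assuming a running container named my_container (a hypothetical name), the following opens an interactive bash shell as root, in the /tmp directory, with an extra environment variable set:

Combining exec options (sketch)

$ docker exec -it -u 0 -w /tmp -e MY_VAR=hello my_container bash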

2. Docker installation and Setup

For this example, we will be using an AWS EC2 instance created using Amazon Linux 2 image.

We have installed docker on the EC2 instance using the command:

Install Docker

$ sudo amazon-linux-extras install docker

After this, we started docker using the following:

Start Docker

$ sudo service docker start

Let’s now pull docker image for ubuntu from the DockerHub:

Pull Ubuntu from Docker Hub

$ docker pull ubuntu

To start and run this image use the following command:

Run the Ubuntu image

$ docker run -d -t ubuntu

Now, if you run the command:

Get the list of all containers

$ docker ps -a

You will see that the ubuntu container is now up and running.

CONTAINER ID   IMAGE          COMMAND    CREATED          STATUS                PORTS     NAMES
b1ac9a96cbe0   ubuntu         "bash"     3 minutes ago    Up 3 minutes                    flamboyant_elbakyan

The setup is now complete.

3. Exec Command

Now that we have our container running, we can run the ls command on it using the docker exec command, as shown below:

Run ls command against the container

$ docker exec b1ac9a96cbe0 ls 
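
Since the ubuntu image starts in the root directory, this lists the container's top-level directories (bin, boot, dev, etc, home, and so on). Note that because no TTY is attached, ls prints one entry per line:

Expected output (abbreviated)

bin
boot
dev
etc
home
...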

4. Interactive Bash Terminal

In the above example, we started the container in detached mode, which means the container is running but we are not attached to it interactively. Instead of using the docker exec command from the host, you can get inside the container and run commands from within it. To do this, we need to run the container in interactive mode.

Run the container in interactive mode

$ docker run -i -t ubuntu
root@4c940701afc8:/# ls
bin  boot  dev  etc  home  lib  lib32  lib64  libx32  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Now you know how to interact with the container from inside and outside the container.

5. Understanding the -i -t options

The -i flag keeps input open to the container, and the -t flag creates a pseudo-terminal that the shell can attach to. But what does this mean in practice?

Simply stated, this means that the input and output terminals of your host machine (the machine on which the container is running) are connected to the standard input (stdin) and standard output (stdout) of your container. That is why whatever you type on your host machine is processed by your container as a command, and the output is visible on the host machine. Of course, the command has to be valid in order for it to run.
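
Because -i keeps stdin open, you can also pipe input from the host into a command running inside the container without allocating a TTY. A small sketch, using the container ID from earlier:

Pipe input into the container

$ echo 'hello from the host' | docker exec -i b1ac9a96cbe0 cat
hello from the host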

6. Running as Root

We can also run the exec command as the root user. To do so, use the -u option, which takes a username or UID. The UID of the root user is 0.

Run as root using UID

The syntax is:

$ docker exec -u 0 <container_name> <command>
$ docker exec -u 0 b1ac9a96cbe0 whoami

You can also use the username:

Run as root using username

$ docker exec -u root b1ac9a96cbe0 whoami 
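
Both forms run the command as the root user, so in each case whoami prints:

Expected output

root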

7. Risks when running as Root

Docker containers run with root privileges by default. This allows for unrestricted container management, which is really useful for development purposes but can expose you to high risk once you put your containers into a production environment.

Why? Because anyone who gains access to a container running as root can start undesirable processes in it, such as injecting malicious code. Moreover, by default the root user inside a container maps to the root user on the host, so a process that escapes the container has root privileges on the host as well. So, whenever possible, run your containers as a non-root user.
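
One way to do this is to pass a non-zero UID (and optionally GID) to the -u option of docker run. A sketch, assuming UID 1000 (an arbitrary choice; since this user may not exist in the container's /etc/passwd, we use id -u rather than whoami):

Run as a non-root user

$ docker run --rm -u 1000:1000 ubuntu id -u
1000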

8. Run vs Exec

If you noticed carefully, in the above examples we started the container before running the exec command. This is because the exec command needs an already running container. The run command, on the contrary, creates a brand-new container from an image, executes the command in it, and stops the container once the command exits.

run and exec commands

$ docker run -it ubuntu
$ docker exec -it b1ac9a96cbe0 bash

The first command starts a new container from the ubuntu image in interactive mode. You can run as many commands as you want inside it, but after you exit, docker ps will show no running container (docker ps -a will still list it, with an Exited status).

On the contrary, if you open a shell in an existing container with exec and then exit, docker ps will still show that container as running.
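
You can verify this difference yourself. A quick sketch, reusing the running ubuntu container from earlier (the new container's ID shown as a placeholder):

Compare run and exec

$ docker run -it ubuntu bash
root@<new_container_id>:/# exit      # the new container stops here
$ docker exec -it b1ac9a96cbe0 bash
root@b1ac9a96cbe0:/# exit            # the existing container keeps running
$ docker ps                          # b1ac9a96cbe0 is still listed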

9. Summary

In this tutorial, we learned about the docker exec command and the way it is used in a Docker environment. We also learned how to work with a container in interactive mode, and the differences between the run and exec commands. Last but not least, try to avoid running containers as the root user.


Nitin Kulkarni

Nitin is an experienced IT professional with over 20 years of experience across the globe. He has led and delivered many complex projects involving Cloud Computing, API design, DevOps and Microservices. He has a solid history of aligning systems with business strategy, with a comprehensive knowledge of systems architecture principles, software development languages, software, and infrastructure. He is a certified AWS and TOGAF Architect. He holds a bachelor's degree from the University of Pune, India, and is currently pursuing his Master's degree at Georgia Tech.