Docker is a powerful platform for building, shipping, and running distributed applications. It allows developers to package an application and its dependencies into a portable container, which can then be easily deployed on any system. Containers are lightweight, portable, and self-sufficient, making them an ideal choice for deploying applications in a variety of environments, and Docker provides a consistent and reproducible environment so that developers can focus on writing code rather than worrying about the underlying infrastructure. In this blog post, we will go through the steps required to install Docker on CentOS 7 and Ubuntu 18.04, and cover some basic commands to get started.
Installing Docker on CentOS 7
Let’s start by installing the yum utilities that will allow us to add new repositories:
$ sudo yum install yum-utils -y
Next, add the Docker community edition yum repository:
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Then we can proceed to install Docker, enable the daemon and start it:
$ sudo yum install docker-ce -y
$ sudo systemctl start docker && sudo systemctl enable docker
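At this point, a quick way to verify that the engine is installed and the daemon is up is to check the version and the service status:
$ docker --version
$ sudo systemctl status docker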
Installing Docker on Ubuntu 18.04
Installing Docker on Ubuntu is pretty similar to CentOS. We first install the dependencies needed to add new apt repositories:
$ sudo apt install software-properties-common
Next, we add Docker’s GPG key and the Docker repository:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Then we can run the Docker installation, enable the daemon and start it:
$ sudo apt install docker-ce -y
$ sudo systemctl start docker && sudo systemctl enable docker
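As a quick sanity check on either distribution, you can run the small hello-world image, which prints a confirmation message if the engine works end to end:
$ sudo docker run hello-world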
Getting started with Docker running some useful commands
Once Docker is installed, the commands are the same on every OS, so this section applies to both systems and you can conveniently follow the guide from here.
You might notice this issue when running Docker without sudo privileges:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/images/json": dial unix /var/run/docker.sock: connect: permission denied
This is because interacting with the Docker daemon requires elevated privileges. To overcome this issue, one option is to add your user to the docker group created during the installation:
$ sudo usermod -a -G docker <youruser>
$ su <youruser>
$ id
After running the above commands, your account will be able to run Docker commands without sudo privileges, which is more secure. Bear in mind that, after adding your account to the docker group, a new session is needed (either with su or ssh) to refresh your account’s groups.
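Alternatively, if you prefer not to open a new session, the newgrp command starts a subshell with the docker group already applied:
$ newgrp docker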
Another thing you can do to avoid typing the full docker command is to create an alias to shorten it:
$ alias d=$(which docker)
If you want that alias to persist between sessions, add it to the .bashrc file in your account’s home directory:
$ echo "alias d=$(which docker)" >> ~/.bashrc
This will save you some time when managing Docker. However, in this guide, the full docker command will be used.
Now, let’s dive into the world of containers with Docker. At this point, the Docker daemon is running and ready to receive commands, but there are no containers yet. You can begin by downloading (pulling) a container image and observing the effect:
$ docker pull alpine:latest
$ docker images
The pull command downloads a container image to your Docker engine. Initially, docker images shows no images at all, but after running the pull, the local repository contains the alpine image. By default, Docker looks for container images on https://hub.docker.com/; this behaviour can be changed through configuration.
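For reference, the short name alpine:latest is just shorthand for a fully qualified reference on Docker Hub, so the following command pulls exactly the same image:
$ docker pull docker.io/library/alpine:latest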
Once we have the image downloaded, we can proceed to run it on the host with this command:
$ docker run -d alpine sleep 50000
$ docker ps
The docker ps command lists the actively running containers. Before anything is started there is no output but, after running the alpine image that was downloaded with pull, it shows a new container entry based on the alpine image running the sleep 50000 command.
While the container is still running that command, it is possible to get access to the container and check its inner components:
$ docker exec -it <container_name> sh
The docker exec command allows you to run additional commands in a running container. It is combined with the -it option for a console-like interaction, followed by the name of the container (in this case eloquent_elion) and finally the command you want to run inside (sh to get a shell).
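You can also use exec to run a single one-off command without opening an interactive shell; for instance, listing the processes inside the container:
$ docker exec <container_name> ps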
The same can be achieved with the container ID:
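$ docker exec -it <container_id> sh
Note that Docker accepts any unambiguous prefix of the ID, so typing the first few characters is usually enough.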
Once you have opened a shell on the container, you can quit it by issuing the exit command, or detach while leaving the container running by pressing Ctrl+P followed by Ctrl+Q.
Now, let’s terminate the container in a “friendly” manner:
$ docker stop <container_name>
If you run docker ps on its own now, you’ll see nothing, since it only shows running containers. To list all containers, including those that are not running:
$ docker ps -a
Let’s perform a clean up, removing the container and then the downloaded image:
$ docker container rm <container_name>
$ docker image rm alpine:latest
Alternatively, these commands can be used for container and image removal:
$ docker rm <container_name>
$ docker rmi alpine:latest
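If you prefer a single command for housekeeping, docker system prune removes all stopped containers, dangling images and unused networks in one go (it asks for confirmation first):
$ docker system prune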
Both the container and the image are now gone, and the environment is almost the same as at the beginning. To make things a little more interesting, we will now deploy a container image with a web service, nginx, and make it accessible:
$ docker run -d --name mynginx --hostname mynginx -p 80:80 nginx:alpine
This time, instead of letting Docker assign a random name, we set one with the --name option and also set the hostname with --hostname. Moreover, take into consideration that the run command implies a pull if the image has not been downloaded beforehand. To make the container accessible from the outside, the -p option is used, followed by two port numbers separated by “:”. The first port belongs to the host running Docker and can be chosen freely as long as it is not already in use; the second port belongs to the container and is usually a fixed one, depending on how the image was built.
Afterwards, we can proceed to test the service:
$ curl http://localhost
There you go! An nginx server running on your machine in no time, without having to install the package and its dependencies, and of course this can be done with any application you know of that has previously been containerised.
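Since the host port can be chosen freely, nothing prevents you from publishing a second instance on another port; for example (mynginx2 is just an illustrative name, removed right after the test):
$ docker run -d --name mynginx2 --hostname mynginx2 -p 8080:80 nginx:alpine
$ curl http://localhost:8080
$ docker rm -f mynginx2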
For your information, the way Docker routes traffic to the container is through docker-proxy processes; in the nginx example, one of these shows up in the output of the netstat command.
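If you want to see this yourself and netstat is available (it ships in the net-tools package), the listening sockets can be filtered for those processes:
$ sudo netstat -tlnp | grep docker-proxy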
To get an idea of which image string to pass to docker for pulling, browse https://hub.docker.com/ and type the desired app in the search bar. In this example, we are looking for a Node.js container.
Click on the result to explore the section “Supported tags and respective Dockerfile links”.
Let’s say that we want to run the “buster” version of Node.js. Then, we can proceed with the following command:
$ docker run -d --name buster --hostname buster node:buster sleep 50000
Take into consideration that if you don’t specify the container image tag, Docker will by default pull the one tagged latest.
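For example, these two commands pull the same image:
$ docker pull node
$ docker pull node:latest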
To finalise, this time we want to terminate the containers abruptly and remove both the containers and the images:
$ docker rm -f buster mynginx
$ docker rmi nginx:alpine node:buster
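You can confirm that the environment is back to a clean state:
$ docker ps -a
$ docker images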
To Summarise
I hope you went through the guide and learnt the basics of Docker, which is a very cool tool to be familiar with. This will also help you with the next Docker guides I’m planning to write, where I will cover Docker networking, volumes, Dockerfiles, docker-compose and more. In the meantime, I encourage you to search for additional resources to continue learning about this topic.