Installing Docker on Ubuntu is simple because Ubuntu provides Docker in its repositories. However, Docker is not available in CentOS's default repositories.
Fret not: there are three ways you can install Docker on a CentOS Linux system.
- Using docker's repository
- Downloading the RPM
- Using helper scripts
Here, I'll walk you through the installation process of Docker CE using Docker's RPM repository.
Docker CE stands for Docker Community Edition. This is the free and open source version of Docker. There is also Docker EE (Enterprise Edition) with paid support. Most of the world uses Docker CE, and it is often considered synonymous with Docker.
Installing Docker on CentOS
Before going any further, make sure you have your system updated. You can update CentOS using:
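On CentOS 8, dnf is the default package manager, so the update looks like this:

```shell
sudo dnf update
```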
Step 1: Add the official repository
Add Docker's official repository using the following command:
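This uses dnf's config-manager plugin pointed at Docker's CentOS repository file:

```shell
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```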
You should also update the package cache after adding a new repository:
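Refreshing the cache is a one-liner with dnf:

```shell
sudo dnf makecache
```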
Step 2: Install Docker CE
The trouble with using a custom repository is that you may run into dependency issues when you try installing the latest version of docker-ce.
For example, I checked the available versions of docker-ce with this command:
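One way to list every available docker-ce version (a sketch, not the only way):

```shell
dnf list docker-ce --showduplicates | sort -r
```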
I got docker-ce-3:19.03.9-3.el7 as the latest version. But the problem with installing the latest version is that it depends on containerd.io version >=1.2.2-3, and this version of containerd.io is not available in CentOS 8.
To avoid this dependency cycle and battling the dependencies manually, you can use the --nobest option of the dnf command.
It will check the latest version of docker-ce but when it finds the dependency issue, it checks the next available version of docker-ce. Basically, it helps you automatically install the most suitable package version with all the dependencies satisfied.
To install docker in CentOS without getting a migraine, try this command and see the magic unfold on your terminal screen:
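Putting the --nobest option described above together with the install:

```shell
sudo dnf install docker-ce --nobest
```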
You'll be prompted to import a GPG key; make sure the key matches 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35 before entering 'y'.
containerd.io is a daemon for managing containers. Docker is just one form of Linux containers. To make the various types of container images portable, the Open Container Initiative (OCI) has defined some standards. containerd is used for managing container images that conform to the OCI standard.
Setting up docker on CentOS
Alright! You have Docker installed, but it's not ready to be used yet. You'll have to do some basic configuration before it can be used smoothly.
Run docker without sudo
You can run docker without any sudo privileges by adding your user to the docker group.
The docker group should already exist. Check that using the following command:
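A quick way to check is with getent, which prints the group entry if it exists:

```shell
getent group docker
```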
If this outputs nothing, create the docker group using groupadd command like this:
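Creating the group:

```shell
sudo groupadd docker
```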
Now add your user to the docker group using the usermod command:
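Using the user_name placeholder mentioned in the next sentence:

```shell
sudo usermod -aG docker user_name
```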
Change user_name in the above command to the intended user name.
Now log out and log back in for the group change to take effect.
Start docker daemon
Docker is installed. Your user has been added to the
docker group. But that's not enough to run docker yet.
Before you can run any container, the docker daemon needs to be running. The docker daemon is the program that manages all the containers, volumes, networks etc. In other words, the daemon does all the heavy lifting.
Start the docker daemon using:
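On a systemd-based distribution like CentOS, this is:

```shell
sudo systemctl start docker
```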
You can also enable docker daemon to start automatically at boot time:
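Again via systemd:

```shell
sudo systemctl enable docker
```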
Verify docker installation by running a sample container
Everything is done. It's time to test whether the installation was successful or not by running a docker container.
To verify, you can run the cliché hello-world docker container. It is a tiny docker image and perfect for quickly testing a docker installation.
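Running the test container:

```shell
docker run hello-world
```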
If everything is fine, you should see an output like this:
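The first lines of hello-world's output look like this (abbreviated):

```
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
```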
Here's what the command is doing under the hood:
- The docker client, i.e. the command line tool that you just used, contacted the docker daemon.
- The daemon looked for the hello-world docker image on the local system. Since it didn't find the image, it pulled it from Docker Hub.
- The daemon created the container with all the options you provided through the client's command line options.
This hello-world image is used just for testing a docker installation. If you want a more useful container, you can try running the Nginx server in a container like this:
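Mapping host port 56788 (the port used in the URL below) to Nginx's default port 80, and running detached:

```shell
docker run -d -p 56788:80 nginx
```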
Once the command is done running, open up a browser and go to http://your_ip_address:56788. I hope you know how to find your IP address in Linux.
You should see the Nginx server running. You can stop the container now.
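One way to stop it, with the container ID taken from docker ps (the ID here is a placeholder):

```shell
docker ps                  # list running containers and note the container ID
docker stop <container-id> # stop the Nginx container
```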
I hope this tutorial helped you in installing docker on CentOS. Do subscribe for more Docker tutorials and DevOps tips.
Use this information to quickly start up Community Edition using Docker Compose.
Note: While Docker Compose is often used for production deployments, the Docker Compose file provided here is recommended for development and test environments only. Customers are expected to adapt this file to their own requirements, if they intend to use Docker Compose to deploy a production environment.
To deploy Community Edition using Docker Compose, download and install Docker, then follow the steps below. Make sure that you’ve reviewed the prerequisites before continuing.
Clone the project locally, change directory to the project folder, and switch to the release branch:
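A sketch of those steps, assuming the Alfresco/acs-deployment GitHub project mentioned at the end of this page (the release branch name is a placeholder):

```shell
git clone https://github.com/Alfresco/acs-deployment.git
cd acs-deployment
git checkout <release-branch>
```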
Note: Make sure that the exposed ports are open on your host computer. Check the docker-compose.yml file to determine the exposed ports - refer to the host:container port definitions. You’ll see they include 5432, 8080, 8083 and others.
Save the docker-compose.yml file in a local folder. For example, you can create a folder named docker-compose.
Change directory to the location of your docker-compose.yml file.
Deploy Community Edition, including the repository, Share, Postgres database, Search Services, etc.:
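As noted below, this is done in the foreground with:

```shell
docker-compose up
```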
This downloads the images, fetches all the dependencies, creates each container, and then starts the system:
Note that the name of each container begins with the folder name you created in step 2.
As an alternative, you can also start the containers in the background by running
docker-compose up -d.
Wait for the logs to complete, showing messages:
See Troubleshooting if you encounter errors whilst the system is starting up.
Open your browser and check everything starts up correctly:
Service endpoints:
- Administration and REST APIs
- Search Services administration
If Docker is running on your local machine, the IP address will be just localhost.
If you’re using the Docker Toolbox, run the following command to find the IP address:
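Docker Toolbox ships the docker-machine tool, which can report the VM's IP address:

```shell
docker-machine ip
```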
Log in as the admin user. Enter the default administrator password.
Check system start up
Use this information to verify that the system started correctly, and to clean up the deployment.
Open a new terminal window.
Change directory to the docker-compose folder that you created in the deployment steps.
Verify that all the services started correctly.
List the images and additional details:
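docker-compose has an images subcommand for this:

```shell
docker-compose images
```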
You should see a list of the services defined in your docker-compose.yml file.
List the running containers:
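Listing the containers managed by the Compose project:

```shell
docker-compose ps
```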
You should see a list of the services defined in the docker-compose.yml file.
View the log files for each service <service-name>, or container <container-name>:
For example, to check the logs for Share, run any of the following commands:
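Assuming the service is named share in docker-compose.yml (the container name is a placeholder):

```shell
docker-compose logs share
docker logs <container-name>
```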
You can add an optional --tail=25 parameter before <container-name> to display the last 25 lines of the logs for the selected container.
Check for a success message:
Once you’ve tested the services, you can clean up the deployment by stopping the running services.
Stop the session by using CONTROL+C in the same window as the running services:
Alternatively, you can open a new terminal window, change directory to the docker-compose folder, and run:
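Given that the next line says the services are stopped and removed from memory, this is:

```shell
docker-compose down
```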
This stops the running services, as shown in the previous example, and removes them from memory:
You can use a few more commands to explore the services when they’re running. Change directory to the docker-compose folder before running these:
Stop all the running containers:
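Stopping without removing:

```shell
docker-compose stop
```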
Restart the containers (after using the stop command); this starts the containers that were started with docker-compose up:
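The restart itself:

```shell
docker-compose restart
```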
Stop all running containers, and remove them and the network:
The --rmi all option also removes the images created by docker-compose up, and the images used by any service. You can use this, for example, if any containers fail and you need to remove them:
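Combining the down command with that option:

```shell
docker-compose down --rmi all
```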
See the Docker documentation for more on using Docker.
Deployment project in GitHub
See the Alfresco/acs-deployment GitHub project for more details.
- In this project, you’ll find several Docker Compose files. The default docker-compose.yml file contains the latest work-in-progress deployment scripts, and installs the latest development version of Content Services.
- To deploy a specific released version of Content Services, several major.minor Docker Compose files are provided in the docker-compose folder of the project.
- To modify your development environment, for example to change or mount files in the existing images, you’ll have to create new custom Docker images (recommended approach). The same approach applies if you want to install AMP files into the repository and Share images. See the Customization guidelines for more.
Using the Community Compose file in this project deploys the following system:
To bring the system down and clean up the containers, run the following command:
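As with the earlier cleanup step, this is:

```shell
docker-compose down
```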
If you have issues running docker-compose up after deleting a previous Docker Compose cluster, try replacing step 4 in the initial Docker Compose instructions with:
Note: Make sure that the docker-compose up part of the command uses the format you chose in step 4.
Stop the session by using CONTROL+C.
Remove the containers (using the --rmi all option):
Try allocating more memory resources, as advised in
For example, in Docker, change the memory setting in Preferences (or Settings) > Resources > Advanced > Memory to at least 8GB. Make sure you restart Docker and wait for the process to finish before continuing.
Go back to step 4 in the initial Docker Compose instructions to start the deployment again.
Note: Keep in mind that 8GB is much lower than the required minimum, and may need to be adapted for your environment. You’ll need a machine with at least 13GB of memory to distribute among the Docker containers.