Divine Odazie
6 min read · Sep 26, 2022
Best Practices when using Docker Compose
Container technology has streamlined how you build, test, and deploy software, from local environments to on-premise data centers or the cloud. But container technology introduced a new problem: manually starting and stopping each individual container, which makes building multi-container applications tedious.
To solve this problem, Docker Inc. created Docker Compose. You can use Docker Compose to run multi-container applications with as little as two commands: `docker-compose up` and `docker-compose down`. But as with every software tool, there are best practices for using it efficiently.
This article will discuss 4 best practices you should consider when using Docker Compose to orchestrate multi-container Docker applications. The best practices are:
- Substitute environment variables in Docker Compose files
- If possible, avoid multiple Compose files for different environments
- Use YAML templates to avoid repetition
- Use `docker-compose up` flags where necessary
1. Substitute Environment Variables in Docker Compose Files
Ideally, when defining a Compose file, you will have environment variables holding secrets that you wouldn't want to push to a source code management platform like GitHub.
It is best to configure those environment variables in the shell of the machine where you will deploy the multi-container application, so there are no secret leaks, and then populate them inside the Docker Compose file by substitution, as seen below.
```yaml
mongoDB:
  environment:
    - MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
    - MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
```
With the above configuration in your Docker Compose file, when you run `docker-compose up`, Docker Compose will look for the `MONGO_INITDB_ROOT_USERNAME` and `MONGO_INITDB_ROOT_PASSWORD` environment variables in the shell and substitute their values in the file.
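Compose's substitution syntax also supports fallback and required-value modifiers, so a missing shell variable doesn't silently become an empty string. A small sketch (the default value here is only a placeholder):

```yaml
mongoDB:
  environment:
    # Fall back to "admin" if the shell variable is unset or empty
    - MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME:-admin}
    # Fail fast with an error message if the password is not provided
    - MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD:?password not set}
```

The `:?` form is especially useful for secrets, since it stops Compose with a clear error instead of starting the container with an empty credential.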
How to Add Environment Variables to your Shell
Ideally, to add an environment variable to your shell, you would run `export VARIABLE=<variable_value>`. But with that method, if the host machine reboots, those environment variables will be lost.
To add environment variables that would persist through reboots, create an environment file with:
vi .env
And in that file, store your environment variables as `KEY=value` pairs, one per line, and save the file.
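For example, a `.env` file for the MongoDB service above might look like this (the values are placeholders):

```
MONGO_INITDB_ROOT_USERNAME=admin
MONGO_INITDB_ROOT_PASSWORD=<your_secure_password>
```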
Then, in the home directory of your production machine, run `ls -la`, which should list a `.profile` file.
Open the profile file with `vi .profile` and add this configuration at the end of the file:

```shell
set -o allexport; source /<path_to_the_directory_of_.env_file>/.env; set +o allexport
```
The above configuration exports every variable defined in the `.env` file into your shell environment each time you log in.
And to ensure the configuration takes effect, log out of your shell session, log back in, and then run:

```shell
printenv
```

You should see your environment variables listed in the output.
Check out this documentation to learn more about best practices using environment variables in Docker Compose.
2. If Possible, Avoid Multiple Compose Files for Different Environments
Docker suggests defining an additional Compose file for each specific environment, e.g., `docker-compose-prod.yaml` for your production environment. But this approach can cause issues: manually reapplying every modification from one environment to another is error-prone, and it increases the complexity of your development process once you consider CI, staging, or QA environments.
As a rule, you should try to keep a single Docker Compose file for all environments. But there are cases where environments genuinely differ: you might want to use nodemon to monitor changes in your Node.js application during development, or use a managed MongoDB database in production while running MongoDB locally in a container for development.
For these cases, you can use the `docker-compose.override.yml` file. As its name implies, the override file contains configuration overrides for existing services, or entirely new services, on top of your `docker-compose.yml` file.
To run your project with the override file, you still run the default `docker-compose up` command; Docker Compose automatically merges the `docker-compose.yml` and `docker-compose.override.yml` files into one. If you define a service in both files, Docker Compose merges the configurations using the rules described in Adding and overriding configuration.
You can add the override file to your `.gitignore` file so it won't be included when you push code to production. After doing this, if you still see the need to create more Compose files, go ahead, but keep the added complexity in mind.
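For instance, a minimal override file (a sketch with hypothetical service names and paths) could run a local MongoDB container and enable nodemon during development, while the base `docker-compose.yml` stays production-ready:

```yaml
# docker-compose.override.yml — merged automatically by `docker-compose up`
services:
  backend:
    # Override the production start command with a hot-reloading one
    command: npx nodemon server.js
    volumes:
      - ./:/app          # mount source code for live reloads
  mongo:
    # Extra service that only exists in development;
    # production uses a managed MongoDB instead
    image: mongo:6
    ports:
      - "27017:27017"
```

Because this file is gitignored, production deployments only ever see the base `docker-compose.yml`.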
3. Use YAML Templates to Avoid Repetition
When a service has options that will repeat in other services, you can create a template from the initial service to reuse in the other services instead of continuously repeating yourself.
The following illustrates Docker Compose YAML templating:
```yaml
version: '3.9'
services:
  web: &service_default
    build: .
    init: true
    restart: always
  backend:
    <<: *service_default # inheriting the service default definitions
    image: <image_name>
    env_file: .env
    environment:
      XDEBUG_CONFIG: "remote_host=${DOCKER_HOST_NAME_OR_IP}"
```
In the above YAML configuration, the first service defines `restart: always`, which restarts the container automatically if it crashes. Instead of adding `restart: always` and other recurring options to every service, you can replicate them with `<<: *service_default`.
Check out this article to learn more about YAML templating capabilities for Docker Compose.
4. Use `docker-compose up` Flags Where Necessary
To make your development process easier when creating and starting containers with `docker-compose up`, you can use the flags the command provides.
To see these flags, you can run the command below in your terminal:
```shell
docker-compose up --help
```
You can also find them in the compose up command-line reference, but viewing them in your terminal is easier during development.
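A few flags that commonly come in handy (a sketch; check `docker-compose up --help` on your machine, since available flags vary between Compose releases):

```shell
# Start containers in the background (detached mode)
docker-compose up -d

# Rebuild images before starting, and remove containers for services
# that no longer exist in the Compose file
docker-compose up --build --remove-orphans

# Recreate containers even if their configuration hasn't changed
docker-compose up --force-recreate
```

Detached mode in particular keeps your terminal free during development, and `--build` saves you from forgetting to rebuild after a Dockerfile change.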
Conclusion
This article discussed 4 best practices you should consider when using Docker Compose to orchestrate multi-container Docker applications. There are other best practices you can consider; to learn more about them, check out the following resources:
- 10 Tips for Docker Compose Hosting in Production
- Best Practices Around Production Ready Web Apps with Docker Compose
- Docker-compose Tricks and Best Practices
About the author
Consistency is key. That's what Divine believes in, and he says he benefits from that fact, which is why he tries to be consistent in whatever he does. Divine is currently a Developer Advocate and Technical Writer who spends his days building, writing, and contributing to open-source software.