1. The Purpose of Docker#
The emergence of Docker has made application environment configuration, deployment, and testing easier.
For example, suppose you have developed a web application that works fine locally, and now you want to show it to your friends or deploy it to a server.
First, you need to set up the same software and environment on the target machine: the database, web server, necessary plugins, libraries, and so on.
Even then, there is no guarantee the application will run properly, because the server may use a completely different operating system; even among Linux systems, each distribution has slight differences.
Therefore, in order to completely simulate the same local development environment, virtual machines naturally come to mind.
However, virtual machines need to simulate the entire hardware and run the entire operating system, which not only takes up a lot of space and memory but also affects program performance.
This is where Docker comes in.
2. Introduction to Docker#
Docker is similar to a virtual machine, but much lighter.
It does not simulate the underlying hardware, but provides a completely isolated runtime environment for each application.
Different tools and software can be configured in this environment, and they will not be affected by other environments.
This environment is called a "container" in Docker.
At this point, we have to mention three important concepts in Docker.
1. Image#
An image can be understood as a snapshot of a virtual machine.
It contains the application to be deployed and all the associated libraries and software.
Through images, multiple different containers can be created.
2. Container#
A container is like a virtual machine.
It runs the deployed application.
Each container runs independently and does not affect each other.
3. Dockerfile#
It is mainly used to create images.
Writing one can be compared to installing the operating system and runtime environment in a virtual machine, except that the steps are recorded as an automated script: the Dockerfile.
3. Docker Installation#
See the notes for Ubuntu basics and Docker installation.
Ubuntu Basics and Docker Installation
4. Dockerfile Configuration#
Create a Dockerfile in the project.
FROM node:latest
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["npm", "start"]
- First, use the FROM instruction to specify a base image.
Docker Hub provides many high-quality operating system images, each with the package management tool of its distribution, such as apt on Ubuntu.
There are also images built for specific languages or frameworks, such as Python, Nginx, Node, and Tomcat.
FROM node:latest
The image name is followed, after a colon, by a version number or a tag such as latest.
- Specify the working directory for subsequent instructions.
WORKDIR /app
This instruction sets the working directory for all the instructions that come after it.
If the path does not exist, Docker creates it automatically, which avoids absolute paths and manual directory switching and makes the Dockerfile more readable.
- Copy the program into the image.
COPY . .
The first parameter is the local path; "." means all files in the program's root directory.
The second parameter is the destination path inside the image; "." means the current working directory.
Everything except the paths excluded by .dockerignore will be copied.
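For reference, a typical .dockerignore for a Node project might look like the sketch below; the entries are common conventions, not taken from this project.

```
node_modules
npm-debug.log
.git
```

Excluding node_modules is especially useful here, since RUN npm install rebuilds it inside the image anyway.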
- Run commands.
RUN npm install
RUN executes any shell command while the image is being created.
Since this is a Node project, npm install is used here to install all of the program's dependencies.
- Expose a port.
EXPOSE 3000
This declares that the application inside the container listens on port 3000.
Note that EXPOSE is documentation only; to actually reach the port from outside, you still need to publish it with -p when starting the container.
- Execute a command when the container starts.
CMD ["npm", "start"]
Finally, use CMD to specify the command to run after the container starts.
It is important to note that a container is not the same as an image: unlike RUN, which executes while the image is being built, CMD executes when a container is started.
At this point, the Dockerfile automation script is complete.
5. Create an Image#
Run the docker build command to create an image.
docker build -t my-blog .
-t specifies the name of the image.
The final "." tells Docker to look for the Dockerfile in the current directory.
The first time you run it, it will take a while because Docker needs to download the necessary image files.
Subsequent runs will be faster because Docker caches each previous operation. This is called layering in Docker.
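A common way to take better advantage of this layer cache in Node projects (a widely used pattern, not something the Dockerfile above does) is to copy package.json before the rest of the source, so the npm install layer is only rebuilt when the dependencies change:

```dockerfile
FROM node:latest
WORKDIR /app

# Copy only the dependency manifests first; this layer, and the
# npm install layer below, stay cached until package.json changes
COPY package*.json ./
RUN npm install

# Copy the rest of the source; editing app code no longer re-runs npm install
COPY . .

EXPOSE 3000
CMD ["npm", "start"]
```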
6. Start a Container#
After obtaining the image, you can use docker run to start a container.
docker run -p 3000:3000 -d my-blog
-p maps a port on the container to a port on the local host.
This is necessary to access the web application in the container from the host.
The first 3000 is the port on the local host, and the second 3000 is the port on the container.
-d runs the container in the background, so the container's output will not be displayed directly on the console.
Now you can access it by opening localhost:3000 on the local host.
7. Docker Desktop Operations#
In Docker Desktop, you can see all the output of this application running in the background.
Container Panel#
The Container panel displays all the currently running containers and allows you to stop, restart, or delete them.
You can also open a shell inside a container to debug it.
The following are the corresponding command-line instructions.
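As a rough sketch, the command-line equivalents are as follows (replace <container> with a container name or ID shown by docker ps):

```shell
docker ps                        # list running containers
docker logs <container>          # view a container's output
docker stop <container>          # stop a container
docker restart <container>       # restart it
docker rm <container>            # delete it
docker exec -it <container> sh   # open a shell inside the container
```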
However, please note that when you delete a container, all modifications and newly added data will be lost.
It is similar to deleting a virtual machine, where all the data inside will be destroyed.
If you need to keep the data in the container, you can use Docker's volume feature.
8. Volume#
A volume can be thought of as a folder shared between the local host and different containers.
For example, if you modify the data in a volume in one container, it will be reflected in other containers as well.
1. Create a Volume#
docker volume create my-blog-data
You can use this command to create a volume.
2. Specify a Volume#
When starting a container, use the -v parameter to specify a volume and mount it to a path in the container.
docker run -p 3000:3000 -v my-blog-data:/etc/blogData my-blog
Here, my-blog-data is mounted to the /etc/blogData path.
Any data written to this path is stored in the volume and survives even after the container is deleted.
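A few related commands for managing volumes (standard Docker CLI, shown here as a quick reference):

```shell
docker volume ls                    # list all volumes
docker volume inspect my-blog-data  # show details, including where the data lives on the host
docker volume rm my-blog-data       # delete the volume and the data it holds
```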
9. Multiple Containers Collaboration#
In practical use, multiple containers are often used.
For example, one container is used to run a web application, and another container is used to run a database.
This can achieve effective separation of data and application logic.
Docker Compose can achieve this.
10. Docker Compose#
1. Create a docker-compose.yml File#
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
  db:
    image: "mysql"
    environment:
      MYSQL_DATABASE: blog
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - my-blog-data:/var/lib/mysql
volumes:
  my-blog-data:
In this file, use services to define multiple containers.
For example, a web container runs a web application, and a MySQL container runs a database.
In the database container, you can add environment variables for the database name and connection password.
You can also specify a volume to permanently store data.
2. Run Docker Compose#
Use the docker compose up -d command to start all the containers.
The -d flag indicates that all containers should run in the background.
3. Stop and Delete All Containers#
docker compose down
However, newly created volumes need to be manually deleted in Docker Desktop or by adding the --volumes parameter to the command.
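For example, to stop all the containers and remove the named volumes in one step:

```shell
docker compose down --volumes
```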
All of these operations can also be done in Docker Desktop.