1, Building images using Dockerfile
A Dockerfile is a script composed of a series of instructions and parameters. These instructions are applied on top of a base image to ultimately create a new image.
- For developers: it provides a completely consistent development environment for the whole team;
- For testers: they can take the image built during development directly, or build a new one from the Dockerfile, and start working immediately;
- For operation and maintenance personnel: applications can be migrated seamlessly during deployment.
1.1 common commands
Command | Effect |
---|---|
FROM image_name:tag | Defines which base image the build starts from |
MAINTAINER user_name | Declares the creator of the image |
ENV key=value | Sets environment variables (multiple entries can be written) |
RUN command | Executes a command during the build; the core part of a Dockerfile (multiple entries can be written) |
ADD source_dir/file dest_dir/file | Copies a file from the host into the image; a recognized compressed archive is automatically extracted after copying |
COPY source_dir/file dest_dir/file | Similar to ADD, but compressed files are not extracted |
WORKDIR path_dir | Sets the working directory |
CMD ["executable", "param1", "param2"...] | The command executed when the container starts |
Refer to the official documentation for more commands: Portal
1.2 creating images using scripts
Steps:
- Create a directory:
  mkdir -p /usr/local/dockerdjango
- Create a Dockerfile:
  vim Dockerfile    # the file name must be Dockerfile (or dockerfile)
- Write the Dockerfile instructions:

```dockerfile
# Dependent base image name and tag
FROM python:3.6
# Specify image creator information
MAINTAINER hugh
# Copy the requirements file into the image
ADD ./requirement.txt /home/
# Execute commands (a pip mirror can be specified with -i if needed)
RUN pip install -r /home/requirement.txt
RUN mkdir /usr/local/django_pro
# Assign the working directory
WORKDIR /usr/local/django_pro
# Start uwsgi when the container runs
CMD ["uwsgi", "--ini", "/home/django_pro/uwsgi.ini"]
```
- Execute the build command:
  docker build -t image_name:tag .    # note the space and the trailing dot (the build context)
- Check that the image was created:
  docker images
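Once the image is built (note that the build above expects requirement.txt to sit next to the Dockerfile, since ADD copies it into /home/), a quick way to verify it is to start a container from it. A minimal sketch, assuming the image was tagged django_pro:v1 and that uwsgi listens on port 8000 inside the container (both names are illustrative, not taken from the text):

```bash
# start a container from the freshly built image and map the hypothetical uwsgi port
docker run -d --name django_pro_test -p 8000:8000 django_pro:v1

# confirm the container is running and check its startup output
docker ps
docker logs django_pro_test
```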
2, Upload image to Docker Hub
The steps are as follows:
- Log in to Docker Hub and execute:
  docker login
  Then enter your user name and password. If you do not have an account yet, register one through the browser first.
- Tag the image:
  The image name must take the form "docker_hub_username/image_name".
  docker tag image_name_or_ID:tag username/image_name:tag
- Push to the remote repository:
  docker push username/image_name:tag
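A worked example of the two steps above with hypothetical names (a local image django_pro:v1 and a Docker Hub account called hugh):

```bash
# log in once with your Docker Hub credentials
docker login

# retag the local image under the Docker Hub user name
docker tag django_pro:v1 hugh/django_pro:v1

# push it to the remote repository
docker push hugh/django_pro:v1

# it can then be pulled from any machine with
docker pull hugh/django_pro:v1
```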
3, Build a private registry
The steps are as follows:
- Pull the private registry image:
  docker pull registry
- Configure the registry address:
  vim /etc/docker/daemon.json
  Add your own IP address, then save and exit:
  {
    "insecure-registries": ["ip_address:5000"]
  }
- Restart docker:
  systemctl restart docker
- Start the registry container:
  docker run -d -v /opt/registry:/var/lib/registry -p 5000:5000 --restart=always --name registry registry:2
  By default, the registry service saves uploaded images in /var/lib/registry inside the container. Mounting the host directory /opt/registry onto that path keeps the uploaded images in /opt/registry on the host instead.
- Check it in a browser: http://ip_address:5000/v2/_catalog
- Tag an image so that it points to our registry:
  docker tag image_name_or_ID ip_address:5000/myfirstimage
- Push the image:
  docker push ip_address:5000/myfirstimage
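To verify the push from the command line instead of the browser, the registry's HTTP API can be queried directly. A minimal sketch, keeping the same ip_address placeholder used above:

```bash
# list all repositories stored in the private registry
curl http://ip_address:5000/v2/_catalog

# list the tags available for the image pushed above
curl http://ip_address:5000/v2/myfirstimage/tags/list
```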
4, Installation and use of docker compose
Docker Compose is a Docker tool for defining and running complex applications. An application built on Docker containers usually consists of multiple containers, and Docker Compose makes such container applications much easier to manage.
Docker Compose manages multiple Docker containers through a single configuration file in which all containers are defined as services. The Compose command line is then used to start, stop and restart the application, individual services within it, and all containers those services depend on. It is very well suited to development scenarios that combine several containers.
4.1 install docker compose for Ubuntu
- Installation:

```bash
curl -L https://get.daocloud.io/docker/compose/releases/download/v2.2.3/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
- Verify the installation:
  docker-compose -v
4.2 use examples
Use Docker Compose to build a web application based on the Python Flask framework that runs in Docker and uses a Redis database.
**Note:** make sure Docker Engine is already installed; you do not need to install Python or Redis, because both are provided by Docker images.
Steps:
- Create a project directory with the following structure:

```text
└── compose_test
    ├── docker
    │   └── docker-compose.yml
    ├── Dockerfile
    └── src
        ├── app.py
        └── requirements.txt
```
- Create a Python Flask application in the compose_test/src/ directory, with the file name app.py:

```python
from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host='redis', port=6379)

@app.route('/')
def hello():
    count = redis.incr('hits')
    return 'Hello World! I have been seen {} times.\n'.format(count)

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
```
- Create the Python dependency file compose_test/src/requirements.txt:

```text
flask
redis
```
- Create the Dockerfile of the container (compose_test/Dockerfile):

```dockerfile
FROM python:3.9
COPY src/ /opt/src
WORKDIR /opt/src
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```
- Define the Docker Compose script in the compose_test/docker/ directory, with the file name docker-compose.yml:

```yaml
version: '3'
services:
  web:
    build: ../
    ports:
      - "5000:5000"
    volumes:
      - ../src:/opt/src
  redis:
    image: redis:3.0.7
```
This compose file defines two services, the web and redis containers:
- web container:
  - Builds an image from the Dockerfile in the parent directory of the directory containing the docker-compose.yml file (i.e. compose_test/Dockerfile).
  - Maps port 5000 in the container to port 5000 on the host.
  - Mounts the host project directory compose_test/src to /opt/src in the container.
- redis container:
  - Uses the official redis image, version 3.0.7, pulled from Docker Hub.
- Build and run the application with Compose:

```bash
docker-compose up
# If you are not using the default docker-compose.yml file name, specify it with the -f parameter:
# docker-compose -f <file_name>.yml up -d
```

Then open http://0.0.0.0:5000/ in a browser to view the running application, or check it from the command line as sketched below.
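The same endpoint can also be checked with curl; the message comes from the hello() view in app.py above, and the counter grows with every request because it is stored in Redis:

```bash
# each request increments the Redis counter
curl http://0.0.0.0:5000/
# -> Hello World! I have been seen 1 times.
curl http://0.0.0.0:5000/
# -> Hello World! I have been seen 2 times.
```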
- Common configuration parameters (a combined sketch follows this list):
  - version: specifies the version of the Compose file format.
  - services: defines the individual services and their startup parameters.
  - volumes: declares or creates data volumes shared by multiple services.
  - networks: defines networks used jointly by multiple services.
  - configs: declares configuration files to be used by the services.
  - secrets: declares secret keys and password files to be used by the services.
  - x-***: custom extension fields, mainly used to reuse the same configuration.
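A sketch of how these top-level keys fit together in one file; the service, volume, network and extension names are made up for illustration:

```yaml
version: '3.4'    # extension fields (x-*) require Compose file format 3.4 or later

# reusable fragment defined with a custom x-* key and a YAML anchor
x-common-env: &common-env
  TZ: Asia/Shanghai

services:
  web:
    build: ../
    environment: *common-env
    networks:
      - backend
    volumes:
      - app_data:/opt/data

# named volume shared by the services above
volumes:
  app_data:

# user-defined network that the services attach to
networks:
  backend:
```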
- Common commands:

```bash
docker-compose up                        # automatically looks for a docker-compose.yml file under the current path
docker-compose -f <file_name>.yml up     # specify a compose file explicitly
docker-compose up -d                     # run in the background; usually we watch the log output instead of using this
docker-compose stop                      # stop; containers and images are not deleted
docker-compose down                      # stop and delete the associated containers
docker-compose start                     # start the containers managed by the yml file
docker-compose ps                        # list the running containers
docker-compose images                    # list the images of the containers managed by docker-compose
docker-compose exec <service> /bin/bash  # enter a container; <service> is the service name written in the yml file
```
For more configuration items, refer to the official documents: Portal