How to build Python applications into Docker containers

Cloud native is becoming more and more popular, and containerization is now essential. As a developer, you will sooner or later deal with container-related operations, the most basic of which is building your own application into a Docker image and managing it. This article creates a simple web backend demo project based on the Starlette framework and the common trio of Nginx, MySQL, and Redis, and walks step by step through writing the container files and operating them.

1. Create project

The first step is to create a Python backend project containing three interfaces: one to test whether the service is up, one to test the MySQL call, and one to test the Redis call.

First, create the project directory and install the dependencies:

➜  example_python git:(master) mkdir python_on_docker          
➜  example_python git:(master) cd python_on_docker 
# A project should run in its own virtual environment. If you are familiar with it, poetry is recommended for managing the Python environment; venv is used here for demonstration convenience
➜  python_on_docker git:(master) python3.7 -m venv venv
➜  python_on_docker git:(master) source venv/bin/activate 
➜  python_on_docker git:(master) touch __init__.py  # Make sure the project has an __init__.py file

# After activation the prompt gains a (venv) prefix, showing the virtual environment is active; now install the dependencies
(venv) ➜  python_on_docker git:(master) pip install starlette aiomysql aioredis uvicorn
(venv) ➜  python_on_docker git:(master) pip install cryptography  # aiomysql requires this module to provide encryption algorithms
# Export the installed dependencies into a requirements file
(venv) ➜  python_on_docker git:(master) python -m pip freeze > requirements.txt

Then create the project's main file, example.py, which provides the API service, including the three interfaces mentioned above. The example code is as follows (source code):

from typing import Optional, Tuple

import aiomysql
import aioredis
from starlette.applications import Starlette
from starlette.config import Config
from starlette.requests import Request
from starlette.responses import JSONResponse, PlainTextResponse
from starlette.routing import Route


config: Config = Config(".env")
mysql_pool: Optional[aiomysql.Pool] = None
redis: Optional[aioredis.Redis] = None


async def on_start_up():
    """connect MySQL and Redis"""
    global mysql_pool
    global redis

    mysql_pool = await aiomysql.create_pool(
        host=config("MYSQL_HOST"),
        port=config("MYSQL_PORT", cast=int),
        user=config("MYSQL_USER"),
        password=config("MYSQL_PW"),
        db=config("MYSQL_DB"),
    )
    redis = aioredis.Redis(
        await aioredis.create_redis_pool(
            config("REDIS_URL"),
            minsize=config("REDIS_POOL_MINSIZE", cast=int),
            maxsize=config("REDIS_POOL_MAXSIZE", cast=int),
            encoding=config("REDIS_ENCODING")
        )
    )


async def on_shutdown():
    """close MySQL and Redis connect"""
    await mysql_pool.wait_closed()
    await redis.wait_closed()


def hello_word(request: Request) -> PlainTextResponse:
    """Test interface call interface"""
    return PlainTextResponse("Hello Word!")


async def mysql_demo(request: Request) -> JSONResponse:
    """test MySQL Call interface"""
    count: int = int(request.query_params.get("count", "0"))
    async with mysql_pool.acquire() as conn:
        async with conn.cursor() as cur:
            await cur.execute("SELECT %s;", (count, ))
            mysql_result_tuple: Tuple[int] = await cur.fetchone()
    return JSONResponse({"result": mysql_result_tuple})


async def redis_demo(request: Request) -> JSONResponse:
    """test Redis Call interface"""
    count: int = int(request.query_params.get("count", "0"))
    key: str = request.query_params.get("key")
    if not key:
        return JSONResponse("key is empty")
    result: int = await redis.incrby(key, count)
    await redis.expire(key, 60)
    return JSONResponse({"count": result})


app: Starlette = Starlette(
    routes=[
        Route('/', hello_word),
        Route('/mysql', mysql_demo),
        Route('/redis', redis_demo)
    ],
    on_startup=[on_start_up],
    on_shutdown=[on_shutdown]
)

With the project file created, next create the supporting configuration file .env (Starlette's Config will load .env automatically):

# Change the configuration according to your own configuration information
MYSQL_DB="mysql"
MYSQL_HOST="127.0.0.1"
MYSQL_PORT="3306"
MYSQL_USER="root"
MYSQL_PW=""

REDIS_URL="redis://localhost"
REDIS_POOL_MINSIZE=1
REDIS_POOL_MAXSIZE=10
REDIS_ENCODING="utf-8"

So far the directory contains only the example.py main file, the .env configuration file, and requirements.txt. Now start the application and check whether it can start normally (remember to adjust the MySQL and Redis settings to your own; for now it is assumed that MySQL and Redis are already installed locally):

# Using python -m uvicorn avoids calling an external uvicorn by mistake
python -m uvicorn example:app
# The following is the terminal output
INFO:     Started server process [4616]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)

The terminal output shows that our service has started and is listening on port 8000 of the local machine. Next, test whether the interfaces work normally:

➜  curl http://127.0.0.1:8000
Hello Word!
➜  curl http://127.0.0.1:8000/mysql
{"result":[0]}
➜  curl http://127.0.0.1:8000/mysql\?count\=10
{"result":[10]}
➜  curl http://127.0.0.1:8000/mysql\?count\=50
{"result":[50]}
➜  curl http://127.0.0.1:8000/redis\?key\=test
{"count":0}
➜  curl http://127.0.0.1:8000/redis\?key\=test\&count\=2
{"count":2}
➜  curl http://127.0.0.1:8000/redis\?key\=test\&count\=2
{"count":4}
➜  curl http://127.0.0.1:8000/redis\?key\=test\&count\=2
{"count":6}
➜  curl http://127.0.0.1:8000/redis\?key\=test\&count\=2
{"count":8}    

The output shows that the test results are correct and the interfaces work normally. This completes the first part; next we begin the journey of deploying a Python web application with Docker.

2. Create an image for the project and run it

So far we haven't touched Docker; it starts here. Before using it, make sure Docker is installed. Each platform has its own installation method, and the official documentation covers this in detail, so it is not repeated here.

Creating an image in Docker is very simple: you just tell Docker how to build it through a Dockerfile. A Dockerfile serves two purposes: it describes the current image, and it guides Docker through containerizing the application (creating an image that contains it). A Dockerfile enables seamless switching between development and deployment, and it also helps newcomers get familiar with a project quickly, since it gives a clear, accurate description of the application and its dependencies and is easy to read. Treat this file with as much care as your code, and keep it in source control.

After a brief understanding, start to write the corresponding Dockerfile. The file is as follows (source code):

# Pull the Python base image; use python -V to check which version you are running
FROM python:3.7.4-alpine
# Sets the maintainer of the current image
LABEL maintainer="so1nxxxx@gmail.com"
# Set working directory
WORKDIR /data/app
# COPY local files into the image; each COPY creates a new image layer
COPY . .

# Setting environment variables
# Do not generate pyc files
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Install dependencies. Since hiredis (used by aioredis) and cryptography need to be compiled, these build packages must be added here, which makes the image large
RUN apk add --update gcc musl-dev python3-dev libffi-dev openssl-dev build-base && pip install --upgrade pip && pip install -r requirements.txt

# Document the port the service listens on (note: uvicorn actually defaults to port 8000; EXPOSE is documentation only)
EXPOSE 8080

# Commands to run
CMD ["uvicorn", "--host", "0.0.0.0", "example:app"]

Although there are quite a few instructions in the Dockerfile, they are not complex, and once you understand them you will find them highly readable. Here is what each instruction does:

  • FROM: the first line of every Dockerfile is the FROM instruction. The image it specifies becomes the base layer of the current image, and the layers generated by subsequent instructions are added on top of it. Here we use the official python:x.x-alpine image, which is based on Alpine Linux. Alpine Linux is tiny yet complete. Referencing an official base image with FROM is a good habit, because official images usually follow best practices and help you avoid known problems; choosing a relatively small base image also avoids some potential issues

  • LABEL: specifies the maintainer of the current image in the Dockerfile. Each LABEL is a key-value pair; by adding LABELs you can attach custom metadata to an image

  • WORKDIR: indicates the working directory in the image

  • COPY: copies application-related files from the build context into the current image, creating a new image layer to store them

  • ENV: sets the environment variable when the image runs

  • RUN: executes a command. RUN creates a new image layer on top of the alpine base image specified by FROM to store whatever it installs

  • EXPOSE: documents the port the container listens on; it has no functional effect and does not actually publish the port

  • CMD: command to run at startup

These instructions are simple enough. However, the comments on COPY and RUN mentioned that a new image layer is created. In a Docker image, every extra layer means more storage used, slower transfers, and more difficulty in use, so we generally want the built image to be as small as possible and don't want a few instructions to bloat it. How do we tell which instructions add a new image layer?

A basic rule of thumb: if an instruction adds new files or programs to the image, it creates a new image layer; if it only tells Docker how to build or run the application, it merely adds image metadata.

In addition, how the Dockerfile is written affects the number of layers. For example, each RUN instruction adds roughly one image layer, so we can pack multiple commands into a single RUN by chaining them with && or breaking lines with a backslash (\), thereby reducing the number of layers generated.

Sometimes, however, it pays to split RUN instructions, because Docker has a build cache: if the layer a RUN would build is already in the cache, Docker reuses it directly and the build is faster. When everything is merged into a single RUN, it becomes almost impossible to hit the cache (note that once Docker reaches the first instruction that misses the cache, none of the following instructions are built from the cache).
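One common way to exploit this cache is to copy requirements.txt and install the dependencies before copying the rest of the source, so that a code-only change does not invalidate the expensive install layer. A minimal sketch of that idea, using the same base image and packages as above:

FROM python:3.7.4-alpine
WORKDIR /data/app

# Copy only the dependency list first; this layer rarely changes
COPY requirements.txt .
# Rebuilt only when requirements.txt changes, so it is usually served from cache
RUN apk add --update gcc musl-dev python3-dev libffi-dev openssl-dev build-base \
    && pip install --upgrade pip \
    && pip install -r requirements.txt

# Copy the application code last; editing code invalidates only the layers from here on
COPY . .

CMD ["uvicorn", "--host", "0.0.0.0", "example:app"]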

Now that the Dockerfile has been written, let's check our directory before building the image:

(venv) ➜  python_on_docker git:(master) ls -a
Dockerfile  example.py  requirements.txt __pycache__  venv .env

The directory contains __pycache__ and venv, which are needed during development but should not end up in the image. Docker offers a mechanism similar to .gitignore: list the files to ignore in a .dockerignore file, and Docker will skip them when building the image:

__pycache__/
venv/

Now that the Dockerfile and .dockerignore files are ready, you can build your own image with the following command:

# -t is followed by a tag of your choosing; . denotes the current directory (the build context)
➜  docker image build -t app:latest .

After building the image, you can view it:

# View current images
➜  version_1 git:(master) docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
app                 latest              3351ee7a79ac        About a minute ago   435MB

The image was created successfully, but 435MB for such a simple image is unreasonable. You can check the image's layers, size, and configuration with docker image inspect xxx to dig into why it is so large (the output is long, so it is not shown here). You can also review the build with the history command to find out which step makes the image grow:

➜  version_1 git:(master) docker history app
IMAGE               CREATED              CREATED BY                                      SIZE                COMMENT
3351ee7a79ac        About a minute ago   /bin/sh -c #(nop)  CMD ["uvicorn" "--host" "...   0B                  
f7fedcb216b0        About a minute ago   /bin/sh -c #(nop)  EXPOSE 8080                  0B                  
190fd056b947        About a minute ago   /bin/sh -c apk add --update gcc musl-dev pyt...   313MB               
66901ff8b9d4        5 minutes ago        /bin/sh -c #(nop)  ENV PYTHONUNBUFFERED=1       0B                  
7e85b2fa504e        5 minutes ago        /bin/sh -c #(nop)  ENV PYTHONDONTWRITEBYTECO...   0B                  
a2714bff8c12        5 minutes ago        /bin/sh -c #(nop) COPY dir:26dace857b0be9773...   23.7MB              
dc4d69bd98e5        5 minutes ago        /bin/sh -c #(nop) WORKDIR /data/app             0B                  
db1533598434        5 minutes ago        /bin/sh -c #(nop)  LABEL maintainer=so1nxxxx...   0B                  
f309434dea3a        16 months ago        /bin/sh -c #(nop)  CMD ["python3"]              0B                  
<missing>           16 months ago        /bin/sh -c set -ex;   wget -O get-pip.py "$P...   6.24MB              
<missing>           16 months ago        /bin/sh -c #(nop)  ENV PYTHON_GET_PIP_SHA256...   0B                  
<missing>           16 months ago        /bin/sh -c #(nop)  ENV PYTHON_GET_PIP_URL=ht...   0B                  
<missing>           16 months ago        /bin/sh -c #(nop)  ENV PYTHON_PIP_VERSION=19...   0B                  
<missing>           17 months ago        /bin/sh -c cd /usr/local/bin  && ln -s idle3...   32B                 
<missing>           17 months ago        /bin/sh -c set -ex  && apk add --no-cache --...   86.4MB              
<missing>           17 months ago        /bin/sh -c #(nop)  ENV PYTHON_VERSION=3.7.4     0B                  
<missing>           17 months ago        /bin/sh -c #(nop)  ENV GPG_KEY=0D96DF4D4110E...   0B                  
<missing>           17 months ago        /bin/sh -c apk add --no-cache ca-certificates   551kB               
<missing>           17 months ago        /bin/sh -c #(nop)  ENV LANG=C.UTF-8             0B                  
<missing>           17 months ago        /bin/sh -c #(nop)  ENV PATH=/usr/local/bin:/...   0B                  
<missing>           17 months ago        /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B                  
<missing>           17 months ago        /bin/sh -c #(nop) ADD file:fe64057fbb83dccb9...   5.58MB
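As an aside, if all you want from docker image inspect is the layer count, a Go template filter spares you the full JSON dump (a small sketch using Docker's standard inspect fields):

# Print only the number of filesystem layers in the image
docker image inspect app:latest --format '{{len .RootFS.Layers}}'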

In the history output above, the third entry, the apk add step, takes up by far the most space (313MB): before installing the Python libraries we want, we have to install some build dependencies, and they are huge. Fortunately, Docker provides multi-stage builds (there is also the builder pattern, but it is inferior to multi-stage builds). A multi-stage build is a Dockerfile containing multiple FROM instructions; each FROM starts a new build stage, and each new stage can easily copy artifacts produced by a previous one. In other words, you can first build a heavyweight image that handles the dependencies, then build the image you actually want on top of its output, and keep only the final image.

So we can rewrite the Dockerfile: the first build stage handles dependency installation, the second stage builds a new image using the first stage's output, and only the second stage's image is kept. The example file is as follows (source code):

#####################
# Build the dependencies #
#####################

# Stage 1
# Alias this stage as builder
FROM python:3.7.4-alpine as builder
# Set working directory
WORKDIR /data/app
# Copy local dependencies
COPY . .

# Setting environment variables
# Do not generate pyc files
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Install build dependencies and compile wheels into /data/python_wheels
RUN apk add --update gcc musl-dev python3-dev libffi-dev openssl-dev build-base && pip install --upgrade pip && pip wheel --no-cache-dir --no-deps --wheel-dir /data/python_wheels -r requirements.txt

#####################
# The production image #
#####################

# Stage 2
# Pull the Python base image; use python -V to check which version you are running
FROM python:3.7.4-alpine
# Sets the maintainer of the current image
LABEL maintainer="so1nxxxx@gmail.com"
# Set working directory
WORKDIR /data/app
# Copy local dependencies
COPY . .

# Copy the wheels built in the first stage into the corresponding path in this image
COPY --from=builder /data/python_wheels /data/python_wheels
# Installing python dependencies through wheels
RUN pip install --no-cache /data/python_wheels/*

# Document the port the service listens on (uvicorn still defaults to 8000; EXPOSE is documentation only)
EXPOSE 8080

# Commands to run
CMD ["uvicorn", "--host", "0.0.0.0", "example:app"]

The Dockerfile above has two FROM instructions, and each FROM marks a separate build stage. The first stage installs the build toolchain for the current Python environment and then generates Python wheel files from requirements.txt, placing them in /data/python_wheels. The second FROM starts the same way as before, up to the COPY statements. Here the COPY --from=builder instruction copies only what production needs from the image built in the previous stage, leaving the build-only dependencies behind: concretely, it copies /data/python_wheels from the builder stage into /data/python_wheels in the current stage. The RUN statement also changes: since the dependencies were already compiled in the first stage, they can be installed directly from the wheels. Everything after that is unchanged. With the Dockerfile rewritten, build the image again:

# -t is followed by a tag of your choosing; . denotes the current directory (the build context)
➜  docker image build -t app_1:latest .

View the built image after the build is completed:

➜  version_2 git:(master) docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
app_1               latest              a71a4a7db157        7 seconds ago       116MB
app                 latest              3351ee7a79ac        9 minutes ago       435MB

➜  version_2 git:(master) docker history app_1
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
a71a4a7db157        43 seconds ago      /bin/sh -c #(nop)  CMD ["uvicorn" "--host" "...   0B                  
d4d38b71a1ba        43 seconds ago      /bin/sh -c #(nop)  EXPOSE 8080                  0B                  
5fb10c8afea8        43 seconds ago      /bin/sh -c pip install --no-cache /data/pyth...   15.3MB              
e454bbe54adb        46 seconds ago      /bin/sh -c #(nop) COPY dir:ff6195d46738a79a1...   2.13MB              
d70a8a552490        46 seconds ago      /bin/sh -c #(nop) COPY dir:fbe9ac8ac1636d3d7...   3.63kB              
dc4d69bd98e5        14 minutes ago      /bin/sh -c #(nop) WORKDIR /data/app             0B                  
db1533598434        14 minutes ago      /bin/sh -c #(nop)  LABEL maintainer=so1nxxxx...   0B                  
f309434dea3a        16 months ago       /bin/sh -c #(nop)  CMD ["python3"]              0B                  
<missing>           16 months ago       /bin/sh -c set -ex;   wget -O get-pip.py "$P...   6.24MB              
<missing>           16 months ago       /bin/sh -c #(nop)  ENV PYTHON_GET_PIP_SHA256...   0B                  
<missing>           16 months ago       /bin/sh -c #(nop)  ENV PYTHON_GET_PIP_URL=ht...   0B                  
<missing>           16 months ago       /bin/sh -c #(nop)  ENV PYTHON_PIP_VERSION=19...   0B                  
<missing>           17 months ago       /bin/sh -c cd /usr/local/bin  && ln -s idle3...   32B                 
<missing>           17 months ago       /bin/sh -c set -ex  && apk add --no-cache --...   86.4MB              
<missing>           17 months ago       /bin/sh -c #(nop)  ENV PYTHON_VERSION=3.7.4     0B                  
<missing>           17 months ago       /bin/sh -c #(nop)  ENV GPG_KEY=0D96DF4D4110E...   0B                  
<missing>           17 months ago       /bin/sh -c apk add --no-cache ca-certificates   551kB               
<missing>           17 months ago       /bin/sh -c #(nop)  ENV LANG=C.UTF-8             0B                  
<missing>           17 months ago       /bin/sh -c #(nop)  ENV PATH=/usr/local/bin:/...   0B                  
<missing>           17 months ago       /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B                  
<missing>           17 months ago       /bin/sh -c #(nop) ADD file:fe64057fbb83dccb9...   5.58MB 

You can see the image has shrunk considerably, to roughly a quarter of the original size. Perfect! If you want it even smaller, you can add the --squash option to the build command so that Docker merges all image layers into one. This has a downside, though: a squashed image cannot share layers with other images, so pushing and pulling it becomes much more expensive.
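For reference, the flag is passed at build time; note that --squash requires the Docker daemon to have experimental features enabled:

# Merge all image layers into one while building (experimental daemon feature)
docker image build --squash -t app_1:latest .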

The image is finally ready. Start a container to see the built image in action:

# Run the container. --name can be anything you like; -p binds a host port (first) to a container port (second); the last argument is the image name (or ID, which differs for every build)
➜  docker container run -d --name docker_app_1 -p 8000:8000 app_1

After running the start command, check the container with the ps command:

# The container fails shortly after startup. This is expected: the configuration uses 127.0.0.1, and since MySQL and Redis are not installed inside the container, the connection cannot succeed
# View container status
➜  version_2 git:(master) docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                     PORTS               NAMES
3070cf77c951        app_1               "uvicorn --host 0.0...."   7 seconds ago       Exited (0) 6 seconds ago                       docker_app_1

The output shows that the container exited, but not why. The run log will tell us:

# View run log
➜  version_2 git:(master) docker logs -f -t --tail 10 docker_app_1 
2021-02-07T09:01:16.062239955Z     await pool._fill_free_pool(False)
2021-02-07T09:01:16.062241993Z   File "/usr/local/lib/python3.7/site-packages/aiomysql/pool.py", line 168, in _fill_free_pool
2021-02-07T09:01:16.062250734Z     **self._conn_kwargs)
2021-02-07T09:01:16.062253106Z   File "/usr/local/lib/python3.7/site-packages/aiomysql/connection.py", line 75, in _connect
2021-02-07T09:01:16.062255305Z     await conn._connect()
2021-02-07T09:01:16.062257318Z   File "/usr/local/lib/python3.7/site-packages/aiomysql/connection.py", line 523, in _connect
2021-02-07T09:01:16.062259455Z     self._host) from e
2021-02-07T09:01:16.062275244Z pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on '127.0.0.1'")
2021-02-07T09:01:16.062277250Z 
2021-02-07T09:01:16.062279127Z ERROR:    Application startup failed. Exiting.

The error log shows that startup failed because the application could not connect to 127.0.0.1, yet MySQL is installed on this machine. Why can't it connect? The reason is that a network mode is chosen when a Docker container runs, and there are three to configure: bridge, host, and none:

  • bridge is the default mode: Docker gives the container its own network stack attached to the docker0 bridge on the host (you can see docker0 with ifconfig), so 127.0.0.1 inside the container refers to the container itself, not to the host, and the host's MySQL and Redis are unreachable at that address (a quick check is sketched after this list)
  • host means sharing the host's network: the container and the host use the same network stack, so applications in the container use the network exactly as usual, and this mode also has the best network performance. If you hit a network bottleneck with bridge mode, or the application is demanding about network latency and concurrency, remember to switch to host mode

  • none means no network at all; applications in the container cannot reach the network
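The sketch below shows two quick ways to look at the default bridge from a Linux Docker host (exact commands depend on your distribution):

# The bridge appears on the host as the docker0 interface
ip addr show docker0
# Inspect the default bridge network and the containers attached to it
docker network inspect bridge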

Knowing Docker's network modes, we can fix the connection failure by switching the network mode to host. Concretely, start the container without the -p 8000:8000 option and with --net=host instead (if the old container still exists, remember to delete it first, or it will keep taking up space):

# Start container
➜  version_2 git:(master) docker container run -d --name docker_app_1 --net=host app_1  
cd1ea057cdb6ec6ee3917d13f9c3c55db2a2949e409716d1dbb86f34bb1356e5

# Check the startup log, normal!
➜  version_2 git:(master) docker logs -f -t --tail 10 docker_app_1
2021-02-07T09:06:35.403888447Z INFO:     Started server process [1]
2021-02-07T09:06:35.403903761Z INFO:     Waiting for application startup.
2021-02-07T09:06:35.437776480Z INFO:     Application startup complete.
2021-02-07T09:06:35.438466743Z INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

# Call the interface to confirm the service started successfully
➜  curl 127.0.0.1:8000
Hello Word!

At this point we have finally built the Python web application into an image and started it successfully.

3. Deploy and manage multi container applications in single engine mode

Now that we know how to build containers, we can prepare to become YAML engineers. So far we have only built an image for the Python application and connected it to the local MySQL and Redis services. The next step is to containerize MySQL and Redis as well, but configuring and running each service through its own Dockerfile by hand would be far too cumbersome.

If there were a Dockerfile-like file describing how to set up all three images, we could install the three services on a server with a single command. In the Docker world, Docker Compose provides exactly this.

If you have used Ansible, you know it has playbook YAML configuration files: with a playbook stored on the control host, you can drive any operation on other machines, such as installing applications on them. Docker Compose is somewhat similar, except that it manages multiple Docker containers on a single machine.

Docker Compose describes the whole application in one declarative configuration file, so deployment takes a single command, and after a successful deployment the application's entire life cycle can be managed with a handful of simple commands. The configuration file can also live in version control. On most platforms the tool is installed together with Docker.

Next comes the hands-on part. Compared with the setup above, this example adds an Nginx service, and Nginx needs a configuration file, so it gets a Dockerfile of its own.

First create an nginx folder in the project directory, and write the configuration file nginx.conf inside it (source code):

upstream app_server {
    server app:8000;
}

server {

    listen 80;

    location / {
        proxy_pass http://app_server;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

}

You may notice a strange entry, app:8000, in the upstream block of this configuration. Ignore it for now and continue with Nginx's Dockerfile (source code):

FROM nginx:1.19.0-alpine

# Remove Nginx's default configuration file
RUN rm /etc/nginx/conf.d/default.conf
# Use the configuration file we wrote
COPY nginx.conf /etc/nginx/conf.d

Now that Nginx's container files are ready, we can write the docker-compose.yml file. Suppose our single server needs the Python web service plus Nginx, Redis, and MySQL (source code):

# version must be specified and always sits on the first line of the file. It defines the Compose file format (mainly API) version; the latest version is recommended.
version: "3.5"
# Used to define different application services
services:
    redis:
        # Start an independent container named redis based on the redis:alpine image.
        image: "redis:alpine"
        # Attach the service to the specified network, which must already exist or be defined under the top-level networks key.
        networks:
            - local-net
        # Map port 6379 in the container (target) to port 63790 on the host (published).
        ports:
            - target: 6379
              published: 63790
    mysql:
        image: mysql
        command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci #Set utf8 character set
        restart: always
        networks:
            local-net:
        environment:
            # Set the user name and password required by mysql through environment variables
            MYSQL_DATABASE: 'test'
            MYSQL_USER: 'root'
            MYSQL_PASSWORD: ''
            MYSQL_ROOT_PASSWORD: ''
            MYSQL_ALLOW_EMPTY_PASSWORD: 'true'

        # Map port 3306 in the container (target) to port 33060 on the host (published).
        ports:
            - target: 3306
              published: 33060
        # Attach a volume the MySQL container can store data in
        volumes:
            - type: volume
              source: local-vol
              target: /example_volumes
    app:
        # Build a new image from the Dockerfile in the current directory (.); that image is used to start this service's container.
        build: .
        # The startup command already lives in our Dockerfile, so it is commented out here
        # command:
        # Map port 8000 in the container (target) to port 8000 on the host (published).
        ports:
            - target: 8000
              published: 8000
        networks:
            - local-net
        # Mount to local volume
        volumes:
            - type: volume
              source: local-vol
              target: /example_volumes
        # Declare dependencies on the services above; this service is started only after they are started (started, not necessarily ready)
        depends_on:
            - mysql
            - redis
    nginx:
        build: ./nginx

        # Map port 80 in the container (target) to port 8001 on the host (published).
        networks:
            - local-net
        ports:
            - target: 80
              published: 8001
        depends_on:
            - app

# networks tells Docker to create new networks. By default Docker Compose creates bridge networks: single-host networks that can only connect containers on the same host. Other network types can be selected via the driver attribute.
networks:
    local-net:
        driver: bridge

volumes:
    local-vol:

After creating the file, if you start it directly you will find that, although the containers map their ports to the host, every service is attached to the local-net network, which is a bridge network: inside a container, 127.0.0.1 refers to that container only, so you cannot reach other containers through it. Instead, applications on local-net reach each other through their service names: accessing a service name connects you directly to the corresponding container (many tutorials never mention this, and it is a huge pitfall). This is exactly why nginx.conf has the app:8000 upstream: it means Nginx connects to port 8000 of our Python application, whose service name is app. Likewise, we need to change the hosts in our .env configuration file:

# The hostname mysql resolves to the mysql service over the shared network, playing the role 127.0.0.1 played before
MYSQL_DB="mysql"
MYSQL_HOST="mysql"
MYSQL_PORT="3306"
MYSQL_USER="root"
MYSQL_PW=""

REDIS_URL="redis://redis"
REDIS_POOL_MINSIZE=1
REDIS_POOL_MAXSIZE=10
REDIS_ENCODING="utf-8"

You also need to change the app's Dockerfile startup command to wait 5 seconds before starting, to keep the app from running before the other services are up:

CMD sh -c 'sleep 5 && uvicorn --host 0.0.0.0 example:app'
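A fixed sleep is a blunt instrument, because depends_on only waits for the containers to start, not for the services inside them to be ready. An alternative sketch (a hypothetical helper, not part of the original project) retries the MySQL connection inside on_start_up instead:

import asyncio

import aiomysql
import pymysql


async def create_mysql_pool_with_retry(retries: int = 10, delay: float = 1.0, **kwargs) -> aiomysql.Pool:
    """Keep trying to create the pool until MySQL accepts connections."""
    for _ in range(retries):
        try:
            return await aiomysql.create_pool(**kwargs)
        except pymysql.err.OperationalError:
            # MySQL is not ready yet; wait a moment and retry
            await asyncio.sleep(delay)
    raise RuntimeError(f"MySQL not reachable after {retries} attempts")

on_start_up would then call create_mysql_pool_with_retry(...) with the same keyword arguments it currently passes to aiomysql.create_pool.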

Everything is ready. Finally, start the container group with docker-compose up -d (-d means run in the background), then inspect the result with a few commands:

# Use the docker-compose ps command to view the application's status. The output shows each container's name, the command running inside it, its current state, and the network ports it listens on.
➜  version_3 git:(master) docker-compose ps           
      Name                     Command               State                 Ports               
-----------------------------------------------------------------------------------------------
version_3_app_1     uvicorn --host 0.0.0.0 exa ...   Up      0.0.0.0:8000->8000/tcp, 8080/tcp  
version_3_mysql_1   docker-entrypoint.sh mysql ...   Up      0.0.0.0:33060->3306/tcp, 33060/tcp
version_3_nginx_1   /docker-entrypoint.sh ngin ...   Up      0.0.0.0:8001->80/tcp              
version_3_redis_1   docker-entrypoint.sh redis ...   Up      0.0.0.0:63790->6379/tcp 


# Use the docker-compose top command to list the processes running in each service (container).
# The PID shown is the process ID on the Docker host (not inside the container).
➜  version_3 git:(master) docker-compose top
version_3_app_1
UID    PID    PPID   C   STIME   TTY     TIME                                       CMD                                  
-------------------------------------------------------------------------------------------------------------------------
root   1802   1786   0   16:05   ?     00:00:00   /usr/local/bin/python /usr/local/bin/uvicorn --host 0.0.0.0 example:app

version_3_mysql_1
  UID      PID    PPID   C   STIME   TTY     TIME                                         CMD                                    
---------------------------------------------------------------------------------------------------------------------------------
deepin-+   1047   1018   0   16:05   ?     00:00:00   mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci

version_3_nginx_1
  UID      PID    PPID   C   STIME   TTY     TIME                        CMD                    
------------------------------------------------------------------------------------------------
root       1355   1339   0   16:05   ?     00:00:00   nginx: master process nginx -g daemon off;
systemd+   1467   1355   0   16:05   ?     00:00:00   nginx: worker process                     
systemd+   1468   1355   0   16:05   ?     00:00:00   nginx: worker process                     
systemd+   1469   1355   0   16:05   ?     00:00:00   nginx: worker process                     
systemd+   1470   1355   0   16:05   ?     00:00:00   nginx: worker process                     
systemd+   1471   1355   0   16:05   ?     00:00:00   nginx: worker process                     
systemd+   1472   1355   0   16:05   ?     00:00:00   nginx: worker process                     
systemd+   1473   1355   0   16:05   ?     00:00:00   nginx: worker process                     
systemd+   1474   1355   0   16:05   ?     00:00:00   nginx: worker process                     
systemd+   1475   1355   0   16:05   ?     00:00:00   nginx: worker process                     
systemd+   1476   1355   0   16:05   ?     00:00:00   nginx: worker process                     
systemd+   1477   1355   0   16:05   ?     00:00:00   nginx: worker process                     
systemd+   1478   1355   0   16:05   ?     00:00:00   nginx: worker process                     

version_3_redis_1
  UID      PID    PPID   C   STIME   TTY     TIME         CMD     
------------------------------------------------------------------
deepin-+   1048   1014   0   16:05   ?     00:00:00   redis-server

# View current network
# View network details docker network inspect version_3_local-net  
➜  version_3 git:(master) docker network ls
NETWORK ID          NAME                  DRIVER              SCOPE
b39273f15fb3        bridge                bridge              local
23ef7eb0fba0        host                  host                local
ab8439cd985c        none                  null                local
5bcd17ecd747        version_3_local-net   bridge              local

# Viewing volumes
# View details docker volume inspect version_3_local-vol 
➜  version_3 git:(master) docker volume ls
DRIVER              VOLUME NAME
local               version_3_local-vol

The commands above show that the services are running normally. To stop at this point, use docker-compose stop, which stops the application without deleting resources. A stopped application's containers can then be removed with docker-compose rm (volumes and images are not deleted). Alternatively, docker-compose down stops the application and removes its containers and networks in one go, leaving only the images, volumes, and source code.
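To summarize the life cycle, the commands look like this (run from the directory containing docker-compose.yml):

docker-compose stop    # stop the containers but keep containers, networks and volumes
docker-compose rm      # remove the stopped containers
docker-compose down    # stop and remove containers and networks in one step
docker-compose up -d   # bring the whole application back up in the background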

4. Summary

This concludes the introduction to containerizing Python applications, but it is only a simple beginning; later we will gradually learn how to orchestrate containerized applications across multiple machines.
