Finally, containerd can be used as smoothly as Docker

Author: michelangela Yang

Engineers who care about technology tend to be perfectionists, especially in the cloud native world. Although Kubernetes defined the Container Runtime Interface (CRI) standard, in the early days Docker was the only container runtime available, and Docker did not implement this standard, so Kubernetes opened a back door for Docker (the dockershim) and spent a lot of energy maintaining that adaptation layer. Later, when more container runtime options became available, Kubernetes had to reconsider whether it should keep adapting to Docker, because every kubelet update had to account for Docker compatibility.

That is how standards work: I set the standard; if you are compatible, we can play together, and if you are not, goodbye. It is like the bottom line in a relationship: you can be heavy, you can be plain, you can be imperfect, but if you cannot accept the standard, it will not work out. So Kubernetes kicked Docker out of the group chat.

In the end, Kubernetes chose containerd. Today, containerd has become an industry-standard container runtime: simple, robust and portable.

Shortcomings of the existing CLIs

Although containerd can now do everything Docker can, it still has one very obvious defect: its CLI is not friendly enough. Unlike Docker and Podman, it cannot start a container with one simple command, and neither of its two CLI tools, ctr and crictl, can meet this simple requirement that most people have. Surely I am not expected to deploy a Kubernetes cluster just to test containers locally?

The design of ctr is not very human-friendly. For example, it lacks the following Docker-equivalent features:

  • docker run -p <PORT>
  • docker run --restart=always
  • Pulling images with the credentials file ~/.docker/config.json
  • docker logs

There is also another CLI tool called crictl, which is just as unfriendly as ctr.
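For comparison, here is a rough sketch of what running nginx looks like with ctr (illustrative only): the image has to be pulled explicitly with a fully qualified reference first, and there is no equivalent of -p or --restart at all:

🐳  → ctr images pull docker.io/library/nginx:alpine
🐳  → ctr run -d docker.io/library/nginx:alpine nginx

The container does start, but publishing ports or restarting it automatically requires extra tooling outside ctr.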

To solve this pain point, the containerd project officially launched a new CLI called nerdctl. Using nerdctl feels just as smooth as using Docker, for example:

🐳  → nerdctl run -d -p 8080:80 --name=nginx --restart=always nginx
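The familiar day-to-day subcommands are also there. A quick sketch (assuming the nginx container above is running; the exact set of subcommands depends on your nerdctl version):

🐳  → nerdctl logs -f nginx
🐳  → nerdctl exec -it nginx sh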

Is nerdctl just a replica of Docker?

The goal of nerdctl is not simply to copy Docker's functionality. It also implements features that Docker does not yet have, such as lazy image pulling (lazy-pulling) and image encryption (imgcrypt).

For the lazy image pulling feature, please refer to this article: Containerd uses the Stargz Snapshotter to lazily pull images.

These features are expected to land in Docker eventually, but that may take months or even years, because Docker currently uses only a small part of the containerd subsystems. Docker may refactor its code to use the complete containerd in the future, but we have not seen any substantial progress yet. So the containerd community decided to create a new CLI to make containerd more user-friendly.
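As a small illustration of how lazy pulling is exposed, nerdctl accepts a global --snapshotter flag. A hedged sketch, assuming the stargz snapshotter is already configured in containerd; the image reference below is just a placeholder for an image converted to the eStargz format:

🐳  → nerdctl --snapshotter=stargz run -it --rm registry.example.com/library/nginx:alpine-esgz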

Trying out nerdctl

You can download the latest binaries from the nerdctl release page. Each version has two distributions:

  • nerdctl-<VERSION>-linux-amd64.tar.gz: contains only nerdctl.
  • nerdctl-full-<VERSION>-linux-amd64.tar.gz: contains nerdctl and its dependencies (containerd, runc, CNI plugins, ...).

If you have already installed containerd, the first distribution is enough; otherwise choose the full version.
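For reference, a minimal install sketch of the full distribution (the version number below is only an example, and the tarball layout may differ between releases; check the release page first):

🐳  → wget https://github.com/containerd/nerdctl/releases/download/v0.7.3/nerdctl-full-0.7.3-linux-amd64.tar.gz
🐳  → tar -C /usr/local -zxvf nerdctl-full-0.7.3-linux-amd64.tar.gz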

After installing nerdctl, you can use it to run containers:

🐳  → nerdctl run -d -p 80:80 --name=nginx --restart=always nginx:alpine

docker.io/library/nginx:alpine:                                                   resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:d33e9e24389d7d8b90fe2bcc2dd1bc09b4d235e916ba9d5d9a71cf52e340edb6:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:c1f4e1974241c3f9ddb2866b2bf8e7afbceaa42dae82aabda5e946d03f054ed2: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:bfad9487e175364fd6315426feeee34bf5e6f516d2fe6a4e9b592315e330828e:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:29d3f97df6fd99736a0676f9e57e53dfa412cf60b26d95008df9da8197f1f366:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:9aae54b2144e5b2b00c610f8805128f4f86822e1e52d3714c463744a431f0f4a:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:a5f0adaddd5456b7c5a3753ab541b5fad750f0a6499a15f63571b964eb3e2616:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:5df810e1c460527fe400cdd2cab62228f5fb3da0f2dce86a6a6c354972f19b6e:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:345aee38d3533398e0eb7118e4323a8970f7615136f2170dfb2b0278bbd9099d:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:e6a4c36d7c0e358e5fc02ccdac645b18b85dcfec09d4fb5f8cbdc187ce9467a0:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 5.7 s                                                                    total:  9.4 Mi (1.6 MiB/s)
27b55e0b18b10c4c8f34e3ba709614e7b1760a75db061d2ce5183e8b1101ce09

To view the created container:

🐳  → nerdctl ps
CONTAINER ID    IMAGE                             COMMAND                   CREATED          STATUS    PORTS                 NAMES
3b5faa266a43    docker.io/library/nginx:alpine    "/docker-entrypoint...."    3 minutes ago    Up        0.0.0.0:80->80/tcp    nginx
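You can verify that the port mapping works by requesting the published port on the host (assuming curl is installed):

🐳  → curl -I http://127.0.0.1:80

An HTTP 200 response from nginx shows that traffic is being forwarded into the container.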

Like Docker, nerdctl also has a network subcommand:

🐳  → nerdctl network ls
NETWORK ID    NAME               FILE
0             bridge
              k8s-pod-network    /etc/cni/net.d/10-calico.conflist
              host
              none

Let's take a look at the default bridge configuration:

🐳  → nerdctl network inspect bridge
[
    {
        "CNI": {
            "cniVersion": "0.4.0",
            "name": "bridge",
            "nerdctlID": 0,
            "plugins": [
                {
                    "type": "bridge",
                    "bridge": "nerdctl0",
                    "isGateway": true,
                    "ipMasq": true,
                    "hairpinMode": true,
                    "ipam": {
                        "type": "host-local",
                        "routes": [
                            {
                                "dst": "0.0.0.0/0"
                            }
                        ],
                        "ranges": [
                            [
                                {
                                    "subnet": "10.4.0.0/24",
                                    "gateway": "10.4.0.1"
                                }
                            ]
                        ]
                    }
                },
                {
                    "type": "portmap",
                    "capabilities": {
                        "portMappings": true
                    }
                },
                {
                    "type": "firewall"
                },
                {
                    "type": "tuning"
                }
            ]
        },
        "NerdctlID": 0
    }
]

You can see that the network subcommand is actually driven by CNI under the hood, which works quite differently from Docker's network subcommand.
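Newer versions of nerdctl can also create custom networks, which again simply generates a CNI configuration behind the scenes. A hedged sketch (flag availability depends on your nerdctl version, and the subnet is just an example):

🐳  → nerdctl network create --subnet 10.5.0.0/24 web
🐳  → nerdctl run -d --net web --name nginx-web nginx:alpine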

Building an image

nerdctl can also be combined with BuildKit to build container images. First download the BuildKit binaries:

🐳  → wget https://github.com/moby/buildkit/releases/download/v0.8.2/buildkit-v0.8.2.linux-amd64.tar.gz

Extract it under /usr/local so the binaries end up on $PATH:

🐳  → tar -C /usr/local/ -zxvf buildkit-v0.8.2.linux-amd64.tar.gz

Write a systemd unit file:

# /etc/systemd/system/buildkit.service
[Unit]
Description=BuildKit
Documentation=https://github.com/moby/buildkit

[Service]
ExecStart=/usr/local/bin/buildkitd --oci-worker=false --containerd-worker=true

[Install]
WantedBy=multi-user.target

Enable the buildkit service and start it now (it will also start automatically on boot):

🐳  → systemctl enable --now buildkit.service
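To confirm that buildkitd is running, check the service status; if buildctl from the same tarball is on your PATH, you can also list the BuildKit workers as a quick sanity check (optional):

🐳  → systemctl status buildkit.service
🐳  → buildctl debug workers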

Below, we use the KubeSphere project as an example to show how to build images with nerdctl.

First clone the official KubeSphere repository:

🐳  → git clone --depth=1 https://github.com.cnpmjs.org/kubesphere/kubesphere.git

Enter the repository directory and compile the binary:

🐳  → cd kubesphere
🐳  → make ks-apiserver

Copy the binary to the directory containing the Dockerfile:

🐳  → cp bin/cmd/ks-apiserver build/ks-apiserver

Enter the Dockerfile directory and modify the Dockerfile:

# Copyright 2020 The KubeSphere Authors. All rights reserved.
# Use of this source code is governed by an Apache license
# that can be found in the LICENSE file.
FROM alpine:3.11

ARG HELM_VERSION=v3.5.2

RUN apk add --no-cache ca-certificates
# install helm
RUN wget https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz && \
    tar xvf helm-${HELM_VERSION}-linux-amd64.tar.gz && \
    rm helm-${HELM_VERSION}-linux-amd64.tar.gz && \
    mv linux-amd64/helm /usr/bin/ && \
    rm -rf linux-amd64
# To speed up building process, we copy binary directly from make
# result instead of building it again, so make sure you run the
# following command first before building docker image
#   make ks-apiserver
#
COPY  ks-apiserver /usr/local/bin/

EXPOSE 9090
CMD ["sh"]

Build image:

🐳  → cd build/ks-apiserver

🐳  → nerdctl build -t ks-apiserver .
[+] Building 22.6s (9/9) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                                                0.0s
 => => transferring dockerfile: 812B                                                                                                                                                0.0s
 => [internal] load .dockerignore                                                                                                                                                   0.0s
 => => transferring context: 2B                                                                                                                                                     0.0s
 => [internal] load metadata for docker.io/library/alpine:3.11                                                                                                                      1.0s
 => [1/4] FROM docker.io/library/alpine:3.11@sha256:bf5fa774f08a9ed2cb301e522b769d43d48124315a4ec50eae3228d03b9dc558                                                                7.9s
 => => resolve docker.io/library/alpine:3.11@sha256:bf5fa774f08a9ed2cb301e522b769d43d48124315a4ec50eae3228d03b9dc558                                                                0.0s
 => => sha256:9b794450f7b6db7c944ba1f4161edb68cb535052fe7db8ac06e613516c4a658d 2.10MB / 2.82MB                                                                                     21.4s
 => => extracting sha256:9b794450f7b6db7c944ba1f4161edb68cb535052fe7db8ac06e613516c4a658d                                                                                           0.1s
 => [internal] load build context                                                                                                                                                   1.0s
 => => transferring context: 115.87MB                                                                                                                                               1.0s
 => [2/4] RUN apk add --no-cache ca-certificates                                                                                                                                    2.7s
 => [3/4] RUN wget https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz &&     tar xvf helm-v3.5.2-linux-amd64.tar.gz &&     rm helm-v3.5.2-linux-amd64.tar.gz &&     mv linux-amd64  4.7s
 => [4/4] COPY  ks-apiserver /usr/local/bin/                                                                                                                                        0.2s
 => exporting to oci image format                                                                                                                                                   5.9s
 => => exporting layers                                                                                                                                                             4.6s
 => => exporting manifest sha256:d7eb2a90496678d11ac5c363b7743ffe2b8e23e7071b94556a5e3231f50f5a6e                                                                                   0.0s
 => => exporting config sha256:8eb6a5187ce958e76c8d37e18221d88f25b48dd7e6672021d0fce21bb071f284                                                                                     0.0s
 => => sending tarball                                                                                                                                                              1.3s
unpacking docker.io/library/ks-apiserver:latest (sha256:d7eb2a90496678d11ac5c363b7743ffe2b8e23e7071b94556a5e3231f50f5a6e)...done
unpacking overlayfs@sha256:d7eb2a90496678d11ac5c363b7743ffe2b8e23e7071b94556a5e3231f50f5a6e (sha256:d7eb2a90496678d11ac5c363b7743ffe2b8e23e7071b94556a5e3231f50f5a6e)...done

View the built image:

🐳  → nerdctl images
REPOSITORY                                                   TAG       IMAGE ID        CREATED          SIZE
alpine                                                       3.11      bf5fa774f08a    3 seconds ago    2.7 MiB
ks-apiserver                                                 latest    d7eb2a904966    6 minutes ago    57.7 MiB
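If you want to push the freshly built image to a registry, nerdctl also provides login, tag and push (depending on your nerdctl version); a hedged sketch, where registry.example.com is just a placeholder for your own registry:

🐳  → nerdctl login registry.example.com
🐳  → nerdctl tag ks-apiserver:latest registry.example.com/kubesphere/ks-apiserver:latest
🐳  → nerdctl push registry.example.com/kubesphere/ks-apiserver:latest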

For more usage of nerdctl, please refer to the README in the official repository.

Summary

Looking at industry trends, Docker has drifted further and further away from the Kubernetes community, and CRI-compatible container runtimes represented by containerd will be favored by Kubernetes. However, using containerd on its own is still inconvenient in many ways; for example, creating and managing containers through its CLIs is awkward. The CLI tool nerdctl fills this usability gap and lets you happily use containerd on a single machine.
