Common Docker commands

Image commands

1. List all local images

docker images
docker image ls

2. Tag a local image

docker tag [image_name]:[tag] [new_image_name]:[new_tag]

3. View image details

docker inspect [image_name]:[tag]

4. View the image history

# Show the contents of each image layer
docker history [image_name]:[tag]

5. Search for an image

# Search for shared images in the remote registry
# docker search [TERM]
docker search nginx

Delete a local image

# An image cannot be deleted while a container created from it exists; delete the container first
docker rmi [image_name]:[tag]

Create an image

  1. Create an image from an existing container
# docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
# -m commit message
# -a author
docker commit -m "Add a new file" -a "author tony" d6cfbdd9258c test:0.1
  2. Import from a local template
docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]
# Take ubuntu as an example
cat ubuntu-16.04-x86_64-minimal.tar.gz | docker import - ubuntu:16.04
  3. Build from a Dockerfile
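As a sketch of the Dockerfile route, here is a minimal, illustrative Dockerfile (its contents and the test tag are assumptions, not from the text above) written out and checked from a shell:

```shell
#!/bin/sh
set -e
# Write a minimal, illustrative Dockerfile into a scratch directory
mkdir -p /tmp/dockerfile-demo && cd /tmp/dockerfile-demo
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]
EOF
# Build it (requires a running Docker daemon):
#   docker build -t test:0.2 .
grep -c '^FROM' Dockerfile   # every Dockerfile starts from a base image
```

The `docker build` step is left as a comment since it needs a Docker daemon; the rest is plain shell.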

Save an image

Save an image to a local file

# Save the local ubuntu:14.04 image to the file ubuntu_14.04.tar, which can then be shared with others
docker save -o ubuntu_14.04.tar ubuntu:14.04

Load image

Use docker load to import an exported tar file into the local image library

docker load --input ubuntu_14.04.tar
 or
docker load < ubuntu_14.04.tar

Upload image

Push the image to a registry; Docker Hub is the default

# docker push NAME[:TAG] | [REGISTRY_HOST[:REGISTRY_PORT]/]NAME[:TAG]
# 1. Tag the image
docker tag test:latest user/test:latest
# 2. Upload image
docker push user/test:latest

Container commands

Create container

docker create -it ubuntu:latest
# -i keep STDIN open (default false)
# -t allocate a pseudo-TTY (default false)
# See the official documentation for more options

Start container

Start a container that has been created

docker start [containerID or NAMES]

Create and start a new container

Equivalent to running docker create followed by docker start

docker run [OPTIONS] IMAGE [COMMAND]
# -d run the container in the background as a daemon

Terminate container

Terminate a running container

docker stop [containerID or NAMES]

Enter container

# Recommended method
docker exec -it [containerID or NAMES] [command]

Delete container

Delete a container in the stopped or exited state

docker rm [containerID or NAMES]

View container information

docker inspect [containerID or NAMES]

Export container

Export a created container to a file, whether or not it is running

# docker export [-o|--output file] CONTAINER
docker export -o test_for_run.tar ce5

Import container

Import the exported file as an image with the docker import command
You can load an image archive into the local image library with docker load, or import a container snapshot with docker import
The difference is that docker import discards all history and metadata, while docker load keeps everything and the result is therefore larger

docker import test_for_run.tar test/ubuntu:v1.0

Registry

Log in

# User name and password are required
docker login

Search for images in the registry

docker search [Image name]
# Depending on whether they are officially provided, images are divided into base (root) images and user images
# Official images are named with a single word and no username; user images are prefixed with a username
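The naming convention above can be checked mechanically. A small sketch (plain POSIX shell, no Docker needed; the reference string is illustrative) that splits an image reference into user, name, and tag:

```shell
#!/bin/sh
set -e
# Split an image reference of the form [user/]name[:tag]
# using POSIX parameter expansion
ref="user/test:latest"

repo="${ref%%:*}"        # strip the :tag suffix -> user/test
tag="${ref#"$repo"}"     # remainder, e.g. :latest (empty if no tag)
tag="${tag#:}"           # drop the leading colon -> latest

case "$repo" in
  */*) user="${repo%%/*}"; name="${repo#*/}" ;;  # user image
  *)   user="";            name="$repo"      ;;  # official image
esac

echo "user=${user:-<official>} name=$name tag=${tag:-latest}"
```

An official image such as `nginx` would take the second branch and report no username.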

Run a local private registry

Build a local private registry using the officially provided registry image

docker run -d -p 5000:5000 -v /opt/data/registry:/tmp/registry registry

Container data management

Mount a data volume

Characteristics:

  1. Data volumes can be shared and reused between containers, making data transfer between containers efficient and convenient
  2. Changes to data in a volume take effect immediately, whether made from the container or from the host
  3. Updating a data volume does not affect the image, decoupling the application from its data
  4. A volume persists until no container uses it, at which point it can be safely removed
# Mount the host's /src/webapp directory at /opt/webapp in the container
docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
# By default the data volume is mounted read-write; append :ro to make it read-only
docker run -d -P --name web -v /src/webapp:/opt/webapp:ro training/webapp python app.py

Data volume container

To share continuously updated data among multiple containers, the simplest way is to use a data volume container.

# Create a data volume container dbdata
docker run -it -v /dbdata --name dbdata ubuntu
# Check the /dbdata directory inside the container
# Other containers can then use --volumes-from to mount the data volume from the dbdata container
# Next, create two containers, db1 and db2, mounting the data volume from the dbdata container
docker run -it --volumes-from dbdata --name db1 ubuntu
docker run -it --volumes-from dbdata --name db2 ubuntu
# db1 and db2 now mount the same data volume at /dbdata; writes to this directory by any of the three containers are visible to the others
# Use --volumes-from multiple times to mount data volumes from multiple containers; you can also mount volumes from a container that itself mounted them from another
docker run -d --name db3 --volumes-from db1 training/postgres
# If you delete a mounted container, the data volume is not automatically deleted

Delete a container volume

docker rm -v [containerID or NAMES]

Migrating data using container volumes

docker run --volumes-from dbdata -v $(pwd):/backup --name worker ubuntu tar cvf /backup/backup.tar /dbdata
# 1. Create a worker container from the ubuntu image
# 2. Use --volumes-from dbdata to let the worker container mount the dbdata container's data volume (i.e. /dbdata)
# 3. Use -v $(pwd):/backup to mount the local current directory at /backup in the worker container
# 4. Once the worker container starts, tar cvf /backup/backup.tar /dbdata backs up the contents of /dbdata to /backup/backup.tar inside the container

Recovery

Restore data into a container

# 1. Create a container with data volume dbdata2
docker run -v /dbdata --name dbdata2 ubuntu /bin/bash
# 2. Create a new container, mount the volume from dbdata2, and use tar to extract the backup file into the mounted volume
docker run --volumes-from dbdata2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
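The backup and restore pair above is just a tar round trip through a shared directory. A minimal local sketch of the same pattern (plain tar in a temporary directory, no Docker required; directory and file names are illustrative):

```shell
#!/bin/sh
set -e
# Simulate the /dbdata volume and the /backup bind mount with local directories
work=$(mktemp -d)
cd "$work"
mkdir dbdata backup restore
echo "hello" > dbdata/file.txt

# Backup step: what `tar cvf /backup/backup.tar /dbdata` does in the worker container
tar cf backup/backup.tar dbdata

# Restore step: what `tar xvf /backup/backup.tar` does in the busybox container
(cd restore && tar xf ../backup/backup.tar)

cat restore/dbdata/file.txt   # prints "hello": the backed-up data is restored intact
```

In the Docker version, the host's current directory plays the role of `backup/` here, and the data volume plays the role of `dbdata/`.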

Port mapping and container interconnection

Access a container via port mapping

# -P (uppercase) lets Docker randomly map a port in the range 49000~49900 to an exposed port of the container
docker run -d -P training/webapp python app.py
# -p (lowercase) maps a specified container port to a specified host port; multiple mappings are allowed
docker run -d -p 5000:5000 -p 3000:80 training/webapp python app.py
# Map to a specific address
docker run -d -p 127.0.0.1:5000:5000 training/webapp python app.py
# Map an arbitrary local port to container port 5000; the local host assigns a port automatically
docker run -d -p 127.0.0.1::5000 training/webapp python app.py

View port mapping configuration

docker port nostalgic_morse 5000

Container linking for convenient mutual access

Linking by custom container name

# 1. Set the container name (below, the container is named db)
# Container names are unique; to reuse the name, first run docker rm to delete the existing db container
docker run -d -P --name db training/postgres
# 2. Connect by container name (below, the web container links to the db container by name)
docker run -d -P --name web --link db:db training/webapp python app.py
# Syntax: --link name:alias, where name is the container to link to and alias is the alias for the link
# After linking, docker ps shows db, web/db in the db container's NAMES column, meaning the web container is allowed to access the db container

View a container's link information

Method 1: environment variables

# Use the env command to view environment variables
docker run --rm --name web2 --link db:db training/webapp env
# Environment variables whose names begin with DB_ are used by the web container to connect to the db container

Method 2: the /etc/hosts file
Docker adds the linked container's host information to the /etc/hosts file of the linking (web) container

# 1. Enter the web container to view the host information
cat /etc/hosts
# 2. Check the db container's IP to obtain the connection address


Added by cafrow on Thu, 27 Jan 2022 01:38:19 +0200