Docker container network configuration

Creating network namespaces in the Linux kernel

ip netns command
The ip netns command lets you perform various operations on Network Namespaces. It comes from the iproute package, which most systems install by default; if yours does not, install it yourself.

Note: root (sudo) privileges are required when the ip netns command modifies network configuration.

You can view the command's help information with ip netns help:

[root@localhost ~]# ip netns help
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id
[root@localhost ~]# 

By default there are no named Network Namespaces on a Linux system, so the ip netns list command returns nothing.

Create a Network Namespace

Create a namespace named ns0:

[root@localhost ~]# ip netns list
[root@localhost ~]# ip netns add ns0
[root@localhost ~]# ip netns list
ns0

The newly created Network Namespace appears in the /var/run/netns/ directory. If a namespace with the same name already exists, the command reports the error Cannot create namespace file "/var/run/netns/ns0": File exists.

[root@localhost ~]# ls /var/run/netns/
ns0

[root@localhost ~]# ip netns add ns0
Cannot create namespace file "/var/run/netns/ns0": File exists
[root@localhost ~]#
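
If you need to start over, the delete subcommand shown in the help removes a namespace. A quick sketch (ns0 is recreated afterwards so the following steps still apply):

[root@localhost ~]# ip netns delete ns0
[root@localhost ~]# ip netns list
[root@localhost ~]# ip netns add ns0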

Operating on a Network Namespace

The ip command provides the ip netns exec subcommand, which executes a command inside the given Network Namespace.

View the network interfaces of the newly created Network Namespace:

[root@localhost ~]# ip netns exec ns0 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

You can see that a lo loopback interface is created by default in the new Network Namespace, but it starts out down. If you try to ping the loopback address at this point, you are told the network is unreachable:

[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable

Bring up the lo loopback interface

[root@localhost ~]# ip netns exec ns0 ip link set lo up
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.082 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.083 ms
^Z
[1]+  Stopped               ip netns exec ns0 ping 127.0.0.1

Transferring devices

We can transfer devices (such as veth) between different network namespaces. Since a device can only belong to one Network Namespace at a time, it is no longer visible in its original Network Namespace after the transfer.

veth devices are transferable, while many other devices (lo, vxlan, ppp, bridge, and so on) are not.
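
One way to check whether a device type can leave its namespace is the netns-local feature flag reported by ethtool (assuming ethtool is installed); on [fixed] means the device is pinned to its current namespace:

[root@localhost ~]# ethtool -k lo | grep netns-local
netns-local: on [fixed]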

veth pair

veth pair is short for Virtual Ethernet Pair: a pair of ports in which every packet entering one end comes out of the other, and vice versa.

veth pairs exist so that different network namespaces can communicate directly; a pair can be used to connect two network namespaces to each other.
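
If you would rather choose the interface names yourself than accept the automatic veth0/veth1 naming used below, both ends can be named explicitly when the pair is created (veth-a and veth-b are purely illustrative names); deleting either end removes the whole pair:

[root@localhost ~]# ip link add veth-a type veth peer name veth-b
[root@localhost ~]# ip link del veth-a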

Create veth pair

[root@localhost ~]# ip link add type veth
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f4:76:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.136.129/24 brd 192.168.136.255 scope global noprefixroute dynamic ens33
       valid_lft 1355sec preferred_lft 1355sec
    inet6 fe80::a060:b3c1:5019:bcc9/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:af:a9:22:cc brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 8e:45:34:89:5f:6a brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a6:66:2e:52:1a:89 brd ff:ff:ff:ff:ff:ff

You can see that a veth pair has been added to the system, connecting the two virtual interfaces veth0 and veth1. At this point both ends of the pair are still down.

Enable communication between network namespaces

Next, we use the veth pair to enable communication between two different network namespaces. We already created a Network Namespace named ns0; now create another one named ns1:

[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns list
ns1
ns0

Add veth0 to ns0 and veth1 to ns1

[root@localhost ~]# ip link set veth0 netns ns0
[root@localhost ~]# ip link set veth1 netns ns1

Assign IP addresses to the veth devices and bring them up:

[root@localhost ~]# ip netns exec ns0 ip link set veth0 up
[root@localhost ~]# ip netns exec ns0 ip addr add 192.168.136.2/24 dev veth0
[root@localhost ~]# ip netns exec ns1 ip link set lo up
[root@localhost ~]# ip netns exec ns1 ip link set veth1 up
[root@localhost ~]# ip netns exec ns1 ip addr add 192.168.136.4/24 dev veth1

View the status of the veth pair:

[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 8e:45:34:89:5f:6a brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 192.168.136.2/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::8c45:34ff:fe89:5f6a/64 scope link 
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a6:66:2e:52:1a:89 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.136.4/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a466:2eff:fe52:1a89/64 scope link 
       valid_lft forever preferred_lft forever

As the output shows, the veth pair is up and each veth device has its IP address. Now try to reach the address in ns0 from ns1:

[root@localhost ~]# ip netns exec ns1 ping 192.168.136.2
PING 192.168.136.2 (192.168.136.2) 56(84) bytes of data.
64 bytes from 192.168.136.2: icmp_seq=1 ttl=64 time=0.067 ms
64 bytes from 192.168.136.2: icmp_seq=2 ttl=64 time=0.176 ms
64 bytes from 192.168.136.2: icmp_seq=3 ttl=64 time=0.108 ms

The veth pair successfully provides network connectivity between the two network namespaces.
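
The reverse direction works the same way; as a sanity check, ping veth1's address from ns0 (the replies mirror the output above):

[root@localhost ~]# ip netns exec ns0 ping -c 3 192.168.136.4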

Renaming a veth device

[root@localhost ~]# ip netns exec ns0 ip link set veth0 down
[root@localhost ~]# ip netns exec ns0 ip link set dev veth0 name ens0
[root@localhost ~]# ip netns exec ns0 ip link set ens0 up
[root@localhost ~]# ip netns exec ns0 ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
20: ens0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a2:c6:0a:25:d1:ce brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.168.136.2/24 scope global ens0
       valid_lft forever preferred_lft forever
    inet6 fe80::a0c6:aff:fe25:d1ce/64 scope link 
       valid_lft forever preferred_lft forever

Four network mode configurations

bridge mode

[root@localhost ~]# docker run -it --name yyy --rm busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
3aab638df1a9: Pull complete 
Digest: sha256:52817dece4cfe26f581c834d27a8e1bcc82194f914afe6d50afad5a101234ef1
Status: Downloaded newer image for busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
       
Passing --network bridge when creating a container has the same effect as omitting the --network option entirely, since bridge is the default mode:

[root@localhost ~]# docker run -it --name yy --network bridge --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

none mode

[root@localhost ~]# docker run -it --name y --network none --rm busybox      
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever

container mode

Start the first container

[root@localhost ~]# docker run -it --name y1 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

Start the second container

[root@localhost ~]# docker run -it --name y2 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

You can see that the IP address of the container y2 is 172.17.0.3, different from that of the first container; the two do not share a network. If we instead start the second container in container mode, y2 gets the same IP as the y1 container, i.e. they share the network stack but not the file system:

[root@localhost ~]# docker run -it --name y2 --rm --network container:y1 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Create a directory in the y1 container:

/ # mkdir /tmp/yy
/ # ls tmp/
yy

Check the /tmp directory in the y2 container and you will find no such directory, because the file systems are isolated; only the network is shared.
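
A quick check in y2's shell confirms the isolation:

/ # ls /tmp/
/ #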

host mode

Specify host mode directly when starting the container:

[root@localhost ~]# docker run -it --name y2 --rm --network host busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
    link/ether 00:0c:29:b8:d0:10 brd ff:ff:ff:ff:ff:ff
    inet 192.168.136.129/24 brd 192.168.136.255 scope global dynamic noprefixroute ens33
       valid_lft 957sec preferred_lft 957sec
    inet6 fe80::cb48:11a1:1d08:cdf0/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
    link/ether 02:42:3e:f8:13:5c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:3eff:fef8:135c/64 scope link 
       valid_lft forever preferred_lft forever
9: vethe3306e3@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 
    link/ether 62:34:0a:6f:0f:fb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6034:aff:fe6f:ffb/64 scope link 
       valid_lft forever preferred_lft forever

At this point, if we start an HTTP site in this container, it can be reached in a browser directly at the host's IP.
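
For example, busybox ships a small httpd applet. A minimal sketch, assuming port 80 is free on the host and using placeholder paths and page content:

/ # mkdir -p /var/www
/ # echo "hello from busybox" > /var/www/index.html
/ # httpd -h /var/www

Then from any machine that can reach the host:

[root@localhost ~]# curl http://192.168.136.129/
hello from busybox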

Common container operations

View container hostname

[root@localhost ~]# docker run -it --name zz --network bridge --rm busybox
/ # hostname
7ceecd6ea510

Inject a hostname when the container starts

[root@localhost ~]# docker run -it --hostname yyds --rm busybox
/ # hostname
yyds
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      yyds
/ # 
/ # cat /etc/resolv.conf 
# Generated by NetworkManager
search localdomain
nameserver 192.168.136.2
/ # ping www.baidu.com
PING www.baidu.com (14.215.177.39): 56 data bytes
64 bytes from 14.215.177.39: seq=1 ttl=127 time=26.661 ms
64 bytes from 14.215.177.39: seq=5 ttl=127 time=27.254 ms

Manually specify the DNS to be used by the container

[root@localhost ~]# docker run -it --rm --hostname node1 --dns 114.114.114.114 busybox
/ # hostname 
node1
/ # 
/ # cat /etc/resolv.conf 
search localdomain
nameserver 114.114.114.114
/ # cat /etc/hosts 
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      node1

Manually inject hostname-to-IP mappings into the /etc/hosts file

[root@localhost ~]# docker run -it --rm --hostname node1 --dns 114.114.114.114 --add-host node2:172.17.0.3 --add-host node3:172.17.0.3 busybox
/ # cat /etc/hosts 
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      node2
172.17.0.3      node3
172.17.0.3      node1
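
The injected entries resolve immediately inside the container via /etc/hosts; for example (replies omitted):

/ # ping -c 2 node2
PING node2 (172.17.0.3): 56 data bytes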

Opening container ports

docker run has a -p option that maps an application port in the container to the host, so that external hosts can reach the containerized application through a port on the host.

The -p option can be used multiple times, and the container port it exposes must be a port the containerized application actually listens on.

Usage formats of the -p option:

  • -p <containerPort>
    • Maps the specified container port to a dynamic port on all addresses of the host
  • -p <hostPort>:<containerPort>
    • Maps the container port to the specified host port
  • -p <ip>::<containerPort>
    • Maps the specified container port to a dynamic port on the specified host IP
  • -p <ip>:<hostPort>:<containerPort>
    • Maps the specified container port to the specified port on the specified host IP

A dynamic port is a random port; the actual mapping result can be viewed with the docker port command.

[root@localhost ~]# docker run -d --name nginx001 --rm -p 80 nginx
3a5ab39f5dfd7866d691fe75ceb901c6470b76d1d486928d75057897fd639806
[root@localhost ~]# docker port nginx001
80/tcp -> 0.0.0.0:49153
80/tcp -> :::49153

As shown, port 80 of the container is exposed as port 49153 on the host. We can now access that port on the host to see whether the site in the container is reachable:

[root@localhost ~]# curl http://192.168.136.129:49153
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@localhost ~]# 

The most common usage is to map the container's port 80 to a specified host port such as 8080. (Note that --rm takes two dashes; the first attempt below mistypes it as -rm and fails.)

[root@localhost ~]# docker run -d --name web -rm -p 8080:80 nginx
unknown shorthand flag: 'r' in -rm
See 'docker run --help'.
[root@localhost ~]# docker run -d --name web --rm -p 8080:80 nginx
2989313c64df523020bc867b69c62d29bbbf3ceda687c5a47cd97d5b35ac04f0
[root@localhost ~]# ss -antl
State       Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN      0      128        *:49153                  *:*                  
LISTEN      0      128        *:8080                   *:*                  
LISTEN      0      128        *:22                     *:*                  
LISTEN      0      100    127.0.0.1:25                     *:*                  
LISTEN      0      128       :::49153                 :::*                  
LISTEN      0      128       :::8080                  :::*                  
LISTEN      0      128       :::22                    :::*                  
LISTEN      0      100      ::1:25                    :::*                  
[root@localhost ~]# curl 192.168.136.129:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The corresponding iptables firewall rules are generated automatically when the container is created and removed automatically when the container is deleted.
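
You can watch this happen by listing the DOCKER chain of the nat table while a published container is running; the DNAT rule for the mapped port appears with the container and disappears when it is removed (exact output varies):

[root@localhost ~]# iptables -t nat -nvL DOCKER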

Map the container port to a random port on a specified IP:

[root@localhost ~]# docker run --name web --rm -p 192.168.136.129::80 nginx

View the port mapping on another terminal

[root@localhost ~]# docker port web
80/tcp -> 192.168.136.129:32768

Map the container port to the specified port of the host

[root@localhost ~]# docker run --name web --rm -p 80:80 nginx

View the port mapping on another terminal

[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:80

Customizing the network attributes of the docker0 bridge

See the official documentation for the related configuration options.

To customize the network attributes of the docker0 bridge, you need to modify the /etc/docker/daemon.json configuration file:

[root@localhost ~]# cd /etc/docker/
[root@localhost docker]# ls
daemon.json  key.json

[root@localhost ~]# vim /etc/docker/daemon.json 
[root@localhost ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://arq8p4a6.mirror.aliyuncs.com"]
  "bip": "192.168.136.129/24"
}
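
daemon.json is only read at startup, so restart the daemon for the change to take effect (assuming a systemd-based host):

[root@localhost ~]# systemctl restart docker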

[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b8:d0:10 brd ff:ff:ff:ff:ff:ff
    inet 192.168.136.129/24 brd 192.168.136.255 scope global dynamic noprefixroute ens33
       valid_lft 1690sec preferred_lft 1690sec
    inet6 fe80::cb48:11a1:1d08:cdf0/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:3e:f8:13:5c brd ff:ff:ff:ff:ff:ff
    inet 192.168.136.129/24 brd 192.168.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:3eff:fef8:135c/64 scope link 
       valid_lft forever preferred_lft forever

The core option is bip (bridge IP): it specifies the IP address of the docker0 bridge itself, and other values are derived from this address.
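
If you need finer control, daemon.json accepts several related keys; a sketch with purely illustrative values (bip for the bridge address, fixed-cidr to restrict the container address range, default-gateway and dns for the corresponding settings):

{
  "bip": "192.168.1.1/24",
  "fixed-cidr": "192.168.1.0/25",
  "default-gateway": "192.168.1.254",
  "dns": ["114.114.114.114", "8.8.8.8"]
}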

docker remote connection

By default, the dockerd daemon listens only on a Unix socket (/var/run/docker.sock). To make it also accept TCP connections, modify the /etc/docker/daemon.json configuration file, add the following content, and then restart the docker service:

"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]

Pass the "- H | - host" option directly to dockerd on the client to specify which host to control the docker container on

docker -H 192.168.136.129:2375 ps
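
Instead of repeating -H on every invocation, the docker client also honors the DOCKER_HOST environment variable, so the remote target can be set once per shell:

[root@localhost ~]# export DOCKER_HOST="tcp://192.168.136.129:2375"
[root@localhost ~]# docker ps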

Creating a custom docker bridge

Create an additional custom bridge, distinct from docker0:

[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
8093f454945a   bridge    bridge    local
eed38306b990   host      host      local
99828ff95579   none      null      local

[root@localhost ~]# docker network create -d bridge --subnet "192.168.1.0/24" --gateway "192.168.1.1" br0
6a3d4797b80d12f25d2787da04386755374cf3b60f71cc228b7b444eb671bbd1

[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
6a3d4797b80d   br0       bridge    local
8093f454945a   bridge    bridge    local
eed38306b990   host      host      local
99828ff95579   none      null      local

Create a container using the newly created custom bridge:

[root@localhost ~]# docker run -it --name y1 --network br0 busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
3aab638df1a9: Pull complete 
Digest: sha256:52817dece4cfe26f581c834d27a8e1bcc82194f914afe6d50afad5a101234ef1
Status: Downloaded newer image for busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever

Create another container and use the default bridge:

[root@localhost ~]# docker run --name y2 -it busybox
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Think about it: can y2 and y1 communicate with each other? Not as things stand, since they sit on different bridges. To enable communication, connect y2's container to br0 as well:

[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
6a3d4797b80d   br0       bridge    local
8093f454945a   bridge    bridge    local
eed38306b990   host      host      local
99828ff95579   none      null      local
[root@localhost ~]# docker ps 
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@localhost ~]# docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED              STATUS                          PORTS     NAMES
9b6gjjhj3d23   busybox   "sh"      About a minute ago   Exited (0) 14 seconds ago                 y2
5bc17726f078   busybox   "sh"      12 minutes ago       Exited (0) About a minute ago             y1
[root@localhost ~]# docker network connect br0 9b6gjjhj3d23

/ # ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # hostname
9b6gjjhj3d23

/ # ping 192.168.1.2    # the ping now succeeds
PING 192.168.1.2 (192.168.1.2): 56 data bytes
64 bytes from 192.168.1.2: seq=0 ttl=64 time=0.091 ms
64 bytes from 192.168.1.2: seq=1 ttl=64 time=0.121 ms
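
The attachment is just as easy to undo: docker network disconnect detaches the container from the custom bridge again.

[root@localhost ~]# docker network disconnect br0 9b6gjjhj3d23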
