Network Policy
In Kubernetes, network isolation capabilities are described by a dedicated API object: the NetworkPolicy.
Pods in Kubernetes are "accept all" by default: a Pod can receive requests from any sender and send requests to any recipient. If you want to restrict this, you must do so through a NetworkPolicy object.
An example of a complete NetworkPolicy object is as follows:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Like all other Kubernetes resource objects, a NetworkPolicy requires the apiVersion, kind and metadata fields. The spec.podSelector field defines the scope of the policy, that is, which Pods it applies to: here matchLabels: role=db selects the Pods carrying the role=db label in the current Namespace. If you leave the podSelector field empty:
spec:
  podSelector: {}
Then this NetworkPolicy will act on all pods under the current Namespace.
Each NetworkPolicy contains a policyTypes list, which may include Ingress, Egress, or both. This field indicates whether the policy applies to the ingress traffic, the egress traffic, or both for the matched Pods. If policyTypes is not specified, it defaults to Ingress, and Egress is added automatically if any egress rules are configured.
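As a concrete illustration of an empty podSelector combined with policyTypes, a minimal "default deny" policy might look like the sketch below; the name is just a placeholder, and the pattern follows the commonly documented default-deny usage of NetworkPolicy (no ingress or egress rules are listed, so nothing is whitelisted):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all     # placeholder name
  namespace: default
spec:
  podSelector: {}            # empty selector: every Pod in the Namespace
  policyTypes:
  - Ingress
  - Egress
  # no ingress/egress sections follow, so no traffic is whitelisted

Once such a policy exists, every Pod in the Namespace is isolated, and traffic has to be re-allowed explicitly by additional policies.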
Once a Pod is selected by a NetworkPolicy, it enters the "deny all" state for the traffic directions the policy covers, that is, the Pod is no longer allowed to be accessed from, or to initiate access to, the outside world except as explicitly allowed by the policy's rules.
For example, the example above means that the isolation rules only take effect for Pods carrying the role=db label in the default Namespace, and that the restricted request types include both ingress and egress.
Ingress: each NetworkPolicy may contain a whitelist of ingress rules, and each rule allows traffic that matches both its from and ports sections. For example, the ingress rules configured in the example above are as follows:
ingress:
- from:
  - ipBlock:
      cidr: 172.17.0.0/16
      except:
      - 172.17.1.0/24
  - namespaceSelector:
      matchLabels:
        project: myproject
  - podSelector:
      matchLabels:
        role: frontend
  ports:
  - protocol: TCP
    port: 80
In this ingress rule, from defines the whitelist of sources that are allowed in, and ports defines the ports they may reach. Kubernetes will reject any request to the isolated Pod unless the request comes from one of the objects in this whitelist and targets one of the listed ports of the isolated Pod. The whitelist specifies three parallel cases via ipBlock, namespaceSelector and podSelector (see the sketch after this list for how the structure of these entries affects the semantics):
- Pods with the role=frontend label in the default Namespace
- Any Pod in a Namespace carrying the project=myproject label
- Any request whose source address falls within the 172.17.0.0/16 segment but outside the 172.17.1.0/24 segment
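Note that these OR ("parallel") semantics come from the fact that each selector is a separate item in the from list. As a sketch for comparison (not part of the example above), putting namespaceSelector and podSelector into a single from entry would instead require both to match, i.e. only role=frontend Pods inside project=myproject Namespaces would be allowed:

# single from entry: namespaceSelector AND podSelector must both match
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        project: myproject
    podSelector:
      matchLabels:
        role: frontend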
Egress: each NetworkPolicy may also contain a whitelist of egress rules, and each rule allows traffic that matches both its to and ports sections. For example, the egress rule in the example is configured as:
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/24
  ports:
  - protocol: TCP
    port: 5978
This means Kubernetes will reject any request initiated by the isolated Pod unless the destination address of the request falls within the 10.0.0.0/24 segment and the request targets port 5978 of that destination.
Installing Calico
A concrete network plug-in is required for network policies to take effect. Plug-ins that currently implement NetworkPolicy include Calico, Weave Net, kube-router and others, but not Flannel.
Therefore, if you want to use NetworkPolicy together with Flannel, you need to install an additional plug-in, such as Calico, to enforce the policies. Since we are using the Flannel network plug-in here, we first need to install Calico to take charge of network policy.
For details, please refer to the official documents: https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel
First, confirm that kube-controller-manager is configured with the following two parameters:
......
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
......
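On a kubeadm-installed cluster (an assumption here), these flags live in the static Pod manifest on the control-plane node, so a quick way to check them might be:

$ grep -E "allocate-node-cidrs|cluster-cidr" /etc/kubernetes/manifests/kube-controller-manager.yaml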
Download the required resource manifest file:
$ curl https://docs.projectcalico.org/manifests/canal.yaml -O
If the previously configured Pod CIDR is the 10.244.0.0/16 segment, you can skip the following step; if it is different, replace it as follows:
$ POD_CIDR="<your-pod-cidr>" \
  sed -i -e "s?10.244.0.0/16?$POD_CIDR?g" canal.yaml
Finally, install directly:
$ kubectl apply -f canal.yaml
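If the install went through, the canal Pods should come up in the kube-system namespace; a check along these lines (the k8s-app=canal label is taken from the canal manifest and may differ between versions) can confirm it:

$ kubectl get pods -n kube-system -l k8s-app=canal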
Testing
First, we prepare two pods to be tested:
apiVersion: v1
kind: Pod
metadata:
  name: test-np
spec:
  containers:
  - name: test-np
    image: nginx:1.17.1
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh","-c","sleep 86400"]
Create these two Pods directly:
$ kubectl apply -f test-np.yaml
pod/test-np unchanged
pod/busybox configured
$ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED ...
busybox   1/1     Running   0          3m38s   10.244.2.3   node1   <none>    ...
test-np   1/1     Running   0          3m38s   10.244.1.6   node2   <none>    ...
We use busybox to access test-np. Before adding any network policy, we test whether the request goes through normally:
$ kubectl exec -it busybox ping 10.244.1.6
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
PING 10.244.1.6 (10.244.1.6): 56 data bytes
64 bytes from 10.244.1.6: seq=0 ttl=62 time=0.642 ms
64 bytes from 10.244.1.6: seq=1 ttl=62 time=0.536 ms
64 bytes from 10.244.1.6: seq=2 ttl=62 time=0.981 ms
......
At this point we create the NetworkPolicy defined at the beginning of this section and label test-np with role=db.
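Assuming the policy manifest was saved to a local file (networkpolicy.yaml is just a placeholder name), it can be created first:

$ kubectl apply -f networkpolicy.yaml

Then label the Pod and confirm that the policy exists: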
$ kubectl label pod test-np role=db --overwrite
pod/test-np labeled
$ kubectl get networkpolicy
NAME                  POD-SELECTOR   AGE
test-network-policy   role=db        10s
Now the test-np Pod matches the network policy. A Pod matched by the policy rejects all network requests except those allowed by the whitelist, and the busybox Pod is clearly not on that whitelist, so the request should be rejected:
$ kubectl exec -it busybox ping 10.244.1.6
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
PING 10.244.1.6 (10.244.1.6): 56 data bytes
^C
--- 10.244.1.6 ping statistics ---
8 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
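To review exactly which rules are now in force on test-np, the policy can also be inspected directly (output omitted here; kubectl describe prints the Pod selector plus the ingress and egress whitelists):

$ kubectl describe networkpolicy test-network-policy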
Now suppose we want busybox to be able to reach the test-np Pod. Looking at the whitelist in the NetworkPolicy bound to test-np: the first condition (the ipBlock) is obviously not met; the second would require moving busybox into a Namespace labeled project=myproject, which is not particularly convenient; that leaves the third, which allows Pods labeled role=frontend, so we can simply add that label to busybox:
$ kubectl label pod busybox role=frontend --overwrite
pod/busybox labeled
Then we access test-np again; this time the request should succeed:
$ kubectl exec -it busybox ping 10.244.1.6
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
PING 10.244.1.6 (10.244.1.6): 56 data bytes
64 bytes from 10.244.1.6: seq=0 ttl=62 time=0.519 ms
64 bytes from 10.244.1.6: seq=1 ttl=62 time=0.761 ms
64 bytes from 10.244.1.6: seq=2 ttl=62 time=1.682 ms
64 bytes from 10.244.1.6: seq=3 ttl=62 time=0.432 ms
......
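When the experiment is finished, the isolation can be lifted again simply by deleting the policy (and, if desired, removing the labels added above):

$ kubectl delete networkpolicy test-network-policy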