In the previous chapter, we learned to use kubeadm to create a cluster and add new nodes. In this chapter, we will redeploy the cluster following the method used in the CKAD course. The official tutorial does not actually contain much content, so the author has written two similar deployment walkthroughs. If you have already deployed a Kubernetes cluster, you can skip this chapter.
This article is part of the author's open-source Kubernetes e-book series. E-book reading addresses:
https://k8s.whuanle.cn [suitable for domestic visit]
https://ek8s.whuanle.cn [gitbook]
Deployment
Setting the host name
This section mainly covers configuring the hosts file so that, in later steps, you can connect using the host name instead of typing the IP address every time.
Execute the ip addr command on the Master node, find ens4, and note the IP address listed there.
ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP group default qlen 1000
    link/ether 42:01:0a:aa:00:02 brd ff:ff:ff:ff:ff:ff
    inet 10.170.0.2/32 scope global dynamic ens4
       valid_lft 2645sec preferred_lft 2645sec
    inet6 fe80::4001:aff:feaa:2/64 scope link
       valid_lft forever preferred_lft forever
In the output above, the IP is 10.170.0.2. You can also query it with hostname -i; there are many ways to obtain a host's intranet IP.
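As a hedged illustration, the inet line from ip addr can be parsed with grep and awk. A captured sample output stands in for the live command here, and the interface name ens4 is specific to this machine.

```shell
# Parse the IPv4 address out of a saved "ip addr" sample; on a real
# node you would pipe the live command instead: ip addr show ens4
cat > ipaddr-sample.txt <<'EOF'
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP
    inet 10.170.0.2/32 scope global dynamic ens4
EOF
grep -oE 'inet [0-9.]+' ipaddr-sample.txt | awk '{print $2}'   # -> 10.170.0.2
```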
Then modify the /etc/hosts file and add a line (replace this IP with yours):
10.170.0.2 k8smaster
From now on, we access the cluster using k8smaster as the host name (domain name) instead of an IP address. The host name is easier to remember and avoids hard-coding the IP.
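A minimal sketch of adding the mapping idempotently. HOSTS_FILE points at /etc/hosts on a real node; a temporary file is used here so the snippet runs without root, and the IP is this chapter's example address.

```shell
# Append the k8smaster entry only if it is not already present.
HOSTS_FILE=$(mktemp)            # on a real node: HOSTS_FILE=/etc/hosts (edit with sudo)
MASTER_IP=10.170.0.2            # replace with your master's intranet IP
grep -q 'k8smaster' "$HOSTS_FILE" || echo "$MASTER_IP k8smaster" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"               # -> 10.170.0.2 k8smaster
```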
Installing k8s with kubeadm
The deployment process here differs from the previous chapter, where kubeadm init was used directly to initialize the cluster without going into further detail.
Execute kubectl version to check the k8s version; the GitVersion:"v1.21.0" field is the Kubernetes version.
Create a kubeadm-config.yaml file. When we run kubeadm init, this configuration file is used to initialize the k8s master.
The contents of the document are:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
controlPlaneEndpoint: "k8smaster:6443"
networking:
  podSubnet: 192.168.0.0/16
Note that a colon must be followed by a space, indicating key: value. For example, image: nginx:latest. A colon without a trailing space joins the two tokens together into a single string.
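The colon-space pitfall can be caught mechanically. This is just an illustrative grep over a throwaway file, not part of the deployment itself:

```shell
# Flag top-level "key:value" lines that are missing the space after the colon.
cat > check-me.yaml <<'EOF'
image:nginx:latest
name: web
EOF
grep -nE '^[A-Za-z_-]+:[^ ]' check-me.yaml    # -> 1:image:nginx:latest
```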
Then initialize the Master through the configuration file:
kubeadm init --config=kubeadm-config.yaml --upload-certs --v=5 | tee kubeadm-init.out
# It can be shortened to: kubeadm init --config=kubeadm-config.yaml --upload-certs
--v=5 outputs more verbose information; tee xxx writes the output to a file, which is convenient for log collection or later inspection.
After the initialization command finishes, the terminal (or the kubeadm-init.out file) contains the following:
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8smaster:6443 --token 45td1j.xqdscm4k06a4edi2 \
    --discovery-token-ca-cert-hash sha256:aeb772c57a35a283716b65d16744a71250bcc25d624010ccb89090021ca0f428 \
    --control-plane --certificate-key d76287ccc4701db9d34e0c9302fa285be2e9241fc43c94217d6beb419cdf3c52

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8smaster:6443 --token 45td1j.xqdscm4k06a4edi2 \
    --discovery-token-ca-cert-hash sha256:aeb772c57a35a283716b65d16744a71250bcc25d624010ccb89090021ca0f428
Following the prompt, execute the commands below one by one. Do not paste them all at once, because cp -i requires a y/n confirmation; pasting everything at once may cause it to be skipped (changing -i to -f also works).
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
then:
export KUBECONFIG=/etc/kubernetes/admin.conf
Author's note: the KUBECONFIG environment variable is lost the next time you log in or open a new terminal window. Open the .bashrc file in your home directory and add export KUBECONFIG=/etc/kubernetes/admin.conf at the end so that the variable is still set after re-login or switching terminals.
Author's note: since multiple users are involved, after switching users you will not be able to use the kubeadm/kubectl/kubelet commands. In that case, execute the commands above again, from mkdir -p $HOME/.kube through export KUBECONFIG=..., so that other users can also operate the node.
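Those per-user steps can be collected into one sketch. ADMIN_CONF is parameterized here only so the logic can be exercised outside a real node; on the node itself, copying the root-owned admin.conf needs sudo, as shown earlier.

```shell
# Per-user kubeconfig setup, run once for each user that manages the cluster.
ADMIN_CONF=${ADMIN_CONF:-/etc/kubernetes/admin.conf}
mkdir -p "$HOME/.kube"
cp -f "$ADMIN_CONF" "$HOME/.kube/config"        # use "sudo cp" on a real node
# Persist KUBECONFIG across logins, but only add the line once.
grep -q 'export KUBECONFIG=' "$HOME/.bashrc" 2>/dev/null || \
  echo "export KUBECONFIG=$ADMIN_CONF" >> "$HOME/.bashrc"
```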
Enter kubeadm config print init-defaults to view the default configuration used during master initialization.
The above is the official deployment method of CKAD.
Configure Calico
What is CNI
CNI stands for Container Network Interface. It is a standard interface design adopted by Kubernetes: users do not need to care which network plug-in is used, and the network can be reconfigured more easily when plug-ins are replaced or containers are destroyed.
Mainstream plug-ins in Kubernetes include Flannel, Calico and Weave. In the previous chapter we used Weave when deploying the Kubernetes network; in this chapter we will use Calico instead.
CNI itself will be studied in depth in later chapters.
Calico (https://github.com/projectcalico/calico) is an open-source networking and network-security solution for containers, virtual machines, and bare-metal workloads. It provides network connectivity between pods and enforces network security policies.
Flannel, Calico and Weave are commonly used Kubernetes network plug-ins; readers can refer to https://kubernetes.io/zh/docs/concepts/cluster-administration/networking/ for details, which will not be repeated here.
First download Calico's yaml file.
wget https://docs.projectcalico.org/manifests/calico.yaml
Then we need to pay attention to the value of CALICO_IPV4POOL_CIDR in the yaml file. You can open https://docs.projectcalico.org/manifests/calico.yaml directly in a browser, or use less calico.yaml to read the file in the terminal.
Find CALICO_IPV4POOL_CIDR, for example:
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
This sets the IPv4 pool. If the pool does not exist it is created automatically, and the IPs of newly created pods fall within this range. The default is 192.168.0.0/16 and we do not need to change it. To customize it, remove the # and change the CIDR.
Note: be sure to configure this parameter according to the IP range used in your cluster.
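If a custom CIDR is needed, the two commented lines can be uncommented with sed. The snippet below is a hedged sketch: a two-line stand-in for calico.yaml is created locally so the edit can be demonstrated without downloading the real manifest, and the chosen CIDR is just the default repeated.

```shell
# Uncomment CALICO_IPV4POOL_CIDR and set a custom pod CIDR.
cat > calico-snippet.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
POD_CIDR="192.168.0.0/16"       # must match podSubnet in kubeadm-config.yaml
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e "s|#   value: \"192.168.0.0/16\"|  value: \"$POD_CIDR\"|" calico-snippet.yaml
cat calico-snippet.yaml         # both lines are now uncommented
```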
Then we enable Calico network plug-in:
kubectl apply -f calico.yaml
After the network configuration is complete, you can join nodes using kubeadm join.
Other
Executing commands on Worker nodes
If we execute a kubectl command on a Worker node, we will find:
root@instance-2:~# kubectl describe nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
First, download the /etc/kubernetes/admin.conf file from the Master node, or copy its contents to the Worker node.
Place the file at /etc/kubernetes/admin.conf on the Worker node and apply the configuration:
mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> $HOME/.bashrc
Automatic completion tool
kubectl has many commands and optional parameters, and typing long commands by hand is error-prone. We can use bash completion to complete the command input for us.
sudo apt-get install bash-completion -y
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> $HOME/.bashrc
After typing part of a command, press the TAB key and it will be completed automatically.
Enter kubectl des and then press TAB; you will find that it is automatically completed to kubectl describe.
State description
Execute the kubectl describe nodes command and we can see the node details. There is a conditions field describing the status of a running node. It has five fields, or types:
- Ready: whether the node is healthy and can accept pods. The status is True if it can, and False if the node is unhealthy and cannot accept pods. It is True under normal conditions.
- DiskPressure: the node's free disk space is insufficient to add new pods. True indicates a problem.
- MemoryPressure: the node is under memory pressure, i.e. available memory is low. True indicates a problem.
- PIDPressure: the node is under process pressure, i.e. too many processes are running. True indicates a problem.
- NetworkUnavailable: the node's network is incorrectly configured. True indicates a problem.
Represented as JSON:
"conditions": [ { "type": "Ready", "status": "True", "reason": "KubeletReady", "message": "kubelet is posting ready status", "lastHeartbeatTime": "2019-06-05T18:38:35Z", "lastTransitionTime": "2019-06-05T11:41:27Z" } ]
Readers can refer to: https://kubernetes.io/zh/docs/concepts/architecture/nodes/
This chapter mainly covered deploying k8s with kubeadm as required by the CKAD certification, and configuring and starting the Calico network plug-in. Compared with the previous chapter, the main difference is that cluster creation is controlled through a yaml file. The deployment processes in the two chapters are the same; only the network plug-ins differ.