1, Overview
When building a Kubernetes cluster, you normally have to reach Google's servers to download the relevant images and install the software, which is troublesome for users in mainland China.
Alibaba Cloud provides a Kubernetes package mirror that domestic users can use directly.
2, Environment introduction
| Operating system | Host name | IP address | Function | Configuration |
| --- | --- | --- | --- | --- |
| ubuntu-16.04.5-server-amd64 | k8s-master | 192.168.91.128 | master node | 2-core 4G |
| ubuntu-16.04.5-server-amd64 | k8s-node1 | 192.168.91.129 | slave node | 2-core 4G |
| ubuntu-16.04.5-server-amd64 | k8s-node2 | 192.168.91.131 | slave node | 2-core 4G |
3, Preparation before installation
host name
Make sure /etc/hostname on each of the three hosts has been changed to the correct host name, then reboot the system.
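Alternatively, instead of editing /etc/hostname and rebooting, you can set the name in one step with hostnamectl (part of systemd on Ubuntu 16.04); the host names below simply mirror the environment table:

hostnamectl set-hostname k8s-master   # on 192.168.91.128
hostnamectl set-hostname k8s-node1    # on 192.168.91.129
hostnamectl set-hostname k8s-node2    # on 192.168.91.131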
time
Make sure the time zone of all three servers is the same. To force the time zone to Shanghai, execute the following commands:
ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
bash -c "echo 'Asia/Shanghai' > /etc/timezone"
Install ntpdate
apt-get install -y ntpdate
If the following error occurs
E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
Execute these two commands to fix it:
sudo rm /var/cache/apt/archives/lock
sudo rm /var/lib/dpkg/lock
Update the time using Alibaba Cloud's NTP server
ntpdate ntp1.aliyun.com
Execute this on all three servers so that their clocks match!
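To keep the clocks from drifting apart again later, one option (my addition, not a step from the original procedure) is a root cron entry that re-syncs hourly against the same server; add it with crontab -e:

0 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1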
Please make sure that the firewall is turned off!
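On Ubuntu the usual firewall front end is ufw; assuming ufw (rather than hand-written iptables rules) is what's in play, you can check and disable it like this:

sudo ufw status
sudo ufw disable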
4, Official start
Disable swap (all hosts)
sudo sed -i '/swap/ s/^/#/' /etc/fstab
sudo swapoff -a
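To verify that swap is really off (both tools ship with Ubuntu 16.04):

free -m          # the Swap line should show 0 total
swapon --show    # should print nothing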
Install Docker
Update apt source and add https support (all hosts)
sudo apt-get update && sudo apt-get install apt-transport-https ca-certificates curl software-properties-common -y
Add the GPG key (all hosts), using the USTC mirror
curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
Add the Docker CE stable repository (all hosts)
sudo add-apt-repository "deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
Install docker-ce (all hosts)
sudo apt-get update
sudo apt install docker-ce=18.06.1~ce~3-0~ubuntu
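It can also be worth holding the Docker package so a later apt upgrade doesn't move it past what kubeadm 1.15 was tested with, and confirming the daemon is up:

sudo apt-mark hold docker-ce
docker --version                 # expect 18.06.1-ce
sudo systemctl is-active docker  # expect "active"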
Install kubelet, kubeadm, kubectl
Add apt key and source (all hosts)
sudo apt update && sudo apt install -y apt-transport-https curl
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
Installation (all hosts)
sudo apt update
sudo apt install -y kubelet=1.15.2-00 kubeadm=1.15.2-00 kubectl=1.15.2-00
sudo apt-mark hold kubelet kubeadm kubectl
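A quick sanity check that the pinned versions landed:

kubeadm version -o short          # expect v1.15.2
kubectl version --client --short  # expect v1.15.2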
Initialize the Kubernetes cluster (master only)
- --image-repository points kubeadm at Alibaba Cloud's registry, which avoids timeouts when pulling images from the default (blocked) registry. If all goes well, you will see the success log within a few minutes.
sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.2 --pod-network-cidr=192.169.0.0/16
Output:
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.104]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.10.104 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.10.104 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 41.503569 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 767b6y.incfuyom78fl6j88
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.10.104:6443 --token 767b6y.incfuyom78fl6j88 \
--discovery-token-ca-cert-hash sha256:941807715378bcbd5bd1cbe244c4bdbf00dee4e45c3b0ff3555eea746607a672
Note: the following warning message appears:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
This warning is harmless and can be ignored. If you want to fix it anyway:
Modify (or create) /etc/docker/daemon.json:
vim /etc/docker/daemon.json
and add the following content:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
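The new setting only takes effect once the Docker daemon restarts; the grep afterwards is just a quick confirmation that the driver changed:

sudo systemctl daemon-reload
sudo systemctl restart docker
docker info | grep -i cgroup   # should now report "systemd"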
Copy the kubeconfig file into the .kube directory under the home directory (master only)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install the network plug-in so that pods can communicate with each other (master only)
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
kubectl apply -f rbac-kdd.yaml
kubectl apply -f calico.yaml
View the pod status in the kube-system namespace (master only)
kubectl get pod -n kube-system
Wait 1 minute, the effect is as follows:
kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
calico-node-8w22s                    2/2     Running   0          26m
calico-node-r7bzc                    2/2     Running   0          36m
calico-node-zds6x                    2/2     Running   0          26m
coredns-bccdc95cf-d8vcl              1/1     Running   0          107m
coredns-bccdc95cf-kzjt5              1/1     Running   0          107m
etcd-k8s-master                      1/1     Running   0          106m
kube-apiserver-k8s-master            1/1     Running   0          106m
kube-controller-manager-k8s-master   1/1     Running   0          106m
kube-proxy-db49l                     1/1     Running   0          107m
kube-proxy-tthjs                     1/1     Running   0          26m
kube-proxy-vdtj8                     1/1     Running   0          26m
kube-scheduler-k8s-master            1/1     Running   0          106m
Join node (node only)
Copy the join command printed by kubeadm init and execute it on each node
kubeadm join 192.168.10.104:6443 --token 767b6y.incfuyom78fl6j88 --discovery-token-ca-cert-hash sha256:941807715378bcbd5bd1cbe244c4bdbf00dee4e45c3b0ff3555eea746607a672
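The bootstrap token is only valid for 24 hours by default. If you add a node later, generate a fresh token together with the full join command on the master (standard kubeadm behavior, not a step from the original walkthrough):

kubeadm token create --print-join-command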
View cluster status (master only)
root@k8s-master:~# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   44m   v1.15.2
k8s-node1    Ready    <none>   42m   v1.15.2
k8s-node2    Ready    <none>   42m   v1.15.2
Command Completion
apt-get install bash-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
5, Deploy application
Take Flask as an example:
vim flask.yaml
The contents are as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flaskapp-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: flaskapp-1
    spec:
      containers:
      - name: flaskapp-1
        image: jcdemo/flaskapp
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: flaskapp-1
  labels:
    name: flaskapp-1
spec:
  type: NodePort
  ports:
  - port: 5000
    name: flaskapp-port
    targetPort: 5000
    protocol: TCP
    nodePort: 30005
  selector:
    name: flaskapp-1
Start application
kubectl apply -f flask.yaml
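To confirm the Deployment finished rolling out before probing the pod (a standard kubectl check I'm adding here, not part of the original steps):

kubectl rollout status deployment/flaskapp-1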
View application status
root@k8s-master:~# kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
flaskapp-1-59698bc97d-ssqv8   1/1     Running   0          43m   192.169.2.3   k8s-node2   <none>           <none>
The output above shows that this pod is running on host k8s-node2.
Access the service via the pod IP using curl
root@k8s-master:~# curl 192.169.2.3:5000
<html><head><title>Docker + Flask Demo</title></head><body><table><tr><td> Start Time </td> <td>2019-Aug-07 12:30:38</td> </tr><tr><td> Hostname </td> <td>flaskapp-1-59698bc97d-ssqv8</td> </tr><tr><td> Local Address </td> <td>192.169.2.3</td> </tr><tr><td> Remote Address </td> <td>192.168.10.120</td> </tr><tr><td> Server Hit </td> <td>3</td> </tr></table></body></html>
View svc ports
root@k8s-master:~# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
flaskapp-1   NodePort    10.107.108.46   <none>        5000:30005/TCP   45m
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          53m
Access port 30005 on k8s-node2 directly from a web browser
http://192.168.10.119:30005/
The effects are as follows:
6, Deploy dashboard visualization plug-in
summary
In the Kubernetes Dashboard you can view the running state of the applications in the cluster, and create or modify Kubernetes resources such as Deployments, Jobs, and DaemonSets. You can scale a Deployment up or down, perform a rolling update, restart a pod, or deploy new applications through a wizard. The Dashboard also shows the status and log information of the various resources in the cluster.
In short, the Kubernetes Dashboard provides most of kubectl's functionality, so you can choose between them as the situation requires.
github address:
https://github.com/kubernetes/dashboard
install
Kubernetes does not deploy Dashboard by default. You can install it through the following command:
kubectl apply -f http://mirror.faasx.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
View service
root@k8s-master:~# kubectl --namespace=kube-system get deployment kubernetes-dashboard
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           5m23s
root@k8s-master:~# kubectl --namespace=kube-system get service kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.100.111.103   <none>        443/TCP   5m28s
View pod
Make sure the status is Running
root@k8s-master:~# kubectl get pod --namespace=kube-system -o wide | grep dashboard
kubernetes-dashboard-8594bd9565-t78bj   1/1   Running   0   8m41s   192.169.2.7   k8s-node2   <none>   <none>
Allow external access
Note: it will occupy the terminal
kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'
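If you would rather not tie up a terminal, the same proxy can be pushed into the background with nohup (just a shell convenience, not part of the original steps):

nohup kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' >/dev/null 2>&1 &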
Access via browser
Note: 192.168.10.104 is the master IP.
http://192.168.10.104:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
The effects are as follows:
Configure login permissions
Dashboard supports Kubeconfig and Token authentication. To simplify the configuration, we use the configuration file dashboard-admin.yml to grant admin permission to Dashboard's default user.
vim dashboard-admin.yml
The contents are as follows:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Execute kubectl apply to make it effective
kubectl apply -f dashboard-admin.yml
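If you prefer logging in with a token instead of clicking skip, the ServiceAccount's token can be read from its secret. The grep pattern assumes the default secret name (kubernetes-dashboard-token-xxxxx) created for the kubernetes-dashboard ServiceAccount:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}')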
Now directly click skip on the login page to enter the Dashboard. The effect is as follows:
For an introduction to the Dashboard interface layout, see:
https://www.cnblogs.com/kenken2018/p/10340157.html