While using kubeadm to deploy highly available Kubernetes 1.17.0, I set up three Master nodes. During the deployment of Heketi, I found that the DaemonSet did not start the corresponding pods. The reason: by default, Master nodes do not run application workloads, so they must be configured to participate in application scheduling. There are two ways to do this:
Permanently change the node attributes, allowing the Master to run applications, by executing:
```shell
kubectl taint nodes --all node-role.kubernetes.io/master-
```
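To see what that command actually removes, it helps to look at the node object itself: the Master taint lives under `spec.taints`. The fragment below is a sketch of the relevant portion of `kubectl get node <node-name> -o yaml` output:

```yaml
spec:
  taints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
```

The trailing `-` in the `kubectl taint` command deletes this entry; with `--all` it does so on every node, so the change is cluster-wide and persists until the taint is re-applied.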
Alternatively, allow a specific workload to run on the Master node by adding a toleration to its pod spec:
```yaml
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
```
Here is the manifest I used to test this:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: test
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
```
Save it as test.yaml and run `kubectl apply -f test.yaml`; the pods are created.
Without the tolerations setting, the DaemonSet still deploys successfully, but its pods run only on the worker nodes; if the cluster has only Master nodes, no pods are started at all.
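As a side note (a variant not used in the test above): the Master taint can also be tolerated with `operator: Exists`, which matches the taint key regardless of any value it carries. In Kubernetes releases newer than 1.17 the Master taint key was renamed to `node-role.kubernetes.io/control-plane`, so a DaemonSet that must run on every node often tolerates both keys:

```yaml
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```

This is more robust than pinning the exact taint value, at the cost of tolerating any taint with those keys.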