What is a DaemonSet
A DaemonSet ensures that a copy of a Pod runs on all (or some) nodes. When a node joins the cluster, a Pod is added to it; when a node is removed from the cluster, that Pod is garbage collected. Deleting a DaemonSet deletes all of the Pods it created.
Some typical uses of a DaemonSet:
- Running a cluster storage daemon on each node, such as glusterd or ceph
- Running a log collection daemon on each node, such as fluentd or logstash
- Running a monitoring daemon on each node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: deamonset-example
  labels:
    app: daemonset
spec:
  selector:
    matchLabels:
      name: deamonset-example
  template:
    metadata:
      labels:
        name: deamonset-example
    spec:
      containers:
      - name: daemonset-example
        image: wangyanglinux/myapp:v1
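For the "(or some) nodes" case mentioned above, a DaemonSet is usually restricted to a subset of nodes with a nodeSelector in the Pod template. A minimal sketch, assuming the target nodes carry a hypothetical label disktype=ssd:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-subset-example
spec:
  selector:
    matchLabels:
      name: daemonset-subset-example
  template:
    metadata:
      labels:
        name: daemonset-subset-example
    spec:
      # Only nodes labeled disktype=ssd (an assumed example label) receive a copy of this Pod
      nodeSelector:
        disktype: ssd
      containers:
      - name: daemonset-subset-example
        image: wangyanglinux/myapp:v1

Nodes can be given the label with kubectl label node <node-name> disktype=ssd; the DaemonSet controller then schedules a Pod only onto the matching nodes.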
Job
A Job is responsible for batch tasks, that is, tasks that are executed only once. It ensures that one or more Pods of the batch task complete successfully.
Special instructions
- The format of spec.template is the same as that of a Pod
- RestartPolicy only supports Never or OnFailure
- For a single Pod, by default the Job ends once the Pod has run to successful completion
- .spec.completions specifies the number of Pods that must finish successfully for the Job to complete. The default is 1 (see the sketch after this list)
- .spec.parallelism specifies the number of Pods that run in parallel. The default is 1
- .spec.activeDeadlineSeconds specifies the maximum time (in seconds) for retrying failed Pods; beyond this time, retries stop
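As a sketch of how .spec.completions, .spec.parallelism and .spec.activeDeadlineSeconds fit together, the following hypothetical Job (name, image, and command are assumed for illustration) requires 5 successful completions with at most 2 Pods running at a time:

apiVersion: batch/v1
kind: Job
metadata:
  name: completions-example
spec:
  # The Job completes once 5 Pods have finished successfully
  completions: 5
  # At most 2 Pods run in parallel at any time
  parallelism: 2
  # Stop retrying failed Pods after 100 seconds
  activeDeadlineSeconds: 100
  template:
    spec:
      containers:
      - name: worker
        image: busybox:1.34.1
        command: ["sh", "-c", "echo processing one work item; sleep 5"]
      restartPolicy: Never

With such a Job, kubectl get job completions-example would show the Job progressing toward 5 completions while never running more than 2 Pods at once.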
Example
Find the value of π
Algorithm: Machin's formula
π/4 = 4·arctan(1/5) − arctan(1/239)
This formula was discovered in 1706 by John Machin, a professor of astronomy in England, who used it to calculate π to 100 decimal places. Each term of Machin's formula contributes about 1.4 decimal digits of precision, and because the multipliers and divisors in the calculation never exceed the range of long integers, it is easy to program on a computer.
# -*- coding: utf-8 -*-
from __future__ import division
# Import the time module
import time

# Record the start time
time1 = time.time()

# Calculate pi according to Machin's formula
number = 1000
# Calculate 10 extra digits to avoid errors in the trailing digits
number1 = number + 10
# Work with number1 digits after the decimal point
b = 10**number1
# First term containing 4/5
x1 = b * 4 // 5
# First term containing 1/239
x2 = b // -239
# Sum of the first terms
he = x1 + x2
# End point of the loop below, i.e. a total of number terms are calculated
number *= 2
# Loop: initial value 3, final value 2n, step 2
for i in xrange(3, number, 2):
    # Each term containing 1/5, with alternating sign
    x1 //= -25
    # Each term containing 1/239, with alternating sign
    x2 //= -57121
    # Sum of the two terms
    x = (x1 + x2) // i
    # Accumulate
    he += x
# Multiply by 4 to get pi
pai = he * 4
# Drop the extra ten digits
pai //= 10**10
# Output the value of pi
paistring = str(pai)
result = paistring[0] + str('.') + paistring[1:len(paistring)]
print result
time2 = time.time()
print u'Total time: ' + str(time2 - time1) + 's'
FROM hub.c.163.com/public/python:2.7
ADD ./main.py /root
CMD /usr/bin/python /root/main.py
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: pi:v1
      restartPolicy: Never
Check the Job Pod's log to see the computed value of π.
CronJob
A CronJob manages time-based Jobs, namely:
- Run only once at a given point in time
- Run periodically at a given point in time
Prerequisite: the Kubernetes cluster in use must be version >= 1.8 (for CronJob)
Typical usage is as follows:
- Schedule a Job to run at a given point in time
- Create Jobs that run periodically, for example database backups or sending email
CronJob Spec
- .spec.schedule: schedule, a required field that specifies the period on which the task runs, in the same format as Cron
- .spec.jobTemplate: Job template, a required field that specifies the Job to run, in the same format as a Job
- .spec.startingDeadlineSeconds: the deadline (in seconds) for starting a Job. This field is optional. If the scheduled time is missed for any reason, the missed Job is counted as failed. If not specified, there is no deadline
- .spec.concurrencyPolicy: concurrency policy, also optional. It specifies how to handle concurrent executions of Jobs created by the CronJob. Only one of the following policies may be specified (a combined sketch follows this list):
  - Allow (default): allows Jobs to run concurrently
  - Forbid: prohibits concurrent runs; if the previous Job has not finished, the next one is skipped
  - Replace: cancels the currently running Job and replaces it with a new one
  Note that the concurrency policy applies only to Jobs created by the same CronJob. Jobs created by different CronJobs are always allowed to run concurrently.
- .spec.suspend: suspend, also optional. If set to true, all subsequent executions are suspended. It has no effect on Jobs that have already started. The default value is false.
- .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit: history limits, optional fields. They specify how many completed and failed Jobs may be retained. By default they are set to 3 and 1, respectively. Setting a limit to 0 means Jobs of that type are not retained after they finish.
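A combined sketch of the optional fields above on one hypothetical CronJob (the schedule, image, and values are assumed for illustration):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: policy-example
spec:
  schedule: "*/5 * * * *"
  # Skip the next run if the previous Job has not finished
  concurrencyPolicy: Forbid
  # A run that cannot start within 120 seconds of its scheduled time counts as failed
  startingDeadlineSeconds: 120
  # Keep the 3 most recent successful Jobs and 1 failed Job
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  # Set to true to pause all subsequent executions
  suspend: false
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: busybox:1.34.1
            args: ["/bin/sh", "-c", "date; echo periodic task"]
          restartPolicy: OnFailure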
Example
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.34.1
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
$ kubectl get cronjob
NAME      SCHEDULE      SUSPEND   ACTIVE    LAST-SCHEDULE
hello     */1 * * * *   False     0         <none>

$ kubectl get jobs
NAME               DESIRED   SUCCESSFUL   AGE
hello-1202039034   1         1            49s

$ pods=$(kubectl get pods --selector=job-name=hello-1202039034 --output=jsonpath={.items..metadata.name})
$ kubectl logs $pods
Mon Aug 29 21:34:09 UTC 2016
Hello from the Kubernetes cluster

# Note that deleting a CronJob does not automatically delete the Jobs it created; those Jobs can be removed with kubectl delete job
$ kubectl delete cronjob hello
cronjob "hello" deleted
Some limitations of CronJob itself
The create-Job operation should be idempotent: in some cases a CronJob may create two Jobs, or none, for a single scheduled run, so the Job it creates should be safe to run more than once.