Overview
Argo is an open-source workflow engine built on Kubernetes that orchestrates workflows and runs their tasks as pods.
Alibaba Cloud Container Service for Kubernetes (ACK) clusters already support deploying and scheduling workflows. This article shows how to use Argo in an ASK (Serverless Kubernetes) cluster, where workflow tasks run flexibly and on demand without reserving a node resource pool, minimizing compute costs.
Preconditions:
- Create an ASK cluster: https://cs.console.aliyun.com/#/k8s/cluster/create/serverless
  Because the pods created by Argo often need large amounts of CPU and memory, it is recommended to create a multi-zone ASK cluster: when one zone runs short of inventory, the backend will try to create the pod in another zone, alleviating the shortage in a single zone.
- Download the ags command-line tool; see https://help.aliyun.com/document_detail/121342.html
Deploy the Argo workflow controller
# ags install
# kubectl -n argo get pod
NAME                                   READY   STATUS    RESTARTS   AGE
argo-ui-5c5dbd7d75-hxqfd               1/1     Running   0          60s
workflow-controller-848cf55b64-6pzc9   1/1     Running   0          60s
# kubectl -n argo get configmap
NAME                            DATA   AGE
workflow-controller-configmap   0      4m55s
Argo uses the docker executor by default. In a serverless cluster we need to switch it to the k8sapi executor for workflows to run properly.
# kubectl -n argo edit configmap workflow-controller-configmap
apiVersion: v1
kind: ConfigMap
...
data:
  config: |
    containerRuntimeExecutor: k8sapi
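If you prefer not to open an editor, an equivalent non-interactive change should be possible with kubectl patch, mirroring the data layout shown above:

# kubectl -n argo patch configmap workflow-controller-configmap \
    --type merge -p '{"data":{"config":"containerRuntimeExecutor: k8sapi"}}'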
Running the Hello World workflow sample
Let's run the Hello World example: https://github.com/argoproj/argo/blob/master/examples/hello-world.yaml
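For reference, the manifest behind that URL looks roughly like this: a single whalesay template that prints "hello world".

# abridged copy of the upstream example; check the URL above for the current version
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["hello world"]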
# ags submit https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml
Name:                hello-world-l26sx
Namespace:           default
ServiceAccount:      default
Status:              Pending
Created:             Fri Nov 15 14:45:15 +0800 (now)
# kubectl get pod
NAME                READY   STATUS      RESTARTS   AGE
hello-world-l26sx   0/2     Completed   0          88s
# ags list
NAME                STATUS      AGE   DURATION   PRIORITY
hello-world-l26sx   Succeeded   1m    1m         0
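The READY column shows 0/2 because Argo adds a wait sidecar next to the main container in each step pod. To see the step's output, you can read the main container's log:

# kubectl logs hello-world-l26sx -c main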
When a workflow needs large amounts of resources, we can specify annotations on the pod in the workflow spec.
Note that in this case you should not put the large requests/limits on the container: the pod generated by Argo contains multiple containers, and putting large requests/limits on a single container would prevent ECI from allocating matching resources to the pod, causing creation to fail. Instead, specify an ECS instance type, or CPU/memory, for the whole pod, as follows.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    metadata:
      annotations:
        k8s.aliyun.com/eci-instance-type: "ecs.ic5.3xlarge"
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["hello world"]
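To try it, save the manifest to a local file (the name hello-world-large.yaml below is only illustrative) and submit it the same way as the URL above:

# ags submit hello-world-large.yaml    # file name is illustrative

ECI then provisions an instance matching the ecs.ic5.3xlarge specification for the whole pod, so no single container has to carry the large requests/limits itself.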
Wrapping up
After the run completes, you can clean up the workflow resources.
# ags delete hello-world-l26sx
Workflow 'hello-world-l26sx' deleted
# kubectl get pod
No resources found.
As we can see, an ASK cluster has no node resource pool to manage and all pods are created on demand. This matches the task-oriented nature of Argo workflows well: computing resources are allocated flexibly and dynamically as needed, which keeps costs down.
This is original content from the Yunqi Community and may not be reproduced without permission.