A complete guide to .gitlab-ci.yml keywords (2)

Last time we covered script, image, artifacts, tags, cache, stage, when, and only/except. With those keywords it isn't hard to configure a simple pipeline. But for more complex scenarios, such as microservices, pipeline inheritance, and multi-project pipelines, they are not enough on their own. In this article I'll explain several more advanced keywords: before_script, after_script, dependencies, environment, extends, include, interruptible, parallel, rules, trigger, and services.
before_script
The before_script keyword defines commands that run before each job's script, but after artifacts are restored. You can define a global before_script like this:

```yaml
default:
  before_script:
    - echo "Execute this script in all jobs that don't already have a before_script section."
```
It can also be defined in an individual job:

```yaml
job:
  before_script:
    - echo "Execute this script instead of the global before_script."
  script:
    - echo "This script executes after the job's `before_script`"
```
A before_script defined in a job overrides the global before_script.
after_script
after_script is similar to before_script: it defines a multi-line script that runs after the job finishes, even if the job fails. If the job is cancelled or times out, after_script does not run (GitLab is currently planning that feature). It can be defined globally or per job:

```yaml
default:
  after_script:
    - echo "Execute this script in all jobs that don't already have an after_script section."

job1:
  script:
    - echo "This script executes first. When it completes, the global after_script executes."

job:
  script:
    - echo "This script executes first. When it completes, the job's `after_script` executes."
  after_script:
    - echo "Execute this script instead of the global after_script."
```
dependencies
By default, a job downloads all artifacts produced in earlier stages. The dependencies keyword restricts which jobs' artifacts a job downloads. Here's an example:

```yaml
build:osx:
  stage: build
  script: make build:osx
  artifacts:
    paths:
      - binaries/

build:linux:
  stage: build
  script: make build:linux
  artifacts:
    paths:
      - binaries/

test:osx:
  stage: test
  script: make test:osx
  dependencies:
    - build:osx

test:linux:
  stage: test
  script: make test:linux
  dependencies:
    - build:linux

deploy:
  stage: deploy
  script: make deploy
```
From this example you can see that test:osx downloads artifacts only from build:osx, and test:linux only from build:linux, so neither job pulls in artifacts it doesn't need. Note that dependencies only controls artifact downloads; it does not change job ordering, so stage order still applies (the needs keyword is what lets a job start before its stage). Used well, dependencies cuts unnecessary downloads and speeds up the pipeline. The final deploy job still runs only after the four earlier jobs have completed.
environment
The environment keyword declares the environment a job deploys to, identified by a name:

```yaml
deploy to production:
  stage: deploy
  script: git push production HEAD:master
  environment:
    name: production
```
Note that environment does not define environment variables, and it cannot be referenced in script values. This keyword is commonly used with review apps and merge request deployments.
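As a sketch of the review-app pattern (the job names, script bodies, and URL here are illustrative), a deploy job can declare a dynamic environment per branch and pair it with a manual stop job via on_stop:

```yaml
deploy review:
  stage: deploy
  script: echo "Deploy a review app"   # placeholder deploy command
  environment:
    name: review/$CI_COMMIT_REF_SLUG   # one environment per branch
    url: https://$CI_COMMIT_REF_SLUG.example.com
    on_stop: stop review               # link to the teardown job

stop review:
  stage: deploy
  script: echo "Tear down the review app"  # placeholder teardown command
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop                       # marks this job as the stopper
  when: manual
```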
extends
The extends keyword lets one job inherit configuration from another. For example:

```yaml
.tests:
  script: rake test
  stage: test
  only:
    refs:
      - branches

rspec:
  extends: .tests
  script: rake rspec
  only:
    variables:
      - $RSPEC
```
The rspec job inherits from the .tests job. A job whose name starts with a dot (.) is a hidden job and never runs on its own. When rspec extends it, any key defined in both takes rspec's value; keys that exist only in .tests are merged into rspec. The merged result is:

```yaml
rspec:
  script: rake rspec
  stage: test
  only:
    refs:
      - branches
    variables:
      - $RSPEC
```
With this approach you can write a template once and reuse it with small changes, which is very handy when you need to write many similar pipelines.
include
With include, you can import one or more additional YAML files into your CI/CD configuration. You can use it to split a long pipeline into separate files, or to extract configuration shared by several pipelines. The included file's extension must be .yml or .yaml; nothing else is accepted. include supports four methods: local imports a file from the current project; file imports a file from a different project; remote imports a file over the public internet; and template imports a template provided by GitLab.
Here are some examples
```yaml
include:
  - local: '/templates/.gitlab-ci-template.yml'
```
```yaml
include:
  - project: 'my-group/my-project'
    file: '/templates/.gitlab-ci-template.yml'
```
```yaml
include:
  - project: 'my-group/my-project'
    ref: master
    file: '/templates/.gitlab-ci-template.yml'
  - project: 'my-group/my-project'
    ref: v1.0.0
    file: '/templates/.gitlab-ci-template.yml'
  - project: 'my-group/my-project'
    ref: 787123b47f14b552955ca2786bc9542ae66fee5b  # Git SHA
    file: '/templates/.gitlab-ci-template.yml'
```
```yaml
include:
  - remote: 'https://gitlab.com/awesome-project/raw/master/.gitlab-ci-template.yml'
```
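The fourth method, template, pulls in one of the templates GitLab ships with. For example, to include the Auto DevOps template:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml
```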
trigger
trigger handles more complex CI/CD flows, such as multi-project and parent-child pipelines, by defining a downstream pipeline. A job configured with trigger cannot run scripts: it may not define script, before_script, or after_script. This example triggers a multi-project pipeline:

```yaml
rspec:
  stage: test
  script: bundle exec rspec

staging:
  stage: deploy
  trigger: my/deployment
```
After the test job runs, this pipeline triggers a pipeline in the my/deployment project. You can also specify which branch of the downstream project to trigger:
```yaml
rspec:
  stage: test
  script: bundle exec rspec

staging:
  stage: deploy
  trigger:
    project: my/deployment
    branch: stable
```
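For parent-child pipelines within the same project, trigger takes an include pointing at the child pipeline's configuration file instead of a project path. A minimal sketch (the file path here is illustrative):

```yaml
trigger-child:
  stage: deploy
  trigger:
    include: path/to/child-pipeline.yml  # child config stored in this repository
```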
rules
rules specifies when jobs run: a list of expressions decides which jobs execute and which don't, and can also delay a job or allow it to fail. For example:

```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: delayed
      start_in: '3 hours'
      allow_failure: true
```
If the current branch is master, the job runs with a 3-hour delay and is allowed to fail. rules supports these optional clauses:

- if adds or removes a job based on an if expression, similar to only:variables
- changes adds or removes a job based on whether specific files changed, similar to only:changes
- exists adds or removes a job based on whether specific files exist
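A sketch of changes and exists in use (the file paths and script commands are illustrative):

```yaml
docker build:
  script: docker build -t my-image .
  rules:
    - changes:        # run only when the Dockerfile changed
        - Dockerfile
      when: manual
      allow_failure: true

lint dockerfile:
  script: echo "lint the Dockerfile"  # placeholder lint command
  rules:
    - exists:         # run only when a Dockerfile exists in the repo
        - Dockerfile
```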
All the predefined CI/CD variables (branch, pipeline source, merge request, commit, push, web, schedule, and so on) can be used in if, so you can write different rules for different scenarios. Take a look at this example:

```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'
      when: manual
      allow_failure: true
```
This one is simple: one condition and two settings. If the merge request's target branch is master, the job becomes manual and is allowed to fail.
Wrapping up

Once you understand the keywords above, it isn't hard to write a pipeline that handles complex rules and is easy to extend.