Building continuous deployment on containers, and best practices

To understand continuous integration and deployment, you need to understand its components and the relationships between them. The diagram below is the most concise and clear picture of continuous integration and deployment I have ever seen.

(image source)

Continuous deployment:

As shown in the diagram, the development process is as follows:
Programmers download source code from the source library (Source Control), write programs, and commit the code back to the library when they are finished. The Continuous Integration tool downloads the source code from the library, compiles it, and submits the result to the Repository. The Continuous Delivery tool then downloads the program from the Repository, generates release versions, and publishes them to the different running environments (e.g. DEV, QA, UAT, PROD).

In the diagram, the left part is Continuous Integration, which mainly concerns developers; the right part is Continuous Deployment, which mainly concerns testers and operations staff. Continuous Delivery and Continuous Deployment differ slightly if you subdivide them, but we do not make that distinction here and refer to both as continuous deployment. This article focuses on continuous deployment.

Continuous integration and deployment involves the following key players:

  • Source Code Library: Stores source code, commonly Git and SVN.
  • Continuous Integration and Deployment Tools: Responsible for automatically compiling and packaging the code and storing the runnable programs in the repository. Popular tools include Jenkins, GitLab, Travis CI, CircleCI, etc.
  • Repository Manager: The Repository in the diagram; it manages program components. The most common is Nexus, a private repository.

The library manager has two functions:

  • Manage third-party libraries: Applications use many third-party libraries, and different technology stacks require different ones. These usually live in public repositories and are not easy to manage. Companies therefore generally set up a private repository to manage all kinds of third-party software centrally and uniformly, such as a Maven repository (Java), a Docker registry (Docker), and an NPM repository (JavaScript), to ensure the standardization of the company's software.
  • Manage internal program delivery: Every program a company releases to its various environments (e.g. DEV, QA, UAT, PROD) is managed here and assigned a uniform version number, so that every delivery is documented and easy to roll back.
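The "uniform version number" idea can be sketched in a few lines of Go. This is our own illustration, not code from the article; the function name and the tag format (build number plus short commit hash) are assumptions:

```go
package main

import "fmt"

// imageTag composes a traceable version tag for a delivery:
// <repository>:<build number>-<short commit hash>.
// With such a tag, any artifact in the repository can be traced back
// to the exact build and commit, which makes rollback straightforward.
func imageTag(repo, buildNumber, commitSHA string) string {
	short := commitSHA
	if len(short) > 7 {
		short = short[:7]
	}
	return fmt.Sprintf("%s:%s-%s", repo, buildNumber, short)
}

func main() {
	// Values modeled on the example pipeline later in this article.
	fmt.Println(imageTag("jfeng45/jenkins-k8sdemo", "7", "90c57dcd8ff362d01631a54125129090b503364b"))
}
```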

Continuous deployment steps:

Companies have different requirements and steps for continuous deployment, but the process generally includes the following steps:

  • Download source code: Download the source code from a source code library, such as GitHub.
  • Compile code: Required for compiled languages.
  • Test: Test the program.
  • Generate image: This has two steps: build the image, then push it to the image registry.
  • Deploy image: Deploy the generated image to the container environment.

The above is the broad continuous deployment process. The narrow process instead retrieves the runnable program from the repository manager, eliminating the source download and compile steps. However, since not every company has a separate repository manager, this article uses the broad process, which applies to every company.

Continuous deployment example:

Here we show how to accomplish continuous deployment through a concrete example: using Jenkins as the continuous deployment tool to deploy a Go program to a k8s environment.

Our process is basically the narrow process described above, but since there is no Nexus we make one change: the source code is downloaded directly from the source library. The steps are as follows:

  • Download source: Download the source code from GitHub into Jenkins' running environment.
  • Test: This step has no actual content for the moment.
  • Generate image: Build the image and push it to Docker Hub.
  • Deploy image: Deploy the generated image to k8s.

Before you create the Jenkins project, you need to do some preparatory work:

Create Docker Hub Account

An account and an image repository need to be created on Docker Hub in order to upload images. The specific process is not explained here; please consult the relevant documentation.

Create Credentials on Jenkins

You need to set the username and password for accessing Docker Hub, which can then be referenced as variables in the Jenkins script so that the password does not appear in the script in plain text.

After logging in to the Jenkins home page with an administrator account, go to Manage Jenkins → Credentials → System → Global credentials → Add Credentials, and enter your Docker Hub username and password as shown below. The "ID" field is the name you will reference later in the script.
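The masking behavior can be sketched outside Jenkins. The Go snippet below is our own illustration (the function and variable names are ours): secrets enter the program through environment variables, and any occurrence in log output is replaced with "****", which is how bound credentials appear in Console Output.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// maskSecrets replaces any occurrence of the given secrets in a log line
// with "****", the way Jenkins masks bound credentials in Console Output.
func maskSecrets(line string, secrets ...string) string {
	for _, s := range secrets {
		if s != "" {
			line = strings.ReplaceAll(line, s, "****")
		}
	}
	return line
}

func main() {
	// In Jenkins, withCredentials binds these variables; here we set
	// demo values ourselves purely for illustration.
	os.Setenv("DOCKER_HUB_USER", "demo-user")
	os.Setenv("DOCKER_HUB_PASSWORD", "s3cret")

	user := os.Getenv("DOCKER_HUB_USER")
	pass := os.Getenv("DOCKER_HUB_PASSWORD")

	cmd := fmt.Sprintf("docker login -u %s -p %s", user, pass)
	fmt.Println(maskSecrets(cmd, user, pass))
}
```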

Create a Jenkins image with Docker and kubectl pre-installed

The default Jenkins container has neither Docker nor kubectl, so we need to build a new image based on the Jenkins image; the reason is explained in more detail later.
The image file (Dockerfile-modified-jenkins) is as follows:

FROM jenkins/jenkins:lts

USER root

ENV DOCKERVERSION=19.03.4

RUN curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKERVERSION}.tgz \
  && tar xzvf docker-${DOCKERVERSION}.tgz --strip 1 \
                 -C /usr/local/bin docker/docker \
  && rm docker-${DOCKERVERSION}.tgz

RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl \
    && chmod +x ./kubectl \
    && mv ./kubectl /usr/local/bin/kubectl

The above image is based on "jenkins/jenkins:lts" with Docker and kubectl installed to support both tools. The image uses Docker 19.03.4. Only the Docker CLI is installed here, not the Docker engine: at run time, the virtual machine's Docker socket is mounted into the container, and the virtual machine's Docker engine is used. Therefore, it is best to keep the Docker version in the container consistent with the Docker version on the virtual machine.

Use the following command to view the Docker version:

vagrant@ubuntu-xenial:/$ docker version

For more information see Configure a CI/CD pipeline with Jenkins on Kubernetes

Now that the preparations are complete, we can create the Jenkins project.

Jenkins script:

The project is created on the Jenkins home page and named "jenkins-k8sdemo". Its most important part is the script, which is stored in the same source library as the Go program; the file is also named "jenkins-k8sdemo". The project's script page is shown below.

If you are unfamiliar with installing and creating Jenkins projects, see Installing Jenkins on k8s and common problems

Here is the jenkins-k8sdemo script file:

def POD_LABEL = "k8sdemopod-${UUID.randomUUID().toString()}"
podTemplate(label: POD_LABEL, cloud: 'kubernetes', containers: [
    containerTemplate(name: 'modified-jenkins', image: 'jfeng45/modified-jenkins:1.0', ttyEnabled: true, command: 'cat')
  ],
  volumes: [
     hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
  ]) {

    node(POD_LABEL) {
       def kubBackendDirectory = "/script/kubernetes/backend"
       stage('Checkout') {
            container('modified-jenkins') {
                sh 'echo get source from github'
                git 'https://github.com/jfeng45/k8sdemo'
            }
          }
       stage('Build image') {
            def imageName = "jfeng45/jenkins-k8sdemo:${env.BUILD_NUMBER}"
            def dockerDirectory = "${kubBackendDirectory}/docker/Dockerfile-k8sdemo-backend"
             container('modified-jenkins') {
               withCredentials([[$class: 'UsernamePasswordMultiBinding',
                 credentialsId: 'dockerhub',
                 usernameVariable: 'DOCKER_HUB_USER',
                 passwordVariable: 'DOCKER_HUB_PASSWORD']]) {
                 sh """
                   docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD}
                   docker build -f ${WORKSPACE}${dockerDirectory} -t ${imageName} .
                   docker push ${imageName}
                   """
               }
             }
           }
       stage('Deploy') {
           container('modified-jenkins') {
               sh "kubectl apply -f ${WORKSPACE}${kubBackendDirectory}/backend-deployment.yaml"
               sh "kubectl apply -f ${WORKSPACE}${kubBackendDirectory}/backend-service.yaml"
             }
       }
    }
}


Let's look at the code one by one:

Set the container image:

podTemplate(label: POD_LABEL, cloud: 'kubernetes', containers: [
    containerTemplate(name: 'modified-jenkins', image: 'jfeng45/modified-jenkins:1.0', ttyEnabled: true, command: 'cat')
  ],
  volumes: [
     hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
  ])

This sets the container image of the Jenkins child-node Pod to "jfeng45/modified-jenkins:1.0", which we created in the previous step; all steps in the script run in this image. "volumes:" mounts the virtual machine's Docker socket into the container so that the Jenkins child node can use the virtual machine's Docker engine.

For Jenkins script commands and setting mount volumes, see jenkinsci/kubernetes-plugin

Create the image:

The following code builds the Docker image of the Go program. Instead of using the Docker plug-in, we call the Docker command directly; the benefits of this are described later. It references the Docker Hub credentials we set up earlier. In the script, we log in to Docker Hub, build an image from the source code downloaded from GitHub in the previous step, and push the image to Docker Hub. "${WORKSPACE}" is a Jenkins predefined variable; the source code downloaded from GitHub is stored in "${WORKSPACE}".

stage('Build image') {

            def imageName = "jfeng45/jenkins-k8sdemo:${env.BUILD_NUMBER}"
            def dockerDirectory = "${kubBackendDirectory}/docker/Dockerfile-k8sdemo-backend"
             container('modified-jenkins') {
               withCredentials([[$class: 'UsernamePasswordMultiBinding',
                 credentialsId: 'dockerhub',
                 usernameVariable: 'DOCKER_HUB_USER',
                 passwordVariable: 'DOCKER_HUB_PASSWORD']]) {
                 sh """
                   docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD}
                   docker build -f ${WORKSPACE}${dockerDirectory} -t ${imageName} .
                   docker push ${imageName}
                   """
               }
             }
           }

If you want to know more about the Jenkins command, see Set Up a Jenkins CI/CD Pipeline with Kubernetes

Instead of writing a new image file for the Go program, we reuse the image file that k8s already uses to deploy the Go program. Its path is "script/kubernetes/backend/docker/Dockerfile-k8sdemo-backend", and its code is as follows. The benefits of this are discussed later.

# vagrant@ubuntu-xenial:~/app/k8sdemo/script/kubernetes/backend$
# docker build -t k8sdemo-backend .

FROM golang:latest as builder

# Set the Current Working Directory inside the container
WORKDIR /app

COPY go.mod go.sum ./

RUN go mod download

COPY . .

WORKDIR /app/cmd

# Build the Go app
#RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main.exe

RUN go build -o main.exe

######## Start a new stage from scratch #######
FROM alpine:latest

RUN apk --no-cache add ca-certificates

WORKDIR /root/

RUN mkdir /lib64 && ln -s /lib/libc.musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2

# Copy the Pre-built binary file from the previous stage
COPY --from=builder /app/cmd/main.exe .

# Command to run the executable
# CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
CMD exec /bin/sh -c "trap : TERM INT; (while true; do sleep 1000; done) & wait"

For more information about Go mirror files, see Create optimized Go image files and trampled pits

Deploy the image:

The Go program is deployed to k8s below. Instead of using the kubectl plug-in, we invoke the existing k8s deployment and service configuration files directly with the kubectl command (the generated Go image is referenced in those files); the benefits are described later.

 stage('Deploy') {
           container('modified-jenkins') {
               sh "kubectl apply -f ${WORKSPACE}${kubBackendDirectory}/backend-deployment.yaml"
               sh "kubectl apply -f ${WORKSPACE}${kubBackendDirectory}/backend-service.yaml"
             }
       }

For more information on k8s deployment and service profiles, see What modifications do you need to make to migrate your application to k8s?

Why is Declarative not used?

There are two ways to write a Pipeline script, "Scripted Pipeline" and "Declarative Pipeline"; we use the first. "Declarative Pipeline" is the newer method. I originally started in Declarative mode but could not get it to work, switched to "Scripted Pipeline", and succeeded. Later I found out how to set things up in Declarative mode, especially how to mount the volume, but it turned out to be much more complicated than "Scripted Pipeline", so I took the lazy route and did not change it back.

If you want to know how to set a mount volume in Declarative mode, see Jenkins Pipeline Kubernetes Agent shared Volumes

Automatically execute the project:

For now, projects in Jenkins are started manually. To start a project automatically, you need to create a webhook. Both GitHub and Docker Hub support webhooks and have setup options on their pages. A webhook is a callback URL that GitHub or Docker Hub invokes whenever new code or a new image is pushed; set the URL to the Jenkins project's address so that the associated project starts automatically.

Test results:

Now that the Jenkins project is fully configured, run the project and verify the results. After starting the project, look at "Console Output". Below are some of the outputs (the full output is too long; see the appendix) indicating a successful deployment.

. . . 
+ kubectl apply -f /home/jenkins/workspace/test1/script/kubernetes/backend/backend-deployment.yaml
deployment.apps/k8sdemo-backend-deployment created
[Pipeline] sh
+ kubectl apply -f /home/jenkins/workspace/test1/script/kubernetes/backend/backend-service.yaml
service/k8sdemo-backend-service created
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS

View the results:
Get Pod Name:

vagrant@ubuntu-xenial:/home$ kubectl get pod
NAME                                           READY   STATUS    RESTARTS   AGE
envar-demo                                     1/1     Running   15         32d
k8sdemo-backend-deployment-6b99dc6b8c-8kxt9    1/1     Running   0          50s
k8sdemo-database-deployment-578fc88c88-mm6x8   1/1     Running   9          20d
k8sdemo-jenkins-deployment-675dd574cb-r57sb    1/1     Running   0          2d23h

Log in to Pod and run the program:

vagrant@ubuntu-xenial:/home$ kubectl exec -ti k8sdemo-backend-deployment-6b99dc6b8c-8kxt9 -- /bin/sh
~ # ./main.exe
DEBU[0000] connect to database
DEBU[0000] dataSourceName:dbuser:dbuser@tcp(k8sdemo-database-service:3306)/service_config?charset=utf8
DEBU[0000] FindAll()
DEBU[0000] created=2019-10-21
DEBU[0000] find user:{1 Tony IT 2019-10-21}
DEBU[0000] find user list:[{1 Tony IT 2019-10-21}]
DEBU[0000] user lst:[{1 Tony IT 2019-10-21}]

The result is correct.

Jenkins principle

The example is complete, so let's explore best practices. Before that, it is important to understand how Jenkins works.

Executable Command

I always used to wonder: which commands can Jenkins execute through a shell? Jenkins is different from Docker and k8s, which each have their own fixed set of commands you can simply learn. Jenkins works by integrating with other systems, so the commands it can execute depend on those systems, which makes it hard to know which commands will work and which will not. You need to understand how Jenkins works to get an answer. When Jenkins executes a script, the primary node automatically creates a child node (a Docker container), and all Jenkins commands are executed in that container. So the commands you can run are determined by the container. Generally speaking, you can run Linux commands through a shell. Then the following questions come up:

  1. Why can't I use Bash?

    Because the child node's container might be a compact Linux distribution, such as Alpine, which has no Bash.

  2. Why can't I run the docker or kubectl command?

    Because the default container is jenkinsci/jnlp-slave, which has neither Docker nor kubectl pre-installed. You can execute these commands by replacing the default container with your own container that has the software pre-installed.

How to Share Files

A Jenkins project usually runs in several stages, and files, such as the source code you download, need to be shared among them. How? Jenkins allocates a WORKSPACE (disk space) for each project; it stores all files downloaded from source libraries and elsewhere, and different stages can share files through the WORKSPACE.

For more information on WORKSPACE, see Jenkins Project Artifacts and Workspace

Best Practices

To summarize best practices, you need to understand the role and position of continuous deployment in the overall development process: it plays the key role of linking the various components together. Programs are deployed by k8s and Docker, so the deployment scripts already live in k8s and are maintained there. We do not want to maintain a similar set of scripts in Jenkins, so the best approach is to compress the Jenkins scripts to a minimum and call the k8s scripts directly as much as possible.

Additionally, do not configure on a page what you can write as code. Only code can be executed repeatedly with guaranteed, stable results; page configurations cannot be ported and do not guarantee the same result each time.

Minimize the use of plug-ins

Jenkins has many plug-ins; basically, everything you want to do has a plug-in. For example, if you need Docker functionality there is a "Docker Pipeline" plug-in, and if you want k8s functionality there is a "kubectl" plug-in. But plug-ins can cause a lot of problems.

  • First, each plug-in has its own settings (typically set on the Jenkins plug-in pages), which are incompatible with other continuous deployment tools. These settings have to be discarded if you migrate to another continuous deployment tool in the future.
  • Second, each plug-in has its own command format, so you need to learn a new set of commands.
  • Third, these plug-ins tend to support only a few features, limiting what you can do.

For example, with the plug-in you create a Docker image with the following command, which builds an image named "jfeng45/jenkins-k8sdemo"; by default it uses the Dockerfile in the project's root directory.

app = docker.build("jfeng45/jenkins-k8sdemo")

But the command for building a Docker image has many parameter options. For example, if your image file is not named Dockerfile and is not in the project root, how should you write it? Earlier versions of the plug-in did not support this; later versions do, but it is still not very convenient, and you have to learn a new set of commands. The best way is to use the Docker command directly, which solves all three problems mentioned above. The answer lies in the Jenkins principle described earlier: most plug-ins are not needed at all. You only need to create a Jenkins child-node container with the corresponding software pre-installed.

Here is a comparison of a script that uses the plug-in and one that does not. The plug-in-free script looks longer because the plug-in integrates more neatly with Jenkins' credential settings; but apart from this minor disadvantage, the plug-in-free script is far superior in every other way.

Scripts using plug-ins (with plug-in commands):

stage('Create Docker images') {
  container('docker') {
      app = docker.build("jfeng45/codedemo", "-f ${WORKSPACE}/script/kubernetes/backend/docker/Dockerfile-k8sdemo-test .")
      docker.withRegistry('', 'dockerhub') {
          // Push image and tag it with our build number for versioning purposes.
          app.push("${env.BUILD_NUMBER}")
      }
    }
  }

Scripts that do not use plug-ins (using the Docker command directly):

stage('Create a docker image') {
     def imageName = "jfeng45/codedemo:${env.BUILD_NUMBER}"
     def dockerDirectory = "${kubBackendDirectory}/docker/Dockerfile-k8sdemo-backend"
      container('modified-jenkins') {
        withCredentials([[$class: 'UsernamePasswordMultiBinding',
          credentialsId: 'dockerhub',
          usernameVariable: 'DOCKER_HUB_USER',
          passwordVariable: 'DOCKER_HUB_PASSWORD']]) {
          sh """
            docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD}
            docker build -f ${WORKSPACE}${dockerDirectory} -t ${imageName} .
            docker push ${imageName}
            """
        }
      }
    }

Use k8s and Docker scripts as much as possible

For example, if we want to create an image of an application, we can either write a Dockerfile and call it from the Jenkins script, or write the image-building steps directly in the Jenkins script. The former is the better method: because Docker and k8s are de facto standards, their scripts are easy to port.

The less code in the Jenkins script, the better

If you agree with the first two principles, then this one is logical for the same reason.

Common problems:

1. Variables should be placed in double quotes
Jenkins scripts can use either single or double quotes, but if you reference variables inside the quotes, use double quotes; single quotes prevent variable interpolation.

The correct command:

sh "kubectl apply -f ${WORKSPACE}${kubBackendDirectory}/backend-deployment.yaml"

Wrong command:

sh 'kubectl apply -f ${WORKSPACE}${kubBackendDirectory}/backend-deployment.yaml'

2. docker not found

If there is no Docker in the Jenkins container but you call the Docker command anyway, the following error appears in Console Output:

+ docker inspect -f . k8sdemo-backend:latest
/var/jenkins_home/workspace/k8sdec@2@tmp/durable-01e26997/script.sh: 1:     /var/jenkins_home/workspace/k8sdec@2@tmp/durable-01e26997/script.sh: docker:     not found

3. Jenkins is down

While debugging Jenkins, I built a new image, uploaded it to Docker Hub, and then noticed that Jenkins was down. Checking the Pod revealed the problem: k8s could not find the Jenkins image (it had disappeared from disk). Because the Jenkins deployment file is set to "imagePullPolicy: Never", the image is not automatically re-downloaded once it is gone. The cause turned out to be that Vagrant's default disk size is 10G; when space ran low, other images were automatically deleted from the disk to make room, and Jenkins' image was deleted as a result. The solution was to expand Vagrant's disk size.

Below is the modified Vagrantfile, which changes the disk size to 16GB.

Vagrant.configure(2) do |config|
     . . . 
     config.vm.box = "ubuntu/xenial64"
     config.disksize.size = '16GB'
     . . . 
end

For details, see How can I increase disk size on a Vagrant VM?

Source code:

Full source code: https://github.com/jfeng45/k8sdemo

Below are the sections of the project related to this article:

Indexes

  1. Nexus Platform Overview
  2. Configure a CI/CD pipeline with Jenkins on Kubernetes
  3. Installing Jenkins on k8s and common problems
  4. jenkinsci/kubernetes-plugin
  5. Jenkins Pipeline Kubernetes Agent shared Volumes
  6. Set Up a Jenkins CI/CD Pipeline with Kubernetes
  7. Create optimized Go image files and trampled pits
  8. What modifications do you need to make to migrate your application to k8s?
  9. Jenkins Project Artifacts and Workspace
  10. How can I increase disk size on a Vagrant VM?

Appendix:

Here is the complete Console Output after the Jenkins project has run:

Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
'k8sdemopod-030ed100-cb28-4770-b6de-c491970e5baa-twb8s-k9pn3' is offline
Agent k8sdemopod-030ed100-cb28-4770-b6de-c491970e5baa-twb8s-k9pn3 is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (k8sdemopod-030ed100-cb28-4770-b6de-c491970e5baa): 
* [modified-jenkins] jfeng45/modified-jenkins:1.0

Running on k8sdemopod-030ed100-cb28-4770-b6de-c491970e5baa-twb8s-k9pn3 in /home/jenkins/workspace/jenkins-k8sdemo
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Checkout)
[Pipeline] container
[Pipeline] {
[Pipeline] sh
+ echo get source from github
get source from github
[Pipeline] git
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/jfeng45/k8sdemo
 > git init /home/jenkins/workspace/jenkins-k8sdemo # timeout=10
Fetching upstream changes from https://github.com/jfeng45/k8sdemo
 > git --version # timeout=10
 > git fetch --tags --force --progress -- https://github.com/jfeng45/k8sdemo +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/jfeng45/k8sdemo # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/jfeng45/k8sdemo # timeout=10
Fetching upstream changes from https://github.com/jfeng45/k8sdemo
 > git fetch --tags --force --progress -- https://github.com/jfeng45/k8sdemo +refs/heads/*:refs/remotes/origin/*
Checking out Revision 90c57dcd8ff362d01631a54125129090b503364b (refs/remotes/origin/master)
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 90c57dcd8ff362d01631a54125129090b503364b
 > git branch -a -v --no-abbrev # timeout=10
 > git checkout -b master 90c57dcd8ff362d01631a54125129090b503364b
Commit message: "added jenkins continous deployment files"
[Pipeline] }
 > git rev-list --no-walk 90c57dcd8ff362d01631a54125129090b503364b # timeout=10
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build image)
[Pipeline] container
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $DOCKER_HUB_USER or $DOCKER_HUB_PASSWORD
[Pipeline] {
[Pipeline] sh
+ docker login -u **** -p ****
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
+ docker build -f /home/jenkins/workspace/jenkins-k8sdemo/script/kubernetes/backend/docker/Dockerfile-k8sdemo-backend -t ****/jenkins-k8sdemo:7 .
Sending build context to Docker daemon  218.6kB

Step 1/13 : FROM golang:latest as builder
 ---> dc7582e06f8e
Step 2/13 : WORKDIR /app
 ---> Running in c5770704333e
Removing intermediate container c5770704333e
 ---> 73445078c82d
Step 3/13 : COPY go.mod go.sum ./
 ---> 6762344c7bc8
Step 4/13 : RUN go mod download
 ---> Running in 56a1f253c3f5
go: finding github.com/davecgh/go-spew v1.1.1
go: finding github.com/go-sql-driver/mysql v1.4.1
go: finding github.com/konsorten/go-windows-terminal-sequences v1.0.1
go: finding github.com/pkg/errors v0.8.1
go: finding github.com/pmezard/go-difflib v1.0.0
go: finding github.com/sirupsen/logrus v1.4.2
go: finding github.com/stretchr/objx v0.1.1
go: finding github.com/stretchr/testify v1.2.2
go: finding golang.org/x/sys v0.0.0-20190422165155-953cdadca894
Removing intermediate container 56a1f253c3f5
 ---> 455ef98244eb
Step 5/13 : COPY . .
 ---> 092444c8a5ef
Step 6/13 : WORKDIR /app/cmd
 ---> Running in 558240a3dcb1
Removing intermediate container 558240a3dcb1
 ---> 044e01b8184b
Step 7/13 : RUN go build -o main.exe
 ---> Running in 648899ba522c
Removing intermediate container 648899ba522c
 ---> 69f6652bc706
Step 8/13 : FROM alpine:latest
 ---> 965ea09ff2eb
Step 9/13 : RUN apk --no-cache add ca-certificates
 ---> Using cache
 ---> a27265887a1e
Step 10/13 : WORKDIR /root/
 ---> Using cache
 ---> b9c048c97f07
Step 11/13 : RUN mkdir /lib64 && ln -s /lib/libc.musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2
 ---> Using cache
 ---> 95a2b77e3e0a
Step 12/13 : COPY --from=builder /app/cmd/main.exe .
 ---> Using cache
 ---> c5dc6dfdf037
Step 13/13 : CMD exec /bin/sh -c "trap : TERM INT; (while true; do sleep 1000; done) & wait"
 ---> Using cache
 ---> b141558cb0f3
Successfully built b141558cb0f3
Successfully tagged ****/jenkins-k8sdemo:7
+ docker push ****/jenkins-k8sdemo:7
The push refers to repository [docker.io/****/jenkins-k8sdemo]
0e5809dd35f7: Preparing
8861feb71103: Preparing
5b63d4bd63b4: Preparing
77cae8ab23bf: Preparing
77cae8ab23bf: Mounted from ****/codedemo
8861feb71103: Mounted from ****/codedemo
5b63d4bd63b4: Mounted from ****/codedemo
0e5809dd35f7: Mounted from ****/codedemo
7: digest: sha256:95c780bb08793712cd2af668c9d4529e17c99e58dfb05ffe8df6a762f245ce10 size: 1156
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] container
[Pipeline] {
[Pipeline] sh
+ kubectl apply -f /home/jenkins/workspace/jenkins-k8sdemo/script/kubernetes/backend/backend-deployment.yaml
deployment.apps/k8sdemo-backend-deployment created
[Pipeline] sh
+ kubectl apply -f /home/jenkins/workspace/jenkins-k8sdemo/script/kubernetes/backend/backend-service.yaml
service/k8sdemo-backend-service created
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS



Added by floR on Fri, 08 Nov 2019 05:03:41 +0200