kubeadm source code analysis


To be honest, the quality of kubeadm's code is not great.

To start with the key points, these are the core things kubeadm does:

  • kubeadm generates certificates under /etc/kubernetes/pki
  • kubeadm generates static pod YAML manifests, all under /etc/kubernetes/manifests
  • kubeadm generates the kubelet configuration, kubectl configuration, etc. under /etc/kubernetes
  • kubeadm starts DNS and kube-proxy through client-go

<!--more-->

kubeadm init

The code entry point is cmd/kubeadm/app/cmd/init.go. It is worth taking a look at cobra first.
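
For readers who have not used cobra, a minimal sketch of how such a command is wired up could look like this (an illustration only, not kubeadm's actual wiring):

package main

import (
    "fmt"

    "github.com/spf13/cobra"
)

func main() {
    // A subcommand with a Run function, roughly how kubeadm registers "init".
    initCmd := &cobra.Command{
        Use:   "init",
        Short: "Run this in order to set up the Kubernetes master",
        Run: func(cmd *cobra.Command, args []string) {
            fmt.Println("running the init workflow")
        },
    }

    root := &cobra.Command{Use: "kubeadm"}
    root.AddCommand(initCmd)
    if err := root.Execute(); err != nil {
        fmt.Println(err)
    }
}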

Find the Run function there and follow the main flow:

  1. If the certificates do not exist, create them. This means that if we already have our own certificates we can simply put them under /etc/kubernetes/pki. Let's look at how the certificates are generated below.
    if res, _ := certsphase.UsingExternalCA(i.cfg); !res {
        if err := certsphase.CreatePKIAssets(i.cfg); err != nil {
            return err
        }
    }
  2. Create the kubeconfig files
        if err := kubeconfigphase.CreateInitKubeConfigFiles(kubeConfigDir, i.cfg); err != nil {
            return err
        }
  3. Create the static pod manifest files. etcd, the apiserver, the controller-manager, and the scheduler are all created here. Note that if the configuration file already specifies etcd endpoints, the local etcd manifest is not created, so we can run our own etcd cluster instead of the default single-node etcd, which is very useful.
controlplanephase.CreateInitStaticPodManifestFiles(manifestDir, i.cfg)
if len(i.cfg.Etcd.Endpoints) == 0 {
    if err := etcdphase.CreateLocalEtcdStaticPodManifestFile(manifestDir, i.cfg); err != nil {
        return fmt.Errorf("error creating local etcd static pod manifest file: %v", err)
    }
}
  4. Wait for the apiserver and the kubelet to start successfully. This is where we hit the familiar error that the image cannot be pulled. In fact, kubelet sometimes reports this error for other reasons, which misleads people into thinking the image really cannot be pulled.
if err := waitForAPIAndKubelet(waiter); err != nil {
    ctx := map[string]string{
        "Error":                  fmt.Sprintf("%v", err),
        "APIServerImage":         images.GetCoreImage(kubeadmconstants.KubeAPIServer, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
        "ControllerManagerImage": images.GetCoreImage(kubeadmconstants.KubeControllerManager, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
        "SchedulerImage":         images.GetCoreImage(kubeadmconstants.KubeScheduler, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
    }

    kubeletFailTempl.Execute(out, ctx)

    return fmt.Errorf("couldn't initialize a Kubernetes cluster")
}
  5. Label and taint the master node, so if you want pods to be scheduled onto the master you can remove the taint.
if err := markmasterphase.MarkMaster(client, i.cfg.NodeName); err != nil {
    return fmt.Errorf("error marking master: %v", err)
}
  6. Generate the bootstrap token
if err := nodebootstraptokenphase.UpdateOrCreateToken(client, i.cfg.Token, false, i.cfg.TokenTTL.Duration, kubeadmconstants.DefaultTokenUsages, []string{kubeadmconstants.NodeBootstrapTokenAuthGroup}, tokenDescription); err != nil {
    return fmt.Errorf("error updating or creating token: %v", err)
}
  7. Call client-go to create the DNS and kube-proxy addons
if err := dnsaddonphase.EnsureDNSAddon(i.cfg, client); err != nil {
    return fmt.Errorf("error ensuring dns addon: %v", err)
}

if err := proxyaddonphase.EnsureProxyAddon(i.cfg, client); err != nil {
    return fmt.Errorf("error ensuring proxy addon: %v", err)
}

A gripe from the author: this code reads like a mindless, linear script. If I were writing it I would abstract it into an interface with methods like RenderConf, Save, Run, Clean, and so on, and have DNS, kube-proxy, and the other components implement it. A related problem is that the DNS and kube-proxy configuration is not rendered at this point, perhaps because they are not static pods, and this ties into the kubeadm join bug mentioned at the end.
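
A rough sketch of the abstraction meant here (the method names come from the paragraph above; everything else is illustrative, not kubeadm code):

// Each component (certificates, control-plane static pods, DNS, kube-proxy, ...)
// would implement this interface, and init/join would simply iterate over a []Phase.
type Phase interface {
    RenderConf(cfg *kubeadmapi.MasterConfiguration) error // render certs, manifests, configmaps
    Save(dir string) error                                // write the rendered artifacts to disk
    Run() error                                           // start or apply the component
    Clean() error                                         // roll back / reset
}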

Certificate Generation

These functions are called in a loop. We only need to look at one or two of them; the rest are almost identical.

certActions := []func(cfg *kubeadmapi.MasterConfiguration) error{
    CreateCACertAndKeyfiles,
    CreateAPIServerCertAndKeyFiles,
    CreateAPIServerKubeletClientCertAndKeyFiles,
    CreateServiceAccountKeyAndPublicKeyFiles,
    CreateFrontProxyCACertAndKeyFiles,
    CreateFrontProxyClientCertAndKeyFiles,
}

Root certificate generation:


// Returns the root CA certificate and its private key
func NewCACertAndKey() (*x509.Certificate, *rsa.PrivateKey, error) {

    caCert, caKey, err := pkiutil.NewCertificateAuthority()
    if err != nil {
        return nil, nil, fmt.Errorf("failure while generating CA certificate and key: %v", err)
    }

    return caCert, caKey, nil
}

Two functions from the k8s.io/client-go/util/cert library do the work: one generates the key, the other generates the certificate:

key, err := certutil.NewPrivateKey()
config := certutil.Config{
    CommonName: "kubernetes",
}
cert, err := certutil.NewSelfSignedCACert(config, key)

We can also fill in some other certificate information in config:

type Config struct {
    CommonName   string
    Organization []string
    AltNames     AltNames
    Usages       []x509.ExtKeyUsage
}

Private key generation is just a wrapper around the standard crypto/rsa library:

    "crypto/rsa"
    "crypto/x509"
func NewPrivateKey() (*rsa.PrivateKey, error) {
    return rsa.GenerateKey(cryptorand.Reader, rsaKeySize)
}

From the signing code you can see that the root certificate only carries the CommonName; Organization is effectively left unset:

func NewSelfSignedCACert(cfg Config, key *rsa.PrivateKey) (*x509.Certificate, error) {
    now := time.Now()
    tmpl := x509.Certificate{
        SerialNumber: new(big.Int).SetInt64(0),
        Subject: pkix.Name{
            CommonName:   cfg.CommonName,
            Organization: cfg.Organization,
        },
        NotBefore:             now.UTC(),
        NotAfter:              now.Add(duration365d * 10).UTC(),
        KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
        IsCA: true,
    }

    certDERBytes, err := x509.CreateCertificate(cryptorand.Reader, &tmpl, &tmpl, key.Public(), key)
    if err != nil {
        return nil, err
    }
    return x509.ParseCertificate(certDERBytes)
}

Write it into a file after it is generated:

pkiutil.WriteCertAndKey(pkiDir, baseName, cert, key)
certutil.WriteCert(certificatePath, certutil.EncodeCertPEM(cert))

Here the standard encoding/pem library does the encoding:

encoding/pem

func EncodeCertPEM(cert *x509.Certificate) []byte {
    block := pem.Block{
        Type:  CertificateBlockType,
        Bytes: cert.Raw,
    }
    return pem.EncodeToMemory(&block)
}
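
Putting these helpers together, a minimal sketch of generating a CA yourself and dropping it under /etc/kubernetes/pki (so kubeadm picks it up instead of generating its own) could look roughly like this; it only uses the certutil functions shown above, and the file names are assumptions:

import (
    "io/ioutil"

    certutil "k8s.io/client-go/util/cert"
)

// writeSelfSignedCA generates a self-signed CA and writes ca.crt / ca.key into dir,
// mirroring what pkiutil.WriteCertAndKey does inside kubeadm.
func writeSelfSignedCA(dir string) error {
    key, err := certutil.NewPrivateKey()
    if err != nil {
        return err
    }
    cert, err := certutil.NewSelfSignedCACert(certutil.Config{CommonName: "kubernetes"}, key)
    if err != nil {
        return err
    }
    if err := ioutil.WriteFile(dir+"/ca.crt", certutil.EncodeCertPEM(cert), 0644); err != nil {
        return err
    }
    return ioutil.WriteFile(dir+"/ca.key", certutil.EncodePrivateKeyPEM(key), 0600)
}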

Then we look at apiserver's certificate generation:

caCert, caKey, err := loadCertificateAuthorithy(cfg.CertificatesDir, kubeadmconstants.CACertAndKeyBaseName)
//Generating apiserver certificates from root certificates
apiCert, apiKey, err := NewAPIServerCertAndKey(cfg, caCert, caKey)

The important thing to note here is AltNames: every address and domain name that will be used to access the master must be added, corresponding to the apiServerCertSANs field in the configuration file. The rest is similar to the root certificate.

config := certutil.Config{
    CommonName: kubeadmconstants.APIServerCertCommonName,
    AltNames:   *altNames,
    Usages:     []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
}
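
For example, when the master sits behind a load balancer or virtual IP, the extra SANs end up in AltNames roughly like this (the domain and IPs are made-up placeholders; in kubeadm they come from apiServerCertSANs and the advertise/service addresses):

// AltNames is the k8s.io/client-go/util/cert type; net.ParseIP is from the standard library.
altNames := &certutil.AltNames{
    DNSNames: []string{
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster.local",
        "apiserver.example.com", // hypothetical external domain
    },
    IPs: []net.IP{
        net.ParseIP("10.96.0.1"),     // apiserver service cluster IP
        net.ParseIP("192.168.0.100"), // hypothetical virtual IP for HA
    },
}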

Create the kubeconfig files

You can see which files get created:

return createKubeConfigFiles(
    outDir,
    cfg,
    kubeadmconstants.AdminKubeConfigFileName,
    kubeadmconstants.KubeletKubeConfigFileName,
    kubeadmconstants.ControllerManagerKubeConfigFileName,
    kubeadmconstants.SchedulerKubeConfigFileName,
)

kubeadm wraps two functions for rendering kubeconfig files.
The difference is whether a token is embedded in the kubeconfig. For example, if you need a token to log in to the dashboard, or a token to call the API, generate the token-based config.
The generated .conf files differ essentially only in the ClientName, and therefore in the client certificate: the ClientName is encoded into the certificate, and k8s extracts it from there as the user identity.

The takeaway is that when we build multi-tenancy we need to do the same thing, and then bind roles to the tenants.

return kubeconfigutil.CreateWithToken(
    spec.APIServer,
    "kubernetes",
    spec.ClientName,
    certutil.EncodeCertPEM(spec.CACert),
    spec.TokenAuth.Token,
), nil

return kubeconfigutil.CreateWithCerts(
    spec.APIServer,
    "kubernetes",
    spec.ClientName,
    certutil.EncodeCertPEM(spec.CACert),
    certutil.EncodePrivateKeyPEM(clientKey),
    certutil.EncodeCertPEM(clientCert),
), nil

Then fill in the Config structure and write it into the file.

"k8s.io/client-go/tools/clientcmd/api
return &clientcmdapi.Config{
    Clusters: map[string]*clientcmdapi.Cluster{
        clusterName: {
            Server: serverURL,
            CertificateAuthorityData: caCert,
        },
    },
    Contexts: map[string]*clientcmdapi.Context{
        contextName: {
            Cluster:  clusterName,
            AuthInfo: userName,
        },
    },
    AuthInfos:      map[string]*clientcmdapi.AuthInfo{},
    CurrentContext: contextName,
}
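
The write step itself can be sketched with client-go directly (kubeadm wraps this in its kubeconfigutil package; the helper name and path below are just for illustration):

import (
    "k8s.io/client-go/tools/clientcmd"
    clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// writeKubeConfig serializes the Config to YAML and writes it to disk,
// e.g. /etc/kubernetes/admin.conf.
func writeKubeConfig(cfg *clientcmdapi.Config, path string) error {
    return clientcmd.WriteToFile(*cfg, path)
}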

Create the static pod YAML files

Here the pod specs for the apiserver, controller-manager, and scheduler are returned:

specs := GetStaticPodSpecs(cfg, k8sVersion)
staticPodSpecs := map[string]v1.Pod{
    kubeadmconstants.KubeAPIServer: staticpodutil.ComponentPod(v1.Container{
        Name:          kubeadmconstants.KubeAPIServer,
        Image:         images.GetCoreImage(kubeadmconstants.KubeAPIServer, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
        Command:       getAPIServerCommand(cfg, k8sVersion),
        VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeAPIServer)),
        LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeAPIServer, int(cfg.API.BindPort), "/healthz", v1.URISchemeHTTPS),
        Resources:     staticpodutil.ComponentResources("250m"),
        Env:           getProxyEnvVars(),
    }, mounts.GetVolumes(kubeadmconstants.KubeAPIServer)),
    kubeadmconstants.KubeControllerManager: staticpodutil.ComponentPod(v1.Container{
        Name:          kubeadmconstants.KubeControllerManager,
        Image:         images.GetCoreImage(kubeadmconstants.KubeControllerManager, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
        Command:       getControllerManagerCommand(cfg, k8sVersion),
        VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeControllerManager)),
        LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeControllerManager, 10252, "/healthz", v1.URISchemeHTTP),
        Resources:     staticpodutil.ComponentResources("200m"),
        Env:           getProxyEnvVars(),
    }, mounts.GetVolumes(kubeadmconstants.KubeControllerManager)),
    kubeadmconstants.KubeScheduler: staticpodutil.ComponentPod(v1.Container{
        Name:          kubeadmconstants.KubeScheduler,
        Image:         images.GetCoreImage(kubeadmconstants.KubeScheduler, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
        Command:       getSchedulerCommand(cfg),
        VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeScheduler)),
        LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeScheduler, 10251, "/healthz", v1.URISchemeHTTP),
        Resources:     staticpodutil.ComponentResources("100m"),
        Env:           getProxyEnvVars(),
    }, mounts.GetVolumes(kubeadmconstants.KubeScheduler)),
}

//Get a specific version of the image
func GetCoreImage(image, repoPrefix, k8sVersion, overrideImage string) string {
    if overrideImage != "" {
        return overrideImage
    }
    kubernetesImageTag := kubeadmutil.KubernetesVersionToImageTag(k8sVersion)
    etcdImageTag := constants.DefaultEtcdVersion
    etcdImageVersion, err := constants.EtcdSupportedVersion(k8sVersion)
    if err == nil {
        etcdImageTag = etcdImageVersion.String()
    }
    return map[string]string{
        constants.Etcd:                  fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "etcd", runtime.GOARCH, etcdImageTag),
        constants.KubeAPIServer:         fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-apiserver", runtime.GOARCH, kubernetesImageTag),
        constants.KubeControllerManager: fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-controller-manager", runtime.GOARCH, kubernetesImageTag),
        constants.KubeScheduler:         fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-scheduler", runtime.GOARCH, kubernetesImageTag),
    }[image]
}
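// For reference, a hypothetical call (repository and version are example values):
//   images.GetCoreImage(kubeadmconstants.KubeAPIServer, "gcr.io/google_containers", "v1.9.3", "")
// resolves on amd64 to "gcr.io/google_containers/kube-apiserver-amd64:v1.9.3";
// a non-empty overrideImage (cfg.UnifiedControlPlaneImage) short-circuits the lookup.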
// Then the pod spec is simply written to disk:
staticpodutil.WriteStaticPodToDisk(componentName, manifestDir, spec)

Creating etcd works the same way, so I won't repeat it.

Waiting for kubelet to start successfully

This error is very easy to hit. If kubelet is not up, check selinux, swap, and the cgroup driver:

setenforce 0 && swapoff -a && systemctl restart kubelet

If that does not fix it, make sure kubelet's cgroup driver is consistent with docker's: docker info | grep Cgroup

go func(errC chan error, waiter apiclient.Waiter) {
    // This goroutine can only make kubeadm init fail. If this check succeeds, it won't do anything special
    if err := waiter.WaitForHealthyKubelet(40*time.Second, "http://localhost:10255/healthz"); err != nil {
        errC <- err
    }
}(errorChan, waiter)

go func(errC chan error, waiter apiclient.Waiter) {
    // This goroutine can only make kubeadm init fail. If this check succeeds, it won't do anything special
    if err := waiter.WaitForHealthyKubelet(60*time.Second, "http://localhost:10255/healthz/syncloop"); err != nil {
        errC <- err
    }
}(errorChan, waiter)
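
What WaitForHealthyKubelet boils down to can be sketched as a simple poll against the kubelet healthz endpoint (an illustration assuming wait.PollImmediate from apimachinery, not the actual waiter implementation):

import (
    "net/http"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

// waitForKubeletHealthz polls the given healthz URL until it returns 200 OK
// or the timeout expires.
func waitForKubeletHealthz(url string, timeout time.Duration) error {
    return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
        resp, err := http.Get(url)
        if err != nil {
            return false, nil // kubelet not answering yet, keep polling
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK, nil
    })
}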

Create DNS and kube-proxy

This is where I noticed that CoreDNS is supported:

if features.Enabled(cfg.FeatureGates, features.CoreDNS) {
    return coreDNSAddon(cfg, client, k8sVersion)
}
return kubeDNSAddon(cfg, client, k8sVersion)

The CoreDNS YAML template is written directly into the code, in app/phases/addons/dns/manifests.go:

    CoreDNSDeployment = `
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - key: {{ .MasterTaintKey }}
...

Then the template is rendered and the k8s API is called through client-go to create the objects. This way of creating resources is worth learning from, although it is a bit clumsy here and not done as well as kubectl.

coreDNSConfigMap := &v1.ConfigMap{}
if err := kuberuntime.DecodeInto(legacyscheme.Codecs.UniversalDecoder(), configBytes, coreDNSConfigMap); err != nil {
    return fmt.Errorf("unable to decode CoreDNS configmap %v", err)
}

// Create the ConfigMap for CoreDNS or update it in case it already exists
if err := apiclient.CreateOrUpdateConfigMap(client, coreDNSConfigMap); err != nil {
    return err
}

coreDNSClusterRoles := &rbac.ClusterRole{}
if err := kuberuntime.DecodeInto(legacyscheme.Codecs.UniversalDecoder(), []byte(CoreDNSClusterRole), coreDNSClusterRoles); err != nil {
    return fmt.Errorf("unable to decode CoreDNS clusterroles %v", err)
}
...
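
The render step itself can be sketched with the standard text/template package (kubeadm has its own template helper; the struct here only carries the MasterTaintKey field visible in the excerpt above, while the real template takes more variables):

import (
    "bytes"
    "fmt"
    "text/template"
)

tmpl := template.Must(template.New("coredns").Parse(CoreDNSDeployment))
var buf bytes.Buffer
if err := tmpl.Execute(&buf, struct{ MasterTaintKey string }{
    MasterTaintKey: "node-role.kubernetes.io/master",
}); err != nil {
    return fmt.Errorf("error rendering CoreDNS deployment: %v", err)
}
configBytes := buf.Bytes() // then decoded with kuberuntime.DecodeInto as shown above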

It is worth mentioning that the kube-proxy ConfigMap really ought to take the apiserver address as a parameter and allow customization, because when building a highly available cluster we want to point it at a virtual IP, and that is troublesome here.
The kube-proxy manifests themselves are nothing special, so I won't go through them: app/phases/addons/proxy/manifests.go

kubeadm join

kubeadm join is relatively simple and can be summed up in one sentence: fetch the cluster info, create a kubeconfig (how to create it was already covered under init), and bring the token so that kubeadm has permission to pull it.

return https.RetrieveValidatedClusterInfo(cfg.DiscoveryFile)

The cluster info looks like this:
type Cluster struct {
    // LocationOfOrigin indicates where this object came from.  It is used for round tripping config post-merge, but never serialized.
    LocationOfOrigin string
    // Server is the address of the kubernetes cluster (https://hostname:port).
    Server string `json:"server"`
    // InsecureSkipTLSVerify skips the validity check for the server's certificate. This will make your HTTPS connections insecure.
    // +optional
    InsecureSkipTLSVerify bool `json:"insecure-skip-tls-verify,omitempty"`
    // CertificateAuthority is the path to a cert file for the certificate authority.
    // +optional
    CertificateAuthority string `json:"certificate-authority,omitempty"`
    // CertificateAuthorityData contains PEM-encoded certificate authority certificates. Overrides CertificateAuthority
    // +optional
    CertificateAuthorityData []byte `json:"certificate-authority-data,omitempty"`
    // Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields
    // +optional
    Extensions map[string]runtime.Object `json:"extensions,omitempty"`
}

return kubeconfigutil.CreateWithToken(
    clusterinfo.Server,
    "kubernetes",
    TokenUser,
    clusterinfo.CertificateAuthorityData,
    cfg.TLSBootstrapToken,
), nil

CreateWithToken was covered above, so there is no need to go into detail; with this we can generate the kubelet configuration file and then start kubelet.

The problem with kubeadm join is that it does not use the apiserver address passed in on the command line to render the configuration; it uses the address from cluster-info instead, which is bad for high availability: we may pass in a virtual IP, but the rendered configuration still points at the real apiserver address.
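
A sketch of one possible workaround: after retrieving cluster info, overwrite the server address with the virtual IP before the kubeconfig is written (the helper, the VIP, and the port here are hypothetical, not kubeadm code):

// overrideAPIServer points every cluster entry at the HA virtual IP instead of
// the apiserver address baked into cluster-info.
func overrideAPIServer(cfg *clientcmdapi.Config, vip string) {
    for _, cluster := range cfg.Clusters {
        cluster.Server = "https://" + vip + ":6443" // e.g. a keepalived VIP
    }
}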
