Using Kubernetes ephemeral containers for troubleshooting

Containers and their surrounding ecosystems have changed the way engineers deploy, maintain, and troubleshoot workloads. However, debugging applications on a Kubernetes cluster can be difficult because the debugging tools you need may not be present in the container. Many engineers build images on thin, distroless bases that ship without even a package manager or a shell. Some teams go further and use scratch as the base image, adding only the files needed to run the application. Common reasons for this practice are:

  • A smaller attack surface.
  • Faster image scanning.
  • A smaller image size.
  • Faster builds and shorter CI/CD cycles.
  • Fewer dependencies.

These thin base images do not include tools for troubleshooting applications or their dependencies, and this is exactly where the Kubernetes ephemeral containers feature is most useful. Ephemeral containers let you build a container image that contains all the debugging tools you might need; when debugging is required, an ephemeral container based on that image can be attached to a selected running Pod. You cannot add regular containers to a running Pod; you would have to update the spec and recreate the resource. Ephemeral containers, however, can be added to existing Pods to troubleshoot live problems. This article describes how to troubleshoot workloads on Kubernetes using ephemeral containers.

Configuring ephemeral containers

Ephemeral containers share the same spec as regular containers, but some fields are disabled and some behaviors change. The major differences are listed below; check the EphemeralContainer spec for the complete list.

  • They are never restarted.
  • Resource requests and limits are not allowed.
  • Ports are not allowed.
  • Startup, liveness, and readiness probes are not allowed.
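
One consequence worth knowing: ephemeral containers live under the Pod's spec.ephemeralContainers field rather than spec.containers, so you can inspect them there once one has been attached (a sketch; <POD_NAME> is your target Pod):

```shell
# List the names of any ephemeral containers attached to a Pod.
kubectl get pod <POD_NAME> -o jsonpath='{.spec.ephemeralContainers[*].name}'
```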

Starting an ephemeral container

First, check whether the ephemeral containers feature is enabled.

kubectl debug -it <POD_NAME> --image=busybox

If this feature is not enabled, you will see a message similar to the following.

Defaulting debug container name to debugger-wg54p.
error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").

To enable the feature, append EphemeralContainers=true to the --feature-gates flag of the kubelet, kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler.
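
A sketch of what this looks like (the flag spelling is real, but file locations depend on how the cluster was installed; on kubeadm clusters the control-plane components are static Pod manifests under /etc/kubernetes/manifests):

```shell
# Add the feature gate to each component's command line, e.g.:
kube-apiserver          --feature-gates=EphemeralContainers=true ...
kube-controller-manager --feature-gates=EphemeralContainers=true ...
kube-scheduler          --feature-gates=EphemeralContainers=true ...
kube-proxy              --feature-gates=EphemeralContainers=true ...
kubelet                 --feature-gates=EphemeralContainers=true ...
```

After updating the flags, restart the affected components so the gate takes effect.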


Using ephemeral containers

The cluster now supports the ephemeral containers feature, so let's try it. To create an ephemeral container, use the debug subcommand of the kubectl command-line tool. First, create a Deployment:

kubectl create deployment nginx-deployment --image=nginx

Get the name of the Pod that needs to be debugged:

$ kubectl get pods

NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-66b6c48dd5-frsv9   1/1     Running   6          62d

The following command creates a new ephemeral container in the Pod nginx-deployment-66b6c48dd5-frsv9. The ephemeral container's image is busybox. The -i and -t flags attach us to the newly created container.

$ kubectl debug -it pods/nginx-deployment-66b6c48dd5-frsv9 --image=busybox

Now we can debug:

/ # ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=112 time=9.797 ms
64 bytes from seq=1 ttl=112 time=9.809 ms
/ # nc --help
BusyBox v1.34.1 (2021-11-11 01:55:05 UTC) multi-call binary.

Usage: nc [OPTIONS] HOST PORT  - connect
nc [OPTIONS] -l -p PORT [HOST] [PORT]  - listen
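
Because an ephemeral container joins the Pod's network namespace, it can also reach the application's ports directly on localhost (assuming, as in this example, nginx listening on port 80):

```shell
# From inside the debug container: fetch the nginx welcome page.
wget -qO- http://localhost:80 | head -n 4
```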

When you run the kubectl describe pod <POD_NAME> command, you can see a new field, Ephemeral Containers, which contains the ephemeral container and its properties.

$ kubectl describe pods <POD_NAME>

Ephemeral Containers:
    Container ID:   containerd://eec23aa9ee63d96b82970bb947b29cbacc30685bbc3418ba840dee109f871bf0
    Image:          busybox
    Image ID:       docker.io/library/busybox@sha256:e7157b6d7ebbe2cce5eaa8cfe8aa4fa82d173999b9f90a9ec42e57323546c353
    Port:           <none>
    Host Port:      <none>

Sharing the process namespace with an ephemeral container

Process namespace sharing has always been a useful troubleshooting option, and it can be combined with ephemeral containers. Process namespace sharing cannot be applied to an existing Pod, so a copy of the target Pod must be created. The --share-processes flag enables process namespace sharing when used together with --copy-to: these flags copy the existing Pod spec into a new definition and enable process namespace sharing in that spec.

$ kubectl debug -it <POD_NAME> --image=busybox --share-processes --copy-to=debug-pod
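
Note that the copy created by --copy-to (debug-pod here) is a standalone Pod that is not managed by the Deployment, so it keeps running after you detach; delete it when you are done:

```shell
kubectl delete pod debug-pod
```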

Run ps to see which processes are running. As you would expect, you can see /pause and the busybox shell alongside the nginx processes from the nginx-deployment container.

# ps aux

    1 root      0:00 /pause
    6 root      0:00 nginx: master process nginx -g daemon off;
   11 101       0:00 nginx: worker process
   12 root      0:00 sh
   17 root      0:00 ps aux

With a shared process namespace, the other containers' file systems are also accessible, which is very useful for debugging. You can reach a container's root filesystem through the /proc/<PID>/root link. From the output above, we know that the PID of the nginx master process is 6.

# ls /proc/6/root/etc/nginx

conf.d koi-utf mime.types nginx.conf uwsgi_params fastcgi_params koi-win modules scgi_params win-utf

Here, we can see the nginx directory structure and configuration files of the target container.
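
The same link works for individual files, so the target container's configuration can be read directly from the debug container (assuming the stock nginx image layout):

```shell
# From inside the shared-process debug container (nginx master is PID 6).
cat /proc/6/root/etc/nginx/nginx.conf
```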


The ephemeral containers feature undoubtedly brings great convenience to debugging and troubleshooting, and process namespace sharing enables even more advanced debugging. If you run applications in a Kubernetes cluster, it is worth taking the time to try these features. It is not hard to imagine some teams even using these tools to automate workflows, such as automatically repairing a container when its readiness probe fails.

Original text: https://tinyurl.com/3658tdzs

Added by Crogge on Sat, 05 Mar 2022 16:25:26 +0200