I mentioned in the group earlier that I would prepare CKS material for you, but for various reasons it may arrive a little late. In the meantime, I want to collect some existing real exam questions and publish them for anyone who needs them.
The following questions were contributed by a friend who took the exam. I sincerely thank him and wish him a speedy pass of the CKS.
The purpose of this post:
First, to list the common CKS exam topics, so that you can study in a targeted way;
Second, to provide a place to share and exchange. You are welcome to leave better solutions in the comment area;
In the future, I will sort out and improve the corresponding reference answers based on the feedback, and next quarter you will see a CKS real-question analysis similar to the CKA one.
1. Image scanning: ImagePolicyWebhook
- Topic overview
Context: A container image scanner is set up on the cluster, but it's not yet fully integrated into the cluster's configuration. When complete, the container image scanner shall scan for and reject the use of vulnerable images. Task: You have to complete the entire task on the cluster's master node, where all services and files have been prepared and placed. Given an incomplete configuration in directory /etc/kubernetes/aa and a functional container image scanner with HTTPS endpoint http://192.168.26.60:1323/image_policy: 1. enable the necessary plugins to create an image policy; 2. validate the control configuration and change it to an implicit deny; 3. edit the configuration to point to the provided HTTPS endpoint correctly. Finally, test whether the configuration is working by trying to deploy the vulnerable resource /cks/1/web1.yaml.
- analysis
1. Switch to the correct cluster context, find the master node, and ssh to it.
2. ls /etc/kubernetes/aa to see the prepared files.
3. In the admission configuration file, change defaultAllow from true to false (implicit deny); in the kubeconfig it references, point the server at the provided endpoint. The /etc/kubernetes/aa directory must also be mounted into the kube-apiserver Pod as a volume.
4. In /etc/kubernetes/manifests/kube-apiserver.yaml, add ImagePolicyWebhook to --enable-admission-plugins and set - --admission-control-config-file= to the admission configuration file.
5. systemctl restart kubelet (the kube-apiserver static Pod also restarts when its manifest changes).
6. Test: kubectl apply -f /cks/1/web1.yaml (or kubectl run pod1 --image=nginx) should now be rejected.
https://kubernetes.io/zh/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook
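The two files in step 3 typically look like the following sketch; the file names admission_config.yaml and kubeconfig.yaml are assumptions, so match whatever is actually in /etc/kubernetes/aa:

```yaml
# /etc/kubernetes/aa/admission_config.yaml (assumed name)
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/aa/kubeconfig.yaml
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false        # implicit deny when the webhook cannot be reached
---
# /etc/kubernetes/aa/kubeconfig.yaml (assumed name): the cluster entry
# must point at the scanner endpoint given in the task
clusters:
- name: image-checker
  cluster:
    certificate-authority: /etc/kubernetes/aa/webhook.crt   # assumed CA path
    server: http://192.168.26.60:1323/image_policy
```

The admission plugin only takes effect after the flags from step 4 are added and the apiserver has restarted.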
2. Sysdig runtime detection
- Topic overview
You may use your browser to open one additional tab to access sysdig's documentation or Falco's documentation. Task: use runtime detection tools to detect anomalous processes spawning and executing frequently in the single container belonging to Pod redis. Two tools are available to use: sysdig and falco. The tools are pre-installed on the cluster's worker node only; they are not available on the base system or the master node. Using the tool of your choice (including any non-pre-installed tool), analyse the container's behaviour for at least 30 seconds, using filters that detect newly spawning and executing processes. Store an incident file at /opt/2/report, containing the detected incidents, one per line, in the following format: [timestamp],[uid],[processName]
- analysis
0. Remember to use sysdig -l | grep ... to search for the relevant output fields.
1. Switch to the correct cluster, query the corresponding Pod, and ssh to the node the Pod runs on.
2. Run sysdig with the required format and duration, redirecting the result to the target file.
3. sysdig -M 30 -p "*%evt.time,%user.uid,%proc.name" container.id=<container-id> > /opt/2/report
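Putting the analysis together as a command sketch (run on the worker node; crictl as the container CLI and the evt.type=execve filter are assumptions, so cross-check field names with sysdig -l first):

```shell
# Find the container ID of the redis Pod's single container:
crictl ps | grep redis      # or: docker ps | grep redis, depending on the runtime

# Capture at least 30 seconds of newly spawned/executed processes,
# one incident per line in the required [timestamp],[uid],[processName] format:
sudo sysdig -M 30 \
  -p "*%evt.time,%user.uid,%proc.name" \
  evt.type=execve and container.id=<container-id> > /opt/2/report
```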
3. ClusterRole
- Topic overview
Context: A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the following tasks to reduce the set of permissions. Task: Given an existing Pod named web-pod running in the namespace monitoring: edit the Role bound to the Pod's ServiceAccount sa-dev-1 to only allow performing list operations, only on resources of type Endpoints. Create a new Role named role-2 in the namespace monitoring, which only allows performing update operations, only on resources of type persistentvolumeclaims. Create a new RoleBinding named role-2-bindding, binding the newly created Role to the Pod's ServiceAccount.
- analysis
1. Look up the RoleBinding for sa-dev-1, find the Role it references, and restrict the permissions to list on endpoints: kubectl edit role role-1 -n monitoring
2. Remember: --verb is the permission, --resource is the object. kubectl create role role-2 --verb=update --resource=persistentvolumeclaims -n monitoring
3. Create the binding to the corresponding ServiceAccount: kubectl create rolebinding role-2-bindding --role=role-2 --serviceaccount=monitoring:sa-dev-1 -n monitoring
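After step 1, the edited Role should carry a single tightened rule, roughly like this (the Role name role-1 is taken from the analysis above; confirm the real name via the RoleBinding for sa-dev-1):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-1            # the Role bound to sa-dev-1
  namespace: monitoring
rules:
- apiGroups: [""]         # Endpoints live in the core API group
  resources: ["endpoints"]
  verbs: ["list"]         # only list, only on Endpoints
```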
4. AppArmor
- Topic overview
Context: AppArmor is enabled on the cluster's worker node. An AppArmor profile is prepared, but not enforced yet. You may use your browser to open one additional tab to access the AppArmor documentation. Task: On the cluster's worker node, enforce the prepared AppArmor profile located at /etc/apparmor.d/nginx_apparmor. Edit the prepared manifest file located at /cks/4/pod1.yaml to apply the AppArmor profile. Finally, apply the manifest file and create the Pod specified in it.
- analysis
1. Switch to the correct cluster; remember to check the nodes and ssh to the worker node.
2. View the profile file and its profile name:
cd /etc/apparmor.d
vi nginx_apparmor
apparmor_status | grep nginx-profile-3   # no output means the profile is not loaded
apparmor_parser -q nginx_apparmor        # load/enforce the profile
3. Edit the manifest to apply the profile: copy the example from the official docs and adjust the container name and profile name (note the localhost/ prefix).
vi /cks/4/pod1.yaml
...
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/podx: localhost/nginx-profile-3
...
4. Create the Pod after editing: kubectl apply -f /cks/4/pod1.yaml
https://kubernetes.io/zh/docs/tutorials/clusters/apparmor/#%E4%B8%BE%E4%BE%8B
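A minimal sketch of the edited /cks/4/pod1.yaml; the container name podx comes from the annotation in the analysis, and the rest of the spec is assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: podx
  annotations:
    # key suffix must equal the container name; value needs the localhost/ prefix
    container.apparmor.security.beta.kubernetes.io/podx: localhost/nginx-profile-3
spec:
  containers:
  - name: podx
    image: nginx
```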
5. PodSecurityPolicy
- Topic overview
Context: A PodSecurityPolicy shall prevent the creation of privileged Pods in a specific namespace. Task: Create a new PodSecurityPolicy named prevent-psp-policy, which prevents the creation of privileged Pods. Create a new ClusterRole named restrict-access-role, which uses the newly created PodSecurityPolicy prevent-psp-policy. Create a new ServiceAccount named psp-denial-sa in the existing namespace development. Finally, create a new ClusterRoleBinding named dany-access-bind, which binds the newly created ClusterRole restrict-access-role to the newly created ServiceAccount.
- analysis
0. Switch to the correct cluster; check whether the PodSecurityPolicy admission plugin is enabled:
vi /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
systemctl restart kubelet
1. Copy a PSP from the official docs and deny privileged Pods:
cat psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: prevent-psp-policy
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
kubectl create -f psp.yaml
2. Create the corresponding ClusterRole:
kubectl create clusterrole restrict-access-role --verb=use --resource=podsecuritypolicy --resource-name=prevent-psp-policy
3. Create the ServiceAccount; mind the namespace:
kubectl create sa psp-denial-sa -n development
4. Create the binding:
kubectl create clusterrolebinding dany-access-bind --clusterrole=restrict-access-role --serviceaccount=development:psp-denial-sa
https://kubernetes.io/zh/docs/concepts/policy/pod-security-policy/#%E5%88%9B%E5%BB%BA%E4%B8%80%E4%B8%AA%E7%AD%96%E7%95%A5%E5%92%8C%E4%B8%80%E4%B8%AA-pod
6. Network policy
- Topic overview
Create a NetworkPolicy named pod-access to restrict access to Pod products-service running in namespace development. Only allow the following Pods to connect to Pod products-service: Pods in the namespace testing; Pods with label environment: staging, in any namespace. Make sure to apply the NetworkPolicy. You can find a skeleton manifest file at /cks/6/p1.yaml
- analysis
1. On the host, view the Pod's labels: kubectl get pod -n development --show-labels
2. View the label of the corresponding namespace; if testing has none, set one: kubectl label ns testing name=testing
3. cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod-access
  namespace: development
spec:
  podSelector:
    matchLabels:
      environment: staging   # must match the labels of products-service found in step 1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: testing
  - from:
    - namespaceSelector: {}  # any namespace
      podSelector:
        matchLabels:
          environment: staging
kubectl create -f networkpolicy.yaml
https://kubernetes.io/zh/docs/concepts/services-networking/network-policies/#networkpolicy-resource
7. Dockerfile and manifest security issues
- Topic overview
Task: Analyze and edit the given Dockerfile (based on the ubuntu:16.04 image) /cks/7/Dockerfile, fixing the two instructions in the file that are prominent security/best-practice issues. Analyze and edit the given manifest file /cks/7/deployment.yaml, fixing the two fields in the file that are prominent security/best-practice issues.
- analysis
1. In the Dockerfile, pay attention to the number of errors stated in the prompt; the typical one is USER root.
2. In the manifest, pay attention to the apiVersion issue and to privileged/host-network settings; it also depends on which mistakes the prompt calls out.
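As an illustration only (the actual fixes depend on the file handed out in the exam), the two classic Dockerfile issues and their fixes look like this:

```dockerfile
# Before:
#   FROM ubuntu:latest    <- unpinned base image
#   USER root             <- container runs as root
# After:
FROM ubuntu:16.04         # pin the base image named in the task
# ... build steps unchanged ...
USER nobody               # drop root; any existing non-root user will do
```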
8. Pod security
- Topic overview
Context: It is best practice to design containers to be stateless and immutable. Task: Inspect Pods running in namespace testing and delete any Pod that is either not stateless or not immutable. Use the following strict interpretation of stateless and immutable: Pods being able to store data inside containers must be treated as not stateless. You don't have to worry about whether data is actually stored inside containers or not. Pods being configured to be privileged in any way must be treated as potentially not stateless and not immutable.
- analysis
1. Get all Pods in the namespace.
2. Check for privileged containers: kubectl get pod sso -n testing -o yaml | grep "privi.*: true"
3. Check for volumes: kubectl get pod pod1 -n testing -o jsonpath={.spec.volumes} | jq
4. Delete every Pod that is privileged or mounts a volume: kubectl delete pod xxxxx -n testing
9. Create a ServiceAccount
- Topic overview
Context: A Pod fails to run because of an incorrectly specified ServiceAccount. Task: Create a new ServiceAccount named frontend-sa in the existing namespace qa, which must not have access to any secrets. Inspect the Pod named frontend running in the namespace qa. Edit the Pod to use the newly created ServiceAccount.
- analysis
1. Obtain a ServiceAccount template: kubectl create serviceaccount frontend-sa -n qa --dry-run=client -o yaml
2. Find the auto-mount setting in the official docs: automountServiceAccountToken: false
3. Modify serviceAccountName in the Pod spec.
4. Recreate the Pod; delete the other ServiceAccount if required.
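The resulting ServiceAccount manifest from steps 1 and 2 looks like this:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend-sa
  namespace: qa
automountServiceAccountToken: false   # step 2: never auto-mount the token
```

In step 3 the Pod spec then gets serviceAccountName: frontend-sa.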
10. Trivy image scanning
- Topic overview
Task: Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace yavin. Look for images with High or Critical severity vulnerabilities, and delete the Pods that use those images. Trivy is pre-installed on the cluster's master node only; it is not available on the base system or the worker nodes. You'll have to connect to the cluster's master node to use Trivy.
- analysis
1. Switch cluster and ssh to the corresponding master node.
2. Get the Pods and scan every image they use; none may have High or Critical vulnerabilities.
3. Delete the Pods whose images have such vulnerabilities.
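A command sketch for step 2 (run on the master node; the jsonpath expression and severity flag are standard kubectl/trivy usage, but verify with trivy image --help):

```shell
# List every Pod in yavin together with its images:
kubectl get pods -n yavin \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.containers[*].image}{"\n"}{end}'

# Scan each image, reporting only High/Critical findings:
trivy image --severity HIGH,CRITICAL <image>

# Delete every Pod whose image has findings:
kubectl delete pod <pod-name> -n yavin
```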
11. Create a Secret
- Topic overview
Task: Retrieve the content of the existing secret named db1-test in the istio-system namespace. Store the username field in a file named /cks/11/old-username.txt, and the password field in a file named /cks/11/old-pass.txt. You must create both files; they don't exist yet. Do not use/modify the created files in the following steps; create new temporary files if needed. Create a new secret named test-workflow in the istio-system namespace, with the following content: username: thanos, password: hahahaha. Finally, create a new Pod that has access to the secret test-workflow via a volume: pod name dev-pod, namespace istio-system, container name dev-container, image nginx:1.9, volume name dev-volume, mount path /etc/test-secret.
- analysis
kubectl get secrets db1-test -n istio-system -o yaml
echo -n "dG9t" | base64 -d > /cks/11/old-username.txt
echo -n "aGFoYTAwMQ==" | base64 -d > /cks/11/old-pass.txt
kubectl create secret generic test-workflow --from-literal=username=thanos --from-literal=password=hahahaha -n istio-system
The more demanding part is creating the Pod that mounts the secret.
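All names for the Pod are given in the task, so the manifest is mostly dictation (readOnly is an optional hardening touch, not required by the task):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
  namespace: istio-system
spec:
  containers:
  - name: dev-container
    image: nginx:1.9
    volumeMounts:
    - name: dev-volume
      mountPath: /etc/test-secret
      readOnly: true
  volumes:
  - name: dev-volume
    secret:
      secretName: test-workflow
```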
12. kube-bench
- Topic overview
Context: A CIS Benchmark tool was run against the kubeadm-created cluster and found multiple issues that must be addressed immediately. Task: Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (FAIL)
1.2.8 Ensure that the --authorization-mode argument includes Node (FAIL)
1.2.9 Ensure that the --authorization-mode argument includes RBAC (FAIL)
1.2.18 Ensure that the --insecure-bind-address argument is not set (FAIL)
1.2.19 Ensure that the --insecure-port argument is set to 0 (FAIL)
Fix all of the following violations that were found against the kubelet:
4.2.1 Ensure that the anonymous-auth argument is set to false (FAIL)
4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (FAIL); use webhook authn/authz
- analysis
1. Switch to the right machine and ssh to the master node.
2. Run kube-bench run, find the corresponding entries, then fix them. In the actual exam there is also an etcd item.
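The listed violations map onto the kubeadm component configs roughly as follows (paths are kubeadm defaults; double-check against the remediation text kube-bench prints with each finding):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml -- relevant flags:
#   - --authorization-mode=Node,RBAC      # fixes 1.2.7 / 1.2.8 / 1.2.9
#   - --insecure-port=0                   # fixes 1.2.19
#   (and remove any --insecure-bind-address flag: 1.2.18)

# /var/lib/kubelet/config.yaml -- relevant fields (4.2.1 / 4.2.2):
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
# afterwards: systemctl daemon-reload && systemctl restart kubelet
```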
13. gVisor
- Topic overview
Context: This cluster uses containerd as CRI runtime. Containerd's default runtime handler is runc. Containerd has been prepared to support an additional runtime handler, runsc (gVisor). Task: Create a RuntimeClass named untrusted using the prepared runtime handler named runsc. Update all Pods in the namespace client to run on gVisor, unless they are already running on a non-default runtime handler. You can find a skeleton manifest file at /cks/13/rc.yaml
- analysis
1. Switch the cluster and create the RuntimeClass using the official docs.
2. The prompt further requires updating the Pods to use this runtime.
https://kubernetes.io/zh/docs/concepts/containers/runtime-class/#2-%E5%88%9B%E5%BB%BA%E7%9B%B8%E5%BA%94%E7%9A%84-runtimeclass-%E8%B5%84%E6%BA%90
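Both pieces use names given in the task; note that on older clusters the RuntimeClass apiVersion may be node.k8s.io/v1beta1 instead of v1:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: untrusted
handler: runsc            # the prepared gVisor handler
---
# Then, in each Pod spec in namespace client that does not already use a
# non-default handler, add:
#   spec:
#     runtimeClassName: untrusted
```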
14. Audit logging
- Topic overview
Task: Enable audit logs in the cluster. To do so, enable the log backend, and ensure that: logs are stored at /var/log/kubernetes/audit-logs.txt; log files are retained for 5 days at maximum; a number of 10 audit log files are retained. A basic policy is provided at /etc/kubernetes/logpolicy/sample-policy.yaml. It only specifies what not to log. The base policy is located on the cluster's master node. Edit and extend the basic policy to log: namespaces changes at RequestResponse level; the request body of pods changes in the namespace front-apps; configMap and secret changes in all namespaces at the Metadata level. Also, add a catch-all rule to log all other requests at the Metadata level. Don't forget to apply the modified policy.
- analysis
1. Switch cluster and log in to the master, then create the log directory, modify kube-apiserver.yaml, and enable auditing.
2. Use the official docs to modify the corresponding policy rules.
3. Restart the kubelet.
https://kubernetes.io/zh/docs/tasks/debug-application-cluster/audit/#log-%E5%90%8E%E7%AB%AF
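A sketch of both halves of the task; the apiserver flags enable the log backend with the required retention, and the rules extend the provided policy (keep the provided "do not log" rules first and the catch-all last):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml -- flags to add:
#   - --audit-policy-file=/etc/kubernetes/logpolicy/sample-policy.yaml
#   - --audit-log-path=/var/log/kubernetes/audit-logs.txt
#   - --audit-log-maxage=5
#   - --audit-log-maxbackup=10

# Rules appended to sample-policy.yaml:
- level: RequestResponse
  resources:
  - group: ""
    resources: ["namespaces"]
- level: Request                  # logs the request body of pod changes
  resources:
  - group: ""
    resources: ["pods"]
  namespaces: ["front-apps"]
- level: Metadata
  resources:
  - group: ""
    resources: ["configmaps", "secrets"]
- level: Metadata                 # catch-all for everything else
```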
15. Default network policy
- Topic overview
Context: A default-deny NetworkPolicy prevents accidentally exposing a Pod in a namespace that doesn't have any other NetworkPolicy defined. Task: Create a new default-deny NetworkPolicy named denynetwork in the namespace development for all traffic of type Ingress. The new NetworkPolicy must deny all Ingress traffic in the namespace development. Apply the newly created default-deny NetworkPolicy to all Pods running in namespace development. You can find a skeleton manifest file
- analysis
1. Check whether the task asks to reject everything by default or only under certain conditions; then write the yaml from the official docs.
https://kubernetes.io/zh/docs/concepts/services-networking/network-policies/#%E9%BB%98%E8%AE%A4%E6%8B%92%E7%BB%9D%E6%89%80%E6%9C%89%E5%85%A5%E7%AB%99%E6%B5%81%E9%87%8F
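The default-deny ingress policy from the official docs, filled in with the name and namespace from the task:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denynetwork
  namespace: development
spec:
  podSelector: {}        # empty selector: selects every Pod in the namespace
  policyTypes:
  - Ingress              # no ingress rules listed, so all ingress is denied
```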
Notes
- The current exam version is 1.XX (to be supplemented).
Attention!!!
- These questions were collected from a friend of the author who took the CKS exam. The answers are the original reference answers; later, the author will sort out the latest real questions and reference answers after taking the exam.
- If you have any questions about the topics above, you are welcome to discuss them in the comment area so that everyone can learn from them.