1. Create a new service account with the name pvviewer. Grant this ServiceAccount access to list all PersistentVolumes in the cluster by creating an appropriate ClusterRole called pvviewer-role and a ClusterRoleBinding called pvviewer-role-binding. Next, create a pod called pvviewer with the image redis that uses the ServiceAccount pvviewer.
- ServiceAccount: pvviewer
- ClusterRole: pvviewer-role
- ClusterRoleBinding: pvviewer-role-binding
- Pod: pvviewer
- Pod configured to use ServiceAccount pvviewer?
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
# Create a service account called pvviewer and grant it list permission on PersistentVolumes:
# create a ClusterRole called pvviewer-role and bind it with a ClusterRoleBinding called pvviewer-role-binding.
$ kubectl create serviceaccount pvviewer
$ kubectl create clusterrole pvviewer-role --resource=persistentvolumes --verb=list
$ kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
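Before moving on, the binding can be checked with `kubectl auth can-i` by impersonating the service account (a quick sketch, assuming the account lives in the default namespace as above and a cluster is reachable):

```shell
# Should print "yes" if the ClusterRole and ClusterRoleBinding are correct.
kubectl auth can-i list persistentvolumes \
  --as=system:serviceaccount:default:pvviewer
```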
$ kubectl run pvviewer --image=redis --dry-run=client -o yaml > pvviewer.yaml
$ vi pvviewer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvviewer
spec:
  serviceAccountName: pvviewer
  containers:
  - image: redis
    name: pvviewer
$ kubectl apply -f pvviewer.yaml
2. List the InternalIP of all nodes of the cluster. Save the result to a file /root/CKA/node_ips
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
# Taken from the kubectl cheat sheet; change ExternalIP to InternalIP.
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips
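As a quick sanity check (assuming a reachable cluster), the INTERNAL-IP column of the wide node listing should match what was written to the file:

```shell
# Compare the INTERNAL-IP column against the saved file.
kubectl get nodes -o wide
cat /root/CKA/node_ips
```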
3. Create a pod called multi-pod with two containers.
Container 1, name: alpha, image: nginx
Container 2: name: beta, image: busybox, command: sleep 4800
Environment Variables:
- Container 1: name=alpha
- Container 2: name=beta
- Pod Name: multi-pod
- Container 1: alpha
- Container 2: beta
- Container beta commands set correctly?
- Container 1 Environment Value Set
- Container 2 Environment Value Set
$ kubectl run alpha --image=nginx --dry-run=client -o yaml > multi-pod.yaml
$ vi multi-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: alpha
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: alpha
    env:
    - name: name
      value: alpha
  - image: busybox
    name: beta
    command: ["sleep", "4800"]
    env:
    - name: name
      value: beta
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
$ kubectl apply -f multi-pod.yaml
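Once the pod is running, the environment variables can be checked per container with `kubectl exec` (a sketch; `-c` selects the container):

```shell
# Each command should print that container's own "name" variable.
kubectl exec multi-pod -c alpha -- env | grep '^name='
kubectl exec multi-pod -c beta -- env | grep '^name='
```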
4. Create a Pod called non-root-pod, image: redis:alpine
runAsUser: 1000
fsGroup: 2000
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
- Pod non-root-pod fsGroup configured
- Pod non-root-pod runAsUser configured
$ kubectl run non-root-pod --image=redis:alpine --dry-run=client -o yaml > non-root-pod.yaml
$ vi non-root-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: non-root-pod
    image: redis:alpine
    securityContext:
      allowPrivilegeEscalation: false
$ kubectl apply -f non-root-pod.yaml
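The security context can be verified from inside the running pod (a sketch, assuming the pod started successfully):

```shell
# uid should be 1000 because of runAsUser; group 2000 comes from fsGroup.
kubectl exec non-root-pod -- id
```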
5. We have deployed a new pod called np-test-1 and a service called np-test-service. Incoming connections to this service are not working. Troubleshoot and fix it. Create NetworkPolicy, by the name ingress-to-nptest that allows incoming connections to the service over port 80.
Important: Don't delete any current objects deployed
https://kubernetes.io/docs/concepts/services-networking/network-policies/
- Important: Don't Alter Existing Objects!
- NetworkPolicy: Applied to All sources (Incoming traffic from all pods)?
- NetworkPolicy: Correct Port?
- NetworkPolicy: Applied to correct Pod?
$ kubectl get svc
np-test-service
$ kubectl describe svc np-test-service
$ kubectl get networkpolicies
default-deny
$ kubectl describe netpol default-deny
Name:         default-deny
Namespace:    default
Created on:   2022-03-06 05:59:40 +0000 UTC
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    <none> (Selected pods are isolated for ingress connectivity)
  Not affecting egress traffic
  Policy Types: Ingress
# The describe output shows that all ingress traffic is being denied.
# Create a new NetworkPolicy that allows port 80.
$ kubectl get pod
np-test-1
# First check the pods to find out which pod the policy should target.
$ kubectl get netpol default-deny -o yaml > netpol.yaml
$ vi netpol.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: np-test-1
  ingress:
  - ports:
    - protocol: TCP
      port: 80
  policyTypes:
  - Ingress
$ kubectl apply -f netpol.yaml
# To test the new network policy, run a temporary pod and test the connection from inside it.
$ kubectl run test-np --image=busybox:1.28 --rm -it -- sh
/ # nc -z -v -w 2 np-test-service 80
np-test-service (10.108.234.186:80) open
# nc is netcat; -z closes the connection as soon as it opens, -v is verbose, and -w 2 sets a 2-second timeout.
6. Taint the worker node node01 to be Unschedulable. Once done, create a pod called dev-redis, image redis:alpine to ensure workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis and image redis:alpine with toleration to be scheduled on node01.
https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
key: env_type, value: production,
operator: Equal and effect: NoSchedule
- Key = env_type
- Value = production
- Effect = NoSchedule
- pod 'dev-redis' (no tolerations) is not scheduled on node01?
- Create a pod 'prod-redis' to run on node01
# Taint node01 so that pods are no longer scheduled on it, then create a pod called dev-redis from the redis:alpine image and confirm it is not scheduled on that worker node. Finally, create a pod called prod-redis with a toleration so that it can be scheduled on node01.
$ kubectl get nodes -o wide
# Check the list of nodes.
$ kubectl taint nodes node01 env_type=production:NoSchedule
# Taint node01 with key env_type, value production, and effect NoSchedule.
$ kubectl describe nodes node01 | grep -i taint
# Confirm the taint has been applied to the node.
$ kubectl run dev-redis --image=redis:alpine
$ kubectl get pod -o wide
# Create a test pod and confirm it is not scheduled on node01.
$ kubectl run prod-redis --image=redis:alpine --dry-run=client -o yaml > prod-redis.yaml
$ vi prod-redis.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: prod-redis
  name: prod-redis
spec:
  containers:
  - image: redis:alpine
    name: prod-redis
  tolerations:
  - effect: NoSchedule
    key: env_type
    operator: Equal
    value: production
$ kubectl apply -f prod-redis.yaml
$ kubectl get pod -o wide
# Confirm prod-redis has been scheduled on node01.
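The node assignment can also be read directly from the pod spec with a jsonpath query (a sketch, assuming the scheduler has already placed the pod):

```shell
# Should print node01 once the pod is scheduled.
kubectl get pod prod-redis -o jsonpath='{.spec.nodeName}'
```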
7. Create a pod called hr-pod in hr namespace belonging to the production environment and frontend tier.
Use appropriate labels and create all the required objects if it does not exist in the system already.
- hr-pod labeled with environment production?
- hr-pod labeled with tier frontend?
$ kubectl get ns
$ kubectl create namespace hr
$ kubectl run hr-pod -n hr --image=redis:alpine --labels=environment=production,tier=frontend
$ kubectl get pods -n hr --show-labels
8. A kubeconfig file called super.kubeconfig has been created under /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.
- Fix /root/CKA/super.kubeconfig
$ cd /root/CKA
$ kubectl cluster-info --kubeconfig=/root/CKA/super.kubeconfig
# Checking the output shows the port is set to 9999; change it to 6443.
$ vi super.kubeconfig
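The same fix can be made non-interactively with sed. Below is a hypothetical demo on a throwaway copy in /tmp (the server hostname is an assumption; in the exam, apply the sed to /root/CKA/super.kubeconfig directly):

```shell
# Build a sample kubeconfig fragment containing the wrong port.
cat > /tmp/super.kubeconfig <<'EOF'
clusters:
- cluster:
    server: https://controlplane:9999
  name: kubernetes
EOF

# Replace the bad port with the default API server port 6443.
sed -i 's/:9999/:6443/' /tmp/super.kubeconfig
grep server /tmp/super.kubeconfig
```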
9. We have created a new deployment called nginx-deploy. scale the deployment to 3 replicas. Has the replica's increased? Troubleshoot the issue and fix it.
- deployment has 3 replicas
$ kubectl scale deployment nginx-deploy --replicas=3
$ kubectl get deployment
# Even with replicas set to 3, the pod count stays at 1/3 and does not increase.
# It is not a taint problem.
$ kubectl get pod -n kube-system
# The kube-controller-manager pod is stuck in ImagePullBackOff.
$ cd /etc/kubernetes/manifests
$ vi kube-controller-manager.yaml
# Looking closely, "controller" is misspelled as "contro1ler": one letter l has been typed as the digit 1.
# There appear to be five occurrences of "contro1ler" to fix in total.
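A hypothetical demo of the same substitution on a single sample line (the exact image tag is an assumption; in the exam, run the sed against the real manifest in place):

```shell
# Sample manifest line with the typo: digit 1 instead of the letter l.
echo 'image: k8s.gcr.io/kube-contro1ler-manager:v1.23.0' > /tmp/kcm-line.txt

# Replace every occurrence of the typo.
sed -i 's/contro1ler/controller/g' /tmp/kcm-line.txt
cat /tmp/kcm-line.txt
```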
$ kubectl get pod -n kube-system
$ kubectl get deployment
# Confirm all 3 pods come up.