Question:
Context
Create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
Task
Create a new ClusterRole named deployment-clusterrole that only allows the creation of the following resource types: Deployment, StatefulSet, DaemonSet. Create a new ServiceAccount named cicd-token in the existing namespace app-team1. Limited to the namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.
Reference:
Answer:
# Switch context
kubectl config use-context k8s
# Create the ClusterRole (the name must be deployment-clusterrole, exactly as required)
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
# Create the ServiceAccount
kubectl create sa cicd-token -n app-team1
# Create the RoleBinding (a namespaced RoleBinding limits the permissions to app-team1)
kubectl create rolebinding cicd-token-binding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token --namespace=app-team1
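An optional sanity check, assuming the steps above succeeded: kubectl auth can-i can impersonate the ServiceAccount to confirm the binding behaves as required.
# Should print "yes"
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1
# Should print "no" (the permission must not apply outside app-team1)
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n default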
Question:
Set the node named node02 as unavailable and reschedule all the pods running on it.
Reference:
Answer:
# Switch context
kubectl config use-context ek8s
# Mark the node unschedulable (maintenance mode)
kubectl cordon node02
# Evict the pods
kubectl drain node02 --ignore-daemonsets --delete-emptydir-data --force
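A quick verification sketch: the node should show SchedulingDisabled and, apart from DaemonSet-managed pods, nothing should remain scheduled on it.
kubectl get node node02
kubectl get pods -A -o wide | grep node02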
Question:
An existing Kubernetes cluster is running version 1.29.0. Upgrade all Kubernetes control plane and node components on the master node only to version 1.29.1.
Make sure to drain the master node before the upgrade and uncordon it after the upgrade.
You can connect to the master node using the following command:
ssh master01
You can gain elevated privileges on that master node using the following command:
sudo -i
In addition, upgrade kubelet and kubectl on the master node.
Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service or any other addons.
Reference:
Answer:
# Switch context
kubectl config use-context mk8s
# Check the nodes
kubectl get nodes
# cordon marks the node SchedulingDisabled: new pods will not be scheduled onto it, existing pods on the node are unaffected
kubectl cordon master01
# drain evicts the pods on the node so they are recreated elsewhere, and also marks the node SchedulingDisabled
kubectl drain master01 --ignore-daemonsets
# ssh to the master node and switch to root
ssh master01
sudo -i
# Refresh the apt package index
apt-get update
# Confirm the exact package version for the upgrade
apt-cache show kubeadm | grep 1.29.1
# Install that kubeadm version
apt-get install kubeadm=1.29.1-1.1
# Check the kubeadm version
kubeadm version
# Verify the upgrade plan
kubeadm upgrade plan
# Apply the upgrade (answer Y when prompted); etcd must not be upgraded
kubeadm upgrade apply v1.29.1 --etcd-upgrade=false
# Upgrade kubelet and kubectl
apt-get install kubelet=1.29.1-1.1 kubectl=1.29.1-1.1
# Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
# Check the kubelet version
kubelet --version
# Check the kubectl version
kubectl version
# Exit root
exit
# Exit master01
exit
# Make the node schedulable again
kubectl uncordon master01
Question:
The required etcdctl commands must be run from the master01 host.
First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save the snapshot to /var/lib/backup/etcd-snapshot.db.
The following TLS certificates and key are provided to connect to the server with etcdctl:
CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key
Creating a snapshot of the given instance is expected to complete in a few seconds. If the operation appears to hang, something is likely wrong with the command; press CTRL+C to cancel and try again.
Then restore from the previous backup snapshot located at /data/backup/etcd-snapshot-previous.db.
Reference:
Answer:
# Switch context
kubectl config use-context xk8s
# ssh to the master node and switch to root
ssh master01
sudo -i
# Back up
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert="/opt/KUIN00601/ca.crt" --cert="/opt/KUIN00601/etcd-client.crt" --key="/opt/KUIN00601/etcd-client.key" snapshot save /var/lib/backup/etcd-snapshot.db
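Optional sanity check before restoring: etcdctl can report the status of the snapshot that was just written.
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /var/lib/backup/etcd-snapshot.db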
# Restore
mkdir -p /opt/bak/
# Temporarily move the static pod manifests /etc/kubernetes/manifests/kube-* out of the way
mv /etc/kubernetes/manifests/kube-* /opt/bak/
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --data-dir=/var/lib/etcd-restore snapshot restore /data/backup/etcd-snapshot-previous.db
vim /etc/kubernetes/manifests/etcd.yaml
  volumes:
  - hostPath:
      path: /var/lib/etcd-restore    # change to the restored data directory
      type: DirectoryOrCreate
    name: etcd-data
# Move the manifests back into /etc/kubernetes/manifests/
mv /opt/bak/kube-* /etc/kubernetes/manifests/
# Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
# Exit root
exit
# Exit master01
exit
Question:
Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace my-app. Ensure that the new NetworkPolicy allows Pods in namespace echo to connect to port 9000 of Pods in namespace my-app.
Further ensure that the new NetworkPolicy:
does not allow access to Pods that are not listening on port 9000
does not allow access from Pods that are not in namespace echo
Reference:
Answer:
# Switch context
kubectl config use-context hk8s
# Check the labels on all namespaces
kubectl get ns --show-labels
# If the source namespace has no label, add one manually; if it already has one, reuse it
kubectl label ns echo project=echo
vim networkpolicy.yaml
# Note: run :set paste in vim to keep the YAML indentation from being mangled.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: echo
    ports:
    - protocol: TCP
      port: 9000
kubectl apply -f networkpolicy.yaml
# Check
kubectl describe networkpolicy -n my-app
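A quick functional test sketch (the Pod IP 10.244.1.23 is a placeholder; look up a real one first): a connection from namespace echo to port 9000 should succeed, while the same connection from another namespace should time out.
# Get the IP of a Pod in my-app
kubectl -n my-app get pods -o wide
# From namespace echo: should connect
kubectl -n echo run tmp --rm -it --restart=Never --image=busybox:1.28 -- wget -qO- -T 2 http://10.244.1.23:9000
# From namespace default: should time out
kubectl -n default run tmp --rm -it --restart=Never --image=busybox:1.28 -- wget -qO- -T 2 http://10.244.1.23:9000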
Question:
Reconfigure the existing deployment front-end and add a port specification named http that exposes port 80/tcp of the existing container nginx. Create a new service named front-end-svc that exposes the container port http. Configure the service to expose the individual Pods via a NodePort on the nodes on which they are scheduled.
Reference:
Answer:
# Switch context
kubectl config use-context k8s
# Check the deployment and note its SELECTOR label, here app=front-end
kubectl get deployment front-end -o wide
# Add the port specification
kubectl edit deployment front-end
spec:
  containers:
  - image: vicuu/nginx:hello
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    # The "- name: http" entry is required; protocol defaults to TCP
    - name: http
      containerPort: 80
      protocol: TCP
# Expose the port
# Check whether the exam asks for NodePort or ClusterIP; for ClusterIP use --type=ClusterIP instead
# --port is the service port, --target-port is the container port of the deployment's pods
kubectl expose deployment front-end --type=NodePort --port=80 --target-port=80 --name=front-end-svc
# Verify
kubectl get svc -o wide
kubectl get svc front-end-svc -o wide
kubectl get deployment front-end -o wide
# If the service selector is empty (<none>) after kubectl expose,
# make sure the service selector matches the deployment's selector label.
kubectl edit svc front-end-svc
  selector:
    app: front-end
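Quick check: once the selector matches, the service should list the Pod(s) as endpoints.
kubectl get ep front-end-svc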
Question:
Create a new nginx Ingress resource as follows:
Name: ping
Namespace: ping-internal
Expose service hello on path /hello using service port 5678
The availability of service hello can be checked using the following command, which should return hello: curl -kL <INTERNAL_IP>/hello
Reference:
Answer:
# Switch context
kubectl config use-context k8s
# First check the name of the ingressclass available in the environment
kubectl get ingressclass
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  namespace: ping-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
kubectl apply -f ingress.yaml
# Check
kubectl -n ping-internal get ingress
Question:
Scale the deployment presentation to 4 pods.
Reference:
Answer:
# Switch context
kubectl config use-context k8s
# Check the current number of replicas of the deployment
kubectl get deployments presentation
# Scale up
kubectl scale deployment presentation --replicas=4
Question:
Schedule a pod as follows:
Name: nginx-kusc00401
Image: nginx
Node selector: disk=ssd
Reference:
Answer:
# Switch context
kubectl config use-context k8s
# Check whether the pod already exists
kubectl get pod -A | grep nginx-kusc00401
# Make sure a node carries this label
kubectl get nodes --show-labels | grep 'disk=ssd'
# If no node has it, add it manually
kubectl label nodes node01 disk=ssd
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd
kubectl apply -f nginx.yaml
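Verification: the pod should end up Running on the node that carries the disk=ssd label.
kubectl get pod nginx-kusc00401 -o wide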
Question:
Check how many nodes are Ready (excluding nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.
Reference:
Answer:
# Switch context
kubectl config use-context k8s
# Note the total number of Ready nodes, X
kubectl get node | grep -i ready
# Note how many of them carry a NoSchedule taint, Y
kubectl describe node | grep -i Taints | grep NoSchedule
# Write the result of X-Y to the file (replace 1 with your result; > overwrites so the file contains only the number)
echo 1 > /opt/KUSC00402/kusc00402.txt
# Alternative: count nodes whose Taints line does not contain NoSchedule
kubectl describe nodes | grep -i Taints | grep -vc NoSchedule
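A rough one-pass sketch (assumes the default kubectl get nodes output columns): count Ready nodes without a NoSchedule taint and write the number directly.
count=0
for n in $(kubectl get nodes --no-headers | grep -w Ready | awk '{print $1}'); do
  kubectl describe node "$n" | grep -i Taints | grep -q NoSchedule || count=$((count+1))
done
echo "$count" > /opt/KUSC00402/kusc00402.txt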
Question:
Schedule a Pod as follows:
Name: kucc8
app containers: 2
Container names/images:
Reference:
Answer:
# Switch context
kubectl config use-context k8s
apiVersion: v1
kind: Pod
metadata:
  name: kucc8
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  - name: memcached
    image: memcached
    imagePullPolicy: IfNotPresent
kubectl apply -f kucc.yaml
Question:
Create a persistent volume named app-config with capacity 1Gi and access mode ReadWriteMany. The volume type is hostPath and it is located at /srv/app-config.
Reference:
Answer:
# Switch context
kubectl config use-context k8s
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/srv/app-config"
    type: DirectoryOrCreate
kubectl apply -f pv.yaml
Question:
Create a new PersistentVolumeClaim:
Name: pv-volume
Class: csi-hostpath-sc
Capacity: 10Mi
Create a new Pod that mounts the PersistentVolumeClaim as a volume:
Name: web-server
Image: nginx:1.16
Mount path: /usr/share/nginx/html
Configure the new Pod to have ReadWriteOnce access to the volume.
Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record this change.
Reference:
Answer:
# Switch context
kubectl config use-context ok8s
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
kubectl apply -f pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: pv-volume
  containers:
  - name: nginx
    image: nginx:1.16
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
kubectl apply -f pvc-pod.yaml
# Expand the claim to 70Mi and record the change
kubectl edit pvc pv-volume --record
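An equivalent kubectl patch form (the storage class must support volume expansion for the resize to take effect):
kubectl patch pvc pv-volume --record -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'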
Question:
Monitor the logs of pod foo and:
extract the log lines corresponding to the error RLIMIT_NOFILE
write those log lines to /opt/KUTR00101/foo
Reference:
Answer:
# Switch context
kubectl config use-context k8s
# Extract the matching log lines
kubectl logs foo | grep "RLIMIT_NOFILE" > /opt/KUTR00101/foo
Question:
Context
Integrate an existing Pod into Kubernetes' built-in logging architecture (for example kubectl logs). Adding a streaming sidecar container is a good way to accomplish this requirement.
Task
Add a sidecar container named sidecar, using the busybox image, to the existing Pod 11-factor-app. The new sidecar container must run the following command: /bin/sh -c tail -n+1 -f /var/log/11-factor-app.log
Use a Volume mounted at /var/log to make the log file 11-factor-app.log available to the sidecar container. Apart from adding the required volume mount, do not change the specification of the existing container.
Reference:
Answer:
# Switch context
kubectl config use-context k8s
# Export the pod to a yaml file
kubectl get pod 11-factor-app -o yaml > varlog.yaml
# Back it up
cp varlog.yaml varlog-bak.yaml
# Edit varlog.yaml
vi varlog.yaml
apiVersion: v1
kind: Pod
metadata:
  name: 11-factor-app
spec:
  containers:
  - name: count-log-1
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/11-factor-app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
# Delete the original pod
kubectl delete pod 11-factor-app
kubectl get pod 11-factor-app
# Recreate the pod
kubectl apply -f varlog.yaml
# Check
kubectl logs 11-factor-app sidecar
Question:
Via the pod label name=cpu-loader, find the pods consuming a lot of CPU at runtime and write the name of the pod with the highest CPU consumption to the file /opt/KUTR000401/KUTR00401.txt (which already exists).
Reference:
Answer:
# Switch context
kubectl config use-context k8s
# Check pod usage; -A means all namespaces
kubectl top pod -A -l name=cpu-loader --sort-by=cpu
echo "<name of the top pod>" > /opt/KUTR000401/KUTR00401.txt
Question:
A Kubernetes worker node named node02 is in the NotReady state.
Investigate why this is the case and take the appropriate action to bring the node back to the Ready state, ensuring that any changes are made permanent.
You can connect to node02 using the following command:
ssh node02
You can gain elevated privileges on that node using the following command:
sudo -i
Reference:
Answer:
# Switch context
kubectl config use-context wk8s
# Check the nodes
kubectl get nodes
# ssh to node02 and switch to root
ssh node02
sudo -i
# Check the kubelet status
systemctl status kubelet
# Start kubelet and enable it on boot so the fix is permanent
systemctl start kubelet && systemctl enable kubelet
# Exit root
exit
# Exit node02
exit
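Back on the main terminal, confirm that the node reports Ready again.
kubectl get node node02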
Question:
You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.
Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.
Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.
Answer:
kubectl config get-contexts -o name > /opt/course/1/contexts
echo "kubectl config current-context" >> /opt/course/1/context_default_kubectl.sh
echo "cat ~/.kube/config | grep current" >> /opt/course/1/context_default_no_kubectl.sh
Question:
Use context: kubectl config use-context k8s-c1-H
Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on controlplane nodes. Do not add new labels to any nodes.
Answer:
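# Note: in the killer.sh-style answers below, k is assumed to be an alias for kubectl,
# and $do is assumed to stand for "--dry-run=client -o yaml".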
k get node
k describe node cluster1-controlplane1 | grep Taint -A1
k get node cluster1-controlplane1 --show-labels
k run pod1 --image=httpd:2.4.41-alpine $do > 2.yaml
vi 2.yaml
# 2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container                          # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations:                                    # add
  - effect: NoSchedule                            # add
    key: node-role.kubernetes.io/control-plane    # add
  nodeSelector:                                   # add
    node-role.kubernetes.io/control-plane: ""     # add
status: {}
kubectl -f 2.yaml create
Question:
Use context: kubectl config use-context k8s-c1-H
There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources.
Answer:
➜ k -n project-c13 get pod | grep o3db
o3db-0 1/1 Running 0 52s
o3db-1 1/1 Running 0 42s
➜ k -n project-c13 get deploy,ds,sts | grep o3db
statefulset.apps/o3db 2/2 2m56s
➜ k -n project-c13 scale sts o3db --replicas 1
Question:
Use context: kubectl config use-context k8s-c1-H
Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply executes command true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.
Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.
Now the first Pod should be in ready state, confirm that.
Answer:
k run ready-if-service-ready --image=nginx:1.16.1-alpine $do > 4_pod1.yaml
vim 4_pod1.yaml
# 4_pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: ready-if-service-ready
    resources: {}
    livenessProbe:                 # add from here
      exec:
        command:
        - 'true'
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'    # to here
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
k -f 4_pod1.yaml create
k run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"
k describe svc service-am-i-ready
k get ep # also possible
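Once the second Pod is registered as an endpoint of service-am-i-ready, the ReadinessProbe of the first Pod starts succeeding:
k get pod ready-if-service-ready   # READY should eventually show 1/1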
Question:
Use context: kubectl config use-context k8s-c1-H
There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).
Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.
Answer:
echo "kubectl get pod -A --sort-by=.metadata.creationTimestamp" >> /opt/course/5/find_pods.sh
echo "kubectl get pod -A --sort-by=.metadata.uid" >> /opt/course/5/find_pods_uid.sh
Question:
Use context: kubectl config use-context k8s-c1-H
Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.
Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
Answer:
# 6_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"
k -f 6_pv.yaml create
# 6_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
k -f 6_pvc.yaml create
k -n project-tiger create deployment safari --image=httpd:2.4.41-alpine $do > 6_dep.yaml
# 6_dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: safari
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: safari
    spec:
      volumes:                          # add
      - name: data                      # add
        persistentVolumeClaim:          # add
          claimName: safari-pvc         # add
      containers:
      - image: httpd:2.4.41-alpine
        name: container
        volumeMounts:                   # add
        - name: data                    # add
          mountPath: /tmp/safari-data   # add
k -f 6_dep.yaml create
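Verification sketch: the claim should be Bound and the volume mounted into the Deployment's Pod.
k -n project-tiger get pvc safari-pvc                          # STATUS Bound
k -n project-tiger describe pod -l app=safari | grep -A2 Mounts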
Question:
Use context: kubectl config use-context k8s-c1-H
The metrics-server has been installed in the cluster. Your colleague would like to know the kubectl commands to:
1. show Nodes resource usage
2. show Pods and their containers resource usage
Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.
Answer:
echo "kubectl top node" >> /opt/course/7/node.sh
echo "kubectl top pod --containers=true" >> /opt/course/7/pod.sh
Question:
Use context: kubectl config use-context k8s-c1-H
Ssh into the controlplane node with ssh cluster1-controlplane1. Check how the controlplane components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the controlplane node. Also find out the name of the DNS application and how it's started/installed on the controlplane node.
Write your findings into file /opt/course/8/controlplane-components.txt. The file should be structured like:
# /opt/course/8/controlplane-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]
Choices of [TYPE] are: not-installed, process, static-pod, pod
Answer:
➜ ssh cluster1-controlplane1
ps aux | grep kubelet
find /usr/lib/systemd | grep kube
find /usr/lib/systemd | grep etcd
find /etc/kubernetes/manifests/
k -n kube-system get pod -o wide | grep controlplane1
k -n kube-system get ds
k -n kube-system get deploy
# /opt/course/8/controlplane-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns
Question:
Use context: kubectl config use-context k8s-c2-AC
Ssh into the controlplane node with ssh cluster2-controlplane1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.
Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.
Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-controlplane1. Make sure it's running.
Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-node1.
Answer:
ssh cluster2-controlplane1
k -n kube-system get pod | grep schedule
cd /etc/kubernetes/manifests/
mv kube-scheduler.yaml ..
k run manual-schedule --image=httpd:2.4-alpine
k get pod manual-schedule -o wide
k get pod manual-schedule -o yaml > 9.yaml
# 9.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-09-04T15:51:02Z"
  labels:
    run: manual-schedule
  managedFields:
...
    manager: kubectl-run
    operation: Update
    time: "2020-09-04T15:51:02Z"
  name: manual-schedule
  namespace: default
  resourceVersion: "3515"
  selfLink: /api/v1/namespaces/default/pods/manual-schedule
  uid: 8e9d2532-4779-4e63-b5af-feb82c74a935
spec:
  nodeName: cluster2-controlplane1       # add the controlplane node name
  containers:
  - image: httpd:2.4-alpine
    imagePullPolicy: IfNotPresent
    name: manual-schedule
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-nxnc7
      readOnly: true
  dnsPolicy: ClusterFirst
k -f 9.yaml replace --force
➜ ssh cluster2-controlplane1
cd /etc/kubernetes/manifests/
mv ../kube-scheduler.yaml .
k -n kube-system get pod | grep schedule
k run manual-schedule2 --image=httpd:2.4-alpine
Question:
Use context: kubectl config use-context k8s-c1-H
Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.
Answer:
k -n project-hamster create sa processor
k -n project-hamster create role processor --verb=create --resource=secrets,configmaps
k -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor
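Verify the permissions with kubectl auth can-i:
k -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor      # yes
k -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor   # yes
k -n project-hamster auth can-i delete secret --as system:serviceaccount:project-hamster:processor      # no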
Question:
Use context: kubectl config use-context k8s-c1-H
Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, also controlplanes.
Answer:
kubectl -n project-tiger create deployment --image=httpd:2.4-alpine ds-important $do > 11.yaml
vim 11.yaml
# 11.yaml
apiVersion: apps/v1
kind: DaemonSet                 # change from Deployment to DaemonSet
metadata:
  creationTimestamp: null
  labels:                       # add
    id: ds-important            # add
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462   # add
  name: ds-important
  namespace: project-tiger      # important
spec:
  #replicas: 1                  # remove
  selector:
    matchLabels:
      id: ds-important          # add
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462 # add
  #strategy: {}                 # remove
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: ds-important        # add
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462  # add
    spec:
      containers:
      - image: httpd:2.4-alpine
        name: ds-important
        resources:
          requests:             # add
            cpu: 10m            # add
            memory: 10Mi        # add
      tolerations:              # add
      - effect: NoSchedule      # add
        key: node-role.kubernetes.io/control-plane  # add
#status: {}                     # remove
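Create the DaemonSet (the apply step is implied by the pattern of the other answers) and confirm that one Pod runs on every node, controlplanes included (with one controlplane and two workers that would be three Pods):
k -f 11.yaml create
k -n project-tiger get ds ds-important
k -n project-tiger get pod -l id=ds-important -o wide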
Question:
Use context: kubectl config use-context k8s-c1-H
Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image google/pause.
There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-node1 and cluster1-node2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled, unless a new worker node will be added. Use topologyKey: kubernetes.io/hostname for this.
In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.
Answer:
k -n project-tiger create deployment deploy-important --image=nginx:1.17.6-alpine $do > 12.yaml
vi 12.yaml
# 12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    id: very-important          # change
  name: deploy-important
  namespace: project-tiger      # important
spec:
  replicas: 3                   # change
  selector:
    matchLabels:
      id: very-important        # change
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: very-important      # change
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1        # change
        resources: {}
      - image: google/pause     # add
        name: container2        # add
      affinity:                                            # add
        podAntiAffinity:                                   # add
          requiredDuringSchedulingIgnoredDuringExecution:  # add
          - labelSelector:                                 # add
              matchExpressions:                            # add
              - key: id                                    # add
                operator: In                               # add
                values:                                    # add
                - very-important                           # add
            topologyKey: kubernetes.io/hostname            # add
status: {}
k -f 12.yaml create
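Verification: two Pods should be Running (one per worker node) and the third should remain Pending because of the podAntiAffinity rule:
k -n project-tiger get pod -l id=very-important -o wide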
Question:
Use context: kubectl config use-context k8s-c1-H
Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods.
Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME.
Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.
Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.
Check the logs of container c3 to confirm correct setup.
Answer:
k run multi-container-playground --image=nginx:1.17.6-alpine $do > 13.yaml
vim 13.yaml
# 13.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-playground
  name: multi-container-playground
spec:
  containers:
  - image: nginx:1.17.6-alpine
    name: c1                              # change
    resources: {}
    env:                                  # add
    - name: MY_NODE_NAME                  # add
      valueFrom:                          # add
        fieldRef:                         # add
          fieldPath: spec.nodeName        # add
    volumeMounts:                         # add
    - name: vol                           # add
      mountPath: /vol                     # add
  - image: busybox:1.31.1                 # add
    name: c2                              # add
    command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]  # add
    volumeMounts:                         # add
    - name: vol                           # add
      mountPath: /vol                     # add
  - image: busybox:1.31.1                 # add
    name: c3                              # add
    command: ["sh", "-c", "tail -f /vol/date.log"]  # add
    volumeMounts:                         # add
    - name: vol                           # add
      mountPath: /vol                     # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:                                # add
  - name: vol                             # add
    emptyDir: {}                          # add
k -f 13.yaml create
➜ k logs multi-container-playground -c c3
Question:
Use context: kubectl config use-context k8s-c1-H
You're asked to find out the following information about the cluster k8s-c1-H:
1. How many controlplane nodes are available?
2. How many worker nodes are available?
3. What is the Service CIDR?
4. Which Networking (or CNI Plugin) is configured and where is its config file?
5. Which suffix will static pods have that run on cluster1-node1?
Write your answers into file /opt/course/14/cluster-info, structured like this:
# /opt/course/14/cluster-info
1: [ANSWER]
2: [ANSWER]
3: [ANSWER]
4: [ANSWER]
5: [ANSWER]
Answer:
k get node
ssh cluster1-controlplane1
➜ root@cluster1-controlplane1:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range
- --service-cluster-ip-range=10.96.0.0/12
➜ root@cluster1-controlplane1:~# find /etc/cni/net.d/
/etc/cni/net.d/
/etc/cni/net.d/10-weave.conflist
➜ root@cluster1-controlplane1:~# cat /etc/cni/net.d/10-weave.conflist
{
"cniVersion": "0.3.0",
"name": "weave",
...
# /opt/course/14/cluster-info
1: 1
2: 2
3: 10.96.0.0/12
4: Weave, /etc/cni/net.d/10-weave.conflist
5: -cluster1-node1
Question:
Use context: kubectl config use-context k8s-c2-AC
Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time (metadata.creationTimestamp). Use kubectl for it.
Now delete the kube-proxy Pod running on node cluster2-node1 and write the events this caused into /opt/course/15/pod_kill.log.
Finally kill the containerd container of the kube-proxy Pod on node cluster2-node1 and write the events into /opt/course/15/container_kill.log.
Do you notice differences in the events both actions caused?
Answer:
# /opt/course/15/cluster_events.sh
kubectl get events -A --sort-by=.metadata.creationTimestamp
k -n kube-system get pod -o wide | grep proxy
k -n kube-system delete pod kube-proxy-z64cg
sh /opt/course/15/cluster_events.sh > /opt/course/15/pod_kill.log
ssh cluster2-node1
➜ root@cluster2-node1:~# crictl ps | grep kube-proxy
➜ root@cluster2-node1:~# crictl rm 1e020b43c4423
➜ root@cluster2-node1:~# crictl ps | grep kube-proxy
sh /opt/course/15/cluster_events.sh > /opt/course/15/container_kill.log
Question:
Use context: kubectl config use-context k8s-c1-H
Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap…) into /opt/course/16/resources.txt.
Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.
Answer:
k api-resources --namespaced -o name > /opt/course/16/resources.txt
➜ k -n project-c13 get role --no-headers | wc -l
No resources found in project-c13 namespace.
0
➜ k -n project-c14 get role --no-headers | wc -l
300
➜ k -n project-hamster get role --no-headers | wc -l
No resources found in project-hamster namespace.
0
➜ k -n project-snake get role --no-headers | wc -l
No resources found in project-snake namespace.
0
➜ k -n project-tiger get role --no-headers | wc -l
No resources found in project-tiger namespace.
0
# /opt/course/16/crowded-namespace.txt
project-c14 with 300 resources
Question:
Use context: kubectl config use-context k8s-c1-H
In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.
Using command crictl:
1. Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt
2. Write the logs of the container into /opt/course/17/pod-container.log
Answer:
kubectl -n project-tiger run tigers-reunite --image=httpd:2.4.41-alpine --labels "pod=container,container=pod"
k -n project-tiger get pod -o wide
➜ ssh cluster1-node2
➜ root@cluster1-node2:~# crictl ps | grep tigers-reunite
b01edbe6f89ed 54b0995a63052 5 seconds ago Running tigers-reunite ...
➜ root@cluster1-node2:~# crictl inspect b01edbe6f89ed | grep runtimeType
"runtimeType": "io.containerd.runc.v2",
# /opt/course/17/pod-container.txt
b01edbe6f89ed io.containerd.runc.v2
ssh cluster1-node2 'crictl logs b01edbe6f89ed' &> /opt/course/17/pod-container.log
Question:
Use context: kubectl config use-context k8s-c3-CCC
There seems to be an issue with the kubelet not running on cluster3-node1. Fix it and confirm that the cluster has node cluster3-node1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-node1 afterwards.
Write the reason of the issue into /opt/course/18/reason.txt.
Answer:
ssh cluster3-node1
systemctl status kubelet
systemctl start kubelet
➜ root@cluster3-node1:~# /usr/local/bin/kubelet
-bash: /usr/local/bin/kubelet: No such file or directory
➜ root@cluster3-node1:~# whereis kubelet
kubelet: /usr/bin/kubelet
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf # fix binary path
systemctl daemon-reload
service kubelet restart
service kubelet status # should now show running
# /opt/course/18/reason.txt
wrong path to kubelet binary specified in service config
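Back on the main terminal the node should report Ready again:
k get node cluster3-node1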
Question:
Use context: kubectl config use-context k8s-c3-CCC
Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time.
There is an existing Secret located at /opt/course/19/secret1.yaml, create it in the Namespace secret and mount it readonly into the Pod at /tmp/secret1.
Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod's container as environment variables APP_USER and APP_PASS.
Confirm everything is working.
Answer:
k create ns secret
cp /opt/course/19/secret1.yaml 19_secret1.yaml
vim 19_secret1.yaml
# 19_secret1.yaml
apiVersion: v1
data:
  halt: IyEgL2Jpbi9zaAo...
kind: Secret
metadata:
  creationTimestamp: null
  name: secret1
  namespace: secret
k -f 19_secret1.yaml create
k -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234
k -n secret run secret-pod --image=busybox:1.31.1 $do -- sh -c "sleep 1d" > 19.yaml
vi 19.yaml
# 19.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secret-pod
  name: secret-pod
  namespace: secret               # add
spec:
  containers:
  - args:
    - sh
    - -c
    - sleep 1d
    image: busybox:1.31.1
    name: secret-pod
    resources: {}
    env:                          # add
    - name: APP_USER              # add
      valueFrom:                  # add
        secretKeyRef:             # add
          name: secret2           # add
          key: user               # add
    - name: APP_PASS              # add
      valueFrom:                  # add
        secretKeyRef:             # add
          name: secret2           # add
          key: pass               # add
    volumeMounts:                 # add
    - name: secret1               # add
      mountPath: /tmp/secret1     # add
      readOnly: true              # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:                        # add
  - name: secret1                 # add
    secret:                       # add
      secretName: secret1         # add
status: {}
k -f 19.yaml create
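Confirm the environment variables and the mounted Secret inside the container:
k -n secret exec secret-pod -- env | grep APP
k -n secret exec secret-pod -- ls /tmp/secret1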
Question:
Use context: kubectl config use-context k8s-c3-CCC
Your coworker said node cluster3-node2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that's running on cluster3-controlplane1. Then add this node to the cluster. Use kubeadm for this.
Answer:
ssh cluster3-node2
apt-get update
apt show kubectl -a | grep 1.29
apt install kubelet=1.29.0-1.1 kubectl=1.29.0-1.1
ssh cluster3-controlplane1
kubeadm token create --print-join-command
kubeadm token list
➜ ssh cluster3-node2
➜ root@cluster3-node2:~# kubeadm join 192.168.100.31:6443 --token pbuqzw.83kz9uju8talblrl --discovery-token-ca-cert-hash sha256:eae975465f73f316f322bcdd5eb6a5a53f08662ecb407586561cdc06f74bf7b2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
➜ root@cluster3-node2:~# service kubelet status
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Thu 2024-01-04 14:02:45 UTC; 13s ago
Docs: https://kubernetes.io/docs/
Main PID: 44103 (kubelet)
Tasks: 10 (limit: 462)
Memory: 55.5M
CGroup: /system.slice/kubelet.service
└─44103 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/k>
Question:
Use context: kubectl config use-context k8s-c3-CCC
Create a Static Pod named my-static-pod in Namespace default on cluster3-controlplane1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.
Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-controlplane1 internal IP address. You can connect to the internal node IPs from your main terminal.
Answer:
ssh cluster3-controlplane1
cd /etc/kubernetes/manifests/
kubectl run my-static-pod --image=nginx:1.16-alpine -o yaml --dry-run=client > my-static-pod.yaml
# /etc/kubernetes/manifests/my-static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: my-static-pod
  name: my-static-pod
spec:
  containers:
  - image: nginx:1.16-alpine
    name: my-static-pod
    resources:
      requests:
        cpu: 10m
        memory: 20Mi
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
kubectl expose pod my-static-pod-cluster3-controlplane1 --name static-pod-service --type=NodePort --port=80
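Check that the Service picked up the static Pod as an endpoint and that it is reachable via the node-internal IP (the IP and NodePort below are placeholders; look them up first):
k get svc,ep -l run=my-static-pod
curl http://<cluster3-controlplane1-internal-ip>:<nodeport>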
Question:
Use context: kubectl config use-context k8s-c2-AC
Check how long the kube-apiserver server certificate is valid on cluster2-controlplane1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration.
Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.
Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.
Answer:
➜ ssh cluster2-controlplane1
➜ root@cluster2-controlplane1:~# find /etc/kubernetes/pki | grep apiserver
/etc/kubernetes/pki/apiserver.crt
/etc/kubernetes/pki/apiserver-etcd-client.crt
/etc/kubernetes/pki/apiserver-etcd-client.key
/etc/kubernetes/pki/apiserver-kubelet-client.crt
/etc/kubernetes/pki/apiserver.key
/etc/kubernetes/pki/apiserver-kubelet-client.key
➜ root@cluster2-controlplane1:~# openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep Validity -A2
Validity
Not Before: Dec 20 18:05:20 2022 GMT
Not After : Dec 20 18:05:20 2023 GMT
# /opt/course/22/expiration
Dec 20 18:05:20 2023 GMT
➜ root@cluster2-controlplane1:~# kubeadm certs check-expiration | grep apiserver
apiserver Jan 14, 2022 18:49 UTC 363d ca no
apiserver-etcd-client Jan 14, 2022 18:49 UTC 363d etcd-ca no
apiserver-kubelet-client Jan 14, 2022 18:49 UTC 363d ca no
# /opt/course/22/kubeadm-renew-certs.sh
kubeadm certs renew apiserver
Question:
Use context: kubectl config use-context k8s-c2-AC
Node cluster2-node1 has been added to the cluster using kubeadm and TLS bootstrapping.
Find the "Issuer" and "Extended Key Usage" values of the cluster2-node1:
1. kubelet client certificate, the one used for outgoing connections to the kube-apiserver
2. kubelet server certificate, the one used for incoming connections from the kube-apiserver
Write the information into file /opt/course/23/certificate-info.txt.
Compare the "Issuer" and "Extended Key Usage" fields of both certificates and make sense of these.
Answer:
➜ ssh cluster2-node1
➜ root@cluster2-node1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep Issuer
Issuer: CN = kubernetes
➜ root@cluster2-node1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep "Extended Key Usage" -A1
X509v3 Extended Key Usage:
TLS Web Client Authentication
➜ root@cluster2-node1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep Issuer
Issuer: CN = cluster2-node1-ca@1588186506
➜ root@cluster2-node1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep "Extended Key Usage" -A1
X509v3 Extended Key Usage:
TLS Web Server Authentication
Question:
Use context: kubectl config use-context k8s-c1-H
There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.
To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:
connect to db1-* Pods on port 1111
connect to db2-* Pods on port 2222
Use the app label of Pods in your policy.
After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.
Answer:
kubectl -n project-snake get pod -L app
# 24_np.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress                   # policy is only about Egress
  egress:
  -                          # first rule
    to:                      # first condition "to"
    - podSelector:
        matchLabels:
          app: db1
    ports:                   # second condition "port"
    - protocol: TCP
      port: 1111
  -                          # second rule
    to:                      # first condition "to"
    - podSelector:
        matchLabels:
          app: db2
    ports:                   # second condition "port"
    - protocol: TCP
      port: 2222
kubectl -f 24_np.yaml create
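A quick test sketch (pod names and IPs are placeholders; list them first): from a backend Pod, port 1111 on a db1 Pod should still answer, while port 3333 on a vault Pod should now time out.
k -n project-snake get pod -o wide
k -n project-snake exec backend-0 -- curl -s -m 2 <db1-pod-ip>:1111     # should work
k -n project-snake exec backend-0 -- curl -s -m 2 <vault-pod-ip>:3333   # should time out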
Question:
Use context: kubectl config use-context k8s-c3-CCC
Make a backup of etcd running on cluster3-controlplane1 and save it on the controlplane node at /tmp/etcd-backup.db.
Then create any kind of Pod in the cluster.
Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.
Answer:
ssh cluster3-controlplane1
vim /etc/kubernetes/manifests/etcd.yaml
# /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.100.31:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt            # use
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.100.31:2380
    - --initial-cluster=cluster3-controlplane1=https://192.168.100.31:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key             # use
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.100.31:2379   # use
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.100.31:2380
    - --name=cluster3-controlplane1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt     # use
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.3.15-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd                # important
      type: DirectoryOrCreate
    name: etcd-data
status: {}
ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/etcd-backup.db
kubectl run test --image=nginx
kubectl get pod -l run=test -w
mv /etc/kubernetes/manifests/* /etc/kubernetes/
watch crictl ps
ETCDCTL_API=3 etcdctl --data-dir /var/lib/etcd-backup --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot restore /tmp/etcd-backup.db
vim /etc/kubernetes/etcd.yaml
# /etc/kubernetes/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
...
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd-backup         # change
      type: DirectoryOrCreate
    name: etcd-data
status: {}
mv /etc/kubernetes/*.yaml /etc/kubernetes/manifests/
watch crictl ps
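Once the static pods are back up, the cluster should work again and the Pod created before the restore should be gone:
kubectl get pod -l run=test     # should return no resources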