Images keep being removed
Source: 9-4 Label - Small Labels, Big Impact

会飞的小白菜
2022-09-17
The images on the node-3-bak node keep getting removed.
[root@node-3-bak ~]# crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/kubernetesui/dashboard-amd64 v2.1.0 9a07b5b4bfac0 68MB
docker.io/kubernetesui/metrics-scraper v1.0.6 48d79e554db69 15.1MB
docker.io/library/nginx 1.19 f0b8a9a541369 53.7MB
k8s.gcr.io/coredns 1.7.0 bfe3a36ebd252 14MB
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64 1.8.3 078b6f04135ff 15.2MB
k8s.gcr.io/dns/k8s-dns-node-cache 1.16.0 90f9d984ec9a3 56.1MB
k8s.gcr.io/ingress-nginx/controller v0.41.2 81d7cdfa41690 102MB
k8s.gcr.io/kube-apiserver v1.19.7 c15e4f843f010 29.7MB
k8s.gcr.io/kube-controller-manager v1.19.7 67b3bca112d1d 28MB
k8s.gcr.io/kube-proxy v1.19.7 9d368f4517bbe 49.3MB
k8s.gcr.io/kube-scheduler v1.19.7 4fa642720eeaf 13.8MB
registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/library_nginx <none> f6d0b4767a6c4 53.6MB
k8s.gcr.io/pause 3.2 80d28bedfe5de 298kB
k8s.gcr.io/pause 3.3 0184c1613d929 298kB
myhub.com/kubernetes/web v1 de32f56f19ab6 94.5MB
quay.io/calico/cni v3.16.5 9165569ec2362 46.3MB
quay.io/calico/kube-controllers v3.16.5 1120bf0b8b414 22.4MB
quay.io/calico/node v3.16.5 c1fa37765208c 57.3MB
[root@node-3-bak ~]#
[root@node-3-bak ~]# crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/library/nginx 1.19 f0b8a9a541369 53.7MB
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64 1.8.3 078b6f04135ff 15.2MB
k8s.gcr.io/dns/k8s-dns-node-cache 1.16.0 90f9d984ec9a3 56.1MB
k8s.gcr.io/kube-proxy v1.19.7 9d368f4517bbe 49.3MB
k8s.gcr.io/pause 3.2 80d28bedfe5de 298kB
myhub.com/kubernetes/web v1 de32f56f19ab6 94.5MB
quay.io/calico/cni v3.16.5 9165569ec2362 46.3MB
quay.io/calico/kube-controllers v3.16.5 1120bf0b8b414 22.4MB
quay.io/calico/node v3.16.5 c1fa37765208c 57.3MB
[root@node-1-bak 5-scheduler]# kubectl get all -o wide -ndefault
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/dubbo-demo-59c97cf47b-2jrcv 0/1 Evicted 0 4m13s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-5d8ff 1/1 Running 0 4m8s 192.168.0.110 node-2-bak <none> <none>
pod/dubbo-demo-59c97cf47b-5rr8x 0/1 Evicted 0 4m14s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-6pxmn 0/1 Evicted 0 4m12s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-6xc8g 0/1 Evicted 0 4m10s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-9hp59 0/1 Evicted 0 4m10s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-brgwx 0/1 Evicted 0 2d4h <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-fcmj2 0/1 Evicted 0 4m9s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-kqrch 0/1 Evicted 0 4m14s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-ktwtv 0/1 Evicted 0 4m13s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-kvbsp 0/1 Evicted 0 4m14s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-mj7b4 0/1 Evicted 0 4m11s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-rn9lw 0/1 Evicted 0 4m9s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-rnx54 0/1 Evicted 0 4m14s <none> node-3-bak <none> <none>
pod/dubbo-demo-59c97cf47b-wkqkn 0/1 Evicted 0 4m12s <none> node-3-bak <none> <none>
pod/web-demo-8497d59c48-bbsw7 0/1 Evicted 0 65m <none> node-3-bak <none> <none>
pod/web-demo-8497d59c48-dhvd9 0/1 Evicted 0 65m <none> node-3-bak <none> <none>
pod/web-demo-8497d59c48-fcp68 1/1 Running 0 65m 10.233.208.15 node-2-bak <none> <none>
pod/web-demo-8497d59c48-jj92z 0/1 Evicted 0 2d4h <none> node-3-bak <none> <none>
pod/web-demo-8497d59c48-jpsps 0/1 Evicted 0 65m <none> node-3-bak <none> <none>
pod/web-demo-8497d59c48-q4l2p 0/1 Evicted 0 65m <none> node-3-bak <none> <none>
pod/web-demo-8497d59c48-q8dmn 0/1 Evicted 0 65m <none> node-3-bak <none> <none>
pod/web-demo-8497d59c48-x4hbf 0/1 Evicted 0 65m <none> node-3-bak <none> <none>
status:
  message: 'Pod The node had condition: [DiskPressure]. '
  phase: Failed
  reason: Evicted
  startTime: "2022-09-16T23:11:25Z"
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FreeDiskSpaceFailed 45m kubelet failed to garbage collect required amount of images. Wanted to free 547300147 bytes, but freed 271373995 bytes
Normal NodeHasDiskPressure 29m (x3 over 72m) kubelet Node node-3-bak status is now: NodeHasDiskPressure
Normal NodeHasNoDiskPressure 18m (x37 over 75m) kubelet Node node-3-bak status is now: NodeHasNoDiskPressure
Warning FreeDiskSpaceFailed 10m kubelet failed to garbage collect required amount of images. Wanted to free 587547443 bytes, but freed 0 bytes
Warning EvictionThresholdMet 10m (x8 over 72m) kubelet Attempting to reclaim ephemeral-storage
Warning ImageGCFailed 5m53s kubelet failed to garbage collect required amount of images. Wanted to free 613024563 bytes, but freed 324881122 bytes
Capacity:
  cpu:                3
  ephemeral-storage:  12786Mi
  hugepages-2Mi:      0
  memory:             2913968Ki
  pods:               110
Allocatable:
  cpu:                2900m
  ephemeral-storage:  12066383443
  hugepages-2Mi:      0
  memory:             2549424Ki
  pods:               110
How do I fix this? Is it really a lack of disk space? It looks like there is still plenty left.
1 Answer
刘果国
2022-09-17
Yes, it is insufficient disk space, and the pods are being evicted as a result. Check how much space is left on the partition that holds the container data; I suggest moving Docker's data directory to a larger disk partition.
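For context: by default the kubelet starts image garbage collection once the disk holding images passes imageGCHighThresholdPercent (85%) and starts evicting pods once nodefs.available drops below 10%, so a node with only about 12Gi of ephemeral storage trips both limits even when a few gigabytes still look free. Below is a minimal sketch of the check and the move suggested in the answer, assuming the runtime is Docker and using /data/docker as an example target path (with containerd alone, the equivalent setting is root in /etc/containerd/config.toml):

# Check which partition holds the container and kubelet data, and how full it is
df -h /var/lib/docker /var/lib/kubelet

# Move Docker's data directory to a larger partition: stop the services,
# copy the data, then point "data-root" in /etc/docker/daemon.json at the
# new location before starting everything again
systemctl stop kubelet docker
mkdir -p /data/docker
rsync -a /var/lib/docker/ /data/docker/
vi /etc/docker/daemon.json     # add:  "data-root": "/data/docker"
systemctl start docker kubelet

# Confirm the node leaves the DiskPressure condition
kubectl describe node node-3-bak | grep -i diskpressure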