worker node NotReady
Source: 4-8 Installing and configuring worker nodes

weixin_慕莱坞3424064
2022-05-18
I have 1 master (node-238) and 2 workers (node-205, node-216).
node-216 keeps showing NotReady — how can I fix this?
root@node-216:/etc/kubernetes# kubectl get nodes
NAME       STATUS     ROLES                  AGE     VERSION
node-205   Ready      <none>                 174m    v1.23.6
node-216   NotReady   <none>                 13s     v1.23.6
node-238   Ready      control-plane,master   3h26m   v1.23.6
Checking the kubelet logs:
root@node-216:/etc/kubernetes# journalctl -f -u kubelet
May 18 22:46:51 node-216 kubelet[4249]: I0518 22:46:51.401053 4249 cni.go:205] "Error validating CNI config list" configList="{\n \"name\": \"cbr0\",\n \"cniVersion\": \"0.3.1\",\n \"plugins\": [\n {\n \"type\": \"flannel\",\n \"delegate\": {\n \"hairpinMode\": true,\n \"isDefaultGateway\": true\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\n \"portMappings\": true\n }\n }\n ]\n}\n" err="[failed to find plugin \"portmap\" in path [/opt/cni/bin]]"
May 18 22:46:51 node-216 kubelet[4249]: I0518 22:46:51.401105 4249 cni.go:240] "Unable to update cni config" err="no valid networks found in /etc/cni/net.d"
May 18 22:46:51 node-216 kubelet[4249]: E0518 22:46:51.686929 4249 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
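The key line is `failed to find plugin "portmap" in path [/opt/cni/bin]`: the kubelet refuses to mark the CNI config valid while any binary named in the conflist is missing from `/opt/cni/bin`. A small helper (a sketch, not part of any official tooling) to check which plugins from a conflist are actually installed:

```shell
# Report which of the named CNI plugin binaries are missing from a
# plugin directory; returns non-zero if any are absent.
check_cni_plugins() {
  local bin_dir="$1"; shift
  local missing=0
  for p in "$@"; do
    if [ ! -x "$bin_dir/$p" ]; then
      echo "missing: $p"
      missing=1
    fi
  done
  return "$missing"
}

# Usage on the worker (plugin names taken from 10-flannel.conflist):
# check_cni_plugins /opt/cni/bin flannel portmap
```

If `portmap` is reported missing, it ships with the reference CNI plugins bundle (the `containernetworking/plugins` GitHub releases); extracting that tarball into `/opt/cni/bin`, or copying the binary from a healthy node, is the usual fix for this exact error.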
I checked that flannel is installed, and its pods are present:
root@node-216:/etc/kubernetes# kubectl get pod -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-hp5f7            1/1     Running   0          3h9m
kube-system   coredns-6d8c4cb4d-qgshz            1/1     Running   0          3h9m
kube-system   etcd-node-238                      1/1     Running   0          3h27m
kube-system   kube-apiserver-node-238            1/1     Running   0          3h27m
kube-system   kube-controller-manager-node-238   1/1     Running   0          3h27m
kube-system   kube-flannel-ds-ch7nm              1/1     Running   0          65s
kube-system   kube-flannel-ds-g94ht              1/1     Running   0          34m
kube-system   kube-flannel-ds-mp8tt              1/1     Running   0          34m
kube-system   kube-proxy-2777s                   1/1     Running   0          175m
kube-system   kube-proxy-czsnn                   1/1     Running   0          3h26m
kube-system   kube-proxy-rb6r8                   1/1     Running   0          65s
kube-system   kube-scheduler-node-238            1/1     Running   0          3h27m
The config under /etc/cni/net.d looks like this:
root@node-216:/etc/cni/net.d# cat 10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
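Every `type` listed in this conflist must have a matching executable in `/opt/cni/bin` — here both `flannel` and `portmap`. A quick grep/cut sketch to list the required names (assumes the formatting shown above):

```shell
# List the plugin "type" values in a CNI conflist; each one needs a
# binary of the same name in /opt/cni/bin.
list_cni_types() {
  grep -oE '"type": *"[a-zA-Z0-9_-]+"' "$1" | cut -d'"' -f4
}

# On the worker:
# list_cni_types /etc/cni/net.d/10-flannel.conflist
```

For the file above this prints `flannel` and `portmap`, which matches the two binaries the kubelet error complains about.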
Update (May 19):
root@node-216:/etc/cni/net.d# kubectl logs kube-flannel-ds-bshjb -n kube-system
I0519 04:16:07.746910 1 main.go:205] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0519 04:16:07.747307 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0519 04:16:08.053230 1 kube.go:120] Waiting 10m0s for node controller to sync
I0519 04:16:08.053465 1 kube.go:378] Starting kube subnet manager
I0519 04:16:09.144860 1 kube.go:127] Node controller sync successful
I0519 04:16:09.144906 1 main.go:225] Created subnet manager: Kubernetes Subnet Manager - node-216
I0519 04:16:09.144920 1 main.go:228] Installing signal handlers
I0519 04:16:09.145106 1 main.go:454] Found network config - Backend type: vxlan
I0519 04:16:09.145146 1 match.go:189] Determining IP address of default interface
I0519 04:16:09.145837 1 match.go:242] Using interface with name eno1 and address 192.168.1.216
I0519 04:16:09.145906 1 match.go:264] Defaulting external address to interface address (192.168.1.216)
I0519 04:16:09.146011 1 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0519 04:16:09.245151 1 kube.go:339] Setting NodeNetworkUnavailable
I0519 04:16:09.262162 1 main.go:403] Current network or subnet (10.244.0.0/16, 10.244.9.0/24) is not equal to previous one (0.0.0.0/0, 0.0.0.0/0), trying to recycle old iptables rules
I0519 04:16:09.556746 1 iptables.go:255] Deleting iptables rule: -s 0.0.0.0/0 -d 0.0.0.0/0 -m comment --comment flanneld masq -j RETURN
I0519 04:16:09.557679 1 iptables.go:255] Deleting iptables rule: -s 0.0.0.0/0 ! -d 224.0.0.0/4 -m comment --comment flanneld masq -j MASQUERADE --random-fully
I0519 04:16:09.558487 1 iptables.go:255] Deleting iptables rule: ! -s 0.0.0.0/0 -d 0.0.0.0/0 -m comment --comment flanneld masq -j RETURN
I0519 04:16:09.644759 1 iptables.go:255] Deleting iptables rule: ! -s 0.0.0.0/0 -d 0.0.0.0/0 -m comment --comment flanneld masq -j MASQUERADE --random-fully
I0519 04:16:09.645609 1 main.go:332] Setting up masking rules
I0519 04:16:09.646077 1 main.go:353] Changing default FORWARD chain policy to ACCEPT
I0519 04:16:09.646126 1 main.go:366] Wrote subnet file to /run/flannel/subnet.env
I0519 04:16:09.646133 1 main.go:370] Running backend.
I0519 04:16:09.646140 1 main.go:391] Waiting for all goroutines to exit
I0519 04:16:09.646154 1 vxlan_network.go:61] watching for new subnet leases
I0519 04:16:09.647502 1 iptables.go:231] Some iptables rules are missing; deleting and recreating rules
I0519 04:16:09.647511 1 iptables.go:255] Deleting iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment flanneld masq -j RETURN
I0519 04:16:09.647621 1 iptables.go:231] Some iptables rules are missing; deleting and recreating rules
I0519 04:16:09.647643 1 iptables.go:255] Deleting iptables rule: -s 10.244.0.0/16 -m comment --comment flanneld forward -j ACCEPT
I0519 04:16:09.648308 1 iptables.go:255] Deleting iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment flanneld masq -j MASQUERADE --random-fully
I0519 04:16:09.744236 1 iptables.go:255] Deleting iptables rule: -d 10.244.0.0/16 -m comment --comment flanneld forward -j ACCEPT
I0519 04:16:09.744606 1 iptables.go:255] Deleting iptables rule: ! -s 10.244.0.0/16 -d 10.244.9.0/24 -m comment --comment flanneld masq -j RETURN
I0519 04:16:09.745065 1 iptables.go:243] Adding iptables rule: -s 10.244.0.0/16 -m comment --comment flanneld forward -j ACCEPT
I0519 04:16:09.745373 1 iptables.go:255] Deleting iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment flanneld masq -j MASQUERADE --random-fully
I0519 04:16:09.746086 1 iptables.go:243] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment flanneld masq -j RETURN
I0519 04:16:09.746352 1 iptables.go:243] Adding iptables rule: -d 10.244.0.0/16 -m comment --comment flanneld forward -j ACCEPT
I0519 04:16:09.748030 1 iptables.go:243] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment flanneld masq -j MASQUERADE --random-fully
I0519 04:16:09.749753 1 iptables.go:243] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.9.0/24 -m comment --comment flanneld masq -j RETURN
I0519 04:16:09.846156 1 iptables.go:243] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment flanneld masq -j MASQUERADE --random-fully
2 Answers
含泪韵心弦
2024-06-06
On the affected worker node, check the logs with journalctl -f -u kubelet.
If you see something like: k8s-node2 kubelet[3028]: : [failed to find plugin "flannel" in path [/opt/cni/bin]]
then copy the flannel binary from the master node to the worker node.
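A sketch of that fix, wrapped in a function so it can be reused per plugin (hostnames are the ones from this thread; adjust to your cluster, and copy any other missing plugin such as `portmap` the same way):

```shell
# Copy a CNI plugin binary from a healthy node's /opt/cni/bin into the
# local one; run as root on the NotReady worker.
copy_plugin_from_master() {
  local master="$1" plugin="$2"
  scp "root@${master}:/opt/cni/bin/${plugin}" "/opt/cni/bin/${plugin}"
  chmod +x "/opt/cni/bin/${plugin}"
}

# copy_plugin_from_master node-238 flannel
# copy_plugin_from_master node-238 portmap
# systemctl restart kubelet   # kubelet re-validates the CNI config
```

After restarting the kubelet, the "cni config uninitialized" errors should stop and the node should transition to Ready within a minute or so.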
清风
2022-05-19
What do the logs of kube-flannel-ds-g94ht show?