When starting the kubelet service, the following errors are reported:

Source: 7-7 小试牛刀

qq_慕虎604681

2019-06-04

Jun 04 10:33:22 dev-node2 kubelet[11347]: W0604 10:33:22.626801 11347 cni.go:309] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "534b6e9c4a60ac1d82d13236c0d2e5695095251e7ba154ffeafbbc3cbf1dcd67"
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=info msg="Extracted identifiers" Node=dev-node2 Orchestrator=k8s Workload=default.object-storage-api-service
Jun 04 10:33:22 dev-node2 kubelet[11347]: Calico CNI releasing IP address
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=info msg="No config file specified, loading config from environment"
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=info msg="Datastore type: etcdv2"
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=info msg="Releasing address using workloadID" Workload=default.object-storage-api-service
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=info msg="Releasing all IPs with handle 'default.object-storage-api-service'"
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=info msg="Get Key: /calico/ipam/v2/handle/default.object-storage-api-service"
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=info msg="Key not found error"
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=error msg="resource does not exist: {default.object-storage-api-service}" Workload=default.object-storage-api-service
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=info msg="No config file specified, loading config from environment"
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=info msg="Datastore type: etcdv2"
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=info msg="Delete Key: /calico/v1/host/dev-node2/workload/k8s/default.object-storage-api-service/endpoint/eth0"
Jun 04 10:33:22 dev-node2 kubelet[11347]: time="2019-06-04T10:33:22+08:00" level=info msg="Key not found error"
Jun 04 10:33:22 dev-node2 kubelet[11347]: E0604 10:33:22.720526 11347 cni.go:352] Error deleting default_object-storage-api-service/534b6e9c4a60ac1d82d13236c0d2e5695095251e7ba154ffeafbbc3cbf1dcd67 from network calico/calico-k8s-network: resource does not exist: WorkloadEndpoint(hostname=dev-node2, orchestrator=k8s, workload=default.object-storage-api-service, name=eth0)
Jun 04 10:33:22 dev-node2 kubelet[11347]: E0604 10:33:22.721428 11347 remote_runtime.go:132] StopPodSandbox "534b6e9c4a60ac1d82d13236c0d2e5695095251e7ba154ffeafbbc3cbf1dcd67" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "object-storage-api-service_default" network: resource does not exist: WorkloadEndpoint(hostname=dev-node2, orchestrator=k8s, workload=default.object-storage-api-service, name=eth0)
Jun 04 10:33:22 dev-node2 kubelet[11347]: E0604 10:33:22.721471 11347 kuberuntime_gc.go:169] Failed to stop sandbox "534b6e9c4a60ac1d82d13236c0d2e5695095251e7ba154ffeafbbc3cbf1dcd67" before removing: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "object-storage-api-service_default" network: resource does not exist: WorkloadEndpoint(hostname=dev-node2, orchestrator=k8s, workload=default.object-storage-api-service, name=eth0)
Jun 04 10:34:20 dev-node2 kubelet[11347]: I0604 10:34:20.677686 11347 container_manager_linux.go:448] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service

A couple of questions:
1. I configured "Datastore type: etcdv3", so why is it deleting keys under v1 or v2 paths? How can this be fixed?
2. It says the WorkloadEndpoint does not exist. How do I resolve that?
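For reference, the key paths from the log can be reconstructed as below. Note that the "v1" and "v2" in those paths are Calico's internal data-model versions, not the etcd API version, and the log itself reports "Datastore type: etcdv2". This is only a sketch under the assumption that a node can reach the Calico etcd cluster; the `etcdctl` calls are shown as comments because they need the live cluster.

```shell
# Node and workload names are taken from the log lines above.
NODE=dev-node2
WORKLOAD=default.object-storage-api-service

# The two keys the CNI plugin failed to find during teardown:
IPAM_KEY="/calico/ipam/v2/handle/${WORKLOAD}"
ENDPOINT_KEY="/calico/v1/host/${NODE}/workload/k8s/${WORKLOAD}/endpoint/eth0"
echo "$IPAM_KEY"
echo "$ENDPOINT_KEY"

# On the cluster itself (not runnable here), any stale entries could be
# inspected and removed via the etcd v2 API, matching the logged
# "Datastore type: etcdv2":
#   export ETCDCTL_API=2
#   etcdctl ls --recursive "/calico/v1/host/${NODE}/workload/k8s"
#   etcdctl rm "$ENDPOINT_KEY"
```

Since the "Key not found" lines show the entries are already gone, the errors here are teardown noise for an endpoint that no longer exists, rather than data that still needs deleting.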


2 Answers

qq_慕虎604681

(Original poster)

2019-06-05

This is a clean cluster, but deploying a pod did not succeed. The process was as follows:

[root@dev-node1 soft]# kubectl apply -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

[root@dev-node1 soft]# kubectl get -f kubernetes-dashboard.yaml
NAME                                TYPE     DATA   AGE
secret/kubernetes-dashboard-certs   Opaque   0      8s

NAME                                TYPE     DATA   AGE
secret/kubernetes-dashboard-csrf    Opaque   1      8s

NAME                                  SECRETS   AGE
serviceaccount/kubernetes-dashboard   0         8s

NAME                                                          AGE
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal   8s

NAME                                                                        AGE
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal   8s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kubernetes-dashboard   0/1     0            0           8s

NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)     AGE
service/kubernetes-dashboard   ClusterIP   10.68.98.97   <none>        19090/TCP   8s

[root@dev-node1 soft]# kubectl get pods -A
No resources found.

[root@dev-node1 soft]# kubectl get deploy -A -o wide
NAMESPACE     NAME                   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS             IMAGES                                                           SELECTOR
kube-system   kubernetes-dashboard   0/1     0            0           49s   kubernetes-dashboard   hub.swu.edu.cn:9090/library/kubernetes-dashboard-amd64:v1.10.1   k8s-app=kubernetes-dashboard

The Pod was never created successfully, and I am not sure why.

journalctl -f does not show any error logs either:

[root@dev-node1 system]# journalctl -f
-- Logs begin at Wed 2019-06-05 12:25:04 CST. --
Jun 05 12:45:28 dev-node1 kubelet[7197]: I0605 12:45:28.400077    7197 eviction_manager.go:321] eviction manager: no resources are starved
Jun 05 12:45:28 dev-node1 kube-apiserver[5945]: I0605 12:45:28.664158    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:28 dev-node1 kube-apiserver[5945]: I0605 12:45:28.664308    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:29 dev-node1 docker[16710]: 2019-06-05 04:45:29.648 [INFO][59] int_dataplane.go 907: Applying dataplane updates
Jun 05 12:45:29 dev-node1 docker[16710]: 2019-06-05 04:45:29.648 [INFO][59] ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
Jun 05 12:45:29 dev-node1 docker[16710]: 2019-06-05 04:45:29.648 [INFO][59] ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
Jun 05 12:45:29 dev-node1 docker[16710]: 2019-06-05 04:45:29.650 [INFO][59] ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=2.197264ms
Jun 05 12:45:29 dev-node1 docker[16710]: 2019-06-05 04:45:29.650 [INFO][59] int_dataplane.go 921: Finished applying updates to dataplane. msecToApply=2.480388
Jun 05 12:45:29 dev-node1 kube-apiserver[5945]: I0605 12:45:29.664489    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:29 dev-node1 kube-apiserver[5945]: I0605 12:45:29.664649    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:30 dev-node1 kube-apiserver[5945]: I0605 12:45:30.664829    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:30 dev-node1 kube-apiserver[5945]: I0605 12:45:30.665003    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:31 dev-node1 kube-apiserver[5945]: I0605 12:45:31.665216    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:31 dev-node1 kube-apiserver[5945]: I0605 12:45:31.665378    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:32 dev-node1 kube-apiserver[5945]: I0605 12:45:32.665564    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:32 dev-node1 kube-apiserver[5945]: I0605 12:45:32.665710    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:33 dev-node1 kube-apiserver[5945]: I0605 12:45:33.665898    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:33 dev-node1 kube-apiserver[5945]: I0605 12:45:33.666062    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:34 dev-node1 kube-apiserver[5945]: I0605 12:45:34.666250    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:34 dev-node1 kube-apiserver[5945]: I0605 12:45:34.666448    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:35 dev-node1 kube-apiserver[5945]: I0605 12:45:35.666645    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:35 dev-node1 kube-apiserver[5945]: I0605 12:45:35.666812    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:36 dev-node1 docker[16710]: 2019-06-05 04:45:36.044 [INFO][59] int_dataplane.go 907: Applying dataplane updates
Jun 05 12:45:36 dev-node1 docker[16710]: 2019-06-05 04:45:36.044 [INFO][59] route_table.go 222: Queueing a resync of routing table. ipVersion=0x4
Jun 05 12:45:36 dev-node1 docker[16710]: 2019-06-05 04:45:36.046 [INFO][59] int_dataplane.go 921: Finished applying updates to dataplane. msecToApply=1.705201
Jun 05 12:45:36 dev-node1 kube-apiserver[5945]: I0605 12:45:36.666982    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:36 dev-node1 kube-apiserver[5945]: I0605 12:45:36.667141    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:37 dev-node1 kube-apiserver[5945]: I0605 12:45:37.667341    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:37 dev-node1 kube-apiserver[5945]: I0605 12:45:37.667515    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:38 dev-node1 kubelet[7197]: I0605 12:45:38.400189    7197 eviction_manager.go:230] eviction manager: synchronize housekeeping
Jun 05 12:45:38 dev-node1 kubelet[7197]: I0605 12:45:38.412878    7197 helpers.go:822] eviction manager: observations: signal=nodefs.available, available: 49761136Ki, capacity: 51175Mi, time: 2019-06-05 12:45:38.401482733 +0800 CST m=+1223.038206611
Jun 05 12:45:38 dev-node1 kubelet[7197]: I0605 12:45:38.412921    7197 helpers.go:822] eviction manager: observations: signal=nodefs.inodesFree, available: 26179614, capacity: 26214400, time: 2019-06-05 12:45:38.401482733 +0800 CST m=+1223.038206611
Jun 05 12:45:38 dev-node1 kubelet[7197]: I0605 12:45:38.412933    7197 helpers.go:822] eviction manager: observations: signal=imagefs.available, available: 49761136Ki, capacity: 51175Mi, time: 2019-06-05 12:45:38.401482733 +0800 CST m=+1223.038206611
Jun 05 12:45:38 dev-node1 kubelet[7197]: I0605 12:45:38.412941    7197 helpers.go:822] eviction manager: observations: signal=imagefs.inodesFree, available: 26179614, capacity: 26214400, time: 2019-06-05 12:45:38.401482733 +0800 CST m=+1223.038206611
Jun 05 12:45:38 dev-node1 kubelet[7197]: I0605 12:45:38.412949    7197 helpers.go:822] eviction manager: observations: signal=pid.available, available: 32153, capacity: 32Ki, time: 2019-06-05 12:45:38.412347096 +0800 CST m=+1223.049070941
Jun 05 12:45:38 dev-node1 kubelet[7197]: I0605 12:45:38.412958    7197 helpers.go:822] eviction manager: observations: signal=memory.available, available: 15509392Ki, capacity: 16264720Ki, time: 2019-06-05 12:45:38.401482733 +0800 CST m=+1223.038206611
Jun 05 12:45:38 dev-node1 kubelet[7197]: I0605 12:45:38.412966    7197 helpers.go:822] eviction manager: observations: signal=allocatableMemory.available, available: 16264720Ki, capacity: 16264720Ki, time: 2019-06-05 12:45:38.412792194 +0800 CST m=+1223.049516070
Jun 05 12:45:38 dev-node1 kubelet[7197]: I0605 12:45:38.412985    7197 eviction_manager.go:321] eviction manager: no resources are starved
Jun 05 12:45:38 dev-node1 kube-apiserver[5945]: I0605 12:45:38.667717    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:38 dev-node1 kube-apiserver[5945]: I0605 12:45:38.667876    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:39 dev-node1 kube-apiserver[5945]: I0605 12:45:39.668047    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
Jun 05 12:45:39 dev-node1 kube-apiserver[5945]: I0605 12:45:39.668231    5945 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
Jun 05 12:45:39 dev-node1 docker[16710]: 2019-06-05 04:45:39.840 [INFO][59] int_dataplane.go 907: Applying dataplane updates
Jun 05 12:45:39 dev-node1 docker[16710]: 2019-06-05 04:45:39.840 [INFO][59] ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
Jun 05 12:45:39 dev-node1 docker[16710]: 2019-06-05 04:45:39.840 [INFO][59] ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
Jun 05 12:45:39 dev-node1 docker[16710]: 2019-06-05 04:45:39.842 [INFO][59] ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=1.689065ms
Jun 05 12:45:39 dev-node1 docker[16710]: 2019-06-05 04:45:39.842 [INFO][59] int_dataplane.go 921: Finished applying updates to dataplane. msecToApply=1.9875180000000001
^C
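Given the output above, one observation: the Deployment shows UP-TO-DATE 0, meaning no ReplicaSet was ever created, which usually points at the control plane (kube-controller-manager) rather than kubelet or the CNI plugin. A diagnostic sketch follows, assuming kubectl access to this cluster; the kubectl calls themselves are left as comments because they need the live cluster.

```shell
# Names taken from the transcript above.
NS=kube-system
DEPLOY=kubernetes-dashboard

# These checks need the live cluster, so they are shown as comments:
# kubectl get componentstatuses                 # is controller-manager Healthy?
# kubectl -n "$NS" get rs                       # was any ReplicaSet created?
# kubectl -n "$NS" describe deploy "$DEPLOY"    # inspect Conditions and Events
# kubectl -n "$NS" get events --sort-by=.metadata.creationTimestamp
echo "checks target: deployment/$DEPLOY in namespace $NS"
```

If no ReplicaSet exists at all, checking the kube-controller-manager service and its logs on the master would be the next step.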



刘果国

2019-06-05

Hi, what state is the cluster in? Had it been fully deployed and running applications when the problem appeared, or is it a clean cluster still being set up? From the logs it looks like services are already running, which smells like a problem caused by stale data. Double-check the version of etcd itself; if that is normal, you should be fine. As for why the info messages print v2, I have not looked into that specifically; info-level output does not need much attention.

刘果国 replied to qq_慕虎604681:
That's right, the latest versions have indeed changed a lot. This course focuses on learning Kubernetes itself rather than chasing the newest release, walking through the relationships between the components step by step by hand. Nothing at the level of principles has changed, but if you want to deploy the latest version, this tutorial is indeed no longer a good fit. An upgrade will be considered later. Thanks for the suggestion!
2019-06-12
(6 replies in total)
