kubeadm init keeps failing and never completes

Source: 4-1 The Origin and Development of Kubernetes

慕仰4468487

2022-05-31

[root@node01 /]# kubeadm init --apiserver-advertise-address=10.24.24.70 --kubernetes-version v1.20.10 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.20.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node01] and IPs [10.96.0.1 10.24.24.70]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node01] and IPs [10.24.24.70 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node01] and IPs [10.24.24.70 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp: lookup localhost on 114.114.114.114:53: no such host.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp: lookup localhost on 114.114.114.114:53: no such host.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp: lookup localhost on 114.114.114.114:53: no such host.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp: lookup localhost on 114.114.114.114:53: no such host.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp: lookup localhost on 114.114.114.114:53: no such host.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
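The repeated health-check failure above is the most telling line: the probe for http://localhost:10248/healthz is being resolved through the public DNS server 114.114.114.114 rather than /etc/hosts, which usually means the loopback entries for `localhost` are missing from /etc/hosts. A minimal check-and-fix sketch, assuming that is the cause (the `ensure_localhost` helper is hypothetical, not a course script):

```shell
# Hypothetical helper: make sure the given hosts file maps the loopback
# addresses to "localhost", as a standard /etc/hosts does.
ensure_localhost() {
  f="$1"
  grep -qE '^[[:space:]]*127\.0\.0\.1[[:space:]].*localhost' "$f" \
    || printf '127.0.0.1\tlocalhost\n' >> "$f"
  grep -qE '^[[:space:]]*::1[[:space:]].*localhost' "$f" \
    || printf '::1\tlocalhost\n' >> "$f"
}

# Dry run against a copy first, to see what would change:
tmp=$(mktemp)
cp /etc/hosts "$tmp" 2>/dev/null || : > "$tmp"
ensure_localhost "$tmp"
grep localhost "$tmp"

# If the real file is missing the entries, apply as root and restart:
#   ensure_localhost /etc/hosts && systemctl restart kubelet
```

If `getent hosts localhost` already returns 127.0.0.1 on the node, this is not the cause and the kubelet logs need a closer look.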
[root@node01 /]# journalctl -xeu kubelet
5月 31 11:38:49 node01 kubelet[28998]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers.func1(0xc00086ec80, 0xc0005089b0, 0xc000f280c0)
5月 31 11:38:49 node01 kubelet[28998]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1164 +0xe5
5月 31 11:38:49 node01 kubelet[28998]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers
5月 31 11:38:49 node01 kubelet[28998]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1162 +0x21d
5月 31 11:38:49 node01 kubelet[28998]: goroutine 503 [select]:
5月 31 11:38:49 node01 kubelet[28998]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).globalHousekeeping(0xc00086ec80, 0xc001487500)
5月 31 11:38:49 node01 kubelet[28998]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:385 +0x145
5月 31 11:38:49 node01 kubelet[28998]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
5月 31 11:38:49 node01 kubelet[28998]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:319 +0x585
5月 31 11:38:49 node01 kubelet[28998]: goroutine 504 [select]:
5月 31 11:38:49 node01 kubelet[28998]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).updateMachineInfo(0xc00086ec80, 0xc001487560)
5月 31 11:38:49 node01 kubelet[28998]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:357 +0xd4
5月 31 11:38:49 node01 kubelet[28998]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
5月 31 11:38:49 node01 kubelet[28998]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:323 +0x608
5月 31 11:38:49 node01 kubelet[28998]: goroutine 509 [IO wait]:
5月 31 11:38:49 node01 kubelet[28998]: internal/poll.runtime_pollWait(0x7feb845526b0, 0x72, 0x4f321c0)
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/runtime/netpoll.go:222 +0x55
5月 31 11:38:49 node01 kubelet[28998]: internal/poll.(*pollDesc).wait(0xc0007b3398, 0x72, 0x4f32100, 0x6fffeb8, 0x0)
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
5月 31 11:38:49 node01 kubelet[28998]: internal/poll.(*pollDesc).waitRead(...)
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/internal/poll/fd_poll_runtime.go:92
5月 31 11:38:49 node01 kubelet[28998]: internal/poll.(*FD).Read(0xc0007b3380, 0xc0015f5000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/internal/poll/fd_unix.go:159 +0x1a5
5月 31 11:38:49 node01 kubelet[28998]: net.(*netFD).Read(0xc0007b3380, 0xc0015f5000, 0x1000, 0x1000, 0x43e1dc, 0xc000f31b58, 0x46b7a0)
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/net/fd_posix.go:55 +0x4f
5月 31 11:38:49 node01 kubelet[28998]: net.(*conn).Read(0xc0001152e0, 0xc0015f5000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/net/net.go:182 +0x8e
5月 31 11:38:49 node01 kubelet[28998]: net/http.(*persistConn).Read(0xc00074b200, 0xc0015f5000, 0x1000, 0x1000, 0xc000ce4c00, 0xc000f31c58, 0x409095)
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/net/http/transport.go:1894 +0x77
5月 31 11:38:49 node01 kubelet[28998]: bufio.(*Reader).fill(0xc0015eac00)
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/bufio/bufio.go:101 +0x105
5月 31 11:38:49 node01 kubelet[28998]: bufio.(*Reader).Peek(0xc0015eac00, 0x1, 0x0, 0x1, 0x1, 0x0, 0xc001487f20)
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/bufio/bufio.go:139 +0x4f
5月 31 11:38:49 node01 kubelet[28998]: net/http.(*persistConn).readLoop(0xc00074b200)
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/net/http/transport.go:2047 +0x1a8
5月 31 11:38:49 node01 kubelet[28998]: created by net/http.(*Transport).dialConn
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/net/http/transport.go:1715 +0xcb7
5月 31 11:38:49 node01 kubelet[28998]: goroutine 510 [select]:
5月 31 11:38:49 node01 kubelet[28998]: net/http.(*persistConn).writeLoop(0xc00074b200)
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/net/http/transport.go:2346 +0x11c
5月 31 11:38:49 node01 kubelet[28998]: created by net/http.(*Transport).dialConn
5月 31 11:38:49 node01 kubelet[28998]: /usr/local/go/src/net/http/transport.go:1716 +0xcdc




`systemctl status kubelet` shows the service constantly restarting: it runs for a moment, then fails again.

Checked multiple times; it is always in a restarting state.

Configuration files such as 10-kubeadm.conf
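One more thing worth checking, since the kubeadm error text above mentions "required cgroups disabled": on Docker-based v1.20 setups a very common cause of exactly this kubelet restart loop is a cgroup driver mismatch, where kubeadm configures the kubelet for the systemd driver while Docker defaults to cgroupfs. I cannot confirm this from the logs shown, but if `docker info | grep -i cgroup` reports `cgroupfs`, a commonly suggested fix (an assumption, not the instructor's official steps) is to switch Docker to systemd in /etc/docker/daemon.json:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

then restart both services: `systemctl restart docker && systemctl restart kubelet`.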

Teacher, could you please take a look? I'm completely stuck.


2 Answers

慕数据5257326

2023-09-19

I'm running into the same problem as you. Did you ever figure it out, bro?


清风

2022-05-31

Can you install version 1.19.3 instead?

清风 replied to 慕仰4468487:
Yes. Install strictly following my course content first; you can get it done in about an hour.
2022-06-01
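For anyone retrying with the version from the answer above, the failed attempt has to be cleaned up before re-running. A sketch, assuming the same flags as the original attempt with only the version pinned to v1.19.3 (not the instructor's exact steps):

```shell
# Hypothetical retry helper: wipe the failed control-plane state, then
# re-run init pinned to the course's Kubernetes version.
retry_init() {
  kubeadm reset -f   # removes /etc/kubernetes state from the failed init
  kubeadm init \
    --apiserver-advertise-address=10.24.24.70 \
    --kubernetes-version v1.19.3 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16
}

# Run on the control-plane node as root:
#   retry_init
```

Note that the matching v1.19.3 control-plane images must be pullable (or pre-pulled with `kubeadm config images pull`) for the init to succeed.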

Course: Kubernetes 入门到进阶实战 (Kubernetes: From Beginner to Advanced in Practice)