kubeadm init fails with an error
Source: 9-5 Building a Multi-Node K8S Cluster with kubeadm
慕少1536510
2020-02-15
Running sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --apiserver-advertise-address 192.168.0.120 fails with the error below:
[root@kmaster kubelet.service.d]# sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --apiserver-advertise-address 192.168.0.120
W0215 01:44:40.011210 5838 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0215 01:44:40.011370 5838 version.go:102] falling back to the local client version: v1.17.3
W0215 01:44:40.011667 5838 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0215 01:44:40.011686 5838 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0215 01:44:42.583236 5838 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0215 01:44:42.589936 5838 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
It looks like a kubelet problem, but the kubelet service seems to be up and I can't tell what's wrong:
[root@kmaster kubelet.service.d]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Sat 2020-02-15 01:57:10 EST; 4s ago
Docs: https://kubernetes.io/docs/
Process: 9311 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 9311 (code=exited, status=255)
Feb 15 01:57:10 kmaster kubelet[9311]: --tls-cipher-suites strings …A_WITH_A
Feb 15 01:57:10 kmaster kubelet[9311]: --tls-min-version string …
Feb 15 01:57:10 kmaster kubelet[9311]: --tls-private-key-file string …
Feb 15 01:57:10 kmaster kubelet[9311]: --topology-manager-policy string …
Feb 15 01:57:10 kmaster kubelet[9311]: -v, --v Level …erbosity
Feb 15 01:57:10 kmaster kubelet[9311]: --version version[=true] …and quit
Feb 15 01:57:10 kmaster kubelet[9311]: --vmodule moduleSpec … logging
Feb 15 01:57:10 kmaster kubelet[9311]: --volume-plugin-dir string …/exec/")
Feb 15 01:57:10 kmaster kubelet[9311]: --volume-stats-agg-period duration …
Feb 15 01:57:10 kmaster kubelet[9311]: F0215 01:57:10.881466 9311 server.go:156] unknown flag: --require-kubeconfig
Hint: Some lines were ellipsized, use -l to show in full.
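The last journal line above is the actual failure: the kubelet exits immediately because it is being started with --require-kubeconfig, a flag that newer kubelet releases no longer accept, so systemd keeps restarting it and the health endpoint on port 10248 never comes up. A quick way to confirm this and find where the stale flag comes from (a sketch; the drop-in directory is the one shown in the status output above):

# Show the untruncated failure message (systemctl ellipsized it above)
journalctl -xeu kubelet --no-pager | tail -n 20
# Find which systemd drop-in still passes the removed flag
grep -rn "require-kubeconfig" /etc/systemd/system/kubelet.service.d/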
1 Answer
慕少1536510
(Original poster)
2020-02-15
Solved. The problem was my 10-kubeadm.conf: as the kubelet log above shows, it was still passing the --require-kubeconfig flag, which this kubelet version no longer accepts, so the service kept crash-looping.
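For comparison, a minimal sketch of what the stock 10-kubeadm.conf shipped with kubeadm v1.11+ looks like on an RPM-based system (an assumption; your package may lay it out slightly differently, but the point is that the kubelet is driven by /var/lib/kubelet/kubeadm-flags.env and config.yaml, with no --require-kubeconfig anywhere):

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# kubeadm init/join writes this file at runtime to populate KUBELET_KUBEADM_ARGS
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# Optional user overrides; KUBELET_EXTRA_ARGS is sourced from here on RPM systems
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

After fixing the drop-in, reload systemd and redo the init. The "Using existing" certificate lines in the output above show state left over from the earlier failed attempt, so running kubeadm reset first gives a clean slate:

sudo systemctl daemon-reload && sudo systemctl restart kubelet
sudo kubeadm reset -f
sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --apiserver-advertise-address 192.168.0.120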