The first node deployed successfully, but adding another node fails!
Source: 1-1 Course Introduction

慕斯4289344
2022-01-13
TASK [kubernetes/master : kubeadm | Create kubeadm config] ****************************************************************************************************
changed: [node4] => {“changed”: true, “checksum”: “dd12974f5e2be1d080cb0ea678304d1a413a2cd6”, “dest”: “/etc/kubernetes/kubeadm-config.yaml”, “gid”: 0, “group”: “root”, “md5sum”: “fd55fc837bb9dd9632d0a5300c9d251d”, “mode”: “0640”, “owner”: “root”, “size”: 3385, “src”: “/root/.ansible/tmp/ansible-tmp-1642093921.83172-16626-238128328874601/source”, “state”: “file”, “uid”: 0}
changed: [node2] => {“changed”: true, “checksum”: “a8a710efd3cf532d715a3bf450e97f13df19fa2a”, “dest”: “/etc/kubernetes/kubeadm-config.yaml”, “gid”: 0, “group”: “root”, “md5sum”: “a5b323ef3e73d8716f1378d58a5ea3a1”, “mode”: “0640”, “owner”: “root”, “size”: 3445, “src”: “/root/.ansible/tmp/ansible-tmp-1642093921.8391135-16627-111017830590596/source”, “state”: “file”, “uid”: 0}
Friday 14 January 2022 01:12:02 +0800 (0:00:00.764) 0:02:53.018 ********
Friday 14 January 2022 01:12:02 +0800 (0:00:00.145) 0:02:53.164 ********
Friday 14 January 2022 01:12:02 +0800 (0:00:00.158) 0:02:53.322 ********
Friday 14 January 2022 01:12:02 +0800 (0:00:00.060) 0:02:53.382 ********
Friday 14 January 2022 01:12:02 +0800 (0:00:00.049) 0:02:53.432 ********
TASK [kubernetes/master : kubeadm | Initialize first master] **************************************************************************************************
changed: [node4] => {“attempts”: 1, “changed”: true, “cmd”: [“timeout”, “-k”, “300s”, “300s”, “/usr/local/bin/kubeadm”, “init”, “–config=/etc/kubernetes/kubeadm-config.yaml”, “–ignore-preflight-errors=all”, “–skip-phases=addon/coredns”, “–upload-certs”], “delta”: “0:00:25.015768”, “end”: “2022-01-13 17:15:53.344254”, “failed_when_result”: false, “rc”: 0, “start”: “2022-01-13 17:15:28.328486”, “stderr”: “W0113 17:15:28.386017 27646 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.200.0.10]; the provided value is: [169.254.25.10]\nW0113 17:15:28.392646 27646 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]”, “stderr_lines”: [“W0113 17:15:28.386017 27646 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.200.0.10]; the provided value is: [169.254.25.10]”, “W0113 17:15:28.392646 27646 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]”], “stdout”: “[init] Using Kubernetes version: v1.19.7\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’\n[certs] Using certificateDir folder “/etc/kubernetes/ssl”\n[certs] Generating “ca” certificate and key\n[certs] Generating “apiserver” certificate and key\n[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb-apiserver.kubernetes.local localhost node2 node4 node4.cluster.local] and IPs [10.200.0.1 10.9.202.43 127.0.0.1]\n[certs] Generating “apiserver-kubelet-client” certificate and key\n[certs] Generating “front-proxy-ca” certificate and key\n[certs] 
Generating “front-proxy-client” certificate and key\n[certs] External etcd mode: Skipping etcd/ca certificate authority generation\n[certs] External etcd mode: Skipping etcd/server certificate generation\n[certs] External etcd mode: Skipping etcd/peer certificate generation\n[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation\n[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation\n[certs] Generating “sa” key and public key\n[kubeconfig] Using kubeconfig folder “/etc/kubernetes”\n[kubeconfig] Writing “admin.conf” kubeconfig file\n[kubeconfig] Writing “kubelet.conf” kubeconfig file\n[kubeconfig] Writing “controller-manager.conf” kubeconfig file\n[kubeconfig] Writing “scheduler.conf” kubeconfig file\n[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”\n[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”\n[kubelet-start] Starting the kubelet\n[control-plane] Using manifest folder “/etc/kubernetes/manifests”\n[control-plane] Creating static Pod manifest for “kube-apiserver”\n[control-plane] Creating static Pod manifest for “kube-controller-manager”\n[control-plane] Creating static Pod manifest for “kube-scheduler”\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. 
This can take up to 5m0s\n[apiclient] All control plane components are healthy after 17.504497 seconds\n[upload-config] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace\n[kubelet] Creating a ConfigMap “kubelet-config-1.19” in namespace kube-system with the configuration for the kubelets in the cluster\n[upload-certs] Storing the certificates in Secret “kubeadm-certs” in the “kube-system” Namespace\n[upload-certs] Using certificate key:\n73cc65acaf1654dc1bf8c332c02b0a4cecacccac1bac7b49e941994cc8e5bbcb\n[mark-control-plane] Marking the node node2 as control-plane by adding the label “node-role.kubernetes.io/master=’’”\n[mark-control-plane] Marking the node node2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\n[bootstrap-token] Using token: n09a09.gcq04u06x0on3h8d\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\n[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\n[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\n[bootstrap-token] Creating the “cluster-info” ConfigMap in the “kube-public” namespace\n[kubelet-finalize] Updating “/etc/kubernetes/kubelet.conf” to point to a rotatable kubelet client certificate and key\n[addons] Applied essential addon: kube-proxy\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n mkdir -p $HOME/.kube\n sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to 
the cluster.\nRun “kubectl apply -f [podnetwork].yaml” with one of the options listed at:\n https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of the control-plane node running the following command on each as root:\n\n kubeadm join 10.9.202.43:6443 --token n09a09.gcq04u06x0on3h8d \\n --discovery-token-ca-cert-hash sha256:696adf694090ce1aabe7fecde75e51b31c32e0e83c3a136d461a9003713975b8 \\n --control-plane --certificate-key 73cc65acaf1654dc1bf8c332c02b0a4cecacccac1bac7b49e941994cc8e5bbcb\n\nPlease note that the certificate-key gives access to cluster sensitive data, keep it secret!\nAs a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use\n"kubeadm init phase upload-certs --upload-certs” to reload certs afterward.\n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join 10.9.202.43:6443 --token n09a09.gcq04u06x0on3h8d \\n --discovery-token-ca-cert-hash sha256:696adf694090ce1aabe7fecde75e51b31c32e0e83c3a136d461a9003713975b8 “, “stdout_lines”: [”[init] Using Kubernetes version: v1.19.7", “[preflight] Running pre-flight checks”, “[preflight] Pulling images required for setting up a Kubernetes cluster”, “[preflight] This might take a minute or two, depending on the speed of your internet connection”, “[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’”, “[certs] Using certificateDir folder “/etc/kubernetes/ssl””, “[certs] Generating “ca” certificate and key”, “[certs] Generating “apiserver” certificate and key”, “[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb-apiserver.kubernetes.local localhost node2 node4 node4.cluster.local] and IPs [10.200.0.1 10.9.202.43 127.0.0.1]”, “[certs] Generating “apiserver-kubelet-client” certificate and key”, “[certs] Generating “front-proxy-ca” certificate and key”, 
“[certs] Generating “front-proxy-client” certificate and key”, “[certs] External etcd mode: Skipping etcd/ca certificate authority generation”, “[certs] External etcd mode: Skipping etcd/server certificate generation”, “[certs] External etcd mode: Skipping etcd/peer certificate generation”, “[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation”, “[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation”, “[certs] Generating “sa” key and public key”, “[kubeconfig] Using kubeconfig folder “/etc/kubernetes””, “[kubeconfig] Writing “admin.conf” kubeconfig file”, “[kubeconfig] Writing “kubelet.conf” kubeconfig file”, “[kubeconfig] Writing “controller-manager.conf” kubeconfig file”, “[kubeconfig] Writing “scheduler.conf” kubeconfig file”, “[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env””, “[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml””, “[kubelet-start] Starting the kubelet”, “[control-plane] Using manifest folder “/etc/kubernetes/manifests””, “[control-plane] Creating static Pod manifest for “kube-apiserver””, “[control-plane] Creating static Pod manifest for “kube-controller-manager””, “[control-plane] Creating static Pod manifest for “kube-scheduler””, “[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. 
This can take up to 5m0s”, “[apiclient] All control plane components are healthy after 17.504497 seconds”, “[upload-config] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace”, “[kubelet] Creating a ConfigMap “kubelet-config-1.19” in namespace kube-system with the configuration for the kubelets in the cluster”, “[upload-certs] Storing the certificates in Secret “kubeadm-certs” in the “kube-system” Namespace”, “[upload-certs] Using certificate key:”, “73cc65acaf1654dc1bf8c332c02b0a4cecacccac1bac7b49e941994cc8e5bbcb”, “[mark-control-plane] Marking the node node2 as control-plane by adding the label “node-role.kubernetes.io/master=’’””, “[mark-control-plane] Marking the node node2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]”, “[bootstrap-token] Using token: n09a09.gcq04u06x0on3h8d”, “[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles”, “[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes”, “[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials”, “[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token”, “[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster”, “[bootstrap-token] Creating the “cluster-info” ConfigMap in the “kube-public” namespace”, “[kubelet-finalize] Updating “/etc/kubernetes/kubelet.conf” to point to a rotatable kubelet client certificate and key”, “[addons] Applied essential addon: kube-proxy”, “”, “Your Kubernetes control-plane has initialized successfully!”, “”, “To start using your cluster, you need to run the following as a regular user:”, “”, " mkdir -p $HOME/.kube", " sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config", " sudo chown $(id -u):$(id -g) 
$HOME/.kube/config", “”, “You should now deploy a pod network to the cluster.”, “Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:”, " https://kubernetes.io/docs/concepts/cluster-administration/addons/", “”, “You can now join any number of the control-plane node running the following command on each as root:”, “”, " kubeadm join 10.9.202.43:6443 --token n09a09.gcq04u06x0on3h8d \", " --discovery-token-ca-cert-hash sha256:696adf694090ce1aabe7fecde75e51b31c32e0e83c3a136d461a9003713975b8 \", " --control-plane --certificate-key 73cc65acaf1654dc1bf8c332c02b0a4cecacccac1bac7b49e941994cc8e5bbcb", “”, “Please note that the certificate-key gives access to cluster sensitive data, keep it secret!”, “As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use”, ““kubeadm init phase upload-certs --upload-certs” to reload certs afterward.”, “”, “Then you can join any number of worker nodes by running the following on each as root:”, “”, “kubeadm join 10.9.202.43:6443 --token n09a09.gcq04u06x0on3h8d \”, " --discovery-token-ca-cert-hash sha256:696adf694090ce1aabe7fecde75e51b31c32e0e83c3a136d461a9003713975b8 "]}
Friday 14 January 2022 01:12:28 +0800 (0:00:25.248) 0:03:18.680 ********
Friday 14 January 2022 01:12:30 +0800 (0:00:01.880) 0:03:20.560 ********
Friday 14 January 2022 01:12:30 +0800 (0:00:00.067) 0:03:20.628 ********
TASK [kubernetes/master : Create kubeadm token for joining nodes with 24h expiration (default)] ***************************************************************
ok: [node4 -> 10.9.202.43] => {“attempts”: 1, “changed”: false, “cmd”: ["/usr/local/bin/kubeadm", “–kubeconfig”, “/etc/kubernetes/admin.conf”, “token”, “create”], “delta”: “0:00:00.083833”, “end”: “2022-01-13 17:15:55.618513”, “rc”: 0, “start”: “2022-01-13 17:15:55.534680”, “stderr”: “W0113 17:15:55.598321 28014 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]”, “stderr_lines”: [“W0113 17:15:55.598321 28014 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]”], “stdout”: “f14g0n.rkdsca5st9cenbbv”, “stdout_lines”: [“f14g0n.rkdsca5st9cenbbv”]}
ok: [node2 -> 10.9.202.43] => {“attempts”: 1, “changed”: false, “cmd”: ["/usr/local/bin/kubeadm", “–kubeconfig”, “/etc/kubernetes/admin.conf”, “token”, “create”], “delta”: “0:00:00.080842”, “end”: “2022-01-13 17:15:55.639102”, “rc”: 0, “start”: “2022-01-13 17:15:55.558260”, “stderr”: “W0113 17:15:55.625312 28019 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]”, “stderr_lines”: [“W0113 17:15:55.625312 28019 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]”], “stdout”: “kf006s.g5e7054jzjbsfk3s”, “stdout_lines”: [“kf006s.g5e7054jzjbsfk3s”]}
Friday 14 January 2022 01:12:30 +0800 (0:00:00.316) 0:03:20.945 ********
TASK [kubernetes/master : Set kubeadm_token] ******************************************************************************************************************
ok: [node4] => {“ansible_facts”: {“kubeadm_token”: “f14g0n.rkdsca5st9cenbbv”}, “changed”: false}
ok: [node2] => {“ansible_facts”: {“kubeadm_token”: “kf006s.g5e7054jzjbsfk3s”}, “changed”: false}
Friday 14 January 2022 01:12:30 +0800 (0:00:00.059) 0:03:21.005 ********
included: /soft/kubespray-2.15.0/roles/kubernetes/master/tasks/kubeadm-secondary.yml for node4, node2
Friday 14 January 2022 01:12:30 +0800 (0:00:00.096) 0:03:21.101 ********
TASK [kubernetes/master : Set kubeadm_discovery_address] ******************************************************************************************************
ok: [node4] => {“ansible_facts”: {“kubeadm_discovery_address”: “10.9.202.43:6443”}, “changed”: false}
ok: [node2] => {“ansible_facts”: {“kubeadm_discovery_address”: “10.9.202.43:6443”}, “changed”: false}
Friday 14 January 2022 01:12:30 +0800 (0:00:00.113) 0:03:21.215 ********
TASK [kubernetes/master : Upload certificates so they are fresh and not expired] ******************************************************************************
changed: [node4] => {“changed”: true, “cmd”: ["/usr/local/bin/kubeadm", “init”, “phase”, “–config”, “/etc/kubernetes/kubeadm-config.yaml”, “upload-certs”, “–upload-certs”], “delta”: “0:00:00.110870”, “end”: “2022-01-13 17:15:56.233112”, “rc”: 0, “start”: “2022-01-13 17:15:56.122242”, “stderr”: “W0113 17:15:56.177546 28034 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.200.0.10]; the provided value is: [169.254.25.10]\nW0113 17:15:56.181666 28034 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]”, “stderr_lines”: [“W0113 17:15:56.177546 28034 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.200.0.10]; the provided value is: [169.254.25.10]”, “W0113 17:15:56.181666 28034 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]”], “stdout”: “[upload-certs] Storing the certificates in Secret “kubeadm-certs” in the “kube-system” Namespace\n[upload-certs] Using certificate key:\n73cc65acaf1654dc1bf8c332c02b0a4cecacccac1bac7b49e941994cc8e5bbcb”, “stdout_lines”: ["[upload-certs] Storing the certificates in Secret “kubeadm-certs” in the “kube-system” Namespace", “[upload-certs] Using certificate key:”, “73cc65acaf1654dc1bf8c332c02b0a4cecacccac1bac7b49e941994cc8e5bbcb”]}
Friday 14 January 2022 01:12:31 +0800 (0:00:00.322) 0:03:21.537 ********
TASK [kubernetes/master : Parse certificate key if not set] ***************************************************************************************************
ok: [node4] => {“ansible_facts”: {“kubeadm_certificate_key”: “73cc65acaf1654dc1bf8c332c02b0a4cecacccac1bac7b49e941994cc8e5bbcb”}, “changed”: false}
Friday 14 January 2022 01:12:31 +0800 (0:00:00.089) 0:03:21.627 ********
TASK [kubernetes/master : Create kubeadm ControlPlane config] *************************************************************************************************
changed: [node2] => {“backup_file”: “/etc/kubernetes/kubeadm-controlplane.yaml.28078.2022-01-13@17:15:56~”, “changed”: true, “checksum”: “02d2c41829990bba6b434d03c9257a3f3c58b0c5”, “dest”: “/etc/kubernetes/kubeadm-controlplane.yaml”, “gid”: 0, “group”: “root”, “md5sum”: “f4d8cdfbb412a6845304a92ce6544334”, “mode”: “0640”, “owner”: “root”, “size”: 510, “src”: “/root/.ansible/tmp/ansible-tmp-1642093951.2388265-17111-210423397336506/source”, “state”: “file”, “uid”: 0}
Friday 14 January 2022 01:12:31 +0800 (0:00:00.738) 0:03:22.365 ********
TASK [kubernetes/master : Wait for k8s apiserver] *************************************************************************************************************
ok: [node4] => {“changed”: false, “elapsed”: 0, “match_groupdict”: {}, “match_groups”: [], “path”: null, “port”: 6443, “search_regex”: null, “state”: “started”}
ok: [node2] => {“changed”: false, “elapsed”: 0, “match_groupdict”: {}, “match_groups”: [], “path”: null, “port”: 6443, “search_regex”: null, “state”: “started”}
Friday 14 January 2022 01:12:32 +0800 (0:00:00.444) 0:03:22.810 ********
TASK [kubernetes/master : check already run] ******************************************************************************************************************
ok: [node4] => {
“msg”: false
}
ok: [node2] => {
“msg”: false
}
Friday 14 January 2022 01:12:32 +0800 (0:00:00.055) 0:03:22.866 ********
FAILED - RETRYING: Joining control plane node to the cluster. (3 retries left).
FAILED - RETRYING: Joining control plane node to the cluster. (2 retries left).
1 Answer
慕斯4289344
(question author)
2022-01-13
I modified hosts.yaml as follows; I'm not sure whether the change is correct?
[root@node4 kubespray-2.15.0]# cat inventory/mycluster/hosts.yaml
all:
  hosts:
    node4:
      ansible_host: 10.9.202.47
      ip: 10.9.202.47
      access_ip: 10.9.202.47
    node1:
      ansible_host: 10.9.202.43
      ip: 10.9.202.43
      access_ip: 10.9.202.43
  children:
    kube-master:
      hosts:
        node4:
        node2:
    kube-node:
      hosts:
        node4:
    etcd:
      hosts:
        node4:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
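One quick sanity check for an inventory like this is to verify that every node referenced in a group is actually defined under `all.hosts` (note that `kube-master` above lists node2, while the host entries define node1). The sketch below is a hypothetical helper, not part of kubespray; to stay dependency-free it embeds the inventory as a plain dict mirroring the hosts.yaml above instead of parsing YAML:

```python
# Minimal consistency check for a kubespray-style inventory (sketch).
# Mirrors the hosts.yaml above: node1 is defined under all.hosts,
# but the kube-master group references node2.

inventory = {
    "all": {
        "hosts": {
            "node4": {"ansible_host": "10.9.202.47"},
            "node1": {"ansible_host": "10.9.202.43"},
        },
        "children": {
            "kube-master": {"hosts": {"node4": None, "node2": None}},
            "kube-node": {"hosts": {"node4": None}},
            "etcd": {"hosts": {"node4": None}},
            "k8s-cluster": {"children": {"kube-master": None, "kube-node": None}},
            "calico-rr": {"hosts": {}},
        },
    },
}

defined = set(inventory["all"]["hosts"])
# Collect group members that have no host definition.
undefined = {
    member
    for group in inventory["all"]["children"].values()
    for member in group.get("hosts", {})
    if member not in defined
}
print(sorted(undefined))  # any names printed here will break the play
```

Running this against the inventory above flags `node2` as referenced but never defined, which is the kind of mismatch worth ruling out before re-running the playbook.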
2022-01-14