[Solved] pod/nginx-ingress-controller stuck in Pending status, details below

Source: 6-6 Deploying ingress-nginx (Part 2)

慕少8521559

2021-10-29

My kubectl version is v1.20.2. Running  kubectl apply -f mandatory.yaml  fails with the following errors:

Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole


clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole configured

Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role

role.rbac.authorization.k8s.io/nginx-ingress-role configured

Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding

rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding configured

Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding

clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding configured

unable to recognize "mandatory.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"

unable to recognize "mandatory.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"


So I made two changes in mandatory.yaml:

1. changed every rbac.authorization.k8s.io/v1beta1 to rbac.authorization.k8s.io/v1

2. changed extensions/v1beta1 to apps/v1 (a sketch of this change follows below)
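
For reference, a minimal sketch of change 2 on the controller Deployment (the labels match what kubectl describe shows further down; treat this as illustrative rather than a verbatim diff). Note that apps/v1 makes spec.selector mandatory, so if the original block does not already have one it must be added and must match the pod template labels:

# before
apiVersion: extensions/v1beta1
kind: Deployment

# after
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:                 # required by apps/v1
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    # ... rest of the template unchanged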


After that, kubectl apply -f mandatory.yaml runs without errors.

kubectl get all -n ingress-nginx shows the following status:

NAME                                            READY   STATUS    RESTARTS   AGE

pod/default-http-backend-6b849d7877-hr9s8       1/1     Running   0          12h

pod/nginx-ingress-controller-7c55698fb9-2zxnx   0/1     Pending   0          12h


NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE

service/default-http-backend   ClusterIP   10.233.86.53   <none>        80/TCP    18h


NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/default-http-backend       1/1     1            1           12h

deployment.apps/nginx-ingress-controller   0/1     1            0           12h


NAME                                                  DESIRED   CURRENT   READY   AGE

replicaset.apps/default-http-backend-6b849d7877       1         1         1       12h

replicaset.apps/nginx-ingress-controller-7c55698fb9   1         1         0       12h



As you can see, default-http-backend is running normally, but pod/nginx-ingress-controller stays Pending.

Running kubectl logs nginx-ingress-controller-7c55698fb9-2zxnx -n ingress-nginx produces no output (expected, since the pod was never scheduled and its container never started).

Running kubectl describe pod/nginx-ingress-controller-7c55698fb9-2zxnx -n ingress-nginx gives:

Name:           nginx-ingress-controller-7c55698fb9-2zxnx

Namespace:      ingress-nginx

Priority:       0

Node:           <none>

Labels:         app.kubernetes.io/name=ingress-nginx

                app.kubernetes.io/part-of=ingress-nginx

                pod-template-hash=7c55698fb9

Annotations:    prometheus.io/port: 10254

                prometheus.io/scrape: true

Status:         Pending

IP:

IPs:            <none>

Controlled By:  ReplicaSet/nginx-ingress-controller-7c55698fb9

Containers:

  nginx-ingress-controller:

    Image:       quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0

    Ports:       80/TCP, 443/TCP

    Host Ports:  80/TCP, 443/TCP

    Args:

      /nginx-ingress-controller

      --default-backend-service=$(POD_NAMESPACE)/default-http-backend

      --configmap=$(POD_NAMESPACE)/nginx-configuration

      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services

      --udp-services-configmap=$(POD_NAMESPACE)/udp-services

      --publish-service=$(POD_NAMESPACE)/ingress-nginx

      --annotations-prefix=nginx.ingress.kubernetes.io

    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3

    Readiness:  http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3

    Environment:

      POD_NAME:       nginx-ingress-controller-7c55698fb9-2zxnx (v1:metadata.name)

      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-wl4hz (ro)

Conditions:

  Type           Status

  PodScheduled   False

Volumes:

  nginx-ingress-serviceaccount-token-wl4hz:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  nginx-ingress-serviceaccount-token-wl4hz

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  app=ingress

Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s

                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s

Events:

  Type     Reason            Age   From               Message

  ----     ------            ----  ----               -------

  Warning  FailedScheduling  12h   default-scheduler  0/2 nodes are available: 2 node(s) didn't match Pod's node affinity.



There is one Warning event:

 Warning  FailedScheduling  12h   default-scheduler  0/2 nodes are available: 2 node(s) didn't match Pod's node affinity.
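
The describe output above shows Node-Selectors: app=ingress, so the scheduler will only consider nodes that carry that label. A quick way to check whether any node has it (commands only, output will vary):

kubectl get nodes -l app=ingress
kubectl get nodes --show-labels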


How should this be resolved?

Thanks!



[Solution]

Label the node:

kubectl label node node-1 app=ingress
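
To confirm the label took effect and watch the pod get scheduled:

kubectl get nodes -l app=ingress
kubectl get pods -n ingress-nginx -o wide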


Then, on node-2, I shut down Harbor's docker-compose (it was occupying port 80), and everything worked.
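
For reference, roughly the commands involved on node-2 (the Harbor install directory varies by setup, so the path here is only an example); freeing port 80 matters because the controller pod requests hostPort 80/443:

cd /opt/harbor && docker-compose down    # stop Harbor, which was listening on port 80
ss -lntp | grep ':80 '                   # verify nothing is listening on port 80 any more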




1 Answer

刘果国

2021-10-30

Nice work!

