apply nginx-deployment.yaml ImagePullBackOff

Source: 2-7 Cloud-Native Components: Declarative API (Part 2)

qq_慕运维0344048

2023-04-05

Following the video, I set up a k8s cluster on three VirtualBox VMs:

root@master0:~/k8s-demo# kubectl get nodes
NAME      STATUS   ROLES                  AGE    VERSION
master0   Ready    control-plane,master   102m   v1.21.6
node01    Ready    <none>                 92m    v1.21.6
node02    Ready    <none>                 90m    v1.21.6

Then, following the video, I tested with nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-demo
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      hostNetwork: true
      containers:
      - name: nginx
        image: nginx:1.14

First I checked the pods; everything was Running:

root@master0:~/k8s-demo# kubectl get pod -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-59d64cd4d4-lpgmf          1/1     Running   5          104m
kube-system   coredns-59d64cd4d4-mqz59          1/1     Running   3          104m
kube-system   etcd-master0                      1/1     Running   3          104m
kube-system   kube-apiserver-master0            1/1     Running   3          104m
kube-system   kube-controller-manager-master0   1/1     Running   6          104m
kube-system   kube-flannel-ds-844df             1/1     Running   0          92m
kube-system   kube-flannel-ds-8k244             1/1     Running   3          101m
kube-system   kube-flannel-ds-9pkhs             1/1     Running   0          94m
kube-system   kube-proxy-j52tr                  1/1     Running   3          104m
kube-system   kube-proxy-qphxw                  1/1     Running   0          92m
kube-system   kube-proxy-zntbp                  1/1     Running   0          94m
kube-system   kube-scheduler-master0            1/1     Running   4          104m

Then I ran kubectl apply -f nginx-deployment.yaml, which reported the deployment as created:

root@master0:~/k8s-demo# kubectl apply -f nginx-deployment.yaml 
deployment.apps/nginx-deployment-demo created

But in reality:

root@master0:~/k8s-demo# kubectl get pod -A
NAMESPACE     NAME                                     READY   STATUS             RESTARTS   AGE
default       nginx-deployment-demo-5996854789-4mqks   0/1     ImagePullBackOff   0          85s
default       nginx-deployment-demo-5996854789-sslvm   0/1     ImagePullBackOff   0          85s
kube-system   coredns-59d64cd4d4-lpgmf                 1/1     Running            5          106m
kube-system   coredns-59d64cd4d4-mqz59                 1/1     Running            3          106m
kube-system   etcd-master0                             1/1     Running            3          106m
kube-system   kube-apiserver-master0                   1/1     Running            3          106m
kube-system   kube-controller-manager-master0          1/1     Running            6          106m
kube-system   kube-flannel-ds-844df                    1/1     Running            0          95m
kube-system   kube-flannel-ds-8k244                    1/1     Running            3          103m
kube-system   kube-flannel-ds-9pkhs                    1/1     Running            0          96m
kube-system   kube-proxy-j52tr                         1/1     Running            3          106m
kube-system   kube-proxy-qphxw                         1/1     Running            0          95m
kube-system   kube-proxy-zntbp                         1/1     Running            0          96m
kube-system   kube-scheduler-master0                   1/1     Running            4          106m

To test whether Docker simply couldn't pull the image, I switched to a registry mirror:
vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}

systemctl daemon-reload
systemctl restart docker
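One thing worth noting here: the kubelet on each worker node pulls images through that node's own Docker daemon, so this mirror configuration also needs to exist on node01 and node02, not only on master0. A minimal sketch, assuming root SSH access to the workers (validate the JSON first, since a syntax error silently breaks Docker's config):

```shell
# Write the mirror config and check that it is valid JSON before
# distributing it; python3 -m json.tool exits non-zero on a syntax error.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
EOF
python3 -m json.tool /tmp/daemon.json

# Then copy it to every worker and restart Docker there, e.g.:
#   for node in node01 node02; do
#     scp /tmp/daemon.json root@$node:/etc/docker/daemon.json
#     ssh root@$node 'systemctl daemon-reload && systemctl restart docker'
#   done
```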
Then I pulled nginx:1.14 down:

root@master0:~/k8s-demo# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.21.6    f6f0f372360b   17 months ago   126MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.21.6    90050ec9b130   17 months ago   120MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.21.6    c51494bd8791   17 months ago   50.8MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.21.6    01d07d3b4d18   17 months ago   104MB
registry.cn-hangzhou.aliyuncs.com/chand/flannel                   v0.14.0    8522d622299c   22 months ago   67.9MB
registry.aliyuncs.com/google_containers/pause                     3.4.1      0f8457a4c2ec   2 years ago     683kB
registry.aliyuncs.com/google_containers/coredns                   v1.8.0     296a6d5035e2   2 years ago     42.5MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   2 years ago     253MB
nginx                                                             1.15       53f3fd8007f7   3 years ago     109MB
nginx                                                             1.14       295c7be07902   4 years ago     109MB

Then I ran kubectl delete -f nginx-deployment.yaml followed by
kubectl apply -f nginx-deployment.yaml again:

root@master0:~/k8s-demo# kubectl get pod -A
NAMESPACE     NAME                                     READY   STATUS             RESTARTS   AGE
default       nginx-deployment-demo-5996854789-hfprq   0/1     ImagePullBackOff   0          84s
default       nginx-deployment-demo-5996854789-s54pf   0/1     ImagePullBackOff   0          84s
kube-system   coredns-59d64cd4d4-lpgmf                 1/1     Running            7          154m
kube-system   coredns-59d64cd4d4-mqz59                 1/1     Running            5          154m
kube-system   etcd-master0                             1/1     Running            5          154m

Still the same. So I inspected the pod with
kubectl describe pod xxxxxxxxx

root@master0:~/k8s-demo# kubectl describe pod nginx-deployment-demo-5996854789-hfprq
Name:         nginx-deployment-demo-5996854789-hfprq
Namespace:    default
Priority:     0
Node:         node01/192.168.93.50
Start Time:   Wed, 05 Apr 2023 15:27:25 +0000
Labels:       app=nginx
              pod-template-hash=5996854789
Annotations:  <none>
Status:       Pending
IP:           192.168.93.50
IPs:
  IP:           192.168.93.50
Controlled By:  ReplicaSet/nginx-deployment-demo-5996854789
Containers:
  nginx:
    Container ID:   
    Image:          nginx:1.14
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gxkmq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-gxkmq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                            From               Message
  ----     ------     ----                           ----               -------
  Normal   Scheduled  3m49s                          default-scheduler  Successfully assigned default/nginx-deployment-demo-5996854789-hfprq to node01
  Normal   Pulling    <invalid> (x4 over <invalid>)  kubelet            Pulling image "nginx:1.14"
  Warning  Failed     <invalid> (x4 over <invalid>)  kubelet            Failed to pull image "nginx:1.14": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     <invalid> (x4 over <invalid>)  kubelet            Error: ErrImagePull
  Warning  Failed     <invalid> (x6 over <invalid>)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    <invalid> (x7 over <invalid>)  kubelet            Back-off pulling image "nginx:1.14"
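Incidentally, the repeated BackOff events above are expected behavior: as far as I know, the kubelet retries failed pulls with exponential back-off, starting around 10 seconds and doubling up to a 5-minute cap (per the Kubernetes image documentation). A rough sketch of that schedule:

```shell
# Approximate kubelet image-pull back-off schedule (10s, doubling, 300s cap).
delay=10
for attempt in 1 2 3 4 5 6 7; do
  echo "attempt ${attempt}: wait ${delay}s"
  delay=$((delay * 2))
  [ "$delay" -gt 300 ] && delay=300
done > /tmp/backoff.txt
cat /tmp/backoff.txt
```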

The key error is: Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I had already added registry-mirrors, yet it still pulls from https://registry-1.docker.io/v2/.
Running docker info:

root@master0:~/k8s-demo# docker info
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 32
  Running: 16
  Paused: 0
  Stopped: 16
 Images: 10
 Server Version: 20.10.21
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 
 runc version: 
 init version: 
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.15.0-208-generic
 Operating System: Ubuntu 18.04.6 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 1.946GiB
 Name: master0
 ID: 3MXB:ES47:PEYN:5I6W:3QYN:RKLG:36XG:KV73:BJXN:5IOE:VZEV:OT37
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://docker.mirrors.ustc.edu.cn/
  https://hub-mirror.c.163.com/
  https://registry.docker-cn.com/
 Live Restore Enabled: false

WARNING: No swap limit support

The key lines are Registry: https://index.docker.io/v1/ and the Registry Mirrors list:
https://docker.mirrors.ustc.edu.cn/
https://hub-mirror.c.163.com/
https://registry.docker-cn.com/
I then searched for whether Registry: https://index.docker.io/v1/ can be changed, and found that it cannot.
How can I fix this? Thanks.


1 Answer

暮闲

2023-04-12

Hi! I went through your logs carefully, and the failure is indeed the image pull. Can you pull the image manually with docker pull?
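Building on that suggestion: the describe output above shows the pod scheduled to node01, and each node's kubelet pulls through that node's own Docker daemon, so a successful docker pull on master0 alone doesn't help. Hypothetical diagnostic steps (node name and root SSH access assumed) might look like:

```shell
# 1. Confirm which node the pod was scheduled to:
kubectl get pod -o wide

# 2. Try the pull on that node; the mirror config must be in place there too:
ssh root@node01 'docker pull nginx:1.14'

# 3. Or bypass the registry entirely by copying the image from master0:
docker save nginx:1.14 | ssh root@node01 'docker load'
```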
