11-5 kubectl apply -f glusterfs-pvc.yaml stays in Pending state

Source: 11-5 Shared Storage --- PV, PVC and StorageClass (Part 2)

yl_testimooc3804939

2022-12-06


[root@node-1 9-persistent-volume]# cat glusterfs-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-class
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://172.16.1.26:30001"
  restauthenabled: "false"
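
Since the provisioner calls heketi at resturl for every volume it creates, a quick sanity check is to hit heketi's /hello endpoint at that address (a sketch, assuming the NodePort above is correct):

# verify the resturl used by the StorageClass is reachable
curl http://172.16.1.26:30001/hello
# a healthy heketi should answer with a short "Hello from heketi" style response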

[root@node-1 9-persistent-volume]# cat glusterfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-pvc
spec:
  storageClassName: glusterfs-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi


[root@node-1 9-persistent-volume]# kubectl apply -f glusterfs-storage-class.yaml
storageclass.storage.k8s.io/glusterfs-storage-class created
[root@node-1 9-persistent-volume]# kubectl get storageclass -o wide
NAME                      PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
glusterfs-storage-class   kubernetes.io/glusterfs   Delete          Immediate           false                  10s

[root@node-1 9-persistent-volume]# kubectl apply -f glusterfs-pvc.yaml
persistentvolumeclaim/glusterfs-pvc created
[root@node-1 9-persistent-volume]# kubectl get persistentvolumeclaim -o wide
NAME            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS              AGE   VOLUMEMODE
glusterfs-pvc   Pending                                      glusterfs-storage-class   8s    Filesystem

The glusterfs-pvc has been stuck in Pending, so I dug into the logs:

[root@node-1 ~]# kubectl describe pvc glusterfs-pvc
Name:          glusterfs-pvc
Namespace:     default
StorageClass:  glusterfs-storage-class
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason              Age               From                         Message
  ----     ------              ----              ----                         -------
  Warning  ProvisioningFailed  9s (x6 over 76s)  persistentvolume-controller  Failed to provision volume with StorageClass "glusterfs-storage-class": 
  failed to create volume: failed to create volume: see kube-controller-manager.log for details

The event says to check kube-controller-manager.log. Running journalctl -f -u kube-controller-manager, I found the following messages:

Dec 06 01:29:11 node-2 kube-controller-manager[2079]: E1206 01:29:11.984047    2079 goroutinemap.go:150] 
Operation for "provision-default/glusterfs-pvc[ecf3941f-ce21-4440-babc-c7dffb20daa6]" failed. 
No retries permitted until 2022-12-06 01:29:12.484029203 +0800 CST m=+3747.950475700 (durationBeforeRetry 500ms). 
Error: "failed to create volume: failed to create volume: see kube-controller-manager.log for details"



Dec 06 01:29:56 node-2 kube-controller-manager[2079]: I1206 01:29:56.439527    2079 glusterfs.go:893] 
endpoint &Endpoints{ObjectMeta:{glusterfs-dynamic-ecf3941f-ce21-4440-babc-c7dffb20daa6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> 
map[gluster.kubernetes.io/provisioned-for-pvc:glusterfs-pvc] map[] [] []  []},Subsets:[]EndpointSubset{},} already exist in namespace default

Dec 06 03:47:55 node-1 kube-controller-manager[783]: I1206 03:47:55.423306     783 event.go:291] 
"Event occurred" object="default/glusterfs-pvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" 
message="Failed to provision volume with StorageClass \"glusterfs-storage-class\": failed to create volume: failed to create volume: see kube-controller-manager.log for details"


Dec 06 01:29:56 node-2 kube-controller-manager[2079]: I1206 01:29:56.462861    2079 glusterfs.go:913] 
service &Service{ObjectMeta:{glusterfs-dynamic-ecf3941f-ce21-4440-babc-c7dffb20daa6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> 
map[gluster.kubernetes.io/provisioned-for-pvc:glusterfs-pvc] map[] [] []  []},
Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:1,TargetPort:{0 0 },NodePort:0,AppProtocol:nil,},},
Selector:map[string]string{},ClusterIP:,Type:,ExternalIPs:[],SessionAffinity:,LoadBalancerIP:,LoadBalancerSourceRanges:[],
ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,
TopologyKeys:[],IPFamilyPolicy:nil,ClusterIPs:[],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,},
Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 
already exist in namespace default


Dec 06 00:27:34 node-1 kube-controller-manager[2211]: W1206 00:27:34.852047    2211 authentication.go:303]
 No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, 
 so client certificate authentication won't work.
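
The journal is fairly noisy; filtering for the provisioner's own lines narrows it down (a sketch; adjust --since as needed):

# show only GlusterFS-provisioner messages from the recent window
journalctl -u kube-controller-manager --since "30 min ago" | grep -i gluster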

GlusterFS itself is working fine:

[root@node-1 9-persistent-volume]# kubectl get all
NAME                               READY   STATUS    RESTARTS   AGE
pod/glusterfs-487xx                1/1     Running   0          25m
pod/glusterfs-rr6bn                1/1     Running   0          25m
pod/glusterfs-x796r                1/1     Running   0          25m
pod/heketi-696d69f558-swqsj        1/1     Running   0          25m


[root@heketi-696d69f558-b5p8d /]# heketi-cli --user admin --secret admin  topology load --json topology.json
Creating cluster ... ID: 05f7079cf366620521f37a414265799b
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node gluster-01 ... ID: 56331e5a3f6eed1f450d6c2bf99212e2
                Adding device /dev/sdb ... OK
        Creating node gluster-02 ... ID: 3fe40fbbe39018642909294afd8f61c1
                Adding device /dev/sdb ... OK
        Creating node gluster-03 ... ID: d878a4a840634a39645da1b1313b56e0
                Adding device /dev/sdb ... OK
                
[root@gluster-01 ~]# crictl exec -it b7893e3b17c8f bash
[root@gluster-01 /]# gluster peer status
Number of Peers: 2

Hostname: 172.16.1.25
Uuid: 0ff5b35b-35d8-4f89-8363-b41824433352
State: Peer in Cluster (Connected)
Other names:
gluster-02

Hostname: 172.16.1.26
Uuid: 45416bf6-ba8d-4e46-bb49-694ea7c61084
State: Peer in Cluster (Connected)



[root@gluster-02 ~]# crictl ps                
CONTAINER           IMAGE               CREATED                  STATE               NAME                   ATTEMPT             POD ID
84f9767c6de20       b2919ab8d731c       51 minutes ago           Running             glusterfs              0                   674fffa895957
[root@gluster-02 ~]# crictl exec -it  84f9767c6de20 bash
[root@gluster-02 /]# gluster peer status
Number of Peers: 2

Hostname: gluster-01
Uuid: 33ab05bf-3b75-4508-be64-c57a9752b7f5
State: Peer in Cluster (Connected)

Hostname: 172.16.1.26
Uuid: 45416bf6-ba8d-4e46-bb49-694ea7c61084
State: Peer in Cluster (Connected)


[root@gluster-03 ~]# crictl ps
CONTAINER           IMAGE               CREATED                  STATE               NAME                       ATTEMPT             POD ID
c64d9c88f208a       b2919ab8d731c       52 minutes ago           Running             glusterfs                  0                   df73c14bc07f9
[root@gluster-03 ~]# crictl exec -it c64d9c88f208a bash
[root@gluster-03 /]# gluster peer status
Number of Peers: 2

Hostname: 172.16.1.25
Uuid: 0ff5b35b-35d8-4f89-8363-b41824433352
State: Peer in Cluster (Connected)
Other names:
172.16.1.25

Hostname: gluster-01
Uuid: 33ab05bf-3b75-4508-be64-c57a9752b7f5
State: Peer in Cluster (Connected)



[root@heketi-696d69f558-swqsj /]# heketi-cli --user admin --secret admin topology info
Cluster Id: 05f7079cf366620521f37a414265799b

    File:  true
    Block: true

    Volumes:


    Nodes:

        Node Id: 3fe40fbbe39018642909294afd8f61c1
        State: online
        Cluster Id: 05f7079cf366620521f37a414265799b
        Zone: 1
        Management Hostnames: gluster-02
        Storage Hostnames: 172.16.1.25
        Devices:
                Id:8caf22ec845bbee76999c669e8743cae   State:online    Size (GiB):7       Used (GiB):0       Free (GiB):7       
                        Known Paths: /dev/sdb

                        Bricks:

        Node Id: 56331e5a3f6eed1f450d6c2bf99212e2
        State: online
        Cluster Id: 05f7079cf366620521f37a414265799b
        Zone: 1
        Management Hostnames: gluster-01
        Storage Hostnames: 172.16.1.24
        Devices:
                Id:ec782a085b7acfb43ed8b9e586236d45   State:online    Size (GiB):7       Used (GiB):0       Free (GiB):7       
                        Known Paths: /dev/sdb

                        Bricks:

        Node Id: d878a4a840634a39645da1b1313b56e0
        State: online
        Cluster Id: 05f7079cf366620521f37a414265799b
        Zone: 1
        Management Hostnames: gluster-03
        Storage Hostnames: 172.16.1.26
        Devices:
                Id:03e6bb6066559e9b3851680b387af49a   State:online    Size (GiB):7       Used (GiB):0       Free (GiB):7       
                        Known Paths: /dev/sdb

                        Bricks:
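
Since heketi reports three online nodes but no Volumes at all, one way to isolate the problem is to ask heketi to create a volume directly, bypassing the Kubernetes provisioner (a sketch using the same credentials as above):

# if this also fails, the problem is on the heketi/gluster side,
# not in kube-controller-manager
heketi-cli --user admin --secret admin volume create --size=1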

Nothing abnormal turned up in /var/log/glusterfs/gluster.log or /var/log/glusterfs/cli.log either.

I still haven't managed to solve this; I hope the teacher can take a look. Thank you!

By the way, before working through the steps above, I hit an error running the following command:

[root@heketi-76f76d58d8-jb8xt /]# heketi-cli topology load --json topology.json 
Error: Unable to get topology information: Invalid JWT token: Token missing iss claim

so I added the admin key to heketi-deployment.yaml. I'm not sure whether that change has any bearing on the error above:

        - name: HEKETI_ADMIN_KEY
          value: "admin"

After adding the key, rerunning heketi-cli --user admin --secret admin topology load --json topology.json succeeded, with the output already shown above.
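
One detail worth noting: once HEKETI_ADMIN_KEY is set, heketi enforces authentication, while the StorageClass above still has restauthenabled: "false" and carries no credentials, so the provisioner's REST calls could well be rejected. A sketch of the StorageClass with auth wired in (the Secret name heketi-admin-secret is hypothetical, and since StorageClass parameters are immutable it would have to be deleted and recreated):

apiVersion: v1
kind: Secret
metadata:
  name: heketi-admin-secret        # hypothetical name
  namespace: default
type: kubernetes.io/glusterfs
stringData:
  key: admin                       # must match HEKETI_ADMIN_KEY
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-class
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://172.16.1.26:30001"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-admin-secret"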

1 Answer

刘果国

2022-12-07

I'd still start from the controller-manager: rebuild the deployment and watch the live logs.
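
In concrete terms that could look like the following (a sketch, assuming the heketi Deployment is the one meant, and that kube-controller-manager runs under systemd as the logs above suggest):

# recreate the deployment, tailing the controller-manager in another terminal
kubectl delete -f heketi-deployment.yaml
kubectl apply -f heketi-deployment.yaml
journalctl -f -u kube-controller-manager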

yl_testimooc3804939
Teacher, I followed your approach and solved it. I still have one question though; please take a look at my other post, "11-5 kubectl apply -f glusterfs-pvc.yaml stays in Pending state (follow-up 1)".
2022-12-08
