Error when starting Docker with dockerd -H

Source: 4-10 Docker Overlay Network and etcd for Multi-host Container Communication

学东西要快

2019-05-17

Environment: a VMware virtual machine running CentOS 7, Docker 18, with etcd installed via yum.
Following the lesson, I stopped Docker and then restarted it with `dockerd -H`, which produced an error:
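The restart command is presumably of this form (the etcd address and the interface name below are assumptions based on the overlay/etcd setup in this lesson; substitute your own):

```shell
# Stop the systemd-managed daemon first
systemctl stop docker

# Start dockerd manually, binding the API and pointing it at the etcd
# cluster store (10.0.0.112:2379 and interface ens33 are assumed values)
dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock \
    --cluster-store=etcd://10.0.0.112:2379 \
    --cluster-advertise=ens33:2375
```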

WARN[2019-05-17T15:26:45.901798677+08:00] [!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]
INFO[2019-05-17T15:26:45.914205053+08:00] libcontainerd: started new containerd process  pid=7500
INFO[2019-05-17T15:26:45.914278060+08:00] parsed scheme: "unix"                         module=grpc
INFO[2019-05-17T15:26:45.914286434+08:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2019-05-17T15:26:45.914323973+08:00] ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}]  module=grpc
INFO[2019-05-17T15:26:45.914333014+08:00] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2019-05-17T15:26:45.914369107+08:00] pickfirstBalancer: HandleSubConnStateChange: 0xc42015ab40, CONNECTING  module=grpc
INFO[2019-05-17T15:26:45.946248295+08:00] starting containerd                           revision=bb71b10fd8f58240ca47fbb579b9d1028eea7c84 version=1.2.5
INFO[2019-05-17T15:26:45.946476990+08:00] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2019-05-17T15:26:45.946496843+08:00] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
WARN[2019-05-17T15:26:45.946602269+08:00] failed to load plugin io.containerd.snapshotter.v1.btrfs  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
INFO[2019-05-17T15:26:45.946615920+08:00] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
WARN[2019-05-17T15:26:45.950623927+08:00] failed to load plugin io.containerd.snapshotter.v1.aufs  error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found.\n": exit status 1"
INFO[2019-05-17T15:26:45.950646157+08:00] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2019-05-17T15:26:45.950672385+08:00] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2019-05-17T15:26:45.950720086+08:00] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
WARN[2019-05-17T15:26:45.950835392+08:00] failed to load plugin io.containerd.snapshotter.v1.zfs  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
INFO[2019-05-17T15:26:45.950845517+08:00] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2019-05-17T15:26:45.950857123+08:00] could not use snapshotter zfs in metadata plugin  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
WARN[2019-05-17T15:26:45.950862587+08:00] could not use snapshotter btrfs in metadata plugin  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
WARN[2019-05-17T15:26:45.950869584+08:00] could not use snapshotter aufs in metadata plugin  error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found.\n": exit status 1"
INFO[2019-05-17T15:26:45.950940088+08:00] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2019-05-17T15:26:45.950954357+08:00] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2019-05-17T15:26:45.950977857+08:00] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2019-05-17T15:26:45.950987235+08:00] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2019-05-17T15:26:45.950997908+08:00] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2019-05-17T15:26:45.951006974+08:00] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2019-05-17T15:26:45.951015380+08:00] loading plugin "io.containerd.service.v1.leases-service"...  type=io.containerd.service.v1
INFO[2019-05-17T15:26:45.951026763+08:00] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2019-05-17T15:26:45.951036120+08:00] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2019-05-17T15:26:45.951044275+08:00] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2019-05-17T15:26:45.951097033+08:00] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
INFO[2019-05-17T15:26:45.951129560+08:00] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2019-05-17T15:26:45.951392916+08:00] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
INFO[2019-05-17T15:26:45.951417680+08:00] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2019-05-17T15:26:45.951518122+08:00] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.951531181+08:00] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.951539534+08:00] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.951547772+08:00] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.951555239+08:00] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.951562996+08:00] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.951570802+08:00] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.952989622+08:00] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.953011192+08:00] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2019-05-17T15:26:45.953223543+08:00] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.953255247+08:00] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.953266472+08:00] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.953274863+08:00] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2019-05-17T15:26:45.953472592+08:00] serving...                                    address="/var/run/docker/containerd/containerd-debug.sock"
INFO[2019-05-17T15:26:45.953520692+08:00] serving...                                    address="/var/run/docker/containerd/containerd.sock"
INFO[2019-05-17T15:26:45.953531569+08:00] containerd successfully booted in 0.007729s
INFO[2019-05-17T15:26:45.955552449+08:00] pickfirstBalancer: HandleSubConnStateChange: 0xc42015ab40, READY  module=grpc
INFO[2019-05-17T15:26:45.972262006+08:00] parsed scheme: "unix"                         module=grpc
INFO[2019-05-17T15:26:45.972282560+08:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2019-05-17T15:26:45.972311748+08:00] parsed scheme: "unix"                         module=grpc
INFO[2019-05-17T15:26:45.972317492+08:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2019-05-17T15:26:45.980912992+08:00] ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}]  module=grpc
INFO[2019-05-17T15:26:45.980944166+08:00] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2019-05-17T15:26:45.980972186+08:00] pickfirstBalancer: HandleSubConnStateChange: 0xc42015b920, CONNECTING  module=grpc
INFO[2019-05-17T15:26:45.981147902+08:00] pickfirstBalancer: HandleSubConnStateChange: 0xc42015b920, READY  module=grpc
INFO[2019-05-17T15:26:45.981184880+08:00] ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}]  module=grpc
INFO[2019-05-17T15:26:45.981194598+08:00] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2019-05-17T15:26:45.981222966+08:00] pickfirstBalancer: HandleSubConnStateChange: 0xc42015bbb0, CONNECTING  module=grpc
INFO[2019-05-17T15:26:45.981381409+08:00] pickfirstBalancer: HandleSubConnStateChange: 0xc42015bbb0, READY  module=grpc
INFO[2019-05-17T15:26:45.983284894+08:00] [graphdriver] using prior storage driver: overlay2
INFO[2019-05-17T15:26:45.991157631+08:00] Graph migration to content-addressability took 0.00 seconds
INFO[2019-05-17T15:26:45.991187719+08:00] Initializing discovery without TLS
INFO[2019-05-17T15:26:45.991897412+08:00] Loading containers: start.
INFO[2019-05-17T15:26:46.064699464+08:00] 2019/05/17 15:26:46 [INFO] serf: EventMemberJoin: localhost.localdomain 10.0.0.79

ERRO[2019-05-17T15:26:46.065395479+08:00] joining serf neighbor 10.0.0.79 failed: Failed to join the cluster at neigh IP 10.0.0.112: 1 error(s) occurred:

* Failed to join 10.0.0.112: dial tcp 10.0.0.112:7946: connect: connection refused
INFO[2019-05-17T15:26:46.202652034+08:00] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address
INFO[2019-05-17T15:26:46.235454848+08:00] Loading containers: done.
INFO[2019-05-17T15:26:46.271019020+08:00] Docker daemon                                 commit=481bc77 graphdriver(s)=overlay2 version=18.09.6
INFO[2019-05-17T15:26:46.271214129+08:00] Daemon has completed initialization
INFO[2019-05-17T15:26:46.276038494+08:00] API listen on /var/run/docker.sock
INFO[2019-05-17T15:26:46.276096926+08:00] API listen on [::]:2375

The error reported is:

ERRO[2019-05-17T15:26:46.065395479+08:00] joining serf neighbor 10.0.0.79 failed: Failed to join the cluster at neigh IP 10.0.0.112: 1 error(s) occurred:

* Failed to join 10.0.0.112: dial tcp 10.0.0.112:7946: connect: connection refused

I checked with `etcdctl cluster-health` and the cluster is healthy:

etcdctl cluster-health
member c587136dbf4fbef2 is healthy: got healthy result from http://10.0.0.112:2379
member e4700ec77a90325a is healthy: got healthy result from http://10.0.0.79:2379
cluster is healthy
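For reference, "connection refused" on port 7946 usually means the serf gossip port on the neighbor (10.0.0.112 here) is not reachable, typically because firewalld is blocking it. A sketch of how to check and open the ports Docker overlay networking uses on CentOS 7 (run on each node; the neighbor IP is taken from the error above):

```shell
# Check whether the serf port on the neighbor is reachable at all
nc -zv 10.0.0.112 7946

# Open the overlay-related ports in firewalld on every node
firewall-cmd --permanent --add-port=7946/tcp   # serf gossip
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp   # VXLAN data plane
firewall-cmd --permanent --add-port=2375/tcp   # Docker daemon API
firewall-cmd --reload
```

As a quick test one can also stop firewalld entirely (`systemctl stop firewalld`) and retry; if the join then succeeds, the firewall was the cause.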

1 Answer

麦兜搞IT

2019-05-18

How did you write your etcd startup command, i.e., which parameters did you pass? You could try following these steps exactly: https://docker-k8s-lab.readthedocs.io/en/latest/docker/docker-etcd.html
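For a two-node setup like the one in the question, the etcd startup from that lab is roughly as follows (node names and the 10.0.0.79 / 10.0.0.112 addresses are taken from the question; adjust them to your hosts):

```shell
# On node 1 (10.0.0.79); run the mirrored command on node 2 (10.0.0.112),
# swapping --name and the advertised/listen IPs
etcd --name docker-node1 \
    --initial-advertise-peer-urls http://10.0.0.79:2380 \
    --listen-peer-urls http://10.0.0.79:2380 \
    --listen-client-urls http://10.0.0.79:2379,http://127.0.0.1:2379 \
    --advertise-client-urls http://10.0.0.79:2379 \
    --initial-cluster-token etcd-cluster \
    --initial-cluster docker-node1=http://10.0.0.79:2380,docker-node2=http://10.0.0.112:2380 \
    --initial-cluster-state new
```

Note that the client port 2379 and peer port 2380 must also be open between the nodes, in addition to the overlay ports.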


Course: Systematically Learn Docker, Practicing DevOps
