Producer send keeps throwing an error and cannot deliver messages!

Source: 4-9 Kafka Producer Java API Programming

Savvy1127

2018-09-21

Hi teacher, when I try the Kafka Producer API the send fails. I have already added the entries to my local /etc/hosts and the firewall is off. My local machine is a Mac and it runs the Producer; the Kafka server runs in the VM you provided, started with the server.properties config. ZooKeeper and everything else start up normally. Please help!

Exception in thread "Thread-0" kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:91)
    at kafka.producer.Producer.send(Producer.scala:77)
    at kafka.javaapi.producer.Producer.send(Producer.scala:33)
    at com.imooc.spark.kafka.KafkaProducer.run(KafkaProducer.java:36)
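For reference, a minimal sketch of the old (0.8-style) producer API that this stack trace points at, assuming string messages; the topic name "Cao" and the broker host come from the details later in this thread, and the key point is that metadata.broker.list must be a host:port the Mac can actually resolve and reach.

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Must be resolvable and reachable from the Mac running the producer.
        // "hadoop000:9092" is an assumption based on the VM config shown below;
        // the IP 192.168.50.112:9092 should also work since it matches advertised.host.name.
        props.put("metadata.broker.list", "hadoop000:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");

        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        // Topic "Cao" is the one used in the thread; send a single test message.
        producer.send(new KeyedMessage<>("Cao", "hello kafka"));
        producer.close();
    }
}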


1 Answer

Michael_PK

2018-09-21

Have you configured the VM's host entry on your local machine?

Savvy1127
VM IP = 192.168.50.112, /etc/hosts has already been updated. The Producer code runs on the Mac, IP 192.168.50.72. The key settings in the VM's server.properties are as follows:
broker.id=0
listeners=PLAINTEXT://:9092
host.name=hadoop000
advertised.host.name=192.168.50.112
advertised.port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/home/hadoop/app/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=hadoop000:2181
zookeeper.connection.timeout.ms=6000
I start the server and the consumer on the VM; the topic is Cao:
kafka-server-start.sh -daemon server.properties
kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic Cao
2018-09-21
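To test the point raised in the answer, here is a small, self-contained sketch (my own addition, not from the course code) that checks whether the broker hostname resolves and port 9092 is reachable from the Mac. The hostname hadoop000 and IP 192.168.50.112 are taken from the config above; the /etc/hosts mapping mentioned in the comment is an assumption about what that entry should look like.

import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerReachabilityCheck {
    public static void main(String[] args) throws Exception {
        // hadoop000 must resolve on the Mac, e.g. via an /etc/hosts entry like
        // "192.168.50.112  hadoop000" (IP taken from the reply above).
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("hadoop000", 9092), 3000);
            System.out.println("Broker port reachable: " + socket.getRemoteSocketAddress());
        }
    }
}

If this check fails, the producer's three retries will also fail, which matches the FailedToSendMessageException in the question.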
