Spark Streaming: managing Kafka offsets
Source: 15-5 Offset Management Demo 1

qq_无妄_3
2018-12-20
<properties>
  <scala.version>2.11.8</scala.version>
  <kafka.version>0.10.2.2</kafka.version>
  <spark.version>2.4.0</spark.version>
</properties>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.11</artifactId>
  <version>${spark.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
  <version>${spark.version}</version>
</dependency>
The Spark Streaming application:
package com.kun.kafka_offset

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.{Seconds, StreamingContext}

object Offset01APP {
  def main(args: Array[String]): Unit = {
    // Point Hadoop at the local winutils directory (Windows only);
    // note the escaped backslashes in the path literal.
    System.setProperty("hadoop.home.dir", "D:\\hadoop-common-2.2.0-bin-master")

    val sparkConf = new SparkConf().setAppName("Offset01APP").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "192.168.232.8:9092",
      "auto.offset.reset" -> "earliest", // latest, earliest, none
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "hello_topic_test"
    )
    val topics = Array("hello_topic")

    val messages = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams))

    // Print the record count of each non-empty batch.
    messages.foreachRDD(rdd =>
      if (!rdd.isEmpty()) {
        println("无妄:" + rdd.count())
      }
    )

    ssc.start()
    ssc.awaitTermination()
  }
}
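[Editor's note] As written, the app never touches offsets itself; they advance only through the consumer's auto-commit. For comparison, here is a minimal sketch of committing offsets manually with the kafka-0-10 integration, assuming "enable.auto.commit" is set to false in kafkaParams and `messages` is the DStream created above (add the import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}):

messages.foreachRDD { rdd =>
  if (!rdd.isEmpty()) {
    // The offset ranges covered by this batch, one per topic-partition.
    val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
    println("无妄:" + rdd.count())
    // Commit back to Kafka only after the batch has been processed,
    // so a crash mid-batch replays data instead of losing it.
    messages.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
  }
}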
Output log:
"C:\Program Files\Java\jdk1.8.0_191\bin\java.exe" "-javaagent:D:\Program Files\JetBrains\IntelliJ IDEA
18/12/20 20:45:00 INFO ConsumerConfig: ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [192.168.232.8:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = hello_topic_test
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
18/12/20 20:45:01 INFO AppInfoParser: Kafka version : 2.0.0
18/12/20 20:45:01 INFO AppInfoParser: Kafka commitId : 3402a8361b734732
18/12/20 20:45:01 INFO Metadata: Cluster ID: nt6HH1AcQRuYzwWWKcBJfw
18/12/20 20:45:01 INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=hello_topic_test] Discovered group coordinator 192.168.232.8:9092 (id: 2147483647 rack: null)
18/12/20 20:45:01 INFO ConsumerCoordinator: [Consumer clientId=consumer-1, groupId=hello_topic_test] Revoking previously assigned partitions []
18/12/20 20:45:01 INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=hello_topic_test] (Re-)joining group
18/12/20 20:45:01 INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=hello_topic_test] Successfully joined group with generation 37
18/12/20 20:45:01 INFO ConsumerCoordinator: [Consumer clientId=consumer-1, groupId=hello_topic_test] Setting newly assigned partitions [hello_topic-0]
18/12/20 20:45:01 INFO RecurringTimer: Started timer for JobGenerator at time 1545309910000
18/12/20 20:45:01 INFO JobGenerator: Started JobGenerator at 1545309910000 ms
18/12/20 20:45:01 INFO JobScheduler: Started JobScheduler
18/12/20 20:45:01 INFO StreamingContext: StreamingContext started
18/12/20 20:45:10 INFO Fetcher: [Consumer clientId=consumer-1, groupId=hello_topic_test] Resetting offset for partition hello_topic-0 to offset 906.
PK: I'm testing with Spark 2.4 and Kafka 0.10.2.2, and no matter how I test, the count value never reproduces the situation you showed... and it never consumes from the beginning either. It makes no sense: I set earliest, yet it still starts from the end:

groupId=hello_topic_test] Resetting offset for partition hello_topic-0 to offset 906.
3 Answers
Test with the versions used in class; different versions may behave differently.
2018-12-20
qq_无妄_3 (question author)
2018-12-20
I think I've solved it...

earliest: when a partition has a committed offset, consume from that offset; when there is no committed offset, consume from the beginning.
latest: when a partition has a committed offset, consume from that offset; when there is no committed offset, consume only data newly produced to that partition.
none: when every partition of the topic has a committed offset, consume from after those offsets; if even one partition has no committed offset, throw an exception.

And enable.auto.commit defaults to true... I had just never set it...
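[Editor's note] In other words, once this group had committed offsets (auto-commit is on by default), auto.offset.reset = earliest no longer applied. A sketch of making both settings explicit in the consumer params, assuming you want to manage commits yourself as in the sketch after the application code:

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "192.168.232.8:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "hello_topic_test",
  // Only consulted when the group has no committed offset for a partition.
  "auto.offset.reset" -> "earliest",
  // Turn off auto-commit so offsets advance only when committed explicitly.
  "enable.auto.commit" -> (false: java.lang.Boolean)
)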
2018-12-20
Michael_PK
2018-12-20
The API you're using is not at all the one from class, so it may simply not exhibit the problem.