ImportTsv fails when importing data into HBase
Source: 12-17 - Feature 1: Writing Spark Streaming results into HBase

qq_多少幅度_0
2019-06-21
Running

./hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=","
-Dimporttsv.columns=HBASE_ROW_KEY,info:name,info:age,info:address
user /test/input/hbase.csv

to import data into HBase fails with the following error:
2019-06-21 09:45:17,517 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native
2019-06-21 09:45:17,517 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2019-06-21 09:45:17,517 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=
2019-06-21 09:45:17,517 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2019-06-21 09:45:17,517 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2019-06-21 09:45:17,517 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
2019-06-21 09:45:17,517 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=ocdp
2019-06-21 09:45:17,517 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/home/ocdp
2019-06-21 09:45:17,517 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/usr/hdp/2.6.0.3-8/hbase
2019-06-21 09:45:17,518 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=dn01.asiainfo:2181,dn02.asiainfo:2181,dn03.asiainfo:2181 sessionTimeout=90000 watcher=hconnection-0x2e1d27ba0x0, quorum=dn01.asiainfo:2181,dn02.asiainfo:2181,dn03.asiainfo:2181, baseZNode=/hbase-unsecure
2019-06-21 09:45:17,543 INFO [main-SendThread(dn03.asiainfo:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn03.asiainfo/10.21.19.97:2181. Will not attempt to authenticate using SASL (unknown error)
2019-06-21 09:45:17,555 INFO [main-SendThread(dn03.asiainfo:2181)] zookeeper.ClientCnxn: Socket connection established to dn03.asiainfo/10.21.19.97:2181, initiating session
2019-06-21 09:45:17,577 INFO [main-SendThread(dn03.asiainfo:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn03.asiainfo/10.21.19.97:2181, sessionid = 0x36a2ff9326e14a7, negotiated timeout = 60000
2019-06-21 09:45:18,780 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2019-06-21 09:45:18,831 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x36a2ff9326e14a7
2019-06-21 09:45:18,834 INFO [main] zookeeper.ZooKeeper: Session: 0x36a2ff9326e14a7 closed
2019-06-21 09:45:18,834 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2019-06-21 09:45:19,213 INFO [main] impl.TimelineClientImpl: Timeline service address: http://nn02.asiainfo:8188/ws/v1/timeline/
Exception in thread "main" java.lang.IllegalAccessError: tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object; from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:157)
at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:87)
at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:70)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:159)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapred.ResourceMgrDelegate.serviceStart(ResourceMgrDelegate.java:100)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapred.ResourceMgrDelegate.&lt;init&gt;(ResourceMgrDelegate.java:89)
at org.apache.hadoop.mapred.YARNRunner.&lt;init&gt;(YARNRunner.java:111)
at org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:95)
at org.apache.hadoop.mapreduce.Cluster.&lt;init&gt;(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.&lt;init&gt;(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:738)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.hbase.mapreduce.ImportTsv.main(ImportTsv.java:747)
How can this be resolved? For what it's worth, zoo.cfg is configured correctly and the hosts file is configured correctly.
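For context, a java.lang.IllegalAccessError like the one above usually means two incompatible versions of the same library ended up on the classpath: RequestHedgingRMFailoverProxyProvider was compiled against a YARN version in which getProxyInternal() is accessible to it, but an older hadoop-yarn jar is being loaded at runtime. A minimal diagnostic sketch, assuming a standard HDP-style install where the `hbase` and `hadoop` launcher scripts are on the PATH (paths and versions below are illustrative, not taken from this post):

```shell
#!/bin/sh
# Sketch: look for duplicate or mismatched YARN jars on the classpath
# that the `hbase` launcher actually uses.

# Print every hadoop-yarn jar the hbase command would load.
hbase classpath | tr ':' '\n' | grep -i 'hadoop-yarn' | sort -u

# Compare with the jars the hadoop command itself resolves.
hadoop classpath | tr ':' '\n' | grep -i 'hadoop-yarn' | sort -u

# If the two lists show different versions of hadoop-yarn-client or
# hadoop-yarn-common, the stale copy (often bundled under hbase/lib)
# is the likely cause of the IllegalAccessError.
```

If a stale jar shows up only under hbase/lib, making both tools resolve the same YARN version (by removing or replacing the old copy) is the usual remedy on this kind of install.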
1 answer
Michael_PK
2019-06-21
I haven't used this tool myself, but I remember the official HBase documentation describes this import operation; you can refer to that.
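The workflow the answer refers to is documented in the HBase Reference Guide under ImportTsv. A minimal sketch of the usual steps, assuming a table named `user` with a single `info` column family (table and column names taken from the command in the question; the sample rows are invented for illustration):

```shell
#!/bin/sh
# Sketch of the standard ImportTsv workflow (illustrative values).

# 1. Create the target table with the column family referenced by
#    -Dimporttsv.columns (here: info).
hbase shell <<'EOF'
create 'user', 'info'
EOF

# 2. Put the CSV into HDFS. The first field of each row becomes the
#    row key (HBASE_ROW_KEY).
cat > /tmp/hbase.csv <<'EOF'
1,alice,30,beijing
2,bob,25,shanghai
EOF
hdfs dfs -mkdir -p /test/input
hdfs dfs -put -f /tmp/hbase.csv /test/input/hbase.csv

# 3. Run ImportTsv with a comma separator; the column list must match
#    the CSV field order.
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.separator=',' \
  -Dimporttsv.columns=HBASE_ROW_KEY,info:name,info:age,info:address \
  user /test/input/hbase.csv
```

Note that this only covers the happy path; it does not address the IllegalAccessError itself, which is a classpath problem rather than a usage problem.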