Problem connecting to HBase through Phoenix

Source: 2-13 Spring Boot + MyBatis + Phoenix integration with HBase

慕斯卡4063741

2022-04-08

Problem description

Hi teacher, I'm running into an error when connecting to HBase through Phoenix. Please advise.
(1) Confirmed that the HBase and ZooKeeper containers are up:

hbase-master           /entrypoint.sh /run.sh           Up      0.0.0.0:16000->16000/tcp,:::16000->16000/tcp, 0.0.0.0:16010->16010/tcp,:::16010->16010/tcp, 0.0.0.0:8765->8765/tcp,:::8765->8765/tcp
hbase-regionserver-1   /entrypoint.sh /run.sh           Up      16020/tcp, 16030/tcp, 0.0.0.0:16120->16120/tcp,:::16120->16120/tcp, 0.0.0.0:16130->16130/tcp,:::16130->16130/tcp
hbase-regionserver-2   /entrypoint.sh /run.sh           Up      16020/tcp, 16030/tcp, 0.0.0.0:16220->16220/tcp,:::16220->16220/tcp, 0.0.0.0:16230->16230/tcp,:::16230->16230/tcp
hbase-regionserver-3   /entrypoint.sh /run.sh           Up      16020/tcp, 16030/tcp, 0.0.0.0:16320->16320/tcp,:::16320->16320/tcp, 0.0.0.0:16330->16330/tcp,:::16330->16330/tcp
phoenix                /run-phoenix-server.sh           Up      0.0.0.0:8766->8765/tcp,:::8766->8765/tcp
zoo1                   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2182->2181/tcp,:::2182->2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp
zoo2                   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2183->2181/tcp,:::2183->2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp
zoo3                   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2184->2181/tcp,:::2184->2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp

(2) The following error is printed at startup:

2022-04-08 21:38:15.935  INFO 11992 --- [reate-925672150] org.apache.zookeeper.ZooKeeper           : Initiating client connection, connectString=192.168.71.128:2182 sessionTimeout=90000 watcher=hconnection-0x77bee70a0x0, quorum=192.168.71.128:2182, baseZNode=/hbase
2022-04-08 21:38:15.982  INFO 11992 --- [68.71.128:2182)] org.apache.zookeeper.ClientCnxn          : Opening socket connection to server 192.168.71.128/192.168.71.128:2182. Will not attempt to authenticate using SASL (unknown error)
2022-04-08 21:38:15.997  INFO 11992 --- [68.71.128:2182)] org.apache.zookeeper.ClientCnxn          : Socket connection established to 192.168.71.128/192.168.71.128:2182, initiating session
2022-04-08 21:38:16.008  INFO 11992 --- [68.71.128:2182)] org.apache.zookeeper.ClientCnxn          : Session establishment complete on server 192.168.71.128/192.168.71.128:2182, sessionid = 0x100019111400040, negotiated timeout = 40000
2022-04-08 21:38:16.307  INFO 11992 --- [reate-925672150] o.a.p.query.ConnectionQueryServicesImpl  : HConnection established. Stacktrace for informational purposes: hconnection-0x77bee70a java.lang.Thread.getStackTrace(Thread.java:1559)
org.apache.phoenix.util.LogUtil.getCallerStackTrace(LogUtil.java:55)
org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:410)
org.apache.phoenix.query.ConnectionQueryServicesImpl.access$400(ConnectionQueryServicesImpl.java:256)
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2408)
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2384)
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2384)
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
com.alibaba.druid.pool.DruidAbstractDataSource.createPhysicalConnection(DruidAbstractDataSource.java:1652)
com.alibaba.druid.pool.DruidAbstractDataSource.createPhysicalConnection(DruidAbstractDataSource.java:1718)
com.alibaba.druid.pool.DruidDataSource$CreateConnectionThread.run(DruidDataSource.java:2785)

Code

application.yml

spring:
#  data:
#    elasticsearch:
#      # whether to enable the local cache
#      repositories:
#        enabled: false
#      # port 9300 is the TCP port; Java clients connect over TCP
#      # port 9200 is the HTTP API port
#      cluster-nodes: 192.168.71.128:9301
#  redis:
#    port: 6379
#    host: 192.168.71.128
#    jedis:
#      pool:
#        max-active: 8
#        max-wait: -1
#        max-idle: 500
#        min-idle: 0
#      lettuce:
#        shutdown-timeout: 0
  # the hbase settings here are bound to HBaseConfig
  hbase:
    'hbase.zookeeper.quorum': 'zoo1:2182'
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    # Druid connection pool configuration
    druid:
      # data sources
      hive:
        url: jdbc:hive2://192.168.71.128:10000
        driver-class-name: org.apache.hive.jdbc.HiveDriver
      phoenix:
        url: jdbc:phoenix:192.168.71.128:2182
        driver-class-name: org.apache.phoenix.jdbc.PhoenixDriver
#      clickhouse:
#        url: jdbc:clickhouse://192.168.71.128:8123
#        driver-class-name: ru.yandex.clickhouse.ClickHouseDriver
#        username: default
#        password:
#      mysql:
#        username: root
#        password: 123456
#        url: jdbc:mysql://192.168.71.128:3306/imooc?serverTimezone=Asia/Shanghai&useSSL=false
#        driver-class-name: com.mysql.cj.jdbc.Driver
      # maximum number of active connections
      max-active: 200
      # initial number of connections
      initial-size: 10
      max-wait: 60000
      # minimum number of idle connections
      min-idle: 10
      # how often (in ms) to run the check that closes idle connections
      time-between-eviction-runs-millis: 60000
      # minimum time (in ms) a connection must sit idle before it can be evicted
      min-evictable-idle-time-millis: 300000
      test-while-idle: true
      test-on-borrow: false
      test-on-return: false
      validation-query: select 1
      # whether to cache PreparedStatements
      pool-prepared-statements: true
      max-open-prepared-statements: 200
      break-after-acquire-failure: true
      time-between-connect-error-millis: 300000
      connection-properties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
      use-global-dataSourceStat: true
      filter:
        config:
          enabled: false
      # basic web request monitoring
      web-stat-filter:
        enabled: true
        url-pattern: /*
        # URIs excluded from the statistics
        exclusions: '*.js,*.gif,*.jpg,*.bmp,*.png,*.css,*.ico,/druid/*'
      # monitoring dashboard (stat view servlet)
      stat-view-servlet:
        enabled: true
        url-pattern: /druid/*
        # access allow list
        allow:
        # access deny list
        deny:
      # monitoring filters
      filters:
        stat:
          enable: true
          log-slow-sql: true
          slow-sql-millis: 1000
          merge-sql: true
          # SQL firewall (wall) filter
          wall:
            config:
              multi-statement-allow: true



#mybatis:
#  mapper-locations: classpath:com/imooc/dmp/mapper/*.xml

PhoenixTConnTest.java

package com.imooc.dmp;

import com.alibaba.druid.pool.DruidDataSource;
//import com.imooc.dmp.entity.phoenix.PhoenixT;
//import com.imooc.dmp.mapper.phoenix.PhoenixTMapper;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import javax.sql.DataSource;
import java.sql.Connection;
import java.util.List;


@RunWith(SpringRunner.class)
@SpringBootTest
public class PhoenixTConnTest {

    @Autowired
    @Qualifier("phoenixDataSource")
    private DataSource dataSource;
//    @Autowired
//    private PhoenixTMapper phoenixTMapper;

    /**
     * Test whether HBase can be reached through the Phoenix driver.
     * @throws Exception
     */
    @Test
    public void testPhoenixDataSource() throws Exception{
        Connection connection = dataSource.getConnection();
        System.out.println("#################");
        System.out.println("当前 dataSource 驱动:"+ connection.getMetaData().getDriverName());
        System.out.println("#################");
        DruidDataSource druidDataSource = (DruidDataSource) dataSource;
        System.out.println("####### 测试是否能读取 druid 配置 ##########");
        System.out.println("druid设置的最大连接数:"+ druidDataSource.getMaxActive());
        System.out.println("Phoenix 连接成功!");
    }

//    @Test
//    public void testPhoenixRead() {
//        List<PhoenixT> list = phoenixTMapper.getPhoenixT();
//        for(PhoenixT PhoenixT :list ){
//            System.out.println(PhoenixT.getId());
//            System.out.println(PhoenixT.getName());
//            System.out.println("##############");
//        }
//    }

//    @Test
//    public void testConnToHBaseWithoutDriver() throws Exception{
//        String url = "jdbc:phoenix:192.168.137.33:2182";
//        Connection conn = null;
//        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
//        System.out.println("Connecting to database..");
//        conn = DriverManager.getConnection(url);
//        System.out.println(conn);
//    }

}
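
If it is unclear whether the spring.datasource.druid.phoenix block is actually being picked up, a small extra test in the same class could print the JDBC URL the pool is using. This is a sketch only, not part of the original post; getUrl() is a standard DruidDataSource getter:

    // Hypothetical extra check (not from the original post): confirm the pool
    // is using the Phoenix JDBC URL configured in application.yml.
    @Test
    public void testPhoenixDataSourceUrl() throws Exception {
        DruidDataSource druidDataSource = (DruidDataSource) dataSource;
        System.out.println("Druid JDBC url: " + druidDataSource.getUrl());
    }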

PhoenixDataSourceConfig.java

package com.imooc.dmp.config;

import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceBuilder;
import com.imooc.dmp.DmpApplication;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

import javax.sql.DataSource;

/*************************
 * Phoenix DataSource configuration class
 * *************************
 */
@Configuration
//@MapperScan(basePackages = "com.imooc.dmp.mapper.phoenix",
//            sqlSessionTemplateRef = "phoenixSqlSessionTemplate",
//nameGenerator = DmpApplication.SpringBeanNameGenerator.class)
public class PhoenixDataSourceConfig {

    /**
     * Create the Phoenix DataSource.
     * @param druidDataSourceConfig
     * @return
     */
    @ConfigurationProperties(prefix = "spring.datasource.druid.phoenix")
    @Bean(name = "phoenixDataSource")
    public DataSource phoenixDataSource(DruidDataSourceConfig druidDataSourceConfig){

        DruidDataSource builder = DruidDataSourceBuilder.create().build();
        druidDataSourceConfig.setProperties(builder);
        return builder;
    }

//    /**
//     * Create the SqlSessionFactory.
//     * @param dataSource
//     * @return
//     * @throws Exception
//     */
//    @Bean(name = "phoenixSqlSessionFactory")
//    public SqlSessionFactory phoenixSqlSessionFactory(
//            @Qualifier("phoenixDataSource") DataSource dataSource)
//            throws Exception{
//        SqlSessionFactoryBean sqlSessionFactoryBean
//                = new SqlSessionFactoryBean();
//        sqlSessionFactoryBean.setDataSource(dataSource);
//
//        return sqlSessionFactoryBean.getObject();
//    }
//
//    /**
//     * Create the SqlSessionTemplate.
//     * @param sqlSessionFactory
//     * @return
//     * @throws Exception
//     */
//    @Bean(name = "phoenixSqlSessionTemplate")
//    public SqlSessionTemplate phoenixSqlSessionTemplate(
//            @Qualifier("phoenixSqlSessionFactory") SqlSessionFactory sqlSessionFactory)
//            throws Exception {
//        return new SqlSessionTemplate(sqlSessionFactory);
//    }

}

DruidDataSourceConfig.java

package com.imooc.dmp.config;

import com.alibaba.druid.pool.DruidDataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;


/********************************
 * Druid configuration class
 * *****************************
 */
@Configuration
public class DruidDataSourceConfig {

    @Value("${spring.datasource.druid.max-active}")
    private int maxActive;

    /**
     * Apply the Druid properties.
     * @param druidDataSource
     * @return
     */
    public DruidDataSource setProperties(DruidDataSource druidDataSource){
        druidDataSource.setMaxActive(maxActive);
        return druidDataSource;
    }


}


2 Answers

慕斯卡4063741 (Original poster)

2022-04-08

Tracing through the code, I found that this stack trace is always printed (the log itself says it is for informational purposes).

Screenshot: //img.mukewang.com/szimg/625049030954e78727141166.jpg
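
Since this message is logged at INFO level by org.apache.phoenix.query.ConnectionQueryServicesImpl, one way to keep the informational stack trace out of the console (a sketch only, using the standard Spring Boot logging properties) is to raise that logger's level in application.yml:

logging:
  level:
    org.apache.phoenix.query.ConnectionQueryServicesImpl: WARN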


慕斯卡4063741 (Original poster)

2022-04-08

After adding host mappings for "hbase-regionserver-3" and the other containers, the connection now succeeds (and the test is not affected), but the log below is still printed. Teacher, is there anything else I need to change?

2022-04-08 22:21:01.288  INFO 12498 --- [reate-246168102] org.apache.zookeeper.ZooKeeper           : Initiating client connection, connectString=192.168.71.128:2182 sessionTimeout=90000 watcher=hconnection-0x4767e8950x0, quorum=192.168.71.128:2182, baseZNode=/hbase
2022-04-08 22:21:01.313  INFO 12498 --- [68.71.128:2182)] org.apache.zookeeper.ClientCnxn          : Opening socket connection to server 192.168.71.128/192.168.71.128:2182. Will not attempt to authenticate using SASL (unknown error)
2022-04-08 22:21:01.331  INFO 12498 --- [68.71.128:2182)] org.apache.zookeeper.ClientCnxn          : Socket connection established to 192.168.71.128/192.168.71.128:2182, initiating session
2022-04-08 22:21:01.369  INFO 12498 --- [68.71.128:2182)] org.apache.zookeeper.ClientCnxn          : Session establishment complete on server 192.168.71.128/192.168.71.128:2182, sessionid = 0x10001911140006c, negotiated timeout = 40000
2022-04-08 22:21:01.670  INFO 12498 --- [reate-246168102] o.a.p.query.ConnectionQueryServicesImpl  : HConnection established. Stacktrace for informational purposes: hconnection-0x4767e895 java.lang.Thread.getStackTrace(Thread.java:1559)
org.apache.phoenix.util.LogUtil.getCallerStackTrace(LogUtil.java:55)
org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:410)
org.apache.phoenix.query.ConnectionQueryServicesImpl.access$400(ConnectionQueryServicesImpl.java:256)
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2408)
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2384)
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2384)
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
com.alibaba.druid.pool.DruidAbstractDataSource.createPhysicalConnection(DruidAbstractDataSource.java:1652)
com.alibaba.druid.pool.DruidAbstractDataSource.createPhysicalConnection(DruidAbstractDataSource.java:1718)
com.alibaba.druid.pool.DruidDataSource$CreateConnectionThread.run(DruidDataSource.java:2785)
#################
Current dataSource driver: PhoenixEmbeddedDriver
#################
####### Testing whether the druid config can be read ##########
Druid max-active setting: 200
Phoenix connection succeeded!
2022-04-08 22:21:02.927  INFO 12498 --- [       Thread-2] o.s.s.concurrent.ThreadPoolTaskExecutor  : Shutting down ExecutorService 'applicationTaskExecutor'
2022-04-08 22:21:02.928  INFO 12498 --- [       Thread-2] com.alibaba.druid.pool.DruidDataSource   : {dataSource-1} closing ...
2022-04-08 22:21:02.929  INFO 12498 --- [       Thread-2] com.alibaba.druid.pool.DruidDataSource   : {dataSource-1} closed
2022-04-08 22:21:02.930  INFO 12498 --- [       Thread-2] com.alibaba.druid.pool.DruidDataSource   : {dataSource-0} closing ...



小简同学 replied to 慕斯卡4063741 (2024-09-25):
OK, happy learning!
