Resolving a "getMaster attempt 0 of 10 failed" exception when connecting to HBase from Java

This post documents a connection exception I hit while using the HBase Java API and how it was resolved. I had set up pseudo-distributed Hadoop and HBase clusters on CentOS 6.4; after a round of debugging that covered the firewall, host mappings, and configuration changes, the root cause turned out to be how HBase exposes its service address, and editing the hosts file fixed external access.

I'm new to big data and had never used these tools before, so I followed blog tutorials step by step on a virtual machine I set up myself. The configuration:

OS: CentOS 6.4, hostname: hadoop

IP: 192.168.1.108

JDK: 1.8

Hadoop: 1.2, pseudo-distributed

HBase: 1.1, pseudo-distributed

Everything went smoothly at first. I started Hadoop (HDFS), then HBase; HBase was using its bundled ZooKeeper, as I had not set up a standalone ZooKeeper instance. Once everything was up, creating a table, inserting data, and querying it from the hbase shell all worked.
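For reference, the shell side looked roughly like this (reconstructed from memory; the row key and value are illustrative, the table and column family match the Java code below):

create 'student', 'name'
put 'student', 'row1', 'name:first', 'zhangsan'
scan 'student'
get 'student', 'row1'

With the shell working, I moved on to some simple operations through the HBase Java API. The code is as follows: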

// Imports needed by the snippet below
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Point the client at the pseudo-distributed cluster
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.rootdir", "hdfs://hadoop:9000/hbase");
conf.set("hbase.zookeeper.quorum", "hadoop");

HBaseAdmin admin = new HBaseAdmin(conf);

String tableName = "student";
if (admin.tableExists(tableName)) {
    System.out.println("Table already exists; deleting it first");
    admin.disableTable(tableName);
    admin.deleteTable(tableName);
}
HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tableName));
tableDesc.addFamily(new HColumnDescriptor("name"));
admin.createTable(tableDesc);
System.out.println("Table created");
admin.close();

The moment I ran it, it threw an exception: getMaster attempt 0 of 10 failed; retrying after sleep of 1000

Looking at the console, the ZooKeeper log lines showed that the client could in fact connect to ZooKeeper; it was HBase itself that couldn't be reached. So I started searching for solutions, which roughly fall into the following categories:

1. The firewall was not turned off. I had shut the firewall down completely right after installing the VM, but just in case I checked again; it was indeed off, so the firewall had nothing to do with it.

2. The host mapping exists in the VM's hosts file but was never added to the local Windows hosts file. Mine already had the entry:

192.168.1.108 hadoop

Running ping hadoop from a cmd prompt also succeeded, so the name mapping itself was fine.
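That said, a successful ping only proves that the hostname resolves and that ICMP gets through; it says nothing about whether the HBase service ports are reachable. A port-level check from Windows would look something like this (assuming the telnet client is enabled; 2181 is the ZooKeeper port and 60000 the master RPC port used in my configuration):

telnet hadoop 2181
telnet hadoop 60000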

3. Add the following property to the hbase-site.xml configuration file, then restart HBase:

<property> 
   <name>hbase.master</name> 
   <value>hadoop:60000</value> 
</property>

Then add the following in the Java code:

conf.set("hbase.master","hadoop:60000");

This did not help; the same exception was still thrown.

4. Disable IPv6. This did not help either.
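For reference, "disabling IPv6" for a Java client usually means making the JVM prefer the IPv4 stack (it can also be disabled at the OS level). A sketch of the JVM-level version, either as a launch flag (the jar name here is a placeholder):

java -Djava.net.preferIPv4Stack=true -jar hbase-client-demo.jar

or as a system property set at the very top of main():

// must run before any java.net classes are initialized
System.setProperty("java.net.preferIPv4Stack", "true");

Neither variant changed anything in my case.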

Then came a long stretch of debugging: restarting Hadoop, restarting HBase, restarting the whole VM, always with the same result. Finally I packaged the code into a jar, copied it into the VM, and ran it there directly with java, and it worked on the first try. I was completely baffled. How could it work on the machine itself but not from outside? Could the IP binding be the problem?

I then checked the current port usage with netstat, and every listening port was bound under 127.0.0.1.
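The check was along these lines (exact flags vary between distributions; 2181 is ZooKeeper and 60000 the master RPC port in my configuration):

netstat -tlnp | grep -E '2181|60000'

At that point the VM's hosts file looked like this: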

::1 localhost hadoop

127.0.0.1 localhost hadoop

This very same configuration had worked when operating HDFS from Java, yet it failed for HBase. Could it be that HBase is not externally reachable when bound to 127.0.0.1? I tried changing the hosts file from

::1 localhost hadoop

127.0.0.1 localhost hadoop

to

192.168.1.108 localhost hadoop

collapsing the two lines into one.

Then I restarted Hadoop and HBase, ran the client from Windows again, and this time it succeeded. Hadoop and HBase apparently expose themselves to the outside in different ways: HBase publishes its master address, derived from the hostname, in ZooKeeper, which would explain why the client could talk to ZooKeeper yet still fail on getMaster: it was being handed 127.0.0.1 as the master address. If HBase is to be reachable externally it must be bound to the machine's real IP; bound to 127.0.0.1, it can only be accessed from the machine itself.
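As a quick end-to-end check after a fix like this, here is a minimal connectivity test against the HBase 1.x client API (a sketch using the same quorum setting as above; listTableNames() needs a round trip to the master, so it fails fast when the published master address is unreachable):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseSmokeTest {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "hadoop");
        // try-with-resources closes both handles even if the call fails
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // Listing tables goes through the master, so this line is the
            // actual connectivity test
            for (TableName name : admin.listTableNames()) {
                System.out.println(name.getNameAsString());
            }
        }
    }
}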

Reposted from: https://my.oschina.net/857359351/blog/2987789
