hbase ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting

This post records in detail an HRegionServer Aborted error hit when starting the slave nodes of an HBase cluster, and gives two fixes: first, change the hostname in the HBase configuration files to an IP address; second, make sure ZooKeeper is running properly and synchronize the server clocks, since inconsistent time across the cluster causes the service to shut down abnormally after startup.

2018-12-13 17:07:07,513 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:68)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2826)
2018-12-13 17:07:07,515 INFO  [Thread-5] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@7e809b79
2018-12-13 17:07:07,515 INFO  [Thread-5] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2018-12-13 17:07:07,518 INFO  [Thread-5] regionserver.ShutdownHook: Shutdown hook finished.

The HBase cluster slave node reports the error shown above.

After starting the cluster, the processes on the master node are all present:

[root@master ~]# jps
3968 NameNode
1127 HistoryServer
2153 DataNode
29161 RunJar
4906 JobHistoryServer
25515 NodeManager
26158 HRegionServer
24688 QuorumPeerMain
27699 Master
4278 SecondaryNameNode
30072 RunJar
25401 ResourceManager
5786 ThriftServer
24861 Jps
27838 Worker
26015 HMaster

On the slave node:

[root@slave2 ~]# jps
15892 QuorumPeerMain
27256 Worker
11052 Jps
21773 NodeManager
3006 DataNode
The HRegionServer process is not running, and the HBase web UI likewise shows no region server for the slave node.

However, the hbase shell still works on the slave node (see the note after the shell output below):

[root@slave2 ~]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/data/appcom/hbase-1.4.6/lib/phoenix-4.14.1-HBase-1.4-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/data/appcom/hbase-1.4.6/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/data/appcom/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
Version 1.4.6, ra55bcbd4fc87ff9cd3caaae25277e0cfdbb344a5, Tue Jul 24 16:25:52 PDT 2018

hbase(main):001:0> list
TABLE                                                                                                                                  
LU.STUDENTS                                                                                                                            
SHUJUBU                                                                                                                                
SYSTEM.CATALOG                                                                                                                         
SYSTEM.FUNCTION                                                                                                                        
SYSTEM.LOG                                                                                                                             
SYSTEM.MUTEX                                                                                                                           
SYSTEM.SEQUENCE                                                                                                                        
SYSTEM.STATS                                                                                                                           
TEST                                                                                                                                   
TEST.PERSON                                                                                                                            
dim_mobile_hui                                                                                                                         
dim_mobile_yun                                                                                                                         
f2                                                                                                                                     
hbase_shujubu                                                                                                                          
luzhen                                                                                                                                 
mobile                                                                                                                                 
mobile_no                                                                                                                              
people                                                                                                                                 
t1                                                                                                                                     
user                                                                                                                                   
user1                                                                                                                                  
user2                                                                                                                                  
22 row(s) in 0.2400 seconds

=> ["LU.STUDENTS", "SHUJUBU", "SYSTEM.CATALOG", "SYSTEM.FUNCTION", "SYSTEM.LOG", "SYSTEM.MUTEX", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "TEST", "TEST.PERSON", "dim_mobile_hui", "dim_mobile_yun", "f2", "hbase_shujubu", "luzhen", "mobile", "mobile_no", "people", "t1", "user", "user1", "user2"]
hbase(main):002:0>
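Note that a working shell is not proof that the local region server is alive: the shell is only a client that finds the cluster through ZooKeeper and the HMaster, so it runs from any node. To confirm which region servers are actually live, run status in the shell (output below is illustrative):

hbase(main):003:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 22.0000 average load

If the slave does not show up in the server count, its HRegionServer process really is down.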

Solution 1: In the HBase configuration files, change the hostname to the IP address.
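For reference, a minimal sketch of the places where the hostname usually appears (the install path is the hbase-1.4.6 directory from the listings above; the IP addresses are only examples):

# Check what the hostname resolves to on each node
hostname -i
cat /etc/hosts

# conf/regionservers lists the slave nodes one per line; replace hostnames
# with IPs if name resolution is unreliable, e.g.:
#   192.168.1.12
#   192.168.1.13

# In conf/hbase-site.xml, use the same addresses for the ZooKeeper quorum:
#   <property>
#     <name>hbase.zookeeper.quorum</name>
#     <value>192.168.1.11,192.168.1.12,192.168.1.13</value>
#   </property>

# Restart the region server on the affected slave after the change
/home/data/appcom/hbase-1.4.6/bin/hbase-daemon.sh stop regionserver
/home/data/appcom/hbase-1.4.6/bin/hbase-daemon.sh start regionserver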

If the first solution does not resolve the error, next check whether ZooKeeper has started correctly.
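A quick way to check ZooKeeper on each node (standard ZooKeeper and HBase commands; the host name slave2 is taken from the listings above):

# ZooKeeper's own status report (Mode: leader or follower means it is healthy)
zkServer.sh status

# Four-letter-word probe of the client port; a healthy server answers "imok"
echo ruok | nc slave2 2181

# Region servers register themselves under /hbase/rs; the slave should appear here
hbase zkcli ls /hbase/rs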

Solution 2: If ZooKeeper is running normally and the problem still persists, the cause is that the server clocks are not synchronized. When the time across the cluster is inconsistent, the region server on the slave node starts and then shuts down abnormally. Synchronize the server clocks and the error goes away.
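When clock skew is the cause, the region server log typically contains a ClockOutOfSyncException ("Reported time is too far out of sync with master"), because the master rejects servers whose clock differs by more than hbase.master.maxclockskew (30 seconds by default). A minimal sketch of checking and fixing the skew (the NTP server address is only an example):

# Compare the clocks of master and slave
date; ssh slave2 date

# One-shot synchronization against an NTP server
ntpdate ntp.aliyun.com

# Or keep the clocks synchronized permanently (ntpd or chronyd, depending on the OS)
systemctl start ntpd
systemctl enable ntpd

# Then start the region server on the slave again
hbase-daemon.sh start regionserver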

 
