HBase multi-node cluster: detailed startup steps (3 or 5 nodes), for bundled vs. external ZooKeeper

This article walks through starting HBase on a multi-node cluster under the two possible settings of HBASE_MANAGES_ZK (true/false), including how to configure and start the Hadoop, ZooKeeper, and HBase services.

The startup procedure splits into two cases:

  1. HBASE_MANAGES_ZK set to false (external ZooKeeper) (recommended)

  2. HBASE_MANAGES_ZK set to true (HBase-managed ZooKeeper)


1. HBASE_MANAGES_ZK set to false (external ZooKeeper, recommended)

  In standalone or pseudo-distributed mode (e.g., on a single host such as weekend110), HBASE_MANAGES_ZK in hbase-env.sh defaults to true, meaning HBase uses its own bundled ZooKeeper instance. That instance can only serve HBase in standalone or pseudo-distributed mode.

  In fully distributed mode (e.g., HadoopMaster, HadoopSlave1, HadoopSlave2), you need your own ZooKeeper ensemble. With HBASE_MANAGES_ZK=true, HBase runs ZooKeeper as part of its own startup, and the ZooKeeper process appears in jps as HQuorumPeer. With HBASE_MANAGES_ZK=false, you must first start ZooKeeper manually on every quorum node, and only then start HBase on the master node, where the master process appears as HMaster (on HadoopMaster).
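For the external-ZooKeeper case, two settings matter: HBASE_MANAGES_ZK in hbase-env.sh and the quorum list in hbase-site.xml. A minimal sketch, assuming the hostnames used throughout this article and the default client port (adjust both for your own cluster):

```shell
# hbase-env.sh — do not let HBase manage ZooKeeper:
export HBASE_MANAGES_ZK=false

# hbase-site.xml — point HBase at the external ensemble (XML fragment,
# shown here as comments):
#   <property>
#     <name>hbase.zookeeper.quorum</name>
#     <value>HadoopMaster,HadoopSlave1,HadoopSlave2</value>
#   </property>
#   <property>
#     <name>hbase.zookeeper.property.clientPort</name>
#     <value>2181</value>
#   </property>
```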

 


  With HBASE_MANAGES_ZK=false:
1. On HadoopMaster, start Hadoop first.
2. On HadoopMaster, HadoopSlave1, and HadoopSlave2, start ZooKeeper manually on each node.
3. Back on HadoopMaster, start HBase.

 


  1. On HadoopMaster, start Hadoop first.
[hadoop@HadoopMaster hadoop-2.6.0]$ jps
1998 Jps
[hadoop@HadoopMaster hadoop-2.6.0]$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/11/02 19:59:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [HadoopMaster]
HadoopMaster: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-namenode-HadoopMaster.out
HadoopSlave1: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-datanode-HadoopSlave1.out
HadoopSlave2: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-datanode-HadoopSlave2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-HadoopMaster.out
16/11/02 20:00:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-resourcemanager-HadoopMaster.out
HadoopSlave2: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-HadoopSlave2.out
HadoopSlave1: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-HadoopSlave1.out
[hadoop@HadoopMaster hadoop-2.6.0]$ jps
2281 SecondaryNameNode
2124 NameNode
2430 ResourceManager
2736 Jps

[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
1877 Jps
[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
2003 NodeManager
2199 Jps
1928 DataNode
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
1893 Jps
[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
2019 NodeManager
2195 Jps
1945 DataNode
[hadoop@HadoopSlave2 hadoop-2.6.0]$

 

  2. On HadoopMaster, HadoopSlave1, and HadoopSlave2, start ZooKeeper manually on each node.
[hadoop@HadoopMaster hadoop-2.6.0]$ cd ..
[hadoop@HadoopMaster app]$ cd zookeeper-3.4.6/
[hadoop@HadoopMaster zookeeper-3.4.6]$ bin/zkServer.sh start
[hadoop@HadoopSlave1 zookeeper-3.4.6]$ bin/zkServer.sh start
[hadoop@HadoopSlave2 zookeeper-3.4.6]$ bin/zkServer.sh start
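After all three zkServer.sh start commands, it is worth confirming that the quorum actually formed: zkServer.sh status on each node prints a "Mode:" line, and a healthy ensemble has exactly one leader with the rest followers. A small sketch that extracts the role from that output (the status text shown is the ZooKeeper 3.4.x format):

```shell
# zk_mode: read `zkServer.sh status` output on stdin, print the quorum role.
zk_mode() { sed -n 's/^Mode: //p'; }

# Typical use on a quorum node:
#   bin/zkServer.sh status | zk_mode    # "leader" on one node, "follower" on the rest
printf 'JMX enabled by default\nMode: leader\n' | zk_mode
# → leader
```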

 


  3. Back on HadoopMaster, start HBase. (With HBASE_MANAGES_ZK=false, start-hbase.sh starts only the master and region servers; it does not launch ZooKeeper itself.)
[hadoop@HadoopMaster hadoop-2.6.0]$ cd ..
[hadoop@HadoopMaster app]$ cd hbase-1.2.3
[hadoop@HadoopMaster hbase-1.2.3]$ bin/start-hbase.sh
starting master, logging to /home/hadoop/app/hbase-1.2.3/logs/hbase-hadoop-master-HadoopMaster.out
HadoopSlave1: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave1.out
HadoopSlave2: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave2.out
[hadoop@HadoopMaster hbase-1.2.3]$ jps
(jps should now list HMaster alongside the Hadoop daemons, plus QuorumPeerMain for the externally started ZooKeeper.)

 

 

  Enter the hbase shell. In this deployment only HadoopMaster can: the slave nodes do not have the hbase command on their PATH, as the attempts further down show.
[hadoop@HadoopMaster hbase-1.2.3]$ hbase shell
2016-11-02 20:07:31,288 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/hbase-1.2.3/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016

hbase(main):001:0>
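Once inside the shell, a quick smoke test confirms the cluster is actually serving requests. These are standard hbase shell commands; the table name smoke_test and column family cf are hypothetical names chosen for illustration:

```
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:a', 'value1'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
```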

[hadoop@HadoopSlave1 hadoop-2.6.0]$ hbase shell
-bash: hbase: command not found
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ hbase shell
-bash: hbase: command not found
[hadoop@HadoopSlave2 hadoop-2.6.0]$


Exit the hbase shell:
hbase(main):001:0> exit
[hadoop@HadoopMaster hbase-1.2.3]$


2. HBASE_MANAGES_ZK set to true (HBase-managed ZooKeeper)

  The setting means the same as described in section 1: in standalone or pseudo-distributed mode (e.g., weekend110, djt002), the default of true makes HBase use its bundled ZooKeeper instance, which can only serve standalone or pseudo-distributed HBase. In fully distributed mode (HadoopMaster, HadoopSlave1, HadoopSlave2), true makes HBase run ZooKeeper as part of its own startup (process HQuorumPeer), while false requires starting ZooKeeper manually on every node before starting HBase on the master (process HMaster).

  With HBASE_MANAGES_ZK=true:
1. On HadoopMaster, start Hadoop first.
2. Then start HBase; it launches the HQuorumPeer processes itself.
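With HBase-managed ZooKeeper, the relevant configuration is roughly as follows (a sketch; the hostnames are this article's):

```shell
# hbase-env.sh — let start-hbase.sh run ZooKeeper as part of HBase:
export HBASE_MANAGES_ZK=true

# hbase.zookeeper.quorum in hbase-site.xml now names the nodes on which
# start-hbase.sh will launch the HQuorumPeer processes (XML fragment):
#   <property>
#     <name>hbase.zookeeper.quorum</name>
#     <value>HadoopMaster,HadoopSlave1,HadoopSlave2</value>
#   </property>
```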


  1. On HadoopMaster, start Hadoop first — the same sbin/start-all.sh run and jps checks shown in step 1 of section 1.

 


  2. Then start HBase.
[hadoop@HadoopMaster hadoop-2.6.0]$ cd ..
[hadoop@HadoopMaster app]$ cd hbase-1.2.3
[hadoop@HadoopMaster hbase-1.2.3]$ bin/start-hbase.sh
HadoopSlave2: starting zookeeper, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-HadoopSlave2.out
HadoopSlave1: starting zookeeper, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-HadoopSlave1.out
HadoopMaster: starting zookeeper, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-HadoopMaster.out
starting master, logging to /home/hadoop/app/hbase-1.2.3/logs/hbase-hadoop-master-HadoopMaster.out
HadoopSlave1: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave1.out
HadoopSlave2: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave2.out
[hadoop@HadoopMaster hbase-1.2.3]$ jps
3201 Jps
2281 SecondaryNameNode
2951 HQuorumPeer
2124 NameNode
2430 ResourceManager
3013 HMaster
[hadoop@HadoopMaster hbase-1.2.3]$

[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
2336 HRegionServer
2003 NodeManager
2396 Jps
2257 HQuorumPeer
1928 DataNode
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
2019 NodeManager
2254 HQuorumPeer
2451 Jps
2333 HRegionServer
1945 DataNode
[hadoop@HadoopSlave2 hadoop-2.6.0]$
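The jps listings above are easy to check by eye, but a small helper makes the check scriptable. A sketch (check_daemons is a hypothetical helper; the daemon names are the ones in the listings above):

```shell
# check_daemons: read a jps listing on stdin and report any expected
# daemon that is missing from it.
check_daemons() {
    listing=$(cat)
    missing=""
    for d in "$@"; do
        printf '%s\n' "$listing" | grep -qw "$d" || missing="$missing $d"
    done
    if [ -z "$missing" ]; then echo "OK"; else echo "missing:$missing"; fi
}

# The master node's listing from above should contain all four of these:
check_daemons NameNode SecondaryNameNode ResourceManager HMaster <<'EOF'
3201 Jps
2281 SecondaryNameNode
2951 HQuorumPeer
2124 NameNode
2430 ResourceManager
3013 HMaster
EOF
# → OK
```

On a live node the same check is simply `jps | check_daemons NameNode SecondaryNameNode ResourceManager HMaster`.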

 


  Enter and exit the hbase shell exactly as in section 1: only HadoopMaster has the hbase command available, while on HadoopSlave1 and HadoopSlave2 it reports "command not found".

### Deleting the nodes HBase creates in ZooKeeper

When HBase uses its bundled ZooKeeper, ZooKeeper stores HBase metadata such as the address of the `-ROOT-` table, the `HMaster` address, and the `HRegionServer` registrations. If you need to delete these nodes, any of the following methods works.

#### Method 1: the ZooKeeper command-line client

1. **Start the ZooKeeper client.** From the HBase installation directory:
   ```bash
   $ bin/zkCli.sh
   ```
2. **List the current nodes** with `ls`:
   ```bash
   ls /
   ```
   This shows all children of the root path.
3. **Delete the target node.** Use `rmr` to delete recursively, e.g. `/hbase` and all of its children:
   ```bash
   rmr /hbase
   ```
   Note: this is irreversible; back up any data you need before running it.

#### Method 2: stop HBase and clear the ZooKeeper data

1. **Stop the HBase service.** Make sure both HBase and ZooKeeper have stopped. If HBase was not shut down cleanly you may see the error "no zookeeper to stop"; in that case check for and remove the stale ZooKeeper PID file manually:
   ```bash
   $ rm -f /var/hadoop/pids/zookeeper.pid
   ```
2. **Locate the ZooKeeper data directory.** Check the `hbase.zookeeper.property.dataDir` property in `hbase-site.xml`; by default the directory is `/tmp/zookeeper`.
3. **Empty the data directory:**
   ```bash
   $ rm -rf /tmp/zookeeper/*
   ```

#### Method 3: delete the node programmatically

Nodes can also be deleted from code, using the ZooKeeper Java API or a Python library such as `kazoo`. An example with the Java API:

```java
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.KeeperException;

public class ZKDeleteNode {
    public static void main(String[] args) throws Exception {
        String connectString = "localhost:2181"; // ZooKeeper address
        int sessionTimeout = 3000;               // session timeout (ms)
        ZooKeeper zk = new ZooKeeper(connectString, sessionTimeout, event -> {});
        try {
            // delete() is not recursive: it succeeds only if /hbase has no
            // children; -1 skips the version check.
            zk.delete("/hbase", -1);
            System.out.println("Node deleted successfully.");
        } catch (KeeperException.NoNodeException e) {
            System.out.println("Node does not exist.");
        } finally {
            zk.close();
        }
    }
}
```

#### Notes

- If HBase uses an external ZooKeeper ensemble, set `hbase.zookeeper.quorum` in `hbase-site.xml` to the correct list of ZooKeeper addresses.
- Deleting these nodes discards HBase's metadata; it is re-created when HBase is next started.