For the groundwork (OS, Hadoop HA), refer to the Hive setup steps in the previous post.
|       | NN | DN | ZK | Master    | Regionserver |
|-------|----|----|----|-----------|--------------|
| node1 | 1  |    |    | 1         |              |
| node2 | 1  | 1  | 1  |           | 1            |
| node3 |    | 1  | 1  |           | 1            |
| node4 |    | 1  | 1  |           | 1            |
| node5 |    |    |    | 1(backup) |              |
-
hosts, iptables, and network setup
Clone a basic CentOS image directly as node5
-
Time sync (node1-5)
yum -y install ntp
# install the NTP package
ntpdate ntp1.aliyun.com
# sync the clock against an Aliyun NTP server
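The sync above has to run on every node; a small helper can do it in one pass from node1. This is a hedged sketch (the `sync_all` name and the `SSH` dry-run switch are my own, not from the original notes):

```shell
# sync_all: run ntpdate against the Aliyun NTP server on node1..node5.
# SSH defaults to ssh; set SSH=echo to just print the commands (dry run).
SSH="${SSH:-ssh}"
sync_all() {
  for i in 1 2 3 4 5; do
    "$SSH" "node$i" "ntpdate ntp1.aliyun.com"
  done
}
```

This assumes passwordless SSH (set up below) already works from the node you run it on.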
-
JDK (node5; already installed on node1-4) and environment variables in /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_67
export HADOOP_HOME=/opt/home/hadoop-2.6.5
export PATH=$PATH:$JAVA_HOME/bin
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Then copy the Hadoop install to node5
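The copy step can be sketched like this, using the paths from this post. The `push_hadoop` helper and the `SCP` dry-run switch are illustrative; copying /etc/profile wholesale is just one option, you can equally edit node5's profile by hand:

```shell
# push_hadoop: copy the Hadoop install and the profile to node5.
# SCP defaults to scp; set SCP=echo to just print the commands (dry run).
SCP="${SCP:-scp}"
push_hadoop() {
  "$SCP" -r /opt/home/hadoop-2.6.5 node5:/opt/home/
  "$SCP" /etc/profile node5:/etc/profile
}
```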
-
Passwordless SSH login (node5 → node1-4, node1 → node5)
ssh-keygen
ssh-copy-id -i /root/.ssh/id_rsa.pub node1
# repeat for each target node
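Installing the key on every target can be looped. A sketch (the `push_key` name and `SSH_COPY` dry-run switch are assumptions of mine; node1 → node5 is done the same way in the other direction):

```shell
# push_key: install node5's public key on node1..node4 so node5 can
# log in without a password. SSH_COPY=echo prints instead of copying.
SSH_COPY="${SSH_COPY:-ssh-copy-id}"
push_key() {
  for i in 1 2 3 4; do
    "$SSH_COPY" -i /root/.ssh/id_rsa.pub "node$i"
  done
}
```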
-
Start the ZooKeeper and Hadoop clusters
for i in {2..4};
do
ssh node$i "/opt/home/zookeeper-3.4.6/bin/zkServer.sh start";
done
start-dfs.sh
-
Upload and extract HBase (node1)
tar -zxvf hbase-0.98.12.1-hadoop2-bin.tar.gz
mv hbase-0.98.12.1-hadoop2 /opt/home/
Configure the HBase environment variables in /etc/profile
export HBASE_HOME=/opt/home/hbase-0.98.12.1-hadoop2
export PATH=$PATH:$HBASE_HOME/bin
Reload the environment variables
source /etc/profile
-
Edit the configuration files under conf/ (node1)
hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_67
export HBASE_MANAGES_ZK=false
hbase-site.xml
<property>
  <name>hbase.rootdir</name>
  <!-- mycluster is the HDFS HA nameservice defined in hdfs-site.xml;
       if one NameNode goes down, HBase operations continue through
       the other (formerly standby) NameNode -->
  <value>hdfs://mycluster:8020/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <!-- the nodes that run ZooKeeper -->
  <name>hbase.zookeeper.quorum</name>
  <value>node2,node3,node4</value>
</property>
regionservers (the hosts that run a RegionServer)
node2
node3
node4
backup-masters
This file does not exist under conf/ by default; create it and add the backup host:
vi backup-masters
node5
Copy hdfs-site.xml from hadoop-2.6.5/etc/hadoop into hbase-0.98.12.1-hadoop2/conf/ so HBase can resolve the HA nameservice
cp hdfs-site.xml /opt/home/hbase-0.98.12.1-hadoop2/conf/
-
Distribute
Copy HBase from node1 out to node2-5
scp -r ../hbase-0.98.12.1-hadoop2/ node4:/opt/home/
# likewise for node2, node3, node5
Update the environment variables on each node (/etc/profile) and source the file
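The single scp above only covers node4; a loop covers all targets in one go. A hedged sketch (the `push_hbase` name and `SCP` dry-run switch are mine):

```shell
# push_hbase: copy the HBase directory from node1 to node2..node5.
# SCP defaults to scp; set SCP=echo to just print the commands (dry run).
SCP="${SCP:-scp}"
push_hbase() {
  for i in 2 3 4 5; do
    "$SCP" -r /opt/home/hbase-0.98.12.1-hadoop2 "node$i:/opt/home/"
  done
}
```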
-
Overall startup
Start order: ZooKeeper → HDFS (i.e. start-dfs.sh) → HBase
This can be wrapped in a script:
#!/usr/bin/env bash
for i in {2..4};
do
ssh node$i "/opt/home/zookeeper-3.4.6/bin/zkServer.sh start";
done
start-dfs.sh
start-hbase.sh
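After startup it is worth checking that each role actually came up. A status sketch (the `cluster_status` name and `SSH` dry-run switch are assumptions; it assumes `jps` is on the PATH of each node):

```shell
# cluster_status: list the Java processes on every node, then print the
# ZooKeeper mode (leader/follower) on the ZK nodes.
# SSH defaults to ssh; set SSH=echo to just print the commands (dry run).
SSH="${SSH:-ssh}"
cluster_status() {
  for i in 1 2 3 4 5; do
    "$SSH" "node$i" "jps"
  done
  for i in 2 3 4; do
    "$SSH" "node$i" "/opt/home/zookeeper-3.4.6/bin/zkServer.sh status"
  done
}
```

Expected roles per the table at the top: HMaster on node1, backup HMaster on node5, HRegionServer and QuorumPeerMain on node2-4.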
-
Shutdown
Stop order: HBase → HDFS → ZooKeeper
#!/usr/bin/env bash
stop-hbase.sh
stop-dfs.sh
for i in {2..4};
do
ssh node$i "/opt/home/zookeeper-3.4.6/bin/zkServer.sh stop";
done
-
Summary
Because ZooKeeper is configured, when the current Master dies ZooKeeper fails over backup-master → Master; because HDFS runs in HA and hbase.rootdir points at the nameservice, when a NameNode dies HBase operations automatically continue through the other NameNode (the standby NameNode's HDFS only serves reads).
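A quick end-to-end check after everything is up is a small write/read in the hbase shell. The helper below just prints the commands for piping into `hbase shell` on node1; the table name `smoke` is an arbitrary example, not from the original notes:

```shell
# smoke_ddl: emit a minimal create/put/scan/drop sequence for `hbase shell`.
smoke_ddl() {
  cat <<'EOF'
create 'smoke', 'cf'
put 'smoke', 'r1', 'cf:a', 'v1'
scan 'smoke'
disable 'smoke'
drop 'smoke'
EOF
}
# Usage on node1 with the cluster running:
#   smoke_ddl | hbase shell
```

If the scan returns the row both before and after killing the active Master (or the active NameNode), the failover described in the summary is working.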