cluster1 and cluster2 form an HDFS federation: cluster1 has namenodeA and namenodeB, and cluster2 has namenodeC and namenodeD.
After installation and configuration, the startup order should be:
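For context, a two-nameservice federation like the one described above would be declared roughly as follows in hdfs-site.xml. This is a sketch, not a complete config: the host names and ports are placeholders inferred from the description, and the analogous `rpc-address` entries for the other three namenodes are omitted.

```xml
<!-- hdfs-site.xml (sketch): two federated nameservices, each an HA pair -->
<property>
  <name>dfs.nameservices</name>
  <value>cluster1,cluster2</value>
</property>
<property>
  <name>dfs.ha.namenodes.cluster1</name>
  <value>namenodeA,namenodeB</value>
</property>
<property>
  <name>dfs.ha.namenodes.cluster2</name>
  <value>namenodeC,namenodeD</value>
</property>
<property>
  <!-- placeholder host:port; repeat for namenodeB, namenodeC, namenodeD -->
  <name>dfs.namenode.rpc-address.cluster1.namenodeA</name>
  <value>namenodeA-host:8020</value>
</property>
```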
1. Start ZooKeeper on every node of the ZooKeeper ensemble: zkServer.sh start
2. On cluster1's namenodeA, initialize the HA state in ZooKeeper: hdfs zkfc -formatZK
3. On cluster2's namenodeC, initialize the HA state in ZooKeeper: hdfs zkfc -formatZK
4. Start the JournalNode daemon on every journal node: hadoop-daemon.sh start journalnode
5. On cluster1's namenodeA, format the namespace, passing the federation-wide cluster ID: hdfs namenode -format -clusterId cluster1
6. Start cluster1's namenodeA: hadoop-daemon.sh start namenode
7. On cluster1's namenodeB, copy namenodeA's metadata (bootstrap the standby): hdfs namenode -bootstrapStandby
8. Start cluster1's namenodeB: hadoop-daemon.sh start namenode
9. On cluster2's namenodeC, format the namespace with the SAME cluster ID used in step 5 (federated namenodes must share one cluster ID, otherwise datanodes cannot serve both nameservices): hdfs namenode -format -clusterId cluster1
10. Start cluster2's namenodeC: hadoop-daemon.sh start namenode
11. On cluster2's namenodeD, copy namenodeC's metadata (bootstrap the standby): hdfs namenode -bootstrapStandby
12. Start cluster2's namenodeD: hadoop-daemon.sh start namenode
13. Start zkfc on every namenode: hadoop-daemon.sh start zkfc
14. Start all datanodes: hadoop-daemon.sh start datanode
15. On the ResourceManager node, start YARN: start-yarn.sh
16. Verify HA: kill the active namenode with kill -9 <namenode-pid> and confirm the standby takes over as active.
17. Submit a test job to verify YARN.
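Steps 1–15 above can be sketched as a single driver script. This is only an illustration: the zk/jn/dn host names are placeholders I have invented, each command would really need to run on the right machine (here via ssh), and the manual verification steps 16–17 are deliberately left out. DRY_RUN defaults to printing the sequence instead of executing it.

```shell
#!/bin/sh
# Sketch of the federation+HA startup sequence. Hostnames are placeholders.
# DRY_RUN=1 (the default) prints each command instead of running it over ssh.
run() {
  host=$1; shift
  echo "[$host] $*"
  [ "${DRY_RUN:-1}" = "1" ] || ssh "$host" "$@"
}

startup_sequence() {
  for zk in zk1 zk2 zk3; do run "$zk" zkServer.sh start; done              # step 1
  run namenodeA hdfs zkfc -formatZK                                        # step 2
  run namenodeC hdfs zkfc -formatZK                                        # step 3
  for jn in jn1 jn2 jn3; do run "$jn" hadoop-daemon.sh start journalnode; done  # step 4
  run namenodeA hdfs namenode -format -clusterId cluster1                  # step 5 (shared clusterId)
  run namenodeA hadoop-daemon.sh start namenode                            # step 6
  run namenodeB hdfs namenode -bootstrapStandby                            # step 7
  run namenodeB hadoop-daemon.sh start namenode                            # step 8
  run namenodeC hdfs namenode -format -clusterId cluster1                  # step 9 (same clusterId as step 5)
  run namenodeC hadoop-daemon.sh start namenode                            # step 10
  run namenodeD hdfs namenode -bootstrapStandby                            # step 11
  run namenodeD hadoop-daemon.sh start namenode                            # step 12
  for nn in namenodeA namenodeB namenodeC namenodeD; do
    run "$nn" hadoop-daemon.sh start zkfc                                  # step 13
  done
  for dn in dn1 dn2 dn3; do run "$dn" hadoop-daemon.sh start datanode; done  # step 14
  run resourcemanager start-yarn.sh                                        # step 15
}
```

Keeping the sequence in one place makes the ordering constraints (JournalNodes before the first format, bootstrapStandby before starting a standby) explicit and repeatable.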
Reference: the first detailed Chinese-language tutorial on configuring Hadoop 2 automatic HA + Federation + YARN.