Original content; please credit the source when reposting: http://agilestyle.iteye.com/blog/2291823
Prerequisites:
Changing the Linux hostnames, configuring the IPs, setting up passwordless SSH, installing the JDK, configuring environment variables, and so on are all omitted here.
Cluster plan:
| hostname    | IP            | Software | Process         |
| ----------- | ------------- | -------- | --------------- |
| hadoop-0000 | 192.168.5.200 | JDK      | NameNode        |
| hadoop-0001 | 192.168.5.201 | JDK      | NameNode        |
| hadoop-0002 | 192.168.5.202 | JDK      | ResourceManager |
| hadoop-0003 | 192.168.5.203 | JDK      | ResourceManager |
| hadoop-0004 | 192.168.5.204 | JDK      | DataNode        |
| hadoop-0005 | 192.168.5.205 | JDK      | DataNode        |
| hadoop-0006 | 192.168.5.206 | JDK      | DataNode        |
Installation steps:
1. Set up the ZooKeeper cluster on hadoop-0004, hadoop-0005, and hadoop-0006 (installation and configuration omitted here)
2. Configure the Hadoop cluster
Add the environment variables (after saving and exiting, the file needs to be scp'd to the other six machines; a sketch of that copy follows after this block):
vi ~/.bashrc
#setup Java & Hadoop environment
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_77
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.4
export HIVE_HOME=/home/hadoop/app/apache-hive-1.2.1-bin
export HBASE_HOME=/home/hadoop/app/hbase-1.1.4
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper-3.4.8
export PATH=$PATH:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${HIVE_HOME}/bin:${HBASE_HOME}/bin:${ZOOKEEPER_HOME}/bin
source ~/.bashrc
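A minimal sketch of pushing the updated ~/.bashrc to the other six machines (assuming the hostnames from the cluster plan and the same hadoop user on every node):

for host in hadoop-0001 hadoop-0002 hadoop-0003 hadoop-0004 hadoop-0005 hadoop-0006; do
    # takes effect on the next login, or after running source ~/.bashrc on that node
    scp ~/.bashrc ${host}:~/
done

The Hadoop configuration files edited below all live under $HADOOP_HOME/etc/hadoop.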
vi hadoop-env.sh
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_77

vi core-site.xml
<configuration>
    <!-- Default filesystem URI; ns1 is the logical nameservice defined in hdfs-site.xml -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1/</value>
    </property>
    <!-- Base directory for Hadoop's temporary and metadata files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/hadoop-2.6.4/tmp</value>
    </property>
    <!-- ZooKeeper quorum used for automatic NameNode failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop-0004:2181,hadoop-0005:2181,hadoop-0006:2181</value>
    </property>
</configuration>
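A quick hedged sanity check: hdfs getconf only reads the local configuration, so it can be run before any daemon is started and should echo these values back.

hdfs getconf -confKey fs.defaultFS          # expect hdfs://ns1/
hdfs getconf -confKey ha.zookeeper.quorum   # expect the three ZooKeeper hosts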

vi hdfs-site.xml
<configuration>
    <!-- Logical name of the nameservice; must match fs.defaultFS in core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- The two NameNodes under ns1 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC and HTTP addresses of nn1 (hadoop-0000) -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop-0000:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop-0000:50070</value>
    </property>
    <!-- RPC and HTTP addresses of nn2 (hadoop-0001) -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop-0001:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop-0001:50070</value>
    </property>
    <!-- JournalNode quorum that stores the shared edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop-0004:8485;hadoop-0005:8485;hadoop-0006:8485/ns1</value>
    </property>
    <!-- Local directory where each JournalNode keeps its edits -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/app/hadoop-2.6.4/journaldata</value>
    </property>
    <!-- Enable automatic failover via ZKFC -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Class HDFS clients use to locate the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods: try sshfence, then fall back to a no-op shell command -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- SSH private key used by the sshfence method -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- sshfence connection timeout in milliseconds -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
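Once this file is in place, a hedged way to confirm the nameservice resolves to both machines is:

hdfs getconf -namenodes    # expect: hadoop-0000 hadoop-0001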

cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
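After the whole cluster is up (see the startup section below), MapReduce-on-YARN can be verified with the bundled examples jar; a hedged example, assuming the standard 2.6.4 binary layout:

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar pi 5 10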

vi yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Logical id of the RM HA cluster -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- The two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop-0002</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop-0003</value>
    </property>
    <!-- ZooKeeper quorum used for ResourceManager leader election -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop-0004:2181,hadoop-0005:2181,hadoop-0006:2181</value>
    </property>
    <!-- Auxiliary shuffle service required by MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
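With both ResourceManagers running (see the startup section below), a hedged check of which one is currently active:

yarn rmadmin -getServiceState rm1    # prints active or standby
yarn rmadmin -getServiceState rm2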

vi slaves
hadoop-0004
hadoop-0005
hadoop-0006

Once the configuration is done, scp -r the hadoop directory on hadoop-0000 to the other six machines (a sketch follows):
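A minimal sketch of that copy, assuming the installation lives at /home/hadoop/app/hadoop-2.6.4 on every node:

for host in hadoop-0001 hadoop-0002 hadoop-0003 hadoop-0004 hadoop-0005 hadoop-0006; do
    scp -r /home/hadoop/app/hadoop-2.6.4 ${host}:/home/hadoop/app/
done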
Startup:
For the very first startup after configuration, follow the steps below strictly in order; tested this way, nothing goes wrong.
Start the ZooKeeper cluster (on hadoop-0004, hadoop-0005, and hadoop-0006 respectively):
zkServer.sh start
Check the status: there should be one leader and two followers.
zkServer.sh status
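A hedged shortcut for checking all three from hadoop-0000 (it assumes the PATH exports in ~/.bashrc are picked up by non-interactive ssh shells; otherwise just run zkServer.sh status on each node):

for host in hadoop-0004 hadoop-0005 hadoop-0006; do
    echo "== ${host} =="
    ssh ${host} zkServer.sh status    # one node reports Mode: leader, the other two Mode: follower
done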
Start the JournalNodes (on hadoop-0004, hadoop-0005, and hadoop-0006 respectively):
hadoop-daemon.sh start journalnode
Run jps: hadoop-0004, hadoop-0005, and hadoop-0006 should each now show an additional JournalNode process.
Format HDFS (run on hadoop-0000):
hdfs namenode -format
Formatting generates files under the hadoop.tmp.dir configured in core-site.xml, here /home/hadoop/app/hadoop-2.6.4/tmp. Copy /home/hadoop/app/hadoop-2.6.4/tmp to /home/hadoop/app/hadoop-2.6.4/ on hadoop-0001 (run the following from /home/hadoop/app/hadoop-2.6.4 on hadoop-0000):
scp -r tmp/ hadoop-0001:/home/hadoop/app/hadoop-2.6.4/
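A hedged alternative to copying the tmp directory is to bootstrap the standby over the network; use one approach or the other, not both:

# on hadoop-0000: bring up the freshly formatted NameNode first
hadoop-daemon.sh start namenode
# on hadoop-0001: pull the initial metadata from the running NameNode
hdfs namenode -bootstrapStandby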
Format the ZKFC znode in ZooKeeper (run on hadoop-0000):
hdfs zkfc -formatZK
Start HDFS (run on hadoop-0000):
start-dfs.sh
Start YARN (run on hadoop-0002):
start-yarn.sh
In addition, run the following on hadoop-0003:
yarn-daemon.sh start resourcemanager
Testing: open the following pages in a browser.
http://hadoop-0000:50070/

http://hadoop-0001:50070/

http://hadoop-0002:8088/

http://hadoop-0003:8088/
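One NameNode page should report itself as active and the other as standby; likewise only one ResourceManager is active (the standby RM's web UI typically redirects to the active one). A hedged smoke test from any node, using /etc/profile as an arbitrary sample file:

hdfs haadmin -getServiceState nn1    # active or standby
hdfs haadmin -getServiceState nn2
hdfs dfs -mkdir -p /test
hdfs dfs -put /etc/profile /test
hdfs dfs -ls /test

To exercise failover, kill the NameNode process on the active node (find its pid with jps) and refresh the other NameNode's page; it should become active within a few seconds.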

For later restarts of the cluster, you can first run start-dfs.sh on hadoop-0000 (if ZooKeeper is not already running, start it on hadoop-0004, hadoop-0005, and hadoop-0006 first),

then start-yarn.sh on hadoop-0002,

followed by yarn-daemon.sh start resourcemanager on hadoop-0003,

and finally run jps on hadoop-0004, hadoop-0005, and hadoop-0006 to verify.
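A hedged way to do that check for every node at once from hadoop-0000 (again assuming jps is on the PATH for non-interactive ssh shells):

for host in hadoop-0000 hadoop-0001 hadoop-0002 hadoop-0003 hadoop-0004 hadoop-0005 hadoop-0006; do
    echo "== ${host} =="
    ssh ${host} jps
done

Roughly expected: NameNode and DFSZKFailoverController on hadoop-0000/0001, ResourceManager on hadoop-0002/0003, and DataNode, NodeManager, JournalNode and QuorumPeerMain on hadoop-0004/0005/0006.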

This article walked through building a highly available Hadoop cluster on seven Linux servers, covering the ZooKeeper cluster setup, the Hadoop configuration, environment variables, the role assignment for each node, and the startup and verification steps.