Hadoop pre-installation setup
(on every node)
/etc/hostname: set the hostname (hadoop000/hadoop001/hadoop002)
/etc/hosts: map each IP to its hostname
192.168.100.235 hadoop000
192.168.100.236 hadoop001
192.168.100.237 hadoop002
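A sketch of generating the mapping above as a snippet file, then appending it to /etc/hosts on each node (IPs are the ones listed above; hosts.snippet is just a scratch file name):

```shell
# Write the cluster's ip -> hostname mapping to a scratch file; append it
# on each node with e.g.: cat hosts.snippet | sudo tee -a /etc/hosts
cat > hosts.snippet <<'EOF'
192.168.100.235 hadoop000
192.168.100.236 hadoop001
192.168.100.237 hadoop002
EOF
cat hosts.snippet
```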
(on every node) SSH
Passwordless SSH login: ssh-keygen -t rsa
On hadoop000, copy the public key to every node (itself included):
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop000
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop001
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop002
JDK installation
Hadoop deployment
Unpack the tarball --- configure environment variables
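A minimal sketch of this step, assuming the tarball sits in the current directory and Hadoop is installed under /app (the path used by the configs below); the JDK path /app/jdk1.8.0 is a placeholder for wherever your JDK actually landed:

```shell
# Unpack Hadoop under /app (skipped if the tarball is not present here).
if [ -f hadoop-2.6.0.tar.gz ]; then
  tar -zxf hadoop-2.6.0.tar.gz -C /app
fi
# Environment variables to append to /etc/profile (then: source /etc/profile).
# JAVA_HOME below is an assumed JDK location -- adjust to yours.
cat > profile.snippet <<'EOF'
export JAVA_HOME=/app/jdk1.8.0
export HADOOP_HOME=/app/hadoop-2.6.0
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
EOF
cat profile.snippet
```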
1) hadoop-env.sh
JAVA_HOME
2) yarn-env.sh
JAVA_HOME
3) core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop000:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/app/hadoop-2.6.0/tmp</value>
</property>
4) hdfs-site.xml
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/app/hadoop-2.6.0/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/app/hadoop-2.6.0/tmp/dfs/data</value>
</property>
5) yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop000</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
6) mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
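The Hadoop 2.6.0 distribution ships only mapred-site.xml.template; create mapred-site.xml from it before adding the property above (the conf path assumes the /app install used throughout; the copy is a no-op elsewhere):

```shell
# Create mapred-site.xml from the bundled template, once.
conf="${HADOOP_CONF_DIR:-/app/hadoop-2.6.0/etc/hadoop}"
if [ -d "$conf" ] && [ ! -f "$conf/mapred-site.xml" ]; then
  cp "$conf/mapred-site.xml.template" "$conf/mapred-site.xml"
fi
echo "conf dir: $conf"
```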
7) slaves
hadoop000
hadoop001
hadoop002
Distribute Hadoop to the other nodes: scp -r
Distribute the environment variables as well
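Both distribution steps can be sketched as one loop run from hadoop000, shown here as a dry run (drop the leading echo to execute; this assumes the environment variables live in /etc/profile, and copying that file needs root on the targets):

```shell
# Dry run: print the scp commands that would push the Hadoop directory
# and the environment file from hadoop000 to the other nodes.
for n in hadoop001 hadoop002; do
  echo scp -r /app/hadoop-2.6.0 "$n":/app/
  echo scp /etc/profile "$n":/etc/profile
done
```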
Format the NameNode (run once, on hadoop000 only): hdfs namenode -format (the older "hadoop namenode -format" still works but is deprecated)
Start the cluster: start-all.sh (deprecated in 2.x; start-dfs.sh followed by start-yarn.sh is the preferred equivalent)
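After startup, each node can be sanity-checked with jps; a dry-run sketch (remove the echo to actually query the nodes over ssh). Expected Hadoop 2.x daemons: on hadoop000 (master, and also a slave per the slaves file) NameNode, SecondaryNameNode, ResourceManager, DataNode, NodeManager; on hadoop001/hadoop002 just DataNode and NodeManager.

```shell
# Dry run: print the jps check for every node in the cluster.
for n in hadoop000 hadoop001 hadoop002; do
  echo ssh "$n" jps
done
```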