Hadoop-2.5.1 Distributed Cluster Setup
OS: CentOS 7
IP addresses of the three machines:
master 192.168.192.11
slave1 192.168.192.12
slave2 192.168.192.13
I. CentOS 7 Environment Setup
Set the hostname
master
$ hostnamectl set-hostname master
slave1
$ hostnamectl set-hostname slave1
slave2
$ hostnamectl set-hostname slave2
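To confirm the change took effect on each machine, check the reported static hostname:
$ hostnamectl status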
Grant the hadoop user sudo privileges
$ su root
$ chmod -v u+w /etc/sudoers
$ vi /etc/sudoers
Below the line "root ALL=(ALL) ALL", add:
hadoop ALL=(ALL) ALL
$ chmod -v u-w /etc/sudoers
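To verify the new entry, switch back to the hadoop user and run a command through sudo; it should execute as root:
$ su hadoop
$ sudo whoami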
Edit the /etc/hosts file
$ sudo vi /etc/hosts
Change it to the following:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.192.11 master
192.168.192.12 slave1
192.168.192.13 slave2
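A quick sanity check on each machine is to ping the other nodes by name (assuming ICMP is not blocked between them):
$ ping -c 1 slave1
$ ping -c 1 slave2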
Synchronize the cluster clocks
Set the time zone:
$ timedatectl set-timezone Asia/Shanghai
Set the time:
$ timedatectl set-time "YYYY-MM-DD HH:MM:SS"
Note: replace YYYY-MM-DD HH:MM:SS with a concrete time, e.g. 2016-01-07 11:11:11
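You can confirm the time zone and clock with:
$ timedatectl status
For ongoing synchronization, CentOS 7 also ships chronyd/NTP; setting the clock by hand as above is only the bare minimum for a small test cluster.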
II. Installing the JDK
1. Check for a preinstalled JDK
$ java -version
2. If one exists, remove the bundled OpenJDK packages
$ rpm -qa|grep java
javapackages-tools-3.4.1-6.el7_0.noarch
python-javapackages-3.4.1-6.el7_0.noarch
java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
tzdata-java-2015a-1.el7.noarch
java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
$ sudo yum -y remove java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
$ sudo yum -y remove java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
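Re-running the query should now show that only the noarch tool packages remain:
$ rpm -qa | grep java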
3. Install the JDK
Extract the JDK archive:
$ tar -zxvf jdk-7u75-linux-x64.gz
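The JAVA_HOME below assumes the JDK lives under /home/hadoop/java, so move the extracted directory there:
$ mkdir -p ~/java
$ mv jdk1.7.0_75 ~/java/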
Edit /etc/profile:
$ sudo vi /etc/profile
Append the following:
export JAVA_HOME=/home/hadoop/java/jdk1.7.0_75
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
Reload the profile:
$ source /etc/profile
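Verify the installation; the output should report version 1.7.0_75:
$ java -version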
4. Configure passwordless SSH
Disable the firewall:
$ sudo systemctl stop firewalld.service    # stop firewalld
$ sudo systemctl disable firewalld.service # keep it from starting at boot
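To confirm the firewall is stopped:
$ systemctl status firewalld.service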
Generate a key pair:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Append the public key to the authorized keys file:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Restrict permissions on .ssh and the authorized keys file:
$ chmod 700 .ssh/
$ chmod 600 .ssh/authorized_keys
Note: perform all of the above steps on each of the three machines.
Enable passwordless login between the three machines
Copy each machine's public key to the other two (run the following from ~/.ssh):
master
$ scp authorized_keys hadoop@slave1:~/.ssh/authorized_keys_from_master
$ scp authorized_keys hadoop@slave2:~/.ssh/authorized_keys_from_master
slave1
$ scp authorized_keys hadoop@master:~/.ssh/authorized_keys_from_slave1
$ scp authorized_keys hadoop@slave2:~/.ssh/authorized_keys_from_slave1
slave2
$ scp authorized_keys hadoop@master:~/.ssh/authorized_keys_from_slave2
$ scp authorized_keys hadoop@slave1:~/.ssh/authorized_keys_from_slave2
Merge the received public keys on each machine (again from ~/.ssh):
master
$ cat authorized_keys_from_slave1 >> authorized_keys
$ cat authorized_keys_from_slave2 >> authorized_keys
slave1
$ cat authorized_keys_from_master >> authorized_keys
$ cat authorized_keys_from_slave2 >> authorized_keys
slave2
$ cat authorized_keys_from_slave1 >> authorized_keys
$ cat authorized_keys_from_master >> authorized_keys
Note: at this point the three machines can log in to one another without a password.
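You can verify that login no longer prompts for a password; for example, from master:
$ ssh slave1 hostname
$ ssh slave2 hostname
Each command should print the remote hostname without any password prompt.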
III. Hadoop-2.5.1 Configuration
Extract the archive:
$ tar -zxvf hadoop-2.5.1.tar.gz
Create the working directories under /home/hadoop; note they must exist on every machine:
$ mkdir dfs/
$ mkdir dfs/name
$ mkdir dfs/data
$ mkdir tmp/
Edit the configuration files (all under hadoop-2.5.1/etc/hadoop/):
File 1: hadoop-env.sh
Set JAVA_HOME to /home/hadoop/java/jdk1.7.0_75
File 2: yarn-env.sh
Set JAVA_HOME to /home/hadoop/java/jdk1.7.0_75
File 3: slaves (this file lists all slave nodes; master is included here, so it also runs a DataNode and NodeManager)
Contents:
master
slave1
slave2
File 4: core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:8020</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/tmp</value>
</property>
<property>
<name>hadoop.proxyuser.master.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.master.groups</name>
<value>*</value>
</property>
</configuration>
File 5: hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
File 6: mapred-site.xml (Hadoop 2.5.1 ships only mapred-site.xml.template by default; copy it to mapred-site.xml first)
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
File 7: yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
Copy the configured Hadoop directory from master to the other two nodes:
$ scp -r /home/hadoop/hadoop-2.5.1 hadoop@slave1:~/
$ scp -r /home/hadoop/hadoop-2.5.1 hadoop@slave2:~/
Edit /etc/profile on each machine:
$ sudo vi /etc/profile
Add or modify the following:
export HADOOP_HOME=/home/hadoop/hadoop-2.5.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
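Reload the profile and check that the hadoop command is on the PATH:
$ source /etc/profile
$ hadoop version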
Format the NameNode (run once, on master)
$ hdfs namenode -format
Start the cluster
$ cd /home/hadoop/hadoop-2.5.1/sbin
$ ./start-dfs.sh
jps output on each machine:
Master:
3630 DataNode
3968 Jps
3514 NameNode
3838 SecondaryNameNode
slave1 and slave2:
3253 DataNode
3332 Jps
$ ./start-yarn.sh
jps output on each machine:
Master:
3630 DataNode
4130 NodeManager
4018 ResourceManager
3514 NameNode
4272 Jps
3838 SecondaryNameNode
slave1 and slave2:
3511 Jps
3253 DataNode
3399 NodeManager
At this point the Hadoop-2.5.1 cluster is up. Visit http://master:8088 (the YARN ResourceManager web UI) to check node status; the HDFS NameNode UI is at http://master:50070.
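As a final smoke test, you can run the bundled MapReduce example job (a minimal check; the jar path below is the default location inside the Hadoop 2.5.1 distribution):
$ cd /home/hadoop/hadoop-2.5.1
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar pi 2 10
If the job completes and prints an estimated value of Pi, HDFS and YARN are working end to end.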