I. Host Planning
1. Prepare four Ubuntu 14.04 64-bit virtual machines: one acts as the resourcemanager and namenode, and the other three act as nodemanagers and datanodes. Because the hosts need passwordless SSH access to one another, static IP addresses are used. The plan is as follows:
namenode   IP: 192.168.1.110
datanode1  IP: 192.168.1.111
datanode2  IP: 192.168.1.112
datanode3  IP: 192.168.1.113
Edit the hostname and hosts file on each machine:
$ sudo vim /etc/hostname

$ sudo vim /etc/hosts
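A minimal /etc/hosts matching the plan above (the same entries can be used on all four machines):
192.168.1.110   namenode
192.168.1.111   datanode1
192.168.1.112   datanode2
192.168.1.113   datanode3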

2. Create the user group and user
$ sudo groupadd cluster
$ sudo useradd -m -s /bin/bash -g cluster -G sudo hadoop
$ sudo passwd hadoop
$ sudo usermod -a -G adm hadoop
$ sudo usermod -a -G sudo hadoop
Log out and log back in as the hadoop user, mainly so that the configuration files can be edited conveniently later with the gedit editor.
3. Install SSH and set up passwordless access by running the following commands in order:
$ sudo apt-get install ssh
$ sudo apt-get install rsync
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
4. Allow the namenode passwordless access to datanode1
On datanode1, change into /home/hadoop/.ssh and run:
$ scp hadoop@namenode:/home/hadoop/.ssh/id_dsa.pub ./namenode_dsa.pub
$ cat namenode_dsa.pub >> authorized_keys

Then, on the namenode, run:
$ ssh hadoop@datanode1    (a password is required the first time; after that, access is passwordless)

Repeat the same steps for datanode2 and datanode3, or use the loop sketched below.
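Alternatively, ssh-copy-id performs the same copy-and-append from the namenode in one step per host (entering each password once):
$ for h in datanode1 datanode2 datanode3; do ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@$h; done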
II. Installing the JDK and hadoop-2.6.0
1. Install the JDK
Download jdk-8u25-linux-x64.tar.gz from the Oracle website, copy it to the /usr directory, and run: $ sudo tar -zxf /usr/jdk-8u25-linux-x64.tar.gz -C /usr
2. Install hadoop-2.6.0
Download hadoop-2.6.0.tar.gz from http://hadoop.apache.org/, copy it to the /home/hadoop directory, and run: $ tar -zxf /home/hadoop/hadoop-2.6.0.tar.gz -C /home/hadoop
3. Configure environment variables
Run $ sudo vim /etc/profile and append the following at the end of the file:
export JAVA_HOME=/usr/jdk1.8.0_25
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_PREFIX=/home/hadoop/hadoop-2.6.0
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.6.0/etc/hadoop
export HADOOP_YARN_HOME=/home/hadoop/hadoop-2.6.0
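Reload the profile and verify both installs before continuing:
$ source /etc/profile
$ java -version
$ $HADOOP_PREFIX/bin/hadoop version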

III. Configuring Hadoop
1. Configure /home/hadoop/hadoop-2.6.0/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description>Size of read/write buffer used in SequenceFiles.</description>
  </property>
</configuration>
2. Configure /home/hadoop/hadoop-2.6.0/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hdfs/name</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
</configuration>
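HDFS can usually create these directories itself, but creating them up front on the appropriate hosts avoids permission surprises (paths taken from the values above):
$ mkdir -p /home/hadoop/hdfs/name    # on the namenode
$ mkdir -p /home/hadoop/hdfs/data    # on each datanode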
3. Configure /home/hadoop/hadoop-2.6.0/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.acl.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.admin.acl</name>
    <value>*</value>
    <description>ACL to set admins on the cluster. ACLs take the form of a comma-separated list of users, a space, then a comma-separated list of groups. Defaults to the special value of *, which means anyone. The special value of just a space means no one has access.</description>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>false</value>
    <description>Configuration to enable or disable log aggregation.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.1.110:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.1.110:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.1.110:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.1.110:8034</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.1.110:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.1.110</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>256</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>6144</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>6144</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>20</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/hadoop/comma</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/hadoop/log</value>
  </property>
  <property>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>10800</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>-1</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>-1</value>
  </property>
</configuration>
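Likewise for the nodemanager's local and log directories; create them on every datanode (paths taken from the values above):
$ mkdir -p /home/hadoop/comma /home/hadoop/log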
4. Configure /home/hadoop/hadoop-2.6.0/etc/hadoop/mapred-site.xml
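In the stock hadoop-2.6.0 distribution this file does not exist yet; create it from the shipped template first, then edit it as follows:
$ cp $HADOOP_CONF_DIR/mapred-site.xml.template $HADOOP_CONF_DIR/mapred-site.xml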
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1536</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024M</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2560M</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>100</value>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>50</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.1.110:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.1.110:19888</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/mr-history/tmp</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/mr-history/done</value>
  </property>
</configuration>
5. Configure /home/hadoop/hadoop-2.6.0/etc/hadoop/slaves by listing each datanode's hostname or IP address, one per line:
datanode1    (or 192.168.1.111)
datanode2    (or 192.168.1.112)
datanode3    (or 192.168.1.113)
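The JDK and the configured hadoop-2.6.0 tree must also exist on every datanode (this guide assumes the same paths on every host). A minimal sketch using the rsync installed earlier to push the Hadoop tree from the namenode:
$ for h in datanode1 datanode2 datanode3; do rsync -a /home/hadoop/hadoop-2.6.0/ hadoop@$h:/home/hadoop/hadoop-2.6.0/; done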
6. Start Hadoop
Start the following processes on the namenode host.
Format the HDFS filesystem:
$ $HADOOP_PREFIX/bin/hdfs namenode -format
Start the namenode:
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
Start the resourcemanager:
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
Start the MapReduce JobHistory Server:
$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
Use the jps command to check that everything is running:
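On the namenode, the output should look roughly like this (process IDs will differ):
$ jps
2768 NameNode
3012 ResourceManager
3275 JobHistoryServer
3419 Jps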

Then, on each datanode:
Start the datanode:
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
Start the nodemanager:
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
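jps on each datanode should then show something like (process IDs will differ):
$ jps
2301 DataNode
2487 NodeManager
2633 Jps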
7. Stopping the services
To stop any of these services, simply replace start with stop in the corresponding command above. For example, to stop the namenode:
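$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode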
Hadoop's running status can be checked from a browser at the following addresses:
NameNode http://namenode:50070
ResourceManager http://namenode:8088
MapReduce JobHistory Server http://namenode:19888
IV. Running a Test
1. Create input and output directories on HDFS:
$ $HADOOP_PREFIX/bin/hdfs dfs -mkdir /input
$ $HADOOP_PREFIX/bin/hdfs dfs -mkdir /output
This test uses the wordcount example that ships with Hadoop. Create a test file test.txt in the current directory and write some word text into it:
$ touch test.txt
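For example (any text will do):
$ echo "hello hadoop hello world" > test.txt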
$ $HADOOP_PREFIX/bin/hdfs dfs -copyFromLocal test.txt /input
Run the job:
$ $HADOOP_PREFIX/bin/hadoop jar $HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /input/ /output/result

View the results:
$ $HADOOP_PREFIX/bin/hdfs dfs -cat /output/result/*
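With the example test.txt above, the output would be:
hadoop	1
hello	2
world	1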

Web interface addresses
Once the Hadoop cluster is up, the status of its main components can be viewed at the following web pages:
Daemon                      | Web Interface     | Notes
NameNode                    | http://host:port/ | Default HTTP port is 50070.
ResourceManager             | http://host:port/ | Default HTTP port is 8088.
MapReduce JobHistory Server | http://host:port/ | Default HTTP port is 19888.