I recently spent some time learning how to install Hadoop. The steps are described in detail below.
I. Environment
My installation was done on Linux.
There are three servers, assigned as follows:
192.168.3.100 NameNode -- hostname testhadoop
192.168.3.201 DataNode1 -- hostname hadoopsub1
192.168.3.202 DataNode2 -- hostname hadoopsub2
The NameNode (master server) can be seen as the manager of the distributed file system; it is mainly responsible for the file system namespace, cluster configuration information, and the replication of storage blocks.
The DataNodes (slave servers) are the basic units of file storage: they store blocks on the local file system, keep the blocks' metadata, and periodically report all existing blocks to the NameNode.
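Once the cluster is up, this block/replica relationship can be inspected directly. A minimal sketch (assuming some file, here hypothetically /test/hosts, has already been uploaded to HDFS) that lists each block and the DataNodes holding its replicas:
# bin/hdfs fsck /test/hosts -files -blocks -locations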
1. Install the JDK
# rpm -ivh jdk-7u80-linux-x64.rpm
# vi /etc/profile
JAVA_HOME=/usr/java/jdk1.7.0_80
CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
# source /etc/profile
# java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
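As a quick sanity check (just a sketch; both commands should point at the JDK configured in /etc/profile above):
# echo $JAVA_HOME
# which java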
2. Configure SSH
We need to configure the SSH service so that the Hadoop environment can use passwordless SSH logins; that is, the NameNode must be able to SSH into the DataNodes without being prompted for a password.
On the NameNode server, run the following commands:
# cd ~
# cd .ssh/
# ssh-keygen -t rsa
Press Enter at every prompt. Two new files appear in the .ssh directory:
the private key id_rsa and the public key id_rsa.pub
# cp id_rsa.pub authorized_keys
Distribute the public key file authorized_keys to each DataNode:
# scp authorized_keys root@192.168.3.201:/root/.ssh/
# scp authorized_keys root@192.168.3.202:/root/.ssh/
Verify passwordless SSH login:
# ssh root@192.168.3.201
Last login: Fri Dec 11 15:52:52 2015 from 192.168.3.100
If you see output like the above, the configuration succeeded. If you are still prompted for a password, it failed.
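If the login still prompts for a password, a common cause is overly permissive file modes on the DataNode side, which makes sshd ignore authorized_keys. A hedged fix sketch (run on each DataNode, assuming the key was copied to /root/.ssh as above):
# chmod 700 /root/.ssh
# chmod 600 /root/.ssh/authorized_keys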
II. Download and Install Hadoop
Download a suitable Hadoop release from the official site (http://hadoop.apache.org/). I chose the relatively recent 2.6.2 release, hadoop-2.6.2.tar.gz. Upload the file to /usr/local on all three servers, change into that directory, and extract it:
# tar -zvxf hadoop-2.6.2.tar.gz
# mv hadoop-2.6.2 hadoop
# vi /etc/profile
JAVA_HOME=/usr/java/jdk1.7.0_80
HADOOP_HOME=/usr/local/hadoop
CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_HOME HADOOP_HOME CLASSPATH PATH
# source /etc/profile
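A quick check that the new environment took effect (only a sketch; the build details printed after the first line will differ):
# hadoop version
Hadoop 2.6.2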
Before configuring, create the following directories on the local file system: /usr/local/hadoop/tmp, /usr/local/hadoop/dfs/data, and /usr/local/hadoop/dfs/name (a sketch of the commands is given right after the list below). Seven configuration files are involved, all under the /hadoop/etc/hadoop directory (i.e. /usr/local/hadoop/etc/hadoop):
/hadoop/etc/hadoop/hadoop-env.sh
/hadoop/etc/hadoop/yarn-env.sh
/hadoop/etc/hadoop/slaves
/hadoop/etc/hadoop/core-site.xml
/hadoop/etc/hadoop/hdfs-site.xml
/hadoop/etc/hadoop/mapred-site.xml
/hadoop/etc/hadoop/yarn-site.xml
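A sketch of creating the directories mentioned above (the master uses dfs/name and the DataNodes use dfs/data, so creating all three everywhere is a simple, if slightly redundant, option):
# mkdir -p /usr/local/hadoop/tmp
# mkdir -p /usr/local/hadoop/dfs/name
# mkdir -p /usr/local/hadoop/dfs/data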
On the master server (192.168.3.100), change into the configuration directory: cd /usr/local/hadoop/etc/hadoop
Edit core-site.xml:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.3.100:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>4096</value>
</property>
</configuration>
Note: the two slave servers must also have core-site.xml modified as above.
The remaining configuration below applies only to the master server.
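To confirm that a property is actually being picked up from these files, hdfs getconf can be queried once the edits are saved (a sketch; it should echo back the value configured above):
# bin/hdfs getconf -confKey fs.defaultFS
hdfs://192.168.3.100:9000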
Edit hdfs-site.xml:
<configuration>
<property>
<name>dfs.nameservices</name>
<value>hadoop-cluster1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>192.168.3.100:50090</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///usr/local/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///usr/local/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<!-- Set this to the number of slave servers; I have two DataNodes here, so the value is 2 -->
<value>2</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
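One caveat before the next file: the Hadoop 2.6.x tarball normally ships only a template for it, so if mapred-site.xml is missing on your system, create it from the template first (run in the same configuration directory):
# cp mapred-site.xml.template mapred-site.xml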
Edit mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>192.168.3.100:50030</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>192.168.3.100:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>192.168.3.100:19888</value>
</property>
</configuration>
Edit yarn-site.xml:
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>192.168.3.100:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>192.168.3.100:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>192.168.3.100:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>192.168.3.100:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<!-- This is the port for accessing the Hadoop applications web UI -->
<value>192.168.3.100:8088</value>
</property>
</configuration>
Edit the slaves file:
192.168.3.201
192.168.3.202
Edit hadoop-env.sh:
export JAVA_HOME=/usr/java/jdk1.7.0_80
Edit yarn-env.sh:
export JAVA_HOME=/usr/java/jdk1.7.0_80
Edit /etc/hosts and add IP-to-hostname mappings for the DataNodes:
# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.3.201 hadoopsub1
192.168.3.202 hadoopsub2
3. Format the file system
# pwd
/usr/local/hadoop
# bin/hdfs namenode -format
Note: formatting the file system here is not a disk format; it only initializes and cleans up the dfs.namenode.name.dir and dfs.datanode.data.dir directories configured in the master's hdfs-site.xml.
[root@hadoopsub1 hadoop]# bin/hdfs namenode -format
[root@hadoopsub2 hadoop]# bin/hdfs namenode -format
Note: the two slave servers also need this cleanup step.
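If the format succeeded on the master, the name directory is populated with the NameNode metadata; a quick way to confirm (just a sketch, the exact file names can vary slightly by version):
# ls /usr/local/hadoop/dfs/name/current
VERSION  fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid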
4. Start and stop the services
# sbin/start-dfs.sh
# sbin/start-yarn.sh
or (start-all.sh is deprecated in Hadoop 2.x and simply runs the two scripts above):
# sbin/start-all.sh
# sbin/stop-dfs.sh
# sbin/stop-yarn.sh
or:
# sbin/stop-all.sh
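Note that the JobHistory server configured in mapred-site.xml (ports 10020/19888) is not started by the scripts above; if you want it, start it separately (a sketch, assuming the default sbin layout):
# sbin/mr-jobhistory-daemon.sh start historyserver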
5. Check the running processes
[root@testhadoop hadoop]# jps
3039 ResourceManager
3311 Jps
2806 NameNode
[root@hadoopsub ~]# jps
3151 Jps
2926 DataNode
3029 NodeManager
6. Check the cluster status
# ./bin/hdfs dfsadmin -report
15/12/11 16:36:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 52844687360 (49.22 GB)
Present Capacity: 46288113664 (43.11 GB)
DFS Remaining: 46288089088 (43.11 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 192.168.3.201:50010 (hadoopsub)
Hostname: hadoopsub
Decommission Status : Normal
Configured Capacity: 52844687360 (49.22 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 6556573696 (6.11 GB)
DFS Remaining: 46288089088 (43.11 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.59%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Dec 11 16:36:39 CST 2015
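As a simple smoke test that HDFS is writable (a sketch; /test and the uploaded file are just example names):
# bin/hdfs dfs -mkdir -p /test
# bin/hdfs dfs -put /etc/hosts /test/
# bin/hdfs dfs -ls /test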
III. Access via Browser
http://192.168.3.100:50070/
[img]http://dl2.iteye.com/upload/attachment/0113/7030/879ec626-715b-3c45-b5b2-ba94fd0a8662.jpg[/img]
[img]http://dl2.iteye.com/upload/attachment/0113/7638/e993e96b-5b2e-3195-816d-3034f05e3f43.jpg[/img]
http://192.168.3.100:8088/
[img]http://dl2.iteye.com/upload/attachment/0113/7032/4348dc86-5e07-3b65-aacf-1e92948942ea.jpg[/img]
A special note: the master's slaves file above is configured with IP addresses, so IP-to-hostname mappings for those addresses must be added to the master's /etc/hosts, as follows:
# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.3.201 hadoopsub1
192.168.3.202 hadoopsub2
Otherwise, when start-dfs.sh is executed, the DataNode on a slave server may log an error like the following:
2015-12-11 16:50:36,375 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-1748412339-192.168.3.201-1420015637155 (Datanode Uuid null) service to /192.168.3.202:9000 Datanode denied communication with namenode because hostname cannot be resolved (ip=192.168.3.201, hostname=192.168.3.201):
DatanodeRegistration(0.0.0.0, datanodeUuid=3ed21882-db82-462e-a71d-0dd52489d19e, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4237dee9-ea5e-4994-91c2-
008d9e804960;nsid=358861143;c=0)
Roughly, this means the IP address could not be resolved to a hostname, i.e. the hostname could not be obtained, so the mapping must be specified in /etc/hosts.
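If editing /etc/hosts is not an option, an alternative that I believe works on Hadoop 2.x (an assumption worth verifying, not part of the original setup) is to relax the NameNode's reverse-DNS check in hdfs-site.xml:
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>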
Note: during the whole setup, the operations required on the slave servers are: installing the JDK, editing /hadoop/etc/hadoop/hadoop-env.sh, /hadoop/etc/hadoop/yarn-env.sh and /hadoop/etc/hadoop/core-site.xml, plus running one command on each slave: # bin/hdfs namenode -format
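A convenient way to push the edited files from the master to the slaves is to copy them over SSH, which already works without a password at this point (a sketch; adjust the file list to whatever you actually changed):
# scp /usr/local/hadoop/etc/hadoop/core-site.xml root@192.168.3.201:/usr/local/hadoop/etc/hadoop/
# scp /usr/local/hadoop/etc/hadoop/core-site.xml root@192.168.3.202:/usr/local/hadoop/etc/hadoop/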