Installing Hadoop in Cluster Mode (Test Environment)
1. Create three CentOS virtual machines in VMware: server1, server2, and server3. server1 will serve as the Hadoop cluster's NameNode and JobTracker; server2 and server3 will be DataNodes and TaskTrackers. (In this walkthrough server1 is 192.168.1.201, server2 is 192.168.1.202, and server3 is 192.168.1.203.)
2. Install DNS
Install bind with yum (the exact command was not recorded; installing the whole bind* group produces the package set shown below):
[root@server1 admin]# yum -y install bind*
After installation, verify the packages:
[root@server1 admin]# rpm -qa | grep bind
bind-dyndb-ldap-1.1.0-0.9.b1.el6_3.1.x86_64
bind-chroot-9.8.2-0.10.rc1.el6_3.6.x86_64
bind-libs-9.8.2-0.10.rc1.el6_3.6.x86_64
bind-sdb-9.8.2-0.10.rc1.el6_3.6.x86_64
bind-utils-9.8.2-0.10.rc1.el6_3.6.x86_64
bind-devel-9.8.2-0.10.rc1.el6_3.6.x86_64
bind-9.8.2-0.10.rc1.el6_3.6.x86_64
All required packages are present.
Edit the configuration files.
In /etc/named.conf, replace 127.0.0.1 and localhost in the options block with any, so named listens on all interfaces and answers queries from the LAN (other entries in the block stay unchanged):
[root@server1 etc]# vi named.conf
options {
        listen-on port 53 { any; };
        allow-query { any; };
};
Add the forward and reverse zones to /etc/named.rfc1912.zones:
zone "myhadoop.com" IN {
        type master;
        file "myhadoop.com.zone";
};
zone "1.168.192.in-addr.arpa" IN {
        type master;
        file "1.168.192.in-addr.zone";
};
In the directory /var/named, create the files myhadoop.com.zone and 1.168.192.in-addr.zone.
Edit myhadoop.com.zone as follows (the SOA timer values shown are typical defaults):
$TTL 86400
@       IN SOA  server1.myhadoop.com. admin.myhadoop.com. (
                2013012701  ; serial
                3600        ; refresh
                1800        ; retry
                604800      ; expire
                86400 )     ; minimum
@       IN NS   server1.myhadoop.com.
server1.myhadoop.com.   IN A    192.168.1.201
server2.myhadoop.com.   IN A    192.168.1.202
server3.myhadoop.com.   IN A    192.168.1.203
Edit 1.168.192.in-addr.zone as follows (note the last octets map back to the three hosts):
$TTL 86400
@       IN SOA  server1.myhadoop.com. admin.myhadoop.com. (
                2013012701  ; serial
                3600        ; refresh
                1800        ; retry
                604800      ; expire
                86400 )     ; minimum
@       IN NS   server1.myhadoop.com.
201     IN PTR  server1.myhadoop.com.
202     IN PTR  server2.myhadoop.com.
203     IN PTR  server3.myhadoop.com.
Change the owner of both files so named can read them:
[root@server1 named]# chown named:named myhadoop.com.zone
[root@server1 named]# chown named:named 1.168.192.in-addr.zone
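Before starting named, the configuration and both zones can be validated; named-checkconf and named-checkzone ship with bind (zone names below match the files created above):
[root@server1 named]# named-checkconf /etc/named.conf
[root@server1 named]# named-checkzone myhadoop.com myhadoop.com.zone
[root@server1 named]# named-checkzone 1.168.192.in-addr.arpa 1.168.192.in-addr.zone
Each zone check prints the loaded serial and OK on success.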
Add the following to /etc/resolv.conf:
nameserver 192.168.1.201
Modify /etc/resolv.conf on server2 and server3 the same way.
Start the DNS server:
[root@server1 named]# service named start
Starting named:                                            [  OK  ]
Enable it at boot:
[root@server1 admin]# chkconfig named on
Test DNS resolution:
[root@server1 admin]# nslookup server1.myhadoop.com
Server:         192.168.1.201
Address:        192.168.1.201#53

Name:   server1.myhadoop.com
Address: 192.168.1.201
[root@server1 admin]# nslookup server2.myhadoop.com
Server:         192.168.1.201
Address:        192.168.1.201#53

Name:   server2.myhadoop.com
Address: 192.168.1.202
[root@server1 admin]# nslookup server3.myhadoop.com
Server:         192.168.1.201
Address:        192.168.1.201#53

Name:   server3.myhadoop.com
Address: 192.168.1.203
All three lookups succeed; the same tests also succeed from server2 and server3.
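The reverse zone can be spot-checked the same way; a PTR query for an address should return the hostname:
[root@server1 admin]# nslookup 192.168.1.202
If the reverse zone loaded correctly, nslookup reports name = server2.myhadoop.com.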
3. Install NFS
Check whether the NFS and rpcbind packages are installed:
[root@server1 admin]# rpm -qa | grep nfs
nfs4-acl-tools-0.3.3-5.el6.x86_64
nfs-utils-1.2.2-7.el6.x86_64
nfs-utils-lib-1.1.5-1.el6.x86_64
[root@server1 admin]# rpm -qa | grep rpcbind
rpcbind-0.2.0-8.el6.x86_64
Everything is already installed; if anything is missing, install it with yum.
Add the following export to /etc/exports:
/home/admin *(sync,rw)
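As an aside, if NFS is already running when /etc/exports changes, the export table can be re-applied without a restart:
[root@server1 admin]# exportfs -rv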
Start NFS:
[root@server1 admin]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Enable it at boot:
[root@server1 admin]# chkconfig nfs on
Start rpcbind:
[root@server1 admin]# service rpcbind start
Starting rpcbind:                                          [  OK  ]
Enable it at boot as well:
[root@server1 admin]# chkconfig rpcbind on
List the exported mount points:
[root@server1 admin]# showmount -e localhost
Export list for localhost:
/home/admin *
Change the permissions on /home/admin; for convenience, set them to 777:
[root@server1 home]# chmod 777 admin
Mount server1's /home/admin on server2:
[root@server2 home]# mkdir admin_share
[root@server2 home]# mount -t nfs server1.myhadoop.com:/home/admin /home/admin_share
Test access:
[root@server2 home]# cd admin_share/
[root@server2 admin_share]# cat test.txt
aaaa,111
bbbb,222
cccc,333
dddd,444
The share is accessible.
To make the mount persistent, add this line to server2's /etc/fstab:
server1.myhadoop.com:/home/admin /home/admin_share nfs defaults 0 0
Similarly, mount server1's /home/admin on server3 and test it.
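A sketch of the server3 side, mirroring server2 (the mount point name matches the one used above):
[root@server3 home]# mkdir admin_share
[root@server3 home]# mount -t nfs server1.myhadoop.com:/home/admin /home/admin_share
[root@server3 home]# cat /home/admin_share/test.txt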
4. Set up SSH passwordless login
Generate a login key pair for the admin user on each of server1, server2, and server3:
[admin@server1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/admin/.ssh/id_rsa):
/home/admin/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/admin/.ssh/id_rsa.
Your public key has been saved in /home/admin/.ssh/id_rsa.pub.
The key fingerprint is:
46:56:64:8f:83:13:e0:f3:17:cb:b9:7d:d5:fc:9f:52 admin@server1
Run the same command on server2 and server3:
[admin@server2 ~]$ ssh-keygen -t rsa
[admin@server3 ~]$ ssh-keygen -t rsa
On server1, append id_rsa.pub to authorized_keys:
[admin@server1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
On server2 and server3, symlink authorized_keys to the copy shared over NFS (which is server1's ~/.ssh/authorized_keys):
[admin@server2 ~]$ ln -s /home/admin_share/.ssh/authorized_keys ~/.ssh/authorized_keys
[admin@server3 ~]$ ln -s /home/admin_share/.ssh/authorized_keys ~/.ssh/authorized_keys
Append server2's and server3's public keys to authorized_keys (the writes go through the symlink to the shared file):
[admin@server2 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[admin@server3 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Test the setup:
[admin@server1 ~]$ ssh server1.myhadoop.com
The authenticity of host 'server1.myhadoop.com (192.168.1.201)' can't be established.
RSA key fingerprint is a9:f3:7f:55:56:3a:a7:d7:9e:23:1e:86:a5:eb:90:dc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server1.myhadoop.com,192.168.1.201' (RSA) to the list of known hosts.
Last login: Sun Jan 27 10:02:12 2013 from server1
Test logins to the other machines the same way; all succeed.
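A quick loop from server1 confirms passwordless login to every node; each iteration should print the remote hostname without a password prompt:
[admin@server1 ~]$ for h in server1 server2 server3; do ssh ${h}.myhadoop.com hostname; done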
5. Configure and start Hadoop
On server1, edit core-site.xml in Hadoop's conf directory; fs.default.name points HDFS clients at the NameNode (port 9000 here is a common choice, not mandatory):
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://server1.myhadoop.com:9000</value>
  </property>
</configuration>
Configure mapred-site.xml; mapred.job.tracker names the JobTracker (port 9001 is again conventional):
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>server1.myhadoop.com:9001</value>
  </property>
</configuration>
Set the masters file to:
server1.myhadoop.com
Set the slaves file to:
server2.myhadoop.com
server3.myhadoop.com
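Hadoop 0.20.2 also needs JAVA_HOME set in conf/hadoop-env.sh on every node; for example (the JDK path is only illustrative, use your actual install path):
# in /home/admin/hadoop-0.20.2/conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_38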
Create a text file serverlist.txt containing the domain names of every machine Hadoop must be distributed to, here server2 and server3:
[admin@server1 ~]$ cat serverlist.txt
server2.myhadoop.com
server3.myhadoop.com
Create a shell script, called scp.sh here, that copies Hadoop to every host in serverlist.txt (one way to generate it is sketched after this step). Its contents:
[admin@server1 ~]$ cat scp.sh
scp -rp /home/admin/hadoop-0.20.2/ admin@server2.myhadoop.com:/home/admin/
scp -rp /home/admin/hadoop-0.20.2/ admin@server3.myhadoop.com:/home/admin/
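One way to generate scp.sh from serverlist.txt rather than writing it by hand (a sketch; the script name is arbitrary):
[admin@server1 ~]$ awk '{print "scp -rp /home/admin/hadoop-0.20.2/ admin@" $1 ":/home/admin/"}' serverlist.txt > scp.sh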
Run the script:
[admin@server1 ~]$ sh scp.sh
Check on server2 (and server3) that the hadoop-0.20.2 directory was copied over.
Format the NameNode:
[admin@server1 logs]$ ~/hadoop-0.20.2/bin/hadoop namenode -format
Start Hadoop:
[admin@server1 ~]$ ~/hadoop-0.20.2/bin/start-all.sh
starting namenode, logging to /home/admin/hadoop-0.20.2/bin/../logs/hadoop-admin-namenode-server1.out
server2.myhadoop.com: starting datanode, logging to /home/admin/hadoop-0.20.2/bin/../logs/hadoop-admin-datanode-server2.out
server3.myhadoop.com: starting datanode, logging to /home/admin/hadoop-0.20.2/bin/../logs/hadoop-admin-datanode-server3.out
server1.myhadoop.com: starting secondarynamenode, logging to /home/admin/hadoop-0.20.2/bin/../logs/hadoop-admin-secondarynamenode-server1.out
starting jobtracker, logging to /home/admin/hadoop-0.20.2/bin/../logs/hadoop-admin-jobtracker-server1.out
server2.myhadoop.com: starting tasktracker, logging to /home/admin/hadoop-0.20.2/bin/../logs/hadoop-admin-tasktracker-server2.out
server3.myhadoop.com: starting tasktracker, logging to /home/admin/hadoop-0.20.2/bin/../logs/hadoop-admin-tasktracker-server3.out
Check server1, server2, and server3; all daemons started successfully:
[admin@server1 logs]$ jps
6481 NameNode
6612 SecondaryNameNode
6681 JobTracker
6749 Jps
[admin@server2 logs]$ jps
14869 TaskTracker
14917 Jps
14795 DataNode
[admin@server3 logs]$ jps
16354 TaskTracker
16396 Jps
16280 DataNode
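As a final check, dfsadmin shows whether both DataNodes have registered with the NameNode; the NameNode web UI (port 50070 by default in 0.20.2) shows the same information:
[admin@server1 ~]$ ~/hadoop-0.20.2/bin/hadoop dfsadmin -report
The report should list two live DataNodes.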