Hadoop Fully Distributed Environment Setup
Remotely log in to the three virtual machines on the server and set up the Hadoop environment on them.
Log in to 211.69.198.201 (ysx buzyyu **********), then log in to 202.114.10.151 (yqf *********).
The IPs of the three virtual machines: 10.0.5.11  ysx       (master)
                                       10.0.5.12  hadoop01  (slave)
                                       10.0.5.13  hadoop02  (slave)
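For these hostnames to resolve on every node, a common extra step (not in the original notes, so treat it as an assumption) is to add the IP-to-hostname mappings to /etc/hosts on all three machines:

# append to /etc/hosts on ysx, hadoop01 and hadoop02 (assumed step)
10.0.5.11   ysx
10.0.5.12   hadoop01
10.0.5.13   hadoop02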
Hadoop depends on two packages, Java and SSH (verify with java -version and ssh -V).
Unpack the Hadoop installation package hadoop-0.20.2.tar.gz: tar -xzvf hadoop-0.20.2.tar.gz
Check the Java path with echo $JAVA_HOME, then edit the file conf/hadoop-env.sh to define at least JAVA_HOME to be the root of your Java installation:
vim conf/hadoop-env.sh
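Putting the unpack and JAVA_HOME steps together, a minimal sketch (the JDK path is a placeholder, not taken from these notes):

# verify the two prerequisites
java -version
ssh -V

# unpack Hadoop 0.20.2 and enter the directory
tar -xzvf hadoop-0.20.2.tar.gz
cd hadoop-0.20.2

# in conf/hadoop-env.sh, uncomment/set JAVA_HOME to the JDK root, e.g.:
# export JAVA_HOME=/usr/lib/jvm/java-6-sun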
1. Get the basic software ready.
2. Configure passwordless SSH login: ssh localhost (a key setup sketch follows the config files below).
3. Modify the three configuration files:
conf/core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
conf/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
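For step 2 (passwordless login), the usual approach is to generate an RSA key pair and authorize the public key on every node; a minimal sketch, assuming OpenSSH and the same user account on all three machines:

# on the master (ysx): create a key pair with an empty passphrase
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

# authorize the key locally and on the two slaves
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh-copy-id hadoop01
ssh-copy-id hadoop02

# each of these should now open a shell without asking for a password
ssh localhost
ssh hadoop01
ssh hadoop02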
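Note that the values above (localhost, dfs.replication = 1) describe a single-node setup. For the three-node cluster named at the top, the usual Hadoop 0.20.2 pattern is to point both URIs at the master, raise the replication factor, list the nodes in conf/masters and conf/slaves, copy the configuration to every machine, and then format and start the cluster from the master. A sketch under those assumptions (hostnames come from the list above; paths and values are assumed, not from these notes):

# cluster-oriented values (assumed adaptation of the three files above):
#   core-site.xml    fs.default.name    = hdfs://ysx:9000
#   mapred-site.xml  mapred.job.tracker = ysx:9001
#   hdfs-site.xml    dfs.replication    = 2

# conf/masters holds the secondary namenode, conf/slaves the datanodes/tasktrackers
echo "ysx" > conf/masters
printf "hadoop01\nhadoop02\n" > conf/slaves

# push the configured installation to both slaves (destination path is an assumption)
scp -r ~/hadoop-0.20.2 hadoop01:~/
scp -r ~/hadoop-0.20.2 hadoop02:~/

# on the master only: format HDFS once, then start all daemons
bin/hadoop namenode -format
bin/start-all.sh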