1. Install Java SE first; this is not covered in detail here.
2. Download Hadoop from the official site; I used version 1.0.0.
3. Set the Hadoop environment variables
Open ~/.profile and append the following lines at the end of the file (fill in the values according to your own setup):
export HADOOP_HOME=/users/apple/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_HOME_WARN_SUPPRESS=1
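As a quick sanity check, the same exports can be run in a shell and echoed back (a minimal sketch using the example path from this guide; adjust `HADOOP_HOME` to your actual install location):

```shell
# Example values from this guide; change HADOOP_HOME to your real install path.
export HADOOP_HOME=/users/apple/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_HOME_WARN_SUPPRESS=1

# The variables should now be visible to this shell and its child processes.
echo "HADOOP_HOME=$HADOOP_HOME"
```

Note that an edited ~/.profile only takes effect in new login shells; run `source ~/.profile` in an existing terminal before trying any hadoop commands.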
4. Configure hadoop-env.sh
Go into the conf directory, open hadoop-env.sh, and edit the following settings (fill in the values according to your own setup):
export JAVA_HOME=/library/Java/Home (remove the leading #)
export HADOOP_HEAPSIZE=2000 (remove the leading #)
export HADOOP_OPTS="-Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk" (remove the leading #)
The HADOOP_OPTS line is the usual workaround for the "Unable to load realm info from SCDynamicStore" Kerberos error Hadoop hits on Mac OS X.
5. Configure core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/users/billy/hadoop/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:8020</value>
</property>
</configuration>
6. Configure hdfs-site.xml
A replication factor of 1 is enough here, since everything runs on a single node.
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
7. Configure mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:8021</value>
</property>
<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>2</value>
</property>
<property>
<name>mapred.tasktracker.reduce.tasks.maximum</name>
<value>2</value>
</property>
</configuration>
8. Format HDFS
With the configuration above in place, HDFS can now be formatted.
The command is:
$HADOOP_HOME/bin/hadoop namenode -format
9. Start Hadoop
Very simple: one command does it.
$HADOOP_HOME/bin/start-all.sh
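If the command worked, the JDK's `jps` tool is a quick way to confirm the Hadoop 1.x daemons are up (process IDs will differ on your machine):

```shell
# List running JVM processes; after start-all.sh you should see NameNode,
# SecondaryNameNode, DataNode, JobTracker and TaskTracker (plus Jps itself).
jps
```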
10. A quick test
To check whether everything started successfully, you can try one of the bundled examples:
hadoop jar $HADOOP_HOME/hadoop-examples-1.0.0.jar pi 10 100
If it succeeds, you will see output similar to:
Number of Maps = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
……
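Besides running the example job, Hadoop 1.x serves two built-in status pages you can check in a browser (assuming the default ports; `open` is the Mac OS X command used here since this guide targets a Mac):

```shell
# Hadoop 1.x default web UIs (requires the daemons started by start-all.sh)
open http://localhost:50070   # NameNode / HDFS status page
open http://localhost:50030   # JobTracker / MapReduce status page
```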