The installation of Hadoop and the JDK is omitted here; this guide only covers which configuration files need to be modified.
1. Modify /etc/profile to add Java and Hadoop to the environment variables:
#----------JDK begin
export JAVA_HOME=/usr/lib/jdk/jdk-9.0.1
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
#-------------JDK end
#----hadoop2.8.2-----
export HADOOP_INSTALL=/home/sunft/app/hadoop-2.8.2
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
Run the following command to make the configuration take effect:
source /etc/profile
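As a quick sanity check (optional; these commands assume the paths above match your actual install locations), verify that both tools are now on the PATH:
# Each command should print version information if /etc/profile took effect
java -version
hadoop version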
2. Change into the hadoop-2.8.2/etc/hadoop directory (all of the configuration edits below happen in this directory) and modify hadoop-env.sh:
export JAVA_HOME=/usr/lib/jdk/jdk-9.0.1
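If you prefer to script this step, a one-line sed can set JAVA_HOME in place (a sketch; the JDK path is the example path used throughout this guide):
# Replace the existing JAVA_HOME line; run from hadoop-2.8.2/etc/hadoop
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jdk/jdk-9.0.1|' hadoop-env.sh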
3. Modify core-site.xml:
<configuration>
    <!-- configure the default file system -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://127.0.0.1:9000/</value>
    </property>
    <!-- configure the Hadoop working directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/sunft/app/hadoop-2.8.2/tmp</value>
    </property>
</configuration>
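hadoop.tmp.dir must point to a writable location. Creating it up front is optional (the format step below creates it too), but doing so surfaces permission problems early:
# Create the working directory configured above
mkdir -p /home/sunft/app/hadoop-2.8.2/tmp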
4. Modify hdfs-site.xml:
<configuration>
    <!-- Configure the replication factor; with only one machine it must be set to 1 -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
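Once the environment variables from step 1 are loaded, you can confirm the value Hadoop actually resolves; hdfs getconf is part of the standard Hadoop 2.x CLI:
# Should print 1 for this single-machine setup
hdfs getconf -confKey dfs.replication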
5. Rename mapred-site.xml.template to mapred-site.xml:
mv mapred-site.xml.template mapred-site.xml
Then modify the configuration:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
6. Modify yarn-site.xml:
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.hostname</name>
        <value>127.0.0.1</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
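Before moving on, it is worth checking that all four edited files are well-formed XML, since a malformed file makes the daemons fail at startup with a parse error. A small loop (assuming xmllint from libxml2 is installed) covers them all:
# xmllint prints nothing when a file is well-formed, so each file should report OK
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
    xmllint --noout "$f" && echo "$f OK"
done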
7. Run the format command to generate the required files (in Hadoop 2.x, hdfs namenode -format is the preferred form, but the command below still works):
hadoop namenode -format
The following output indicates the format succeeded:
17/11/06 22:39:09 INFO common.Storage: Storage directory /home/sunft/app/hadoop-2.8.2/tmp/dfs/name has been successfully formatted.
17/11/06 22:39:09 INFO namenode.FSImageFormatProtobuf: Saving image file /home/sunft/app/hadoop-2.8.2/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/11/06 22:39:09 INFO namenode.FSImageFormatProtobuf: Image file /home/sunft/app/hadoop-2.8.2/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 322 bytes saved in 0 seconds.
17/11/06 22:39:09 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/06 22:39:09 INFO util.ExitUtil: Exiting with status 0
17/11/06 22:39:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
Check whether the expected directory was generated; the tmp directory below was created by the format step:
cd /home/sunft/app/hadoop-2.8.2/tmp
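Per the log above, the NameNode metadata lives under tmp/dfs/name. Listing it shows the freshly written fsimage:
# current/ should contain fsimage_0000000000000000000, seen_txid, and VERSION
ls /home/sunft/app/hadoop-2.8.2/tmp/dfs/name/current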
8. Change into the sbin directory and run the startup commands.
You can run start-all.sh to launch all of the daemons at once, but starting them individually is recommended; start HDFS first:
sunft@ubuntu:~/app/hadoop-2.8.2/sbin$ start-dfs.sh
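Since yarn-site.xml was configured in step 6, YARN can be started the same way once HDFS is up (note the jps output below reflects an HDFS-only start):
# After this, jps should additionally list ResourceManager and NodeManager
start-yarn.sh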
9. Use jps to check the processes that have started:
sunft@ubuntu:~/app/hadoop-2.8.2/sbin$ jps
35488 NameNode
36182 SecondaryNameNode
35607 DataNode
36314 Jps
10. Check the listening ports:
netstat -nltp
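To focus on the services configured above, filter the output; port 9000 comes from fs.defaultFS in core-site.xml, and 50070 is the NameNode web UI default in Hadoop 2.x:
# Both ports should appear in LISTEN state once HDFS is running
netstat -nltp | grep -E ':9000|:50070'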
11. Enter the address in a browser to test access:
http://127.0.0.1:50070
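Finally, a quick end-to-end smoke test from the command line confirms that HDFS accepts writes (/test here is an arbitrary example path, not part of the original setup):
# Create a directory in HDFS, then list the root to verify it appeared
hadoop fs -mkdir /test
hadoop fs -ls /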