wget http://apache.fayea.com/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
tar xzvf hadoop-2.7.3.tar.gz
mv hadoop-2.7.3 hadoop
Edit hadoop-2.7.3/etc/hadoop/hadoop-env.sh and set JAVA_HOME.
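For example, this one-liner rewrites (or appends) the JAVA_HOME line; the JDK path and the `HADOOP_ENV` location are assumptions here, so substitute your own:

```shell
# Point hadoop-env.sh at a JDK (example paths; adjust to your system)
HADOOP_ENV=${HADOOP_ENV:-hadoop/etc/hadoop/hadoop-env.sh}
JDK=${JDK:-/usr/lib/jvm/java-8-openjdk-amd64}
mkdir -p "$(dirname "$HADOOP_ENV")"
touch "$HADOOP_ENV"
if grep -q '^export JAVA_HOME=' "$HADOOP_ENV"; then
  # Replace the existing JAVA_HOME line in place
  sed -i "s|^export JAVA_HOME=.*|export JAVA_HOME=$JDK|" "$HADOOP_ENV"
else
  # No JAVA_HOME line yet: append one
  echo "export JAVA_HOME=$JDK" >> "$HADOOP_ENV"
fi
```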
Edit hadoop-2.7.3/etc/hadoop/core-site.xml:
<configuration>
  <!-- RPC address of the HDFS master (NameNode) -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://0.0.0.0:9000</value>
  </property>
  <!-- Directory where Hadoop stores files generated at runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/xupanpan/hadoop/temp</value>
  </property>
</configuration>
Modify hadoop-2.7.3/etc/hadoop/hdfs-site.xml as follows:
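The note omits the hdfs-site.xml contents. For a single-node setup this file typically just sets the block replication factor to 1; a sketch, assuming the same `<configuration>` layout as core-site.xml:

```xml
<configuration>
  <!-- Single node: keep only one replica of each block -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```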
Before the first start, format the NameNode:
/home/xupanpan/hadoop/hadoop/bin/hdfs namenode -format
Start HDFS:
/home/xupanpan/hadoop/hadoop/sbin/start-dfs.sh
If start-dfs.sh fails with errors about undefined users, add the following at the top of both start-dfs.sh and stop-dfs.sh under hadoop/sbin (note: these *_USER variables are checked by the Hadoop 3.x launch scripts; 2.7.3 normally does not require them):
#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
Likewise, add the following at the top of start-yarn.sh and stop-yarn.sh:
#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
After these changes, rerun ./start-dfs.sh and it starts successfully.
References:
http://www.cnblogs.com/woxpp/
https://www.aliyun.com/jiaocheng/158596.html
Passwordless SSH login: ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
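The full passwordless-login setup also authorizes the generated public key for local logins; a sketch using the default key paths:

```shell
# Generate an RSA key with an empty passphrase (skip if one already exists)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# Authorize the key for SSH logins to this machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

After this, `ssh localhost` should log in without prompting for a password, which start-dfs.sh relies on to launch the daemons.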
If Hadoop's web UI on port 50070 is unreachable, the NameNode's default HTTP port may not be taking effect; set it explicitly by adding the following to hdfs-site.xml:
<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
</property>