1> set the hostname to hrs-hadoop (it will be used in the xml config files below)
2>mkdir -p /home/jka07@int.hrs.com/hadoop/dfs/name
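Step 2 creates only the NameNode directory, but the hdfs-site.xml and core-site.xml snippets below also reference dfs/data and tmp directories under the same base. A minimal sketch that creates all three; HADOOP_DATA_HOME here is an assumed stand-in for the /home/jka07@int.hrs.com/hadoop path:

```shell
# Create the local directories the xml files below point at.
# HADOOP_DATA_HOME is an assumption standing in for /home/jka07@int.hrs.com/hadoop;
# dfs/data and tmp are implied by dfs.datanode.data.dir and hadoop.tmp.dir.
HADOOP_DATA_HOME="${HADOOP_DATA_HOME:-$HOME/hadoop}"
mkdir -p "$HADOOP_DATA_HOME/dfs/name" \
         "$HADOOP_DATA_HOME/dfs/data" \
         "$HADOOP_DATA_HOME/tmp"
ls -ld "$HADOOP_DATA_HOME/dfs/name" "$HADOOP_DATA_HOME/dfs/data"
```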
3> add the following properties in /home/jka07@int.hrs.com/software/hadoop-2.7.0/etc/hadoop/hdfs-site.xml
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hrs-hadoop:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/jka07@int.hrs.com/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/jka07@int.hrs.com/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
4> core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://hrs-hadoop:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/jka07@int.hrs.com/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.spark.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.spark.groups</name>
<value>*</value>
</property>
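A malformed *-site.xml (for example an unclosed &lt;property&gt;) makes the daemons fail at startup with a parse error, so it is worth validating each file after editing. A sketch using python3's stdlib XML parser; the sample file written here is a throwaway stand-in, and in practice you would point check_site_xml at the files under etc/hadoop:

```shell
# Report whether a config file is well-formed XML.
check_site_xml() {
  if python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$1" 2>/dev/null; then
    echo "$1: OK"
  else
    echo "$1: malformed"
  fi
}

# Throwaway sample file for demonstration; use etc/hadoop/*.xml in practice.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hrs-hadoop:9000</value>
  </property>
</configuration>
EOF
check_site_xml /tmp/core-site.xml
```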
5>mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hrs-hadoop:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hrs-hadoop:19888</value>
</property>
</configuration>
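Note that a stock hadoop-2.7.0 tarball ships this file only as mapred-site.xml.template, so it has to be copied before editing. A sketch, demonstrated against a throwaway directory so the paths are illustrative; in practice CONF_DIR is the etc/hadoop directory from step 3:

```shell
# mapred-site.xml ships only as a template in Hadoop 2.7.x; copy it once.
# CONF_DIR here is a temp dir for demonstration purposes.
CONF_DIR="$(mktemp -d)"
printf '<configuration>\n</configuration>\n' > "$CONF_DIR/mapred-site.xml.template"

if [ ! -f "$CONF_DIR/mapred-site.xml" ]; then
  cp "$CONF_DIR/mapred-site.xml.template" "$CONF_DIR/mapred-site.xml"
fi
ls "$CONF_DIR"
```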
6> yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hrs-hadoop:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hrs-hadoop:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hrs-hadoop:8035</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hrs-hadoop:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hrs-hadoop:8088</value>
</property>
7> format the NameNode (first run only), then start HDFS and YARN
bin>./hdfs namenode -format
sbin>./start-dfs.sh
sbin>./start-yarn.sh
8> some common exceptions
a) DiskChecker$DiskErrorException: Directory is not readable: /home/hadoop/dfs/data
answer: check whether a user named hadoop actually exists; if it does not, you cannot mkdir under /home/hadoop (that path is a user's home directory)
b) the datanode fails to start (All directories in dfs.data.dir are invalid)
answer: check the privileges on the data directories and fix them with "chmod"
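A hedged sketch of the fix for b): make sure the directory from dfs.datanode.data.dir exists and is readable and executable by the user running the DataNode. The DATA_DIR default below is an assumption matching the path pattern from step 2:

```shell
# Ensure the DataNode's local directory exists with sane permissions.
# DATA_DIR default is an assumption based on dfs.datanode.data.dir above.
DATA_DIR="${DATA_DIR:-$HOME/hadoop/dfs/data}"
mkdir -p "$DATA_DIR"
chmod 755 "$DATA_DIR"   # owner rwx; the DataNode's disk check needs read+execute
ls -ld "$DATA_DIR"
```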
ref:
http://blog.youkuaiyun.com/lulongzhou_llz/article/details/42653589
http://blog.youkuaiyun.com/shiqidide/article/details/8113568
http://blog.youkuaiyun.com/xyls12345/article/details/23938087