Fully Distributed Hadoop Setup


STP1

      Create the hadoop directory【mkdir hadoop】(run under /opt, giving /opt/hadoop)

        Note: this is Hadoop's installation directory.

STP2

      Extract the Hadoop archive【tar -zxf /opt/software/hadoop-2.5.0-cdh5.3.6.tar.gz -C /opt/hadoop/】

STP3

      Rename the Hadoop directory【mv /opt/hadoop/hadoop-2.5.0-cdh5.3.6 /opt/hadoop/hadoop-2.5.0】

        Note: renaming makes later management easier.
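Steps 1 through 3 can be exercised end to end. The sketch below is a dry run in a scratch directory, with a dummy tarball standing in for the real CDH archive (on a real machine you would operate on /opt directly):

```shell
# Dry run of STP1-STP3 in a scratch directory ($base stands in for /opt).
base=$(mktemp -d)
mkdir -p "$base/software" "$base/hadoop"          # STP1: install dir

# Build a dummy tarball standing in for hadoop-2.5.0-cdh5.3.6.tar.gz:
mkdir -p "$base/software/hadoop-2.5.0-cdh5.3.6"
tar -czf "$base/software/hadoop-2.5.0-cdh5.3.6.tar.gz" \
    -C "$base/software" hadoop-2.5.0-cdh5.3.6

# STP2: extract, then STP3: rename for easier management.
tar -zxf "$base/software/hadoop-2.5.0-cdh5.3.6.tar.gz" -C "$base/hadoop/"
mv "$base/hadoop/hadoop-2.5.0-cdh5.3.6" "$base/hadoop/hadoop-2.5.0"
ls "$base/hadoop"                                  # prints: hadoop-2.5.0
```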

STP4

      Edit the Hadoop configuration files

        Note: the files live under【/opt/hadoop/hadoop-2.5.0/etc/hadoop】

STP4 - 1

        Set the JDK path

        Edit the JAVA_HOME setting in the following files【hadoop-env.sh, mapred-env.sh, yarn-env.sh】

        Change it to【export JAVA_HOME=/opt/hadoop/jdk1.7.0_67】
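The same JAVA_HOME edit applies to all three env scripts, which suggests a small sed loop. This is a sketch against scratch copies of the files (assuming GNU sed); on a real install, run it from /opt/hadoop/hadoop-2.5.0/etc/hadoop:

```shell
# Point JAVA_HOME in hadoop-env.sh, mapred-env.sh and yarn-env.sh
# at the cluster JDK. Demonstrated on scratch copies of the files.
dir=$(mktemp -d)
for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do
  echo 'export JAVA_HOME=${JAVA_HOME}' > "$dir/$f"   # stand-in content
  sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/opt/hadoop/jdk1.7.0_67|' "$dir/$f"
done
grep JAVA_HOME "$dir/hadoop-env.sh"
# prints: export JAVA_HOME=/opt/hadoop/jdk1.7.0_67
```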

       

STP4 - 2

        Add the data nodes

        Edit the following file【slaves】

        Replace【localhost】with

                            huayi1.org

                            huayi2.org

                            huayi3.org
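The slaves-file replacement above can be written in one shot with a here-document. The sketch below writes to a scratch file (the real target is /opt/hadoop/hadoop-2.5.0/etc/hadoop/slaves):

```shell
# Overwrite slaves with the three DataNode hostnames.
slaves=$(mktemp)        # stand-in for etc/hadoop/slaves
cat > "$slaves" <<'EOF'
huayi1.org
huayi2.org
huayi3.org
EOF
wc -l < "$slaves"       # line count: 3
```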

                            

 

STP4 - 3

        Edit the XML configuration files

        Edit the XML configuration in the following files【core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml】

        Note: mapreduce.framework.name and the mapreduce.jobhistory.* settings are MapReduce properties, so they belong in mapred-site.xml (create it from mapred-site.xml.template if it does not exist), not in yarn-site.xml.

        In core-site.xml:【

        <configuration>
            <property>
                <name>fs.defaultFS</name>
                <value>hdfs://huayi1.org:8020</value>
            </property>
            <property>
                <name>hadoop.tmp.dir</name>
                <value>/opt/hadoop/hadoop-2.5.0/tmp</value>
            </property>
        </configuration>
        】

        In hdfs-site.xml:【

        <configuration>
            <property>
                <name>dfs.replication</name>
                <value>3</value>
            </property>
            <property>
                <name>dfs.namenode.http-address</name>
                <value>huayi1.org:50070</value>
            </property>
            <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>huayi1.org:50090</value>
            </property>
        </configuration>
        】

        In mapred-site.xml:【

        <configuration>
            <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
            </property>
            <property>
                <name>mapreduce.jobhistory.address</name>
                <value>huayi1.org:10020</value>
            </property>
            <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>huayi1.org:19888</value>
            </property>
        </configuration>
        】

        In yarn-site.xml:【

        <configuration>
            <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
            </property>
            <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>huayi1.org</value>
            </property>
            <property>
                <name>yarn.log-aggregation-enable</name>
                <value>true</value>
            </property>
            <property>
                <name>yarn.log-aggregation.retain-seconds</name>
                <value>86400</value>
            </property>
        </configuration>
        】

STP4 - 4

        Create the cluster temp directory【mkdir tmp】

        Note: run this under【/opt/hadoop/hadoop-2.5.0/】so the directory matches the hadoop.tmp.dir value above.

 

STP5

        Copy Hadoop to the slave nodes【scp -r hadoop-2.5.0/ sang@huayi2.org:/opt/hadoop/】(repeat for huayi3.org)

        Note: run from【/opt/hadoop/】
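The copy in STP5 has to reach every slave node, so it is natural to loop over the hostnames from the slaves file. Shown here as a dry run that only echoes the scp commands:

```shell
# Push the configured Hadoop tree to each slave node.
# Dry run: remove the leading `echo` to actually copy.
for host in huayi2.org huayi3.org; do
  echo scp -r /opt/hadoop/hadoop-2.5.0/ "sang@$host:/opt/hadoop/"
done
```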

STP6

      Format HDFS

       Run once, on the NameNode host (huayi1.org)【bin/hadoop namenode -format】

STP7

     Start Hadoop

     【sbin/start-all.sh】(or, equivalently, sbin/start-dfs.sh followed by sbin/start-yarn.sh)
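A quick way to verify the startup is to run jps on each machine and compare the daemon list against the roles configured above. This sketch only prints the expected daemons per role; it does not invoke jps, and the role assignments follow this guide's layout:

```shell
# Expected daemons after sbin/start-all.sh, per the layout in this guide.
# Check with `jps` on each node.
for d in NameNode SecondaryNameNode ResourceManager; do
  echo "huayi1.org (master): $d"
done
for d in DataNode NodeManager; do
  echo "each slaves entry:   $d"
done
```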

 

 


