Hadoop 2.3 Startup Fixes for the YARN Compute Framework and High-Availability DFS (Complete)

This article walks through setting up a Hadoop cluster: creating accounts, changing hostnames, configuring passwordless SSH login, installing the JDK, setting environment variables, installing and configuring Hadoop, and then verifying the cluster by formatting HDFS and starting the services.


192.168.81.132 -> hadoop1 (namenode)
192.168.81.130 -> hadoop2 (datanode1)
192.168.81.129 -> hadoop3 (datanode2)
192.168.81.131 -> hadoop4 (datanode3)

I. Create Accounts

1. Create the hadoop user on all nodes

useradd hadoop   

passwd hadoop

2. Create working directories on all nodes

mkdir -p /home/hadoop/source  

mkdir -p /home/hadoop/tools

3. Create data directories on the slave nodes

mkdir -p /hadoop/hdfs  

mkdir -p /hadoop/tmp  

mkdir -p /hadoop/log  

chmod -R 777 /hadoop

II. Change Hostnames

Apply on every node:

1. Edit /etc/sysconfig/network and set HOSTNAME=hadoopx (substituting that node's own name).

2. Edit /etc/hosts and add:

192.168.81.132   hadoop1

192.168.81.130   hadoop2

192.168.81.129   hadoop3

192.168.81.131   hadoop4

3. Run hostname hadoopx so the name takes effect without a reboot.

4. Log in again; the new hostname is now active.

III. Passwordless SSH Login

Note: for a non-root user, passwordless login only works if the SSH file permissions are correct. Run chmod 700 ~/.ssh and chmod 600 ~/.ssh/authorized_keys; without these permissions, passwordless login fails for non-root users.
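
The original omits the key setup itself. A minimal sketch, assuming the hadoop user runs these on hadoop1 and that ssh-copy-id is available:

# Generate an RSA key pair with an empty passphrase
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# Push the public key to every node, including hadoop1 itself
for host in hadoop1 hadoop2 hadoop3 hadoop4; do ssh-copy-id hadoop@$host; done
# Verify: this should not prompt for a password
ssh hadoop@hadoop2 hostname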

IV. Install the JDK (omitted)

V. Configure Environment Variables

1. Edit /etc/profile and append:

export JAVA_HOME=/usr/java/jdk1.6.0_27
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/home/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
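
To apply the changes in the current shell without logging out:

source /etc/profile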

2. hadoop-env.sh  

Append at the end: export JAVA_HOME=/usr/java/jdk1.6.0_27

 

VI. Install Hadoop 2.3

1. Configure core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.81.132:9000</value>
  </property>
</configuration>
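
With the file in place (and $HADOOP_HOME/bin on PATH), an optional sanity check is to ask Hadoop which default filesystem it resolved:

hdfs getconf -confKey fs.defaultFS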

2. Configure hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop/hdfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop/hdfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>192.168.81.132:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

3. Configure mapred-site.xml
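
The Hadoop 2.3 tarball ships this file only as mapred-site.xml.template, so create it first:

cp ${HADOOP_CONF_DIR}/mapred-site.xml.template ${HADOOP_CONF_DIR}/mapred-site.xml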

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.81.132:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.81.132:19888</value>
  </property>
</configuration>

4. Configure yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.81.132:18040</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.81.132:18030</value>
  </property>

  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.81.132:18088</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.81.132:18025</value>
  </property>

  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.81.132:18141</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
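
One step the original leaves implicit: start-dfs.sh and start-yarn.sh read ${HADOOP_CONF_DIR}/slaves to find the worker nodes. Based on the host table above, it would contain:

hadoop2
hadoop3
hadoop4

The unpacked hadoop directory and these config files must also be copied to every node (for example with scp) before starting anything.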

VII. Verify

 

1. Format HDFS
   hdfs namenode -format   (the older form hadoop namenode -format still works but is deprecated in 2.x)
2. Start HDFS
  start-dfs.sh
3. Start YARN
  start-yarn.sh
4. Start HttpFS
  httpfs.sh start
5. Verify the processes on the NameNode (with jps)
 NameNode
 Bootstrap (the HttpFS Tomcat process)
 SecondaryNameNode
 ResourceManager
6. Verify the processes on each DataNode
 DataNode
 NodeManager
7. Test HDFS reads/writes and job execution
hadoop jar hadoop-mapreduce-examples-2.3.0.jar wordcount hdfs://192.168.81.132:9000/input hdfs://192.168.81.132:9000/output
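
The job assumes /input already exists in HDFS and contains text files (the examples jar sits under share/hadoop/mapreduce in the 2.3.0 tarball). A minimal preparation sketch, using /etc/hosts as illustrative input:

hadoop fs -mkdir -p /input
hadoop fs -put /etc/hosts /input/
# After the job completes, inspect the result:
hadoop fs -cat /output/part-r-00000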

8. Browse the cluster through the web UIs
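
With the configuration above, the ResourceManager web UI should be reachable at http://192.168.81.132:18088, and the NameNode UI at its Hadoop 2.x default, http://192.168.81.132:50070.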

 

Next step: verify an active/standby (dual-master) NameNode setup.

 

Reposted from: https://www.cnblogs.com/bobsoft/p/3628469.html
