Big Data Operations: Installing Hadoop on Linux and Deploying a Hadoop HA Cluster

This article walks through installing Hadoop on Linux and configuring a Hadoop HA cluster, covering extracting the Hadoop tarball, configuring environment variables, editing the configuration files, creating the data directories, and distributing the files to the slave nodes, ending with a successful hadoop version check to verify the setup.


1. After downloading Hadoop, extract it to the target directory:

For easier management, rename the extracted directory to hadoop with mv:

[root@master ~]# tar -zxvf hadoop-2.7.1.tar.gz -C /usr/local/src/
[root@master ~]# cd /usr/local/src/
[root@master src]# ls
hadoop-2.7.1  java  zookeeper
[root@master src]# mv hadoop-2.7.1/ hadoop
[root@master src]# ls
hadoop  java  zookeeper

2. Configure the Hadoop environment variables

[root@master ~]# vi /etc/profile


#hadoop
export HADOOP_HOME=/usr/local/src/hadoop
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin


# Reload the profile so the variables take effect
[root@master ~]# source /etc/profile
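
To confirm the variables are in effect, a quick sanity check; the expected output below assumes the paths used in this article:

[root@master ~]# echo $HADOOP_HOME
/usr/local/src/hadoop
[root@master ~]# which hadoop
/usr/local/src/hadoop/bin/hadoop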

3. Configure the hadoop-env.sh file

Change into hadoop/etc/hadoop:

[root@master ~]# cd /usr/local/src/
[root@master src]# cd hadoop/etc/hadoop/
[root@master hadoop]# ls
capacity-scheduler.xml  hadoop-env.sh               httpfs-env.sh            kms-env.sh            mapred-env.sh               ssl-server.xml.example
configuration.xsl       hadoop-metrics2.properties  httpfs-log4j.properties  kms-log4j.properties  mapred-queues.xml.template  yarn-env.cmd
container-executor.cfg  hadoop-metrics.properties   httpfs-signature.secret  kms-site.xml          mapred-site.xml.template    yarn-env.sh
core-site.xml           hadoop-policy.xml           httpfs-site.xml          log4j.properties      slaves                      yarn-site.xml
hadoop-env.cmd          hdfs-site.xml               kms-acls.xml             mapred-env.cmd        ssl-client.xml.example
[root@master hadoop]# vi hadoop-env.sh 
# Set JAVA_HOME to the absolute path of your own Java installation

# The java implementation to use.
export JAVA_HOME=/usr/local/src/java
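
As an alternative to editing the file interactively, the same change can be scripted with sed; a sketch, assuming Java is installed at /usr/local/src/java as above:

[root@master hadoop]# sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/src/java|' hadoop-env.sh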

4. Create the directories that will hold the NameNode, DataNode, and JournalNode data

[root@master hadoop]# pwd
/usr/local/src/hadoop
[root@master hadoop]# mkdir -p tmp/hdfs/nn
[root@master hadoop]# mkdir -p tmp/hdfs/dn
[root@master hadoop]# mkdir -p tmp/hdfs/jn
[root@master hadoop]# mkdir -p tmp/logs
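
Equivalently, bash brace expansion can create all four directories in a single command:

[root@master hadoop]# mkdir -p tmp/hdfs/{nn,dn,jn} tmp/logs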

5. Configure the core-site.xml file

core-site.xml carries Hadoop's core configuration, for example the default filesystem and the common I/O settings used by HDFS, MapReduce, and YARN.

[root@master hadoop]# pwd
/usr/local/src/hadoop/etc/hadoop
[root@master hadoop]# vi core-site.xml 

The core-site.xml file is configured as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
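<!-- The remainder of the original listing is missing. Below is a minimal
     sketch of the typical HA-related core-site.xml properties, assuming a
     nameservice named mycluster, a ZooKeeper quorum on hosts master, slave1,
     and slave2 (all hypothetical names), and the tmp directory created in
     step 4; adjust the values to match your own cluster. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/src/hadoop/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>master:2181,slave1:2181,slave2:2181</value>
  </property>
</configuration>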