Hadoop 3.1.3 Distributed Installation Tutorial

This tutorial walks through a distributed installation of Hadoop 3.1.3: configuring hostnames on the cluster machines, installing the Java JDK, setting up passwordless SSH between nodes, downloading the Hadoop package and configuring environment variables, adjusting the core parameters, and resolving common installation errors such as occupied ports and permission problems.


10.0.0.11 dockerapache-01
10.0.0.12 dockerapache-02 # management node
10.0.0.13 dockerapache-03

1 Configure hostnames on the cluster machines
vim /etc/hosts

Adjust the entries to match your own environment. Once they are in place, ping each machine by hostname from the other nodes to confirm that name resolution works:

ping dockerapache-02
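Each machine's own hostname should also match its entry in /etc/hosts. A minimal sketch of how that is typically set (the hostnames here are the ones used in this tutorial; substitute your own):

hostnamectl set-hostname dockerapache-02   # run on 10.0.0.12; use the matching name on each node
hostname                                   # verify the new hostname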
2 Java JDK configuration

After downloading the JDK package, upload it to /usr/local on the servers and extract it:

tar -xvzf jdk-8u231-linux-x64.tar.gz

Then configure the environment variables.

Recommended approach (adjust the JDK directory name below to match the version you extracted; jdk-8u231-linux-x64.tar.gz unpacks to jdk1.8.0_231):

vim /etc/profile.d/jdk1.8.sh

export JAVA_HOME=/usr/local/jdk1.8.0_231
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

Or:

vim /etc/profile
#JAVA
export JAVA_HOME=/usr/local/jdk1.8.0_231
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
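With either approach, reload the environment in the current shell and check that the JDK is picked up:

source /etc/profile            # or: source /etc/profile.d/jdk1.8.sh
java -version                  # should report the installed 1.8.0 release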



3 Configure passwordless SSH between nodes

On the management node, run the following:

ssh-keygen 
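Press Enter at the prompts to accept the default key location and an empty passphrase. The public key then needs to be distributed to every node; a minimal sketch using ssh-copy-id with the hostnames configured above (including the management node itself):

ssh-copy-id dockerapache-01
ssh-copy-id dockerapache-02
ssh-copy-id dockerapache-03

ssh dockerapache-03 hostname   # confirm that login now works without a password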