Redeploying HDFS as the hadoop User

This article walks through converting a Hadoop pseudo-distributed environment deployed by the root user into one deployed by the hadoop user, covering stopping the existing processes, changing file ownership, adjusting the configuration files, and reformatting and restarting HDFS.


Preface:
In a previous article (https://www.jianshu.com/p/eeae2f37a48c) we deployed HDFS as the root user. In production, however, each component is usually started by its own dedicated user, so this article shows how to redeploy the pseudo-distributed HDFS as the hadoop user.

1. Preparation

Create the hadoop user and configure passwordless SSH login.
Reference: https://www.jianshu.com/p/589bb43e0282
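The preparation in the linked article can be sketched roughly as follows (run interactively as root; the exact commands there may differ slightly):

```shell
# Create the hadoop user (as root).
useradd hadoop

# Switch to the hadoop user and generate an SSH key pair for passwordless login.
su - hadoop
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# sshd refuses passwordless login unless these permissions are strict.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```

Afterwards, `ssh hadoop000` (or `ssh localhost`) as the hadoop user should log in without a password prompt.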

2. Stop the HDFS processes started by root and delete the storage files under /tmp
[root@hadoop000 hadoop-2.8.1]# pwd
/opt/software/hadoop-2.8.1
[root@hadoop000 hadoop-2.8.1]# jps
32244 NameNode
32350 DataNode
32558 SecondaryNameNode
1791 Jps
[root@hadoop000 hadoop-2.8.1]# sbin/stop-dfs.sh 
Stopping namenodes on [hadoop000]
hadoop000: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
[root@hadoop000 hadoop-2.8.1]# jps
2288 Jps
[root@hadoop000 hadoop-2.8.1]# rm -rf /tmp/hadoop-* /tmp/hsperfdata_*
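The /tmp cleanup matters because, with no hadoop.tmp.dir configured, HDFS keeps its NameNode metadata and DataNode blocks under /tmp/hadoop-&lt;user&gt;, and the JVM writes per-user monitoring data to /tmp/hsperfdata_&lt;user&gt;; removing both lets the hadoop user reformat from a clean state. A quick check that nothing is left behind:

```shell
# Should print nothing: the old root-owned HDFS storage dirs are gone.
ls -d /tmp/hadoop-* 2>/dev/null
```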
3. Change file ownership
[root@hadoop000 software]# pwd
/opt/software
[root@hadoop000 software]# chown -R hadoop:hadoop hadoop-2.8.1
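To verify that the recursive chown reached every file (a stray root-owned file can make the format or startup fail later), the following should print nothing:

```shell
# List any file under the install tree still not owned by the hadoop user.
find /opt/software/hadoop-2.8.1 ! -user hadoop
```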
4. Switch to the hadoop user and modify the configuration files
# Step 1:
[hadoop@hadoop000 hadoop]$ pwd
/opt/software/hadoop-2.8.1/etc/hadoop
[hadoop@hadoop000 hadoop]$ vi hdfs-site.xml 
<configuration>
     <property>
                <name>dfs.replication</name>
                <value>1</value>
     </property>
     <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>192.168.6.217:50090</value>
     </property>
     <property>
                <name>dfs.namenode.secondary.https-address</name>
                <value>192.168.6.217:50091</value>
     </property>
</configuration>
# Step 2:
[hadoop@hadoop000 hadoop]$ vi core-site.xml 
<configuration>
     <property>
          <name>fs.defaultFS</name>
          <value>hdfs://192.168.6.217:9000</value>
     </property>
</configuration>
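Note that with only fs.defaultFS set, HDFS still falls back to the default hadoop.tmp.dir under /tmp (which is exactly why step 2 deleted /tmp/hadoop-*), so data can be lost when /tmp is cleared on reboot. An optional addition, not part of the original setup, is to point hadoop.tmp.dir at a persistent directory owned by the hadoop user, for example:

```xml
<property>
     <name>hadoop.tmp.dir</name>
     <!-- Hypothetical path; any persistent hadoop-owned directory works. -->
     <value>/opt/software/hadoop-2.8.1/tmp</value>
</property>
```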
# Step 3:
[hadoop@hadoop000 hadoop]$ vi slaves 
192.168.6.217
5. Format and start
[hadoop@hadoop000 hadoop-2.8.1]$ pwd
/opt/software/hadoop-2.8.1
[hadoop@hadoop000 hadoop-2.8.1]$ bin/hdfs namenode -format
[hadoop@hadoop000 hadoop-2.8.1]$ sbin/start-dfs.sh
Starting namenodes on [hadoop000]
hadoop000: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop000.out
192.168.6.217: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop000.out
Starting secondary namenodes [hadoop000]
hadoop000: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop000.out
[hadoop@hadoop000 hadoop-2.8.1]$ jps
3141 Jps
2806 DataNode
2665 NameNode
2990 SecondaryNameNode
# All three HDFS processes are now started by the hadoop user.
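To confirm the owning account of each daemon directly (rather than inferring it from the `hadoop-hadoop-*` log file names), jps and ps can be combined; every line of output should read hadoop:

```shell
# Print the owning user of each running HDFS daemon found by jps.
for pid in $(jps | grep -v Jps | awk '{print $1}'); do
    ps -o user= -p "$pid"
done
```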