Hadoop issues: stop-all.sh

This article covers two problems with Hadoop and HBase: PID files disappearing from /tmp, which makes stopping the cluster fail, and how to avoid it by pointing the PID path at a safe directory via an environment variable; and how to configure the datanode storage paths correctly to prevent data loss, including how Hadoop's multi-disk support works in practice.


1. Missing PID files

If Hadoop is not given an explicit PID file path, the default location is under /tmp. Linux periodically cleans the /tmp directory, so the PID files there eventually disappear.

When ./stop-all.sh runs, it looks for the PID files directly under /tmp. Since the system has already deleted them, it reports that the files cannot be found and stopping the cluster fails.

As a workaround, you can reconstruct these PID files from the information about the Hadoop processes currently running on the system; stop-all.sh will then succeed.
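The reconstruction step can be sketched as a small script. The PID directory, the user name in the file names, and the daemon list below are assumptions; adjust them to match what your stop scripts expect:

```shell
#!/bin/sh
# Sketch: recreate missing Hadoop PID files from the running daemons.
# PID_DIR and HADOOP_USER are assumptions -- adjust to your cluster.
PID_DIR=/tmp
HADOOP_USER=root
for daemon in namenode datanode secondarynamenode; do
  # Hadoop daemons carry -Dproc_<daemon> on their JVM command line,
  # so pgrep -f can find the matching process.
  pid=$(pgrep -f "proc_${daemon}" | head -n 1)
  if [ -n "$pid" ]; then
    echo "$pid" > "${PID_DIR}/hadoop-${HADOOP_USER}-${daemon}.pid"
    echo "recreated hadoop-${HADOOP_USER}-${daemon}.pid (pid ${pid})"
  fi
done
```

After the files exist again, ./stop-all.sh can match each daemon to its PID and shut the cluster down cleanly.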

The permanent fix is:

In ./etc/hadoop/hadoop-env.sh, specify the PID path through an environment variable.

HBase has the same problem; the fix is the same, applied in ./conf/hbase-env.sh.
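For example (the /var/run paths below are illustrative; the directory must exist and be writable by the user running the daemons):

```shell
# In ./etc/hadoop/hadoop-env.sh -- a location that survives /tmp cleanup:
export HADOOP_PID_DIR=/var/run/hadoop

# In ./conf/hbase-env.sh, the equivalent HBase setting:
export HBASE_PID_DIR=/var/run/hbase
```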

     

2. Multi-disk configuration

Due to an earlier oversight, the datanode storage path was never configured, so all data was written to the hadoop.tmp.dir set in core-site.xml. Fortunately that directory had already been moved out of /tmp, so no data was lost.

Once the problem was discovered, the datanode storage of course had to be moved to the correct location, by modifying the relevant property in hdfs-site.xml.

This relies on Hadoop's multi-disk support: multiple disk directories are listed in one value, separated by commas.
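The original does not name the property; presumably it is dfs.datanode.data.dir (dfs.data.dir in Hadoop 1.x). A sketch with example mount points:

```xml
<!-- hdfs-site.xml: a comma-separated list spreads blocks across disks.
     The /data1 and /data2 mount points below are examples only. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data1/hdfs/data,/data2/hdfs/data</value>
</property>
```

The datanode round-robins new blocks across the listed directories, so each entry should sit on a separate physical disk.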

 

   

 
