Common Hadoop problem: no datanode to stop

This post explains why Hadoop prints "no datanode to stop" during shutdown and how to fix it: clearing specific directories, aligning the namespaceID, or reformatting Hadoop, while avoiding cluster data loss.


Symptom: when stopping Hadoop, the message "no datanode to stop" appears.

Cause 1: every NameNode format creates a new namespaceID, while tmp/dfs/data still holds the ID from the previous format. Formatting clears the NameNode's data but does not clear the DataNodes' data, so the IDs no longer match and startup fails. The fix is to clear all the directories under tmp before each format.

There are two solutions here. Solution 1:

1) Delete the contents of /usr/hadoop/tmp:

       rm -rf /usr/hadoop/tmp/*

2) Delete the files under /tmp whose names start with "hadoop":

       rm -rf /tmp/hadoop*

3) Reformat Hadoop:

       hadoop namenode -format

4) Start Hadoop:

       start-all.sh

The drawback of this solution is that all the important data previously on the cluster is lost. Solution 2 is therefore recommended:

1) Change each Slave's namespaceID to match the Master's namespaceID,

or

2) Change the Master's namespaceID to match the Slaves' namespaceID.

The Master's "namespaceID" lives in the file /usr/hadoop/tmp/dfs/name/current/VERSION; each Slave's "namespaceID" lives in /usr/hadoop/tmp/dfs/data/current/VERSION.
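Solution 2 boils down to copying one namespaceID over the other. A minimal sketch, using mock VERSION files in a temporary directory so the edit can be tried safely (on a real cluster, substitute the two paths above):

```shell
# Mock layout standing in for /usr/hadoop/tmp/dfs/{name,data}/current/VERSION
WORK=$(mktemp -d)
NAME_VERSION="$WORK/name/current/VERSION"   # stands in for the Master's file
DATA_VERSION="$WORK/data/current/VERSION"   # stands in for a Slave's file
mkdir -p "${NAME_VERSION%/*}" "${DATA_VERSION%/*}"
printf 'namespaceID=123456789\n' > "$NAME_VERSION"   # IDs diverge after a
printf 'namespaceID=987654321\n' > "$DATA_VERSION"   # fresh namenode format

# 1) Read the Master's namespaceID
MASTER_ID=$(grep '^namespaceID=' "$NAME_VERSION" | cut -d= -f2)

# 2) Rewrite the Slave's VERSION file in place to match
sed -i "s/^namespaceID=.*/namespaceID=$MASTER_ID/" "$DATA_VERSION"

grep '^namespaceID=' "$DATA_VERSION"
```

After syncing the ID, restart the DataNode; it should register with the NameNode again without a reformat.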


Cause 2: when Hadoop stops, it relies on the recorded process IDs of the MapReduce and DFS daemons on each DataNode. By default these PID files are kept under /tmp, and Linux periodically cleans that directory (typically every week to a month). Once files such as hadoop-hadoop-jobtracker.pid and hadoop-hadoop-namenode.pid are deleted, the NameNode can no longer find those processes on the DataNodes.

Setting export HADOOP_PID_DIR in the configuration file hadoop-env.sh fixes this.
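A sketch of the change. The file location assumed here is etc/hadoop/hadoop-env.sh under $HADOOP_HOME (conf/hadoop-env.sh on older 1.x layouts), and the path matches the /var/hadoop/pids default mentioned in this article:

```shell
# In hadoop-env.sh: keep PID files out of /tmp so periodic /tmp cleanup
# cannot delete them between start-all.sh and stop-all.sh.
export HADOOP_PID_DIR=/var/hadoop/pids
```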

In that configuration file, HADOOP_PID_DIR's default path is "/var/hadoop/pids", so manually create a "hadoop" directory under "/var" (skip this if it already exists), and remember to chown it to the hadoop user. Then, on each affected Slave, kill the DataNode and TaskTracker processes (kill -9 <pid>) and run start-all.sh again. If the next stop-all.sh no longer prints "no datanode to stop", the problem is solved.
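The recovery steps above can be sketched as follows. A temporary directory stands in for /var/hadoop/pids so the commands can be tried without root, and the jps output is mocked; on a real Slave you would run mkdir/chown on /var/hadoop as root and pipe real jps output into kill:

```shell
# Stand-in for /var/hadoop/pids (on the real box, as root:
#   mkdir -p /var/hadoop/pids && chown -R hadoop:hadoop /var/hadoop)
PID_DIR=$(mktemp -d)/hadoop/pids
mkdir -p "$PID_DIR"

# jps prints one JVM per line as "<pid> <ClassName>". Filter for the
# DataNode/TaskTracker daemons and print their pids; on a real Slave these
# pids would go to "kill -9". Mocked jps output here:
printf '1234 DataNode\n5678 Jps\n9012 TaskTracker\n' |
  awk '/DataNode|TaskTracker/ {print $1}'
```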
