Problems encountered while setting up a Hadoop-2.5.2 platform environment

1. Cluster environment

java-1.8.0-openjdk-1.8.0.181-7.b13.el7
hadoop-2.5.2
spark-2.3.3
hbase-1.3.1
hbase-2.1.0
zookeeper-3.5.5-bin
janusgraph-0.2.0-hadoop2-gremlin
mysql-5.7.27
hive-2.1.1

Over the last couple of days I configured MySQL and Hive; this post records the problems I ran into.

2. MySQL

I installed MySQL offline from the mysql .tar.gz package built for the ARM architecture.

Reference article: Deploying mysql-5.7.27 on the ARM architecture
Content of that article:

cd /usr/local

Upload the package mysql-5.7.27-aarch64.tar.gz to /usr/local

tar xvf mysql-5.7.27-aarch64.tar.gz

mv /usr/local/mysql-5.7.27-aarch64 /usr/local/mysql

mkdir -p /usr/local/mysql/logs

ln -sf /usr/local/mysql/my.cnf /etc/my.cnf

cp -rf /usr/local/mysql/extra/lib* /usr/lib64/

mv /usr/lib64/libstdc++.so.6 /usr/lib64/libstdc++.so.6.old

ln -s /usr/lib64/libstdc++.so.6.0.24 /usr/lib64/libstdc++.so.6

groupadd mysql

useradd -g mysql mysql

chown -R mysql:mysql /usr/local/mysql

cp -rf /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld

chmod +x /etc/init.d/mysqld

systemctl enable mysqld

vim /etc/profile

export MYSQL_HOME=/usr/local/mysql

export PATH=$PATH:$MYSQL_HOME/bin

source /etc/profile

mysqld --initialize-insecure --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data

systemctl start mysqld

systemctl status mysqld



Move the directory: mv /usr/local/mysql-5.7.27-aarch64 /usr/local/mysql

Create the logs directory: mkdir -p /usr/local/mysql/logs

ln -sf a b creates a symbolic link b pointing to a: ln -sf /usr/local/mysql/my.cnf /etc/my.cnf

cp is the Linux copy command; -r copies directories recursively and -f forces overwriting: cp -rf /usr/local/mysql/extra/lib* /usr/lib64/

Point libstdc++.so.6 at the bundled 6.0.24 version: ln -s /usr/lib64/libstdc++.so.6.0.24 /usr/lib64/libstdc++.so.6

Create the mysql group and a mysql user belonging to it: groupadd mysql && useradd -g mysql mysql

Change the owner and group of /usr/local/mysql, including all subdirectories and files, to mysql:mysql: chown -R mysql:mysql /usr/local/mysql

Enable start on boot:

cp -rf /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld

chmod +x /etc/init.d/mysqld

systemctl enable mysqld

Add the environment variables:

vim /etc/profile

export MYSQL_HOME=/usr/local/mysql

export PATH=$PATH:$MYSQL_HOME/bin

source /etc/profile

Initialize MySQL: mysqld --initialize-insecure --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data

Start MySQL: systemctl start mysqld

Check the status: systemctl status mysqld
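
One detail the walkthrough above leaves implicit: --initialize-insecure creates root with an empty password, so it is worth setting one right after the first successful start. A minimal sketch, not from the original article; the password shown is only a placeholder:

# first connection: --initialize-insecure leaves root with an empty password
mysql -uroot
# then, inside the mysql client, set a real password (placeholder value):
ALTER USER 'root'@'localhost' IDENTIFIED BY 'StrongPassword123!';
FLUSH PRIVILEGES;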

2.1 MySQL initialization fails

Focus on the my.cnf file, on whether every directory it references has been created, on their permissions, and on the parameters of the initialization command.
For a detailed walkthrough of my.cnf see: MySQL configuration file my.cnf / my.ini explained line by line
Content of that article:

MySQL configuration file explained
File location: there are small differences between Windows, Linux and Mac. On Windows the configuration file is a .ini file; on Mac/Linux it is a .cnf file.

[Windows]
MySQL\MySQL Server 5.7\my.ini

[Linux / Mac]
/etc/my.cnf
/etc/mysql/my.cnf
You can also ask MySQL where it looks for its default configuration files:

mysql --help|grep 'cnf'

[client]
Client settings; defaults used by client programs.

port = 3306
Default connection port, 3306.

socket = /tmp/mysql.sock
Socket file used for local connections.

default_character_set = utf8
Character set; utf8 is usually what you want.

[mysqld_safe]
mysqld_safe is a server-side tool that starts mysqld and acts as its watchdog: if mysqld is killed, mysqld_safe restarts it.

open_files_limit = 8192
Limit on the file descriptors MySQL may open. It is a global variable that cannot be changed dynamically and controls the maximum number of file descriptors the mysqld process can use. The default minimum is 1024.

Note that the effective value is not necessarily what you set here; mysqld takes the largest value the system allows.

When open_files_limit is not configured, the larger of max_connections*5 and ulimit -n is used.

When open_files_limit is configured, the larger of open_files_limit and max_connections*5 is used.

user = mysql
User the server runs as.

log-error = error.log
Error log file.

[mysqld]
Basic server settings.

port = 3306
Port the mysqld server listens on.

socket = /tmp/mysql.sock
Socket file for local communication between MySQL client programs and the server.

max_allowed_packet = 16M
Maximum packet size the server accepts, to protect it from oversized packets.

mysqld only allocates this memory when a long query is issued or a large result is returned, so raising it carries little risk. The default is 16M and it can be increased as needed, but very large values risk running out of memory. Keeping it small is a safety measure against the occasional huge packet exhausting memory.

default_storage_engine = InnoDB
Storage engine used by default when creating tables. It can also be set with --default-table-type.

max_connections = 512
Maximum number of concurrent connections the server allows. The default is 100; keeping it below 1000 is generally enough. Setting it too high consumes too much memory and can hang the MySQL server. As a reference, small sites use 100 - 300.

max_user_connections = 50
Maximum number of connections per user; the default of 50 is usually fine.

thread_cache_size = 64
Thread cache for idle threads, i.e. the number of threads kept in the cache for reuse. When a client disconnects and there is room in the cache, its thread is cached so it can be reused, which improves performance. It can be sized from physical memory, roughly: 1 GB -> 8; 2 GB -> 16; 3 GB -> 32; 4 GB -> 64, and so on.

Query Cache
query_cache_type = 1
0 disables the query cache (although a buffer of query_cache_size bytes is still allocated).
1 caches every SELECT unless SQL_NO_CACHE is specified.
2 caches only queries that carry the SQL_CACHE clause.
Note that if the server is started with the query cache disabled, it cannot be enabled at runtime.

query_cache_size = 64M
Size of the cache for SELECT statements and their result sets.

The query cache stores the text of a SELECT together with the result that was sent to the client.

If an identical query arrives later, the server returns the result from the cache instead of parsing and executing the query again.

If your workload has few writes and frequent reads, enabling the cache (query_cache_type=1) gives a clear performance boost; if writes are frequent, turn it off (query_cache_type=0).

Session variables
sort_buffer_size = 2M
Buffer MySQL uses for sorting; increasing it speeds up GROUP BY and ORDER BY.

tmp_table_size = 32M
Maximum size of in-memory (HEAP) temporary tables. Temporary tables that exceed this size are automatically converted from in-memory HEAP tables to on-disk MyISAM tables. Tuning tmp_table_size can speed up join queries.

read_buffer_size = 128k
Size of MySQL's sequential read buffer. If sequential scans of tables are frequent, increasing this value can improve performance.

read_rnd_buffer_size = 256k
Per-thread buffer for random reads from tables. The default is 256k; 128 - 256k is typical. ORDER BY operations use read_rnd_buffer_size as temporary buffer space.

join_buffer_size = 128k
Applications frequently run two-table or multi-table joins. The join buffer helps reduce the number of reads a join has to perform. When the join buffer is too small, MySQL does not write it to a disk file. Like sort_buffer_size, this buffer is allocated per connection.

table_definition_cache = 400
Number of table definitions that can be cached without using file descriptors.

table_open_cache = 400
Number of tables kept open in memory across all threads.

MySQL error log settings
log_error = error.log
log_warnings = 2
log_warnings = 0: warnings are not logged.
log_warnings = 1: warnings are written to the error log.
log_warnings > 1: all kinds of warnings, for example network failures and reconnections, are written to the error log.

Slow query log
slow_query_log_file = slow.log
slow_query_log = 0
log_queries_not_using_indexes = 1
long_query_time = 0.5
min_examined_row_limit = 100
slow_query_log: enables the slow query log globally.
slow_query_log_file: path and file name of the slow query log file.
log_queries_not_using_indexes: statements that do not use an index are logged regardless of whether they exceed the time threshold.
long_query_time: slow query threshold in seconds; SQL that runs longer than this is logged.
min_examined_row_limit: only slow queries that examine more rows than this value are logged.
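
Pulling the directives above together, a minimal my.cnf for the /usr/local/mysql layout used earlier might look like the sketch below. The values are the ones discussed above; the basedir/datadir/log paths are my assumption based on that layout, not taken from the original article.

[client]
port = 3306
socket = /tmp/mysql.sock
default_character_set = utf8

[mysqld_safe]
user = mysql
log-error = /usr/local/mysql/logs/error.log

[mysqld]
port = 3306
socket = /tmp/mysql.sock
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
default_storage_engine = InnoDB
max_allowed_packet = 16M
max_connections = 512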

2.2 MySQL fails to start

Error message:
Job for mysqld.service failed because the control process exited with error code. See "systemctl status mysqld.service" and "journalctl -xe" for details.
How to approach it:
This is usually a permission problem on the directory that holds mysqld.pid. Grant access to the mysql:mysql user and group that were created during installation.
References:
[1]: How to fix the "Job for mysqld.service failed because the control process exited with error code" error
[2]: MySQL fails to start with "Job for mysqld.service failed because the control process exited with error code."
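
For the layout used here, the check and fix is roughly the following sketch; the exact path depends on where pid-file points in your my.cnf, and the data directory shown is an assumption:

# see which directory the pid file should live in, then hand it to mysql:mysql
grep -i pid /etc/my.cnf
ls -ld /usr/local/mysql/data
chown -R mysql:mysql /usr/local/mysql/data
systemctl restart mysqld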

2.3 Cannot open the error log on startup

Error message:
ERROR Could not open file '***/log/mysql/error.log' for error logging: Permission denied
Cause: permissions on the log directory. Check with chown/chmod that the mysql user and group can actually write to it.
References:
[1]: MySQL will not start on CentOS
[2]: MySQL inside Docker fails with Could not open file '/var/log/mysqld.log' for error logging: Permission denied
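
In this article's layout the fix amounts to something like the sketch below; adjust the path to whatever log-error points at in your my.cnf:

chown -R mysql:mysql /usr/local/mysql/logs
chmod 750 /usr/local/mysql/logs
systemctl restart mysqld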

2.4 PID error when starting MySQL

Error message:
Starting MySQL... ERROR The server quit without updating PID file
Reference:
"ERROR The server quit without updating PID file" keeps appearing when starting the MySQL service
That article walks through the source of several MySQL startup scripts in great detail. It explains how user permissions affect MySQL initialization, the role the PID file plays there, and how the relevant directories are configured in my.cnf. When restarting, you can start MySQL with mysql.server or with mysqld_safe.
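
Following that advice, a sketch of starting through the scripts directly instead of systemd so the real error surfaces; the paths assume the /usr/local/mysql layout above:

# the init script installed earlier (a copy of mysql.server)
/etc/init.d/mysqld start
# or run the supervisor directly
/usr/local/mysql/bin/mysqld_safe --user=mysql &
# the underlying cause is usually spelled out in the error log
tail -n 50 /usr/local/mysql/logs/error.log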

3. Hive

3.1 Refresh permissions after changing core-site.xml for Hive

These two property names in core-site.xml:
hadoop.proxyuser.root.hosts
hadoop.proxyuser.root.groups
must be distributed to the whole cluster and the permissions refreshed; some documents say the ResourceManager's superuser group configuration needs to be refreshed with yarn rmadmin as well.
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
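
For reference, a sketch of the two properties and the refresh commands. Opening them to * is the common quick fix (tighten to specific hosts and groups as needed); the values here are my assumption, not the cluster's actual ones:

<!-- core-site.xml, distributed to every node -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>

# then refresh on the active NameNode and the ResourceManager
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration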

3.2 Errors when starting the Hive metastore

The root cause lies in the path settings in hive-site.xml:
hive.metastore.uris
hive.metastore.warehouse.dir
hive.exec.scratchdir
Check whether the HDFS warehouse path and the metastore path are prefixed with mycluster, cluster, or no nameservice at all, and whether the nameservice name or the HA property names are misspelled.
Some host names may also be missing a correct mapping.
There are plenty of documents to consult; the details depend on whether the cluster is configured for high availability.
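
Purely as an illustration, assuming an HA nameservice called mycluster and a metastore service on hadoop01 (both names are placeholders, not my cluster's real values), the three properties would look like:

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hadoop01:9083</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://mycluster/user/hive/warehouse</value>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>hdfs://mycluster/tmp/hive</value>
</property>

The nameservice in these values has to match fs.defaultFS and dfs.nameservices exactly, which is precisely where the misspellings described above bite.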

3.3 Hive schema initialization against MySQL fails

3.3.1 Error message

Initialization command: schematool -dbType mysql -initSchema

Underlying cause: java.sql.SQLException : null,  message from server: "Host 'hadoop01' is not allowed to connect to this MySQL server"
SQL Error code: 1130

3.3.2 Cause

A MySQL privilege problem.

3.3.3 Reference

Grant the privileges and then flush; once root has a '%' host entry it works.
Fixing "1130 - Host XXX is not allowed to connect to this MySQL server" when connecting to a MySQL server
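
The grant that reference describes looks roughly like the sketch below, run in the mysql client on the database host. The password is a placeholder, and opening root to '%' is convenient for a lab cluster but not something to do on a production network:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'YourPassword' WITH GRANT OPTION;
FLUSH PRIVILEGES;
-- confirm a '%' row now exists for root
SELECT host, user FROM mysql.user;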

3.4 The mapreduce_shuffle aux service does not exist

3.4.1 Error message: org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist

The logs below are what I reproduced on my own cluster after changing the configuration; they run the official MapReduce wordcount and pi examples:

[root@hadoop11 data]# hadoop jar /opt/installs/hadoop3.1.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.4.jar wordcount /wc.txt /out3
2023-10-09 15:42:20,213 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1696837267350_0001
2023-10-09 15:42:20,536 INFO input.FileInputFormat: Total input files to process : 1
2023-10-09 15:42:20,684 INFO mapreduce.JobSubmitter: number of splits:1
2023-10-09 15:42:20,955 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1696837267350_0001
2023-10-09 15:42:20,959 INFO mapreduce.JobSubmitter: Executing with tokens: []
2023-10-09 15:42:21,212 INFO conf.Configuration: resource-types.xml not found
2023-10-09 15:42:21,213 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2023-10-09 15:42:21,620 INFO impl.YarnClientImpl: Submitted application application_1696837267350_0001
2023-10-09 15:42:21,743 INFO mapreduce.Job: The url to track the job: http://hadoop13:8088/proxy/application_1696837267350_0001/
2023-10-09 15:42:21,745 INFO mapreduce.Job: Running job: job_1696837267350_0001
2023-10-09 15:42:30,000 INFO mapreduce.Job: Job job_1696837267350_0001 running in uber mode : false
2023-10-09 15:42:30,002 INFO mapreduce.Job:  map 0% reduce 0%
2023-10-09 15:42:32,053 INFO mapreduce.Job: Task Id : attempt_1696837267350_0001_m_000000_0, Status : FAILED
Container launch failed for container_e50_1696837267350_0001_01_000002 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:163)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:394)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

2023-10-09 15:42:33,087 INFO mapreduce.Job: Task Id : attempt_1696837267350_0001_m_000000_1, Status : FAILED
Container launch failed for container_e50_1696837267350_0001_01_000003 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:163)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:394)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

2023-10-09 15:42:35,113 INFO mapreduce.Job: Task Id : attempt_1696837267350_0001_m_000000_2, Status : FAILED
Container launch failed for container_e50_1696837267350_0001_01_000004 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:163)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:394)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

2023-10-09 15:42:38,147 INFO mapreduce.Job:  map 100% reduce 100%
2023-10-09 15:42:39,167 INFO mapreduce.Job: Job job_1696837267350_0001 failed with state FAILED due to: Task failed task_1696837267350_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0

2023-10-09 15:42:39,245 INFO mapreduce.Job: Counters: 10
        Job Counters
                Failed map tasks=4
                Killed reduce tasks=1
                Launched map tasks=4
                Other local map tasks=3
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=5
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=5
                Total vcore-milliseconds taken by all map tasks=5
                Total megabyte-milliseconds taken by all map tasks=5120
[root@hadoop11 data]#

[root@hadoop11 data]# hadoop jar /opt/installs/hadoop3.1.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.4.jar pi 1 10
Number of Maps  = 1
Samples per Map = 10
Wrote input for Map #0
Starting Job
2023-10-09 15:45:28,177 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1696837267350_0002
2023-10-09 15:45:28,335 INFO input.FileInputFormat: Total input files to process : 1
2023-10-09 15:45:28,441 INFO mapreduce.JobSubmitter: number of splits:1
2023-10-09 15:45:28,629 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1696837267350_0002
2023-10-09 15:45:28,631 INFO mapreduce.JobSubmitter: Executing with tokens: []
2023-10-09 15:45:28,856 INFO conf.Configuration: resource-types.xml not found
2023-10-09 15:45:28,857 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2023-10-09 15:45:28,939 INFO impl.YarnClientImpl: Submitted application application_1696837267350_0002
2023-10-09 15:45:29,009 INFO mapreduce.Job: The url to track the job: http://hadoop13:8088/proxy/application_1696837267350_0002/
2023-10-09 15:45:29,011 INFO mapreduce.Job: Running job: job_1696837267350_0002
2023-10-09 15:45:36,147 INFO mapreduce.Job: Job job_1696837267350_0002 running in uber mode : false
2023-10-09 15:45:36,149 INFO mapreduce.Job:  map 0% reduce 0%
2023-10-09 15:45:37,190 INFO mapreduce.Job: Task Id : attempt_1696837267350_0002_m_000000_0, Status : FAILED
Container launch failed for container_e50_1696837267350_0002_01_000002 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:163)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:394)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

2023-10-09 15:45:39,235 INFO mapreduce.Job: Task Id : attempt_1696837267350_0002_m_000000_1, Status : FAILED
Container launch failed for container_e50_1696837267350_0002_01_000003 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:163)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:394)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

2023-10-09 15:45:41,260 INFO mapreduce.Job: Task Id : attempt_1696837267350_0002_m_000000_2, Status : FAILED
Container launch failed for container_e50_1696837267350_0002_01_000004 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:163)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:394)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

2023-10-09 15:45:44,293 INFO mapreduce.Job:  map 100% reduce 100%
2023-10-09 15:45:44,307 INFO mapreduce.Job: Job job_1696837267350_0002 failed with state FAILED due to: Task failed task_1696837267350_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0

2023-10-09 15:45:44,387 INFO mapreduce.Job: Counters: 10
        Job Counters
                Failed map tasks=4
                Killed reduce tasks=1
                Launched map tasks=4
                Other local map tasks=3
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=5
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=5
                Total vcore-milliseconds taken by all map tasks=5
                Total megabyte-milliseconds taken by all map tasks=5120
Job job_1696837267350_0002 failed!
[root@hadoop11 data]#

3.4.2 Viewing the YARN jobs:

(screenshot: the application list in the YARN web UI)

3.4.3 Problem description

I first ran into this mapreduce_shuffle error when a query that should run as MapReduce was executed in beeline and never actually went through MR. Testing showed that the cause was the MapReduce-related YARN configuration in Hadoop: the colleague who originally installed Hadoop on these servers only used Spark, so YARN had only ever recorded successful Spark applications and had never run an MR job. The problem was clear: YARN's configuration was missing the mapreduce_shuffle service.
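
The missing piece is the aux-service entry in yarn-site.xml on every NodeManager. A sketch of the standard configuration; the spark_shuffle entry is only needed if the external Spark shuffle service is in use (and additionally requires the Spark YARN shuffle jar on the NodeManager classpath):

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>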

There are quite a few solutions online; the configuration described in reference [1] covers both the Spark and the MR shuffle services. But even after applying it, the error that the mr shuffle does not exist persisted.

So I checked the YARN configuration, as shown below:

(screenshot YARN-1)
Only one mr shuffle entry showed up, and it was not the one I had added to yarn-site.xml.
The cause was once again that the YARN configuration file had not taken effect.

Further checking showed that when I restarted YARN, stop-yarn.sh reported "no resourcemanager to stop". Because the cluster has so many nodes, I had never looked closely at YARN's shutdown messages, so after distributing the new YARN configuration the ResourceManager had in fact never been restarted.

I then fixed the problem that prevented YARN from restarting; see reference [2] for the cause.
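
With that fixed, redistributing yarn-site.xml and restarting YARN is enough. A sketch, run on the ResourceManager node (script locations depend on the install):

stop-yarn.sh
start-yarn.sh
# confirm the NodeManagers re-registered
yarn node -list
# and check the live configuration at http://<rm-host>:8088/conf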

Checking the YARN configuration at port 8088 again, two mr shuffle entries now showed up, one of which was exactly the one I had added.

(screenshots: the YARN configuration page showing both mr shuffle entries, and the "no resourcemanager to stop" message from stop-yarn.sh)

3.4.4 References

[1] AWS EMR S3DistCp: The auxService:mapreduce_shuffle does not exist
[2] no resourcemanager to stop


