hadoop-mr runtime error notes

While running a Hadoop MapReduce job, the task failed with a `java.lang.ClassCastException` on `com.atguigu.reducer.groupomparator.OrderBean`. The cause: `OrderBean` is used as the map-output key but does not implement the `WritableComparable` interface, so Hadoop cannot derive a key comparator for the shuffle sort (the `asSubclass` call in the stack trace below is exactly that check). The fix is to make `OrderBean` implement `WritableComparable`, overriding the serialization methods `write`/`readFields` as well as a `compareTo` method that defines the sort order; a sketch follows the stack trace.

```
java.lang.ClassCastException: class com.atguigu.reducer.groupomparator.OrderBean
    at java.base/java.lang.Class.asSubclass(Class.java:3769)
    at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:887)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:1004)
    at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:402)
    at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:698)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.base/java.util.concurren
```
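As a hedged sketch of the fix, assuming `OrderBean` follows the classic order-grouping exercise and carries an order id plus a price (the actual fields are not shown in the post), the key class could look like this:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

// Map-output key for the order job. JobConf.getOutputKeyComparator()
// calls asSubclass(WritableComparable.class) on the key class, so a
// key that only implements Writable fails with exactly the
// ClassCastException shown above.
public class OrderBean implements WritableComparable<OrderBean> {

    private int orderId;   // assumed field: the order id
    private double price;  // assumed field: the item price

    public OrderBean() {
        // Writable types need a public no-arg constructor for reflection
    }

    // Serialization: the field order must match readFields() exactly.
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(orderId);
        out.writeDouble(price);
    }

    // Deserialization: read the fields back in the same order.
    @Override
    public void readFields(DataInput in) throws IOException {
        orderId = in.readInt();
        price = in.readDouble();
    }

    // Shuffle sort order: ascending by orderId, then descending by
    // price, so each order's most expensive record arrives first.
    @Override
    public int compareTo(OrderBean o) {
        int cmp = Integer.compare(orderId, o.orderId);
        return cmp != 0 ? cmp : -Double.compare(price, o.price);
    }

    public int getOrderId() {
        return orderId;
    }

    public void setOrderId(int orderId) {
        this.orderId = orderId;
    }

    public double getPrice() {
        return price;
    }

    public void setPrice(double price) {
        this.price = price;
    }
}
```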

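The package name `groupomparator` suggests the job also registers a grouping comparator, so that all records of one order reach a single reduce() call even though their keys differ in price. A sketch under that assumption (the class name `OrderGroupingComparator` is hypothetical, not from the original post):

```java
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Groups map-output keys by orderId only, ignoring price, so that
// one reduce() call sees every record of a given order.
public class OrderGroupingComparator extends WritableComparator {

    protected OrderGroupingComparator() {
        // true = create key instances so compare() receives real OrderBeans
        super(OrderBean.class, true);
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        OrderBean x = (OrderBean) a;
        OrderBean y = (OrderBean) b;
        return Integer.compare(x.getOrderId(), y.getOrderId());
    }
}
```

In the driver this would be registered with `job.setGroupingComparatorClass(OrderGroupingComparator.class);` alongside `job.setMapOutputKeyClass(OrderBean.class);`.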
A separate series of errors came up while restarting the cluster, first as root and then after switching to the regular user `localhost`:

```bash
[root@slave1 ~]# cd $HADOOP_HOME
[root@slave1 hadoop-3.1.4]# sbin/start-dfs.sh
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Starting namenodes on [master]
Last login: Sat Nov  8 01:49:01 CST 2025 on pts/0
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
master: WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
master: namenode is running as process 12174. Stop it first.
Starting datanodes
ERROR: Refusing to run as root: ROOT account is not found. Aborting.
Starting secondary namenodes [master]
Last login: Sat Nov  8 01:51:22 CST 2025 on pts/0
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
master: WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
master: secondarynamenode is running as process 12365. Stop it first.
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
[root@slave1 hadoop-3.1.4]# sbin/start-yarn.sh
Starting resourcemanager
Last login: Sat Nov  8 01:51:26 CST 2025 on pts/0
Starting nodemanagers
Last login: Sat Nov  8 01:51:32 CST 2025 on pts/0
slave2: ssh: connect to host slave2 port 22: No route to host
slave3: ssh: connect to host slave3 port 22: No route to host
[root@slave1 hadoop-3.1.4]# sbin/mr-jobhistory-daemon.sh start historyserver
WARNING: Use of this script to start the MR JobHistory daemon is deprecated.
WARNING: Attempting to execute replacement "mapred --daemon start" instead.
[root@slave1 hadoop-3.1.4]# jps
3425 NodeManager
3713 Jps
11486 -- process information unavailable
[root@slave1 hadoop-3.1.4]# ^C
[root@slave1 hadoop-3.1.4]# jps
3425 NodeManager
4164 Jps
11486 -- process information unavailable
[root@slave1 hadoop-3.1.4]# # switch from root to the regular user localhost
[root@slave1 hadoop-3.1.4]# su - localhost
Last login: Sat Nov  8 01:29:03 CST 2025 from slave1 on pts/3
[localhost@slave1 ~]$ # enter the Hadoop directory
[localhost@slave1 ~]$ cd $HADOOP_HOME
[localhost@slave1 hadoop-3.1.4]$
[localhost@slave1 hadoop-3.1.4]$ # stop all services first (avoid leftover processes)
[localhost@slave1 hadoop-3.1.4]$ sbin/stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as localhost in 10 seconds.
WARNING: Use CTRL-C to abort.
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Stopping namenodes on [master]
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
master: WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Stopping datanodes
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
slave2: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
slave3: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
slave1: WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
slave1: ERROR: User defined in HDFS_DATANODE_SECURE_USER (yarn) does not exist. Aborting.
Stopping secondary namenodes [master]
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
master: WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Stopping nodemanagers
ERROR: nodemanager can only be executed by root.
Stopping resourcemanager
ERROR: resourcemanager can only be executed by root.
[localhost@slave1 hadoop-3.1.4]$
[localhost@slave1 hadoop-3.1.4]$ # restart HDFS (focus on fixing the DataNode startup)
[localhost@slave1 hadoop-3.1.4]$ sbin/start-dfs.sh
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Starting namenodes on [master]
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
master: WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
master: ERROR: Unable to write in /usr/local/hadoop-3.1.4/logs. Aborting.
Starting datanodes
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
slave3: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
slave2: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
slave1: WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
slave1: ERROR: User defined in HDFS_DATANODE_SECURE_USER (yarn) does not exist. Aborting.
Starting secondary namenodes [master]
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
master: WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
master: ERROR: Unable to write in /usr/local/hadoop-3.1.4/logs. Aborting.
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
[localhost@slave1 hadoop-3.1.4]$
[localhost@slave1 hadoop-3.1.4]$ # check processes on slave1 (as user localhost)
[localhost@slave1 hadoop-3.1.4]$ jps
5726 Jps
[localhost@slave1 hadoop-3.1.4]$ # inspect the DataNode log as user localhost on slave1
[localhost@slave1 hadoop-3.1.4]$ cat $HADOOP_HOME/logs/hadoop-localhost-datanode-slave1.log
cat: /usr/local/hadoop-3.1.4/logs/hadoop-localhost-datanode-slave1.log: No such file or directory
```
The errors in the session above break down into the following categories.

### Startup errors as the root user

Starting Hadoop as root tends to fail on permission and configuration checks. Running the daemons as root is generally discouraged: it is a security risk and goes against Hadoop's design. Instead, create a dedicated regular user (such as a `hadoop` user) and grant it the permissions needed to run the services. For example, create the user and set its password:

```bash
useradd hadoop
passwd hadoop
```

Then add the user to the sudoers file so it has sufficient privileges:

```bash
visudo
# add the following line to the file
hadoop ALL=(ALL) NOPASSWD: ALL
```

If the daemons must run as root, Hadoop 3.x requires the `HDFS_NAMENODE_USER`, `HDFS_DATANODE_USER`, `HDFS_SECONDARYNAMENODE_USER`, `YARN_RESOURCEMANAGER_USER` and `YARN_NODEMANAGER_USER` variables to be set (for example in `hadoop-env.sh`). The repeated `HADOOP_SECURE_DN_USER` warnings above mean the deprecated secure-DataNode variable is still set, and `User defined in HDFS_DATANODE_SECURE_USER (yarn) does not exist` means it points at a nonexistent user; unless the cluster runs Kerberized secure DataNodes, that setting should be removed.

### Permission problems for a regular user

A regular user may hit insufficient permissions when starting or stopping the services. Make sure the user has read, write and execute permission on the Hadoop installation directory and everything under it (the session above uses `/usr/local/hadoop-3.1.4`):

```bash
chown -R hadoop:hadoop /usr/local/hadoop-3.1.4
```

This command hands ownership of the `/usr/local/hadoop-3.1.4` directory tree to the `hadoop` user and group.

### ssh connection failures

A Hadoop cluster relies on ssh for node-to-node control. Make sure every node can be reached over passwordless ssh. First generate an ssh key pair on the master node:

```bash
ssh-keygen -t rsa
```

Then copy the public key to every node:

```bash
ssh-copy-id hadoop@node1
ssh-copy-id hadoop@node2
# and so on; node1, node2, ... are the cluster's node names
```

Also verify that the sshd service is running on each node:

```bash
systemctl status sshd
systemctl start sshd
```

Note the two distinct failures in the session above: `Permission denied (publickey,...)` means the current user's key has not been copied to the target node, while `No route to host` for slave2/slave3 is a network or firewall problem rather than an ssh-key problem.

### Insufficient permission to write logs

If the user cannot write to the log directory, daemons abort at startup (the `Unable to write in /usr/local/hadoop-3.1.4/logs. Aborting.` error above). Give the regular user write access to the log directory:

```bash
chown -R hadoop:hadoop /usr/local/hadoop-3.1.4/logs
```

### DataNode fails to start

A DataNode can fail to start for several reasons; the most common are data-directory permissions and inconsistent metadata between the NameNode and DataNode. First check that the regular user can read and write the data directory:

```bash
chown -R hadoop:hadoop /usr/local/hadoop-3.1.4/data
```

If the metadata is inconsistent, the NameNode can be reformatted and the services restarted. Back up any important data before formatting:

```bash
hdfs namenode -format
```

Then start HDFS:

```bash
sbin/start-dfs.sh
```

### Commands for starting and stopping the services

Start HDFS:

```bash
sbin/start-dfs.sh
```

Start YARN:

```bash
sbin/start-yarn.sh
```

Stop HDFS:

```bash
sbin/stop-dfs.sh
```

Stop YARN:

```bash
sbin/stop-yarn.sh
```