java.net.ConnectException when running a MapReduce job in Hadoop

about云开发

Title: MapReduce error java.net.ConnectException: Connection refused


Author: Wyy_Ck    Posted: 2016-10-31 15:13
Title: MapReduce error java.net.ConnectException: Connection refused
I've been at this for half a day. The system is CentOS 7, and I just wanted to run a quick test as follows (the input file is already uploaded to /data/input):
hadoop jar /opt/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar  wordcount /data/input /opt/test/result
It fails with the error below, and I'm getting anxious. Please take a look:


16/10/30 10:47:11 INFO mapreduce.Job: Job job_1477704898495_0008 failed with state FAILED due to: Application application_1477704898495_0008 failed 2 times due to Error launching appattempt_1477704898495_0008_000002. Got exception: java.net.ConnectException: Call From master/10.162.30.129 to localhost:36109 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1407)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy83.startContainers(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1446)



Thanks, everyone!
 


Author: langke93    Posted: 2016-10-31 15:23
There are many possible causes:
1. Make sure the cluster is actually running, and check for zombie processes.
2. Is the firewall disabled?
3. Is permission checking in hdfs-site.xml set to false?
4. How is the hosts file configured?

If you are unsure about any of the four items above, it is best to post the output for all of them.
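The four checks above can be scripted from a shell on the master node. A minimal sketch; the hdfs-site.xml path is an assumption inferred from the jar path posted earlier, so adjust it to your install:

```shell
# Minimal sketch of the four checks above (run on the master node).
# The hdfs-site.xml path is an assumption; adjust to your installation.
check() { echo "== $1 =="; shift; "$@" 2>&1 || true; }

check "Java daemons (NameNode/ResourceManager here; DataNode/NodeManager on slaves)" jps
check "Firewall (should be inactive)" systemctl is-active firewalld
check "HDFS permission checking (should be false)" grep -A1 dfs.permissions /opt/hadoop/hadoop-2.7.0/etc/hadoop/hdfs-site.xml
check "Host name mapping" cat /etc/hosts
```

Each section is labeled so the output can be pasted back into the thread in one piece.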


Author: Wyy_Ck    Posted: 2016-10-31 15:28

langke93 posted at 2016-10-31 15:23:
There are many possible causes:
1. Make sure the cluster is actually running, and check for zombie processes.
2. Is the firewall disabled?


Thanks for your reply; I'll post all the relevant configuration shortly. It just feels like there could be so many problems that I don't know where to start...
 


Author: Wyy_Ck    Posted: 2016-10-31 17:31
1. master:

[hadoop@master hadoop]$ jps
91651 NameNode
91891 SecondaryNameNode
111023 Jps
92078 ResourceManager



slave:

[root@slave hdfs]# jps
28647 DataNode
28792 NodeManager
44376 Jps



2. The firewall on this CentOS system has already been disabled with systemctl stop firewalld.service:
   

[hadoop@master wordcount]$ systemctl status firewalld.service
firewalld.service
   Loaded: masked (/dev/null)
   Active: inactive (dead) since Wed 2016-10-12 17:27:10 CST; 2 weeks 4 days ago
 Main PID: 816 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/firewalld.service


3. hdfs-site.xml on both master and slave nodes:
   

<configuration>
    <property>
       <name>dfs.namenode.name.dir</name>
       <value>file:///data/hadoop/storage/hdfs/name</value>
    </property>
    <property>
       <name>dfs.datanode.data.dir</name>
       <value>file:///data/hadoop/storage/hdfs/data</value>
    </property>
    <property>
       <name>dfs.datanode.http-address</name>
       <value>10.162.30.162:50075</value>
    </property>
    <property>
       <name>dfs.permissions</name>
       <value>false</value>
    </property>
    <property>
      <name>dfs.secondary.http.address</name>
      <value>master:50090</value>
    </property>
    <property>
      <name>dfs.http.address</name>
      <value>master:50070</value>
    </property>
    <property>
       <name>dfs.datanode.ipc.address</name>
       <value>10.162.30.162:50020</value>
    </property>
    <property>
       <name>dfs.datanode.address</name>
       <value>10.162.30.162:50010</value>
    </property>
</configuration>



4. hosts file:
   

#127.0.0.1 localhost
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.162.30.129 master
10.162.30.162 slave
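Given the "Call From master/10.162.30.129 to localhost:..." message, one thing worth verifying on each node is that the node's own hostname does not resolve to the loopback address; if it does, the NodeManager registers itself with the ResourceManager as "localhost", and container launches are then sent to the wrong machine. A quick check (getent is assumed available, as on most Linux systems):

```shell
# Run on EACH node: the node's own hostname should resolve to its LAN IP,
# not to 127.0.0.1/localhost.
h=$(hostname)
echo "hostname: $h"
getent hosts "$h" || echo "WARNING: $h does not resolve at all"
```

If the first column printed by getent is 127.0.0.1, fix /etc/hosts on that node so the hostname maps to its real IP.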

 


Author: Wyy_Ck    Posted: 2016-10-31 17:39

Wyy_Ck posted at 2016-10-31 15:28:
Thanks for your reply; I'll post all the relevant configuration shortly. It just feels like there could be so many problems that I don't know where to start...


I've replied below. With this configuration it still fails.


Author: nextuser    Posted: 2016-10-31 21:09

Wyy_Ck posted at 2016-10-31 17:39:
I've replied below. With this configuration it still fails.


Call From master/10.162.30.129 to localhost:36109
Where is that port configured?


Author: Wyy_Ck    Posted: 2016-10-31 21:23

nextuser posted at 2016-10-31 21:09:
Call From master/10.162.30.129 to localhost:36109
Where is that port configured?


I never configured that port, and it is different every time after hadoop namenode -format. I haven't been able to find where it comes from either.
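For what it's worth, the changing port matches the NodeManager's RPC address: in Hadoop 2.x, yarn.nodemanager.address defaults to ${yarn.nodemanager.hostname}:0, i.e. an ephemeral port picked at every NodeManager start, which is why it differs between runs. If desired it can be pinned in yarn-site.xml on the slaves; a sketch, where 8041 is an arbitrary example port:

```xml
<!-- yarn-site.xml on each slave node; 8041 is an arbitrary example port -->
<property>
    <name>yarn.nodemanager.address</name>
    <value>0.0.0.0:8041</value>
</property>
```

Pinning the port does not by itself fix the refusal; the real question is why the ResourceManager believes the NodeManager lives on "localhost".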


Author: nextuser    Posted: 2016-11-1 08:30

Wyy_Ck posted at 2016-10-31 21:23:
I never configured that port, and it is different every time after hadoop namenode -format. I haven't been able to find where ...


The key to the problem has been identified. Try to resolve it if you can.
If that doesn't work, post the relevant configuration, especially the HDFS access ports.


Author: Wyy_Ck    Posted: 2016-11-1 09:43

nextuser posted at 2016-11-1 08:30:
The key to the problem has been identified. Try to resolve it if you can.
If that doesn't work, post the relevant configuration, especially the HDFS access ports.


All configuration:
core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/storage/tmp</value>
    </property>
</configuration>

hdfs-site.xml: already posted above.

mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>


yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
</configuration>



The port in the current error message has changed again. Does it need to be configured somewhere, and otherwise a random port is used?

16/11/01 09:29:53 INFO mapreduce.Job: Job job_1477905891965_0005 failed with state FAILED due to: Application application_1477905891965_0005 failed 2 times due to Error launching appattempt_1477905891965_0005_000002. Got exception: java.net.ConnectException: Call From master/10.162.30.129 to localhost:47222 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 




Author: Wyy_Ck    Posted: 2016-11-1 10:44

nextuser posted at 2016-11-1 08:30:
The key to the problem has been identified. Try to resolve it if you can.
If that doesn't work, post the relevant configuration, especially the HDFS access ports.


Call From master/10.162.30.129 to localhost:50868 failed on connection exception
The failing port is different on every MapReduce run?


Author: Wyy_Ck    Posted: 2016-11-1 11:29
Hold on, I'll post all of the xml configuration.


Author: nextuser    Posted: 2016-11-1 15:57

Wyy_Ck posted at 2016-11-1 10:44:
Call From master/10.162.30.129 to localhost:50868 failed on connection exception
The failing port is different on every Mapr ...


The replication factor,
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
is not configured. Which document did you follow, or did you put the configuration together yourself?
Also, you configured only two nodes?
Three would be better.
For the configuration you can refer to:
hadoop(2.x)以hadoop2.2为例完全分布式最新高可靠安装文档 (fully distributed, high-reliability installation guide for hadoop 2.x, using hadoop 2.2 as the example)
http://www.aboutyun.com/forum.php?mod=viewthread&tid=7684



 


Author: nextuser    Posted: 2016-11-1 16:06

Wyy_Ck posted at 2016-11-1 10:44:
Call From master/10.162.30.129 to localhost:50868 failed on connection exception
The failing port is different on every Mapr ...


Also, what does this have to do with formatting the namenode?
Did the format actually succeed?
Post the output so we can take a look.
And post a screenshot of the error you are getting.


Author: Wyy_Ck    Posted: 2016-11-2 16:25

nextuser posted at 2016-11-1 15:57:
The replication factor,

    dfs.replication


I added another slave node, so there are now two slaves. Without modifying anything else, it just works! See below. Is this all OK now?

[hadoop@master ~]$ hadoop jar /opt/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar  wordcount /data/input /data/output
16/11/02 16:17:21 INFO client.RMProxy: Connecting to ResourceManager at master/10.162.30.129:8032
16/11/02 16:17:21 INFO input.FileInputFormat: Total input paths to process : 1
16/11/02 16:17:21 INFO mapreduce.JobSubmitter: number of splits:1
16/11/02 16:17:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1478068499295_0002
16/11/02 16:17:22 INFO impl.YarnClientImpl: Submitted application application_1478068499295_0002
16/11/02 16:17:22 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1478068499295_0002/
16/11/02 16:17:22 INFO mapreduce.Job: Running job: job_1478068499295_0002
16/11/02 16:17:27 INFO mapreduce.Job: Job job_1478068499295_0002 running in uber mode : false
16/11/02 16:17:27 INFO mapreduce.Job:  map 0% reduce 0%
16/11/02 16:17:33 INFO mapreduce.Job:  map 100% reduce 0%
16/11/02 16:17:39 INFO mapreduce.Job:  map 100% reduce 100%
16/11/02 16:17:39 INFO mapreduce.Job: Job job_1478068499295_0002 completed successfully
16/11/02 16:17:39 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=57
                FILE: Number of bytes written=229629
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=135
                HDFS: Number of bytes written=31
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=4077
                Total time spent by all reduces in occupied slots (ms)=2946
                Total time spent by all map tasks (ms)=4077
                Total time spent by all reduce tasks (ms)=2946
                Total vcore-seconds taken by all map tasks=4077
                Total vcore-seconds taken by all reduce tasks=2946
                Total megabyte-seconds taken by all map tasks=4174848
                Total megabyte-seconds taken by all reduce tasks=3016704
        Map-Reduce Framework
                Map input records=1
                Map output records=7
                Map output bytes=59
                Map output materialized bytes=57
                Input split bytes=104
                Combine input records=7
                Combine output records=5
                Reduce input groups=5
                Reduce shuffle bytes=57
                Reduce input records=5
                Reduce output records=5
                Spilled Records=10
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=131
                CPU time spent (ms)=1580
                Physical memory (bytes) snapshot=435048448
                Virtual memory (bytes) snapshot=4218990592
                Total committed heap usage (bytes)=311951360
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=31
        File Output Format Counters
                Bytes Written=31
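To double-check the result, the output can be printed back from HDFS. part-r-00000 is the default name of the single reducer's output file; the snippet is guarded so it degrades gracefully when run off-cluster:

```shell
# Print the wordcount result produced by the run above.
if command -v hadoop >/dev/null 2>&1; then
    result=$(hadoop fs -cat /data/output/part-r-00000 2>&1)
else
    result="hadoop CLI not on PATH (run this on a cluster node)"
fi
echo "$result"
```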

 


Author: nextuser    Posted: 2016-11-2 16:53

Wyy_Ck posted at 2016-11-2 16:25:
I added another slave node, so there are now two slaves. Without modifying anything else, it just works! See below ...

[ ...


Correct, that's exactly it.


Author: Wyy_Ck    Posted: 2016-11-2 18:12

nextuser posted at 2016-11-2 16:53:
Correct, that's exactly it.


Now it just hangs:

[hadoop@master ~]$ hadoop fs -put /opt/test/wordcount/wordcount /data/input
[hadoop@master ~]$ hadoop jar /opt/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar  wordcount /data/input /data/output
16/11/02 18:04:05 INFO client.RMProxy: Connecting to ResourceManager at master/10.162.30.129:8032
16/11/02 18:04:06 INFO input.FileInputFormat: Total input paths to process : 1
16/11/02 18:04:06 INFO mapreduce.JobSubmitter: number of splits:1
16/11/02 18:04:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1478080130552_0008
16/11/02 18:04:07 INFO impl.YarnClientImpl: Submitted application application_1478080130552_0008
16/11/02 18:04:07 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1478080130552_0008/
16/11/02 18:04:07 INFO mapreduce.Job: Running job: job_1478080130552_0008
16/11/02 18:04:24 INFO mapreduce.Job: Job job_1478080130552_0008 running in uber mode : false
16/11/02 18:04:24 INFO mapreduce.Job:  map 0% reduce 0%
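A job that sits at "map 0% reduce 0%" usually means no NodeManager is able to take the container. A first step is to confirm that every NodeManager is registered with the ResourceManager and reports usable resources; guarded so the snippet degrades gracefully off-cluster:

```shell
# List the NodeManagers known to the ResourceManager; every slave
# should appear here with state RUNNING and nonzero available memory.
if command -v yarn >/dev/null 2>&1; then
    nodes=$(yarn node -list -all 2>&1)
else
    nodes="yarn CLI not on PATH (run this on a cluster node)"
fi
echo "$nodes"
```

If a slave is missing or UNHEALTHY, its NodeManager log under $HADOOP_HOME/logs will say why.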

 
