Error analysis: [OutConnectionProcessor:289] Error when read data from SocketChannel

This post walks through two exceptions hit while reading data from a SocketChannel, `CancelledKeyException` and `NullPointerException`, analyzes a timeout loading VCard information in an XMPP session, and looks at a related case of a Spark on YARN application failing after its ApplicationMaster container exceeded YARN's virtual memory limit.
```
2012-08-06 05:06:22 Out Connection Reader (18) ERROR [com.pica.gateway.connmng.OutConnectionProcessor:289] Error when read data from SocketChannel
java.nio.channels.CancelledKeyException
    at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
    at sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:69)
    at java.nio.channels.SelectionKey.isConnectable(SelectionKey.java:318)
    at com.pica.gateway.connmng.OutConnectionProcessor.readMessage(OutConnectionProcessor.java:261)
    at com.pica.gateway.connmng.OutConnectionProcessor.access$2(OutConnectionProcessor.java:236)
    at com.pica.gateway.connmng.OutConnectionProcessor$1.run(OutConnectionProcessor.java:95)
2012-08-06 05:06:22 Out Connection Reader (18) ERROR [com.pica.gateway.connmng.OutConnectionProcessor:298] Selector exception
java.lang.NullPointerException
    at com.pica.gateway.connmng.OutConnectionProcessor.readMessage(OutConnectionProcessor.java:290)
    at com.pica.gateway.connmng.OutConnectionProcessor.access$2(OutConnectionProcessor.java:236)
    at com.pica.gateway.connmng.OutConnectionProcessor$1.run(OutConnectionProcessor.java:95)
```
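Analysis: `CancelledKeyException` comes from `SelectionKey.ensureValid()` and means the key was used after it had been cancelled, typically because another thread closed the connection or cancelled the key between `select()` returning and the reader thread calling `isConnectable()` (OutConnectionProcessor.java:261). The `NullPointerException` that follows at line 290 suggests the recovery path then dereferences per-connection state (the channel or the key's attachment) that has already been cleared. Without the gateway source to hand, here is a minimal sketch of the usual defensive pattern; the class and method names are hypothetical, not the actual `OutConnectionProcessor` code:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.CancelledKeyException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorReadLoop {

    public static void readReady(Selector selector) throws IOException {
        selector.select(1000);
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            // Guard 1: a key can be cancelled after select() returns,
            // e.g. by a writer thread closing the connection. Calling
            // isConnectable()/isReadable() on a cancelled key throws
            // CancelledKeyException, so check validity first.
            if (!key.isValid()) {
                continue;
            }
            try {
                if (key.isReadable()) {
                    SocketChannel channel = (SocketChannel) key.channel();
                    // Guard 2: null-check state pulled from the key before
                    // using it; a concurrent cleanup path may have cleared
                    // it already (the likely cause of the NPE above).
                    ByteBuffer buf = (ByteBuffer) key.attachment();
                    if (buf == null) {
                        key.cancel();
                        continue;
                    }
                    int n = channel.read(buf);
                    if (n == -1) {
                        // Peer closed the connection: cancel and close here
                        // instead of letting a later read blow up.
                        key.cancel();
                        channel.close();
                    }
                }
            } catch (CancelledKeyException e) {
                // Benign race: the key was cancelled between isValid()
                // and use. Nothing to recover; skip this key.
            }
        }
    }
}
```

The two points that matter are checking `key.isValid()` before querying ready operations and treating a late `CancelledKeyException` as a benign race rather than an error worth logging at ERROR level.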
Two minutes later, the same gateway's google transport timed out loading a buddy's VCard:

```
2012-08-06 05:08:26 google transport (64) ERROR [com.pica.gateway.protocols.nxmpp.NioXMPPSession:771] Load vcard error!
Timeout getting VCard information,packet id is 9tibf-4083650: request-timeout(408) Timeout getting VCard information,packet id is 9tibf-4083650
    at org.jivesoftware.smackx.packet.VCard.doLoad(VCard.java:553)
    at org.jivesoftware.smackx.packet.VCard.load(VCard.java:539)
    at com.pica.gateway.protocols.nxmpp.NioXMPPSession.getBuddyVCard(NioXMPPSession.java:766)
    at com.pica.gateway.protocols.nxmpp.NioXMPPTransport.getUserVCard(NioXMPPTransport.java:89)
    at com.pica.gateway.BaseTransport.handleRcsVcard(BaseTransport.java:1000)
    at com.pica.gateway.BaseTransport.processPacket(BaseTransport.java:726)
    at com.pica.gateway.BaseTransport.processPacketInThread(BaseTransport.java:278)
    at com.pica.gateway.BaseTransport.access$0(BaseTransport.java:242)
    at com.pica.gateway.BaseTransport$2.run(BaseTransport.java:234)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
```
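Analysis: the `request-timeout(408)` means the remote server never answered the `vcard-temp` IQ within Smack's packet reply timeout, so `VCard.doLoad` gave up. Since a missing VCard is rarely fatal, the usual mitigations are to raise the reply timeout and to catch the exception instead of letting it escape the packet handler. A sketch against the legacy Smack 3.x API that the stack trace points to (`org.jivesoftware.smackx.packet.VCard`); the helper class and the 15-second value are illustrative assumptions:

```java
import org.jivesoftware.smack.SmackConfiguration;
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.XMPPException;
import org.jivesoftware.smackx.packet.VCard;

public class VCardLoader {

    /** Loads a buddy's VCard, treating a 408 request-timeout as "no VCard". */
    public static VCard loadBuddyVCard(XMPPConnection connection, String jid) {
        // Smack 3.x defaults to a 5000 ms reply timeout; slow remote
        // gateways (like the google transport here) can need more.
        // NOTE: this setting is global to the whole Smack stack.
        SmackConfiguration.setPacketReplyTimeout(15000);

        VCard vcard = new VCard();
        try {
            // Sends a vcard-temp IQ to the buddy and blocks for the reply.
            vcard.load(connection, jid);
            return vcard;
        } catch (XMPPException e) {
            // request-timeout(408) or another IQ error: a missing VCard is
            // not worth failing the whole packet handler over.
            return null;
        }
    }
}
```

Note that `setPacketReplyTimeout` is global in Smack 3.x, so raising it also slows failure detection for every other IQ round trip; it trades latency on dead connections for fewer spurious 408s.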
A different but structurally similar case, from a Spark on YARN job in yarn-client mode: one root failure (the ApplicationMaster container being killed) cascades into a series of secondary errors that look alarming but are all fallout.

```
2025-06-22 02:14:04,462 ERROR cluster.YarnClientSchedulerBackend: YARN application has exited unexpectedly with state FAILED! Check the YARN application logs for more details.
2025-06-22 02:14:04,463 ERROR cluster.YarnClientSchedulerBackend: Diagnostics message: Application application_1750558147603_0001 failed 2 times due to AM Container for appattempt_1750558147603_0001_000002 exited with exitCode: -103
Failing this attempt.Diagnostics: [2025-06-22 02:14:04.276]Container [pid=2268,containerID=container_1750558147603_0001_02_000001] is running 77679104B beyond the 'VIRTUAL' memory limit. Current usage: 265.6 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1750558147603_0001_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 2268 2267 2268 2268 (bash) 0 0 15486976 366 /bin/bash -c /opt/module/jdk1.8.0_212/bin/java -server -Xmx512m -Djava.io.tmpdir=/opt/module/hadoop-3.1.3/data/dfs/nm-local-dir/usercache/root/appcache/application_1750558147603_0001/container_1750558147603_0001_02_000001/tmp -Dspark.yarn.app.container.log.dir=/opt/module/hadoop-3.1.3/logs/userlogs/application_1750558147603_0001/container_1750558147603_0001_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'master:37846' --properties-file /opt/module/hadoop-3.1.3/data/dfs/nm-local-dir/usercache/root/appcache/application_1750558147603_0001/container_1750558147603_0001_02_000001/__spark_conf__/__spark_conf__.properties --dist-cache-conf /opt/module/hadoop-3.1.3/data/dfs/nm-local-dir/usercache/root/appcache/application_1750558147603_0001/container_1750558147603_0001_02_000001/__spark_conf__/__spark_dist_cache__.properties 1> /opt/module/hadoop-3.1.3/logs/userlogs/application_1750558147603_0001/container_1750558147603_0001_02_000001/stdout 2> /opt/module/hadoop-3.1.3/logs/userlogs/application_1750558147603_0001/container_1750558147603_0001_02_000001/stderr
    |- 2278 2268 2268 2268 (java) 473 29 2317049856 67628 /opt/module/jdk1.8.0_212/bin/java -server -Xmx512m -Djava.io.tmpdir=/opt/module/hadoop-3.1.3/data/dfs/nm-local-dir/usercache/root/appcache/application_1750558147603_0001/container_1750558147603_0001_02_000001/tmp -Dspark.yarn.app.container.log.dir=/opt/module/hadoop-3.1.3/logs/userlogs/application_1750558147603_0001/container_1750558147603_0001_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg master:37846 --properties-file /opt/module/hadoop-3.1.3/data/dfs/nm-local-dir/usercache/root/appcache/application_1750558147603_0001/container_1750558147603_0001_02_000001/__spark_conf__/__spark_conf__.properties --dist-cache-conf /opt/module/hadoop-3.1.3/data/dfs/nm-local-dir/usercache/root/appcache/application_1750558147603_0001/container_1750558147603_0001_02_000001/__spark_conf__/__spark_dist_cache__.properties
[2025-06-22 02:14:04.301]Container killed on request. Exit code is 143
[2025-06-22 02:14:04.302]Container exited with a non-zero exit code 143. For more detailed output, check the application tracking page: http://master:8088/cluster/app/application_1750558147603_0001 Then click on links to logs of each attempt.
. Failing the application.
2025-06-22 02:14:04,483 INFO server.AbstractConnector: Stopped Spark@7728643a{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
2025-06-22 02:14:04,486 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Spark context stopped while waiting for backend
    at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:1096)
    at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:231)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:640)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2678)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:942)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:936)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1030)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1039)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2025-06-22 02:14:04,494 INFO spark.SparkContext: SparkContext already stopped.
Exception in thread "main" java.lang.IllegalStateException: Spark context stopped while waiting for backend
    at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:1096)
    at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:231)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:640)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2678)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:942)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:936)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1030)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1039)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2025-06-22 02:14:04,489 INFO ui.SparkUI: Stopped Spark web UI at http://master:4040
2025-06-22 02:14:04,514 INFO storage.DiskBlockManager: Shutdown hook called
2025-06-22 02:14:04,534 INFO util.ShutdownHookManager: Shutdown hook called
2025-06-22 02:14:04,536 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-aa1d5e4b-40b9-4395-a25e-db587133f3c8
2025-06-22 02:14:04,566 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-d2e97756-5102-456f-9640-7a3a245981a9
2025-06-22 02:14:04,576 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-d2e97756-5102-456f-9640-7a3a245981a9/userFiles-38c65405-58f6-4600-ac43-a94d608c822a
2025-06-22 02:14:04,595 WARN server.TransportChannelHandler: Exception in connection from /172.18.0.2:60198
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2025-06-22 02:14:04,599 ERROR client.TransportResponseHandler: Still have 1 requests outstanding when connection from /172.18.0.2:60198 is closed
2025-06-22 02:14:04,602 ERROR cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(Map(),Map(),Map(),Set()) to AM was unsuccessful
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
2025-06-22 02:14:04,608 ERROR util.Utils: Uncaught exception in thread YARN application state monitor
org.apache.spark.SparkException: Exception thrown in awaitResult:
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:742)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:114)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:168)
    at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:881)
    at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2365)
    at org.apache.spark.SparkContext.$anonfun$stop$12(SparkContext.scala:2075)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1419)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:2075)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:124)
Caused by: java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
```
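Analysis: exit code -103 is YARN's "killed for exceeding virtual memory", and 143 is SIGTERM (128 + 15). The AM container was allowed 1 GB physical × 2.1 (the default `yarn.nodemanager.vmem-pmem-ratio`) = 2.1 GB virtual, the JVM mapped 2.2 GB, and the NodeManager killed it; everything after that, the `IllegalStateException` and the connection resets, is fallout from the dead AM. The common fixes, sketched below under the assumption of a stock Hadoop 3.1.3 yarn-site.xml, are to disable or relax the virtual-memory check on every NodeManager (then restart YARN):

```xml
<!-- yarn-site.xml (apply on all NodeManagers, then restart YARN) -->
<property>
  <!-- Option 1: turn off the virtual-memory check entirely; the
       physical-memory check (yarn.nodemanager.pmem-check-enabled)
       stays on, so containers are still bounded -->
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <!-- Option 2: keep the check but allow more virtual memory per GB
       of physical memory (default is 2.1; 4 is an illustrative value) -->
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```

Alternatively, request a bigger AM container from the Spark side (for yarn-client mode, `spark.yarn.am.memory` and `spark.yarn.am.memoryOverhead`), which raises the physical allocation and, with it, the derived virtual limit.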