Reading input in Java until end of file — while (sc.hasNext())

This article walks through reading different kinds of input with Java's Scanner class: single numbers, whole lines, space-separated numbers, and converting strings into arrays. It is aimed at beginners who want to understand and practise these patterns.

1 Read space-separated numbers and sum them

import java.util.Scanner;

public class Test1 {
	public static void main(String[] args) {
		Scanner scan = new Scanner(System.in);

		double sum = 0;
		int m = 0;

		// keeps reading as long as the next token is a number;
		// the loop ends when the input does
		while (scan.hasNextDouble()) {
			double x = scan.nextDouble();
			m = m + 1;
			sum = sum + x;
		}

		System.out.println("Sum of " + m + " numbers: " + sum);
		System.out.println("Average of " + m + " numbers: " + (sum / m));
		scan.close();
	}
}

Sample run:

1 2 3
Sum of 3 numbers: 6.0
Average of 3 numbers: 2.0
 

Note: the program keeps reading until you signal end of input from the keyboard (Ctrl+Z then Enter on Windows, Ctrl+D on Linux/macOS); only then does it stop reading and compute the sum.
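The same loop also works when the data comes from a file rather than the keyboard, which is where "reading until end of file" literally applies. A minimal sketch, assuming a file called numbers.txt with whitespace-separated numbers exists in the working directory (the file name is only an example):

import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class FileSum {
	public static void main(String[] args) throws FileNotFoundException {
		// Scanner can read from a File the same way it reads from System.in;
		// hasNextDouble() returns false once the file is exhausted
		Scanner scan = new Scanner(new File("numbers.txt"));

		double sum = 0;
		while (scan.hasNextDouble()) {
			sum += scan.nextDouble();
		}

		System.out.println("sum = " + sum);
		scan.close();
	}
}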


2 Read one line at a time

import java.util.Scanner;

public class Test2 {
	public static void main(String[] args) {
		Scanner sc = new Scanner(System.in);

		// is there another line of input?
		while (sc.hasNextLine()) {
			String str = sc.nextLine(); // reads the whole line, spaces included
			System.out.println("Output: " + str);
		}
		
		sc.close();
	}
}

Sample run:

123 456 789
Output: 123 456 789
111   111   111
Output: 111   111   111
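For large amounts of line-oriented input, BufferedReader is a common alternative to Scanner; readLine() returns null at end of input, so the loop checks for that instead of hasNextLine(). A minimal sketch, not part of the original examples:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class LineReader {
	public static void main(String[] args) throws IOException {
		BufferedReader br = new BufferedReader(new InputStreamReader(System.in));

		String line;
		// readLine() returns null when the input ends (EOF)
		while ((line = br.readLine()) != null) {
			System.out.println("Output: " + line);
		}

		br.close();
	}
}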


3 Read one token at a time

import java.util.Scanner;

public class Test3 {
	public static void main(String[] args) {
		Scanner sc = new Scanner(System.in);

		// is there another token?
		while (sc.hasNext()) {
			String str = sc.next(); // reads one whitespace-delimited token, whitespace not included
			System.out.println("Output: " + str);
		}
		
		sc.close();
	}
}

Sample run:

123 456 798
Output: 123
Output: 456
Output: 798
111   111   111
Output: 111
Output: 111
Output: 111

If you want to exit the loop early, see https://mp.youkuaiyun.com/postedit/88424076
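Scanner can also peek at the type of the next token before consuming it, which helps when numbers and words are mixed in the same input. A small sketch (not from the original article), assuming we want to sum the integer tokens and just echo everything else:

import java.util.Scanner;

public class MixedTokens {
	public static void main(String[] args) {
		Scanner sc = new Scanner(System.in);

		int sum = 0;
		while (sc.hasNext()) {
			if (sc.hasNextInt()) {            // the next token looks like an int
				sum += sc.nextInt();
			} else {                          // anything else is read as a plain string
				System.out.println("skipped: " + sc.next());
			}
		}
		System.out.println("sum = " + sum);

		sc.close();
	}
}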


4 Read space-separated numbers from a line and store them in an array (common when solving coding problems)

Basic implementation:

import java.util.Scanner;

public class Test {
	public static void main(String args[]) {
		Scanner sc = new Scanner(System.in);

		while (sc.hasNextLine()) {
		
			String[] str = sc.nextLine().split(" ");
			int[] num = new int[str.length];

			for (int i = 0; i < str.length; i++)
				num[i] = Integer.parseInt(str[i]); // convert each numeric string into an int

			for (int i = 0; i < num.length; i++)
				System.out.println("Output: " + num[i]);

		}

		sc.close();

	}
}
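On Java 8 and later, the split-and-parse step can also be written as a single stream expression. This is just an equivalent sketch of the loop above, not part of the original code:

			// parse every token of the current line into an int[] in one expression
			int[] num = java.util.Arrays.stream(sc.nextLine().trim().split("\\s+"))
					.mapToInt(Integer::parseInt)
					.toArray();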

A more robust implementation:

import java.util.Scanner;

public class Test {
	public static void main(String args[]) {
		Scanner sc = new Scanner(System.in);

		while (sc.hasNextLine()) {
			// String.trim() removes leading and trailing whitespace.
			// split("\\s+") splits on runs of whitespace, so multiple spaces between numbers are handled too
			String[] str = sc.nextLine().trim().split("\\s+");
			for(String s:str) {
				System.out.println(s);
			}
			int[] num = new int[str.length];

			for (int i = 0; i < str.length; i++)
				num[i] = Integer.parseInt(str[i]); // convert each numeric string into an int

			for (int i = 0; i < num.length; i++)
				System.out.println("Output: " + num[i]);

		}

		sc.close();

	}
}

Sample run:

123 456
123
456
Output: 123
Output: 456
   123   456
123
456
Output: 123
Output: 456
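Another input format that shows up in coding tests gives a count first and then that many numbers, possibly spread across several lines. A minimal sketch, assuming that format; nextInt() skips over spaces and line breaks by itself:

import java.util.Scanner;

public class CountThenNumbers {
	public static void main(String[] args) {
		Scanner sc = new Scanner(System.in);

		int n = sc.nextInt();          // first token: how many numbers follow
		int[] num = new int[n];
		for (int i = 0; i < n; i++) {
			num[i] = sc.nextInt();     // reads across spaces and line breaks
		}

		int sum = 0;
		for (int x : num) {
			sum += x;
		}
		System.out.println("sum = " + sum);

		sc.close();
	}
}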


 

Common in written tests: extract the numbers from "[22,33,55]" and store them in an array.

        String str = "[22,33,55]";
        String strTmp = str.substring(1, str.length() - 1); // strip the leading "[" and trailing "]", so strTmp = "22,33,55"
        String[] strArray = strTmp.split(",");               // split on ","
        int[] numArray = new int[strArray.length];
        /* convert the String array into an int array */
        for (int i = 0; i < strArray.length; i++) {
            numArray[i]=Integer.parseInt(strArray[i]);
        }

        /* print the result */
        for (int i = 0; i < numArray.length; i++) {
            System.out.println(numArray[i]);
        }

 

Use a regular expression to extract all the numbers from a string

        /* extract all the numbers in the string */
        String s1 = "a9fil3dj11P0jAsf11j";
        String[] exprs = s1.split("\\D+"); // \D matches any non-digit character, equivalent to [^0-9]
        for (int i = 0; i < exprs.length; i++) {
            System.out.println(exprs[i]);
        }

Output:


9
3
11
0
11

Note: there is an empty line above the 9, because the string starts with a non-digit, so split() returns an empty string as its first element.
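If that leading empty element is unwanted, the digit runs can instead be collected with Pattern/Matcher, which only returns actual matches. A sketch using the same example string (fully qualified class names are used just to keep the snippet self-contained):

        /* collect every run of digits with a Matcher */
        String s1 = "a9fil3dj11P0jAsf11j";
        java.util.regex.Matcher m = java.util.regex.Pattern.compile("\\d+").matcher(s1);
        while (m.find()) {
            System.out.println(m.group()); // prints 9, 3, 11, 0, 11 with no empty element
        }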


 

 

Use a regular expression to extract all the English letters from a string

        /* extract all the letters in the string */
        String s2 = "a9fil3dj11P0jAsf11j";
        String[] exprs2 = s2.split("[^A-Za-z]");
        for (int i = 0; i < exprs2.length; i++) {
            System.out.println(exprs2[i]);
        }
        // \w matches any word character, including underscore; equivalent to [A-Za-z0-9_]
        // \W matches any non-word character; equivalent to [^A-Za-z0-9_]

Output:

a
fil
dj

P
jAsf

j

Note: the blank lines appear where two non-letter characters occur in a row (the "11" runs), because split() produces empty strings between consecutive separators.


5 How to break out of the loop

import java.util.Scanner;
 
public class ScannerKeyBoardTest1 {
 
	public static void main(String[] args) {
 
		// example: the loop ends when "0" is entered
		Scanner sc = new Scanner(System.in);
		// hasNext(pattern) checks the next token against the pattern without consuming it
		while (!sc.hasNext("0")) {
			System.out.println("Keyboard input: " + sc.next());
		}
		sc.close();
	}
}
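The same idea can also be written with an explicit break, which makes it easy to react to the sentinel value itself. A sketch; the class name ScannerKeyBoardTest2 is just an example:

import java.util.Scanner;

public class ScannerKeyBoardTest2 {
	public static void main(String[] args) {
		Scanner sc = new Scanner(System.in);

		while (sc.hasNext()) {
			String token = sc.next();
			if ("0".equals(token)) { // sentinel value ends the loop
				break;
			}
			System.out.println("Keyboard input: " + token);
		}

		sc.close();
	}
}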

 
