Key takeaways: what exactly is spark-core in big data?

This article walks through the main functions of Spark Core, including SparkConf, the event bus, the RPC framework, SparkContext, SparkEnv, the scheduling system, the compute engine, and the metrics system. Spark Core is the core component of Apache Spark and provides the foundation for big-data processing.

Big data is one of the hottest technologies today, and it draws attention from every field. Within big data, Spark is an unavoidable focal point, so how much do you know about Spark Core? In short, Spark Core comprises Spark's core components, which operate on memory and disk and schedule computation on the CPU. Spark Core is the heart of Apache Spark and the basic runtime environment on which the other modules are built; it defines the RDD abstraction (DataFrame and Dataset are higher-level APIs that Spark SQL builds on top of it).

Many people know that Spark is an indispensable part of big data, but how much do you know about Spark Core itself? Let's look at its main functions one by one.

First, SparkConf, which manages the configuration of a Spark application.
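As a minimal sketch, a SparkConf is typically built fluently and passed to the context later; the application name and memory value below are illustrative, not required settings:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("wordcount-demo")            // name shown in the Web UI
  .setMaster("local[*]")                   // run locally, using all cores
  .set("spark.executor.memory", "2g")      // any Spark property is a key/value pair

// Settings can be read back; get returns the stored string value.
println(conf.get("spark.executor.memory")) // "2g"
```

Properties set here take precedence over values loaded from `spark-defaults.conf`, which is why SparkConf is the single place an application manages its configuration.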

Second, the event bus: the implementation of the event-listener pattern that the components inside SparkContext use to call one another asynchronously.

Third, the built-in Netty-based RPC framework, which offers both synchronous and asynchronous calls; this RPC framework is the foundation for communication among Spark's components.

Fourth, SparkContext. Submitting and running a user's Spark application depends on SparkContext, so it must be initialized before the application is formally submitted. SparkContext hides network communication, distributed deployment, messaging, the storage system, the compute engine, the metrics system, the file service, the Web UI, and more; application developers only need the APIs that SparkContext provides.
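The usual lifecycle can be sketched as follows; this is a hedged local example (names and numbers are illustrative), showing that a single RDD action drives everything SparkContext hides:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Initialize SparkContext first; everything else goes through its API.
val conf = new SparkConf().setAppName("sum-demo").setMaster("local[*]")
val sc = new SparkContext(conf)

// parallelize creates an RDD; reduce triggers a distributed job whose
// scheduling, communication, and storage details SparkContext hides.
val total = sc.parallelize(1 to 100).reduce(_ + _)
println(total) // 5050

sc.stop() // release the context's resources when done
```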

Fifth, SparkEnv, the runtime environment that holds the components a Task needs in order to run.

Sixth, the scheduling system, composed mainly of the DAGScheduler and the TaskScheduler, both of which are created inside SparkContext.

Seventh, the compute engine, made up of the memory manager (MemoryManager), Tungsten, the task memory manager (TaskMemoryManager), Task, the external sorter (ExternalSorter), the shuffle manager (ShuffleManager), and related components.

Eighth, the metrics system, built from Spark's metric sources (Source) and metric sinks (Sink), which together monitor the runtime state of every component in the Spark cluster.
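Sources and sinks are wired together in a `metrics.properties` file. The sink and source classes below ship with Spark, but the polling period and output directory are arbitrary example values:

```properties
# Write every instance's metrics to CSV files every 10 seconds (illustrative values)
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
*.sink.csv.period=10
*.sink.csv.unit=seconds
*.sink.csv.directory=/tmp/spark-metrics

# The JVM source adds GC and memory metrics for each instance
*.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```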

As a hot topic within big data, Spark has long drawn attention from many fields. As more and more industries recognize and adopt big data, it is bound to play an even more important role.

As a key part of big-data technology, Spark Core covers far more than the points above; this is only a brief summary of some highlights, which I hope you find helpful.
