spark-core_30: Executor initialization - source analysis of env.blockManager.initialize(conf.getAppId)

This post walks through the initialization of BlockManager in Spark, covering the initialization of NettyBlockTransferService, the creation of BlockManagerId, and the registration flow with BlockManagerMaster.

In spark-core_28 and spark-core_29 (Executor initialization env.blockManager.initialize(conf.getAppId) - NettyBlockTransferService.init() source analysis) we saw that NettyBlockTransferService.init() does the following four things:

/** NettyBlockTransferService.init(this) does the following:
  1. Creates the RpcServer, NettyBlockRpcServer, which serves open/upload requests for any block registered in the BlockManager; each chunk transfer corresponds to one shuffle fetch.
  2. Builds the TransportContext: the context used to create the TransportServer (the Netty server) and the TransportClientFactory (which creates TransportClients), and which wires the Netty channel pipeline with TransportChannelHandler.
  3. Creates the TransportClientFactory: this factory creates TransportClients via createClient; it maintains a connection pool to other hosts and returns the same TransportClient for the same remote host. It also shares a single worker thread pool across all TransportClients (see the sketch after this list).
  4. Creates the Netty server, TransportServer: the codecs and inbound handlers are all installed on this Netty server (the classes above all exist to serve it).
  */
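To make item 3 concrete, here is a small self-contained Scala illustration of that pooling behaviour. It is not Spark's TransportClientFactory; the Client and ClientFactory names are made up, and it assumes Scala 2.12+ so the lambda converts to a java.util.function.Function:

import java.util.concurrent.{ConcurrentHashMap, ExecutorService, Executors}

// Illustration only (not Spark's TransportClientFactory): the same remote host always
// gets the same client instance, and all clients share one worker thread pool.
class Client(val host: String, val port: Int, val workers: ExecutorService)

class ClientFactory {
  private val sharedWorkers: ExecutorService = Executors.newCachedThreadPool()
  private val pool = new ConcurrentHashMap[String, Client]()

  def createClient(host: String, port: Int): Client =
    pool.computeIfAbsent(s"$host:$port", _ => new Client(host, port, sharedWorkers))
}

Calling createClient("worker-1", 7337) twice returns the same Client object, which is the behaviour the real factory relies on to reuse connections to the same remote host.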

1. Continuing with the execution of blockManager.initialize

def initialize(appId: String): Unit = {

  // blockTransferService was created in SparkEnv.create as a NettyBlockTransferService; it is the block transfer service
  blockTransferService.init(this)

  // By default dynamic resource allocation is disabled, so shuffleClient is the NettyBlockTransferService itself;
  // NettyBlockTransferService extends BlockTransferService, which extends ShuffleClient, whose init(appId) does nothing
  shuffleClient.init(appId)

  /**
    * Initializes BlockManagerId, the unique identifier of this BlockManager.
    * BlockManagerId.apply instantiates a BlockManagerId and interns it in a ConcurrentHashMap
    * whose keys and values are both of type BlockManagerId.
    *
    * executorId: "driver" when SparkEnv is created on the driver; the executor's numeric id when created by CoarseGrainedExecutorBackend
    * NettyBlockTransferService.hostName: the driver's hostname on the driver; the worker's IP for CoarseGrainedExecutorBackend
    * NettyBlockTransferService.port: the port of the Netty server
    */
  blockManagerId = BlockManagerId(
    executorId, blockTransferService.hostName, blockTransferService.port)

===> How BlockManagerId is initialized:

private[spark] object BlockManagerId {

  /**
   * Returns a [[org.apache.spark.storage.BlockManagerId]] for the given configuration.
   *
   * @param execId ID of the executor: "driver" on the driver, the executor's numeric id for CoarseGrainedExecutorBackend
   * @param host Host name of the block manager: the worker's IP for CoarseGrainedExecutorBackend
   * @param port Port of the block manager, i.e. the port of the Netty server
   * @return A new [[org.apache.spark.storage.BlockManagerId]].
   */
  def apply(execId: String, host: String, port: Int): BlockManagerId =
    // Instantiate a BlockManagerId and intern it in the ConcurrentHashMap whose keys and values are both BlockManagerIds
    getCachedBlockManagerId(new BlockManagerId(execId, host, port))

  def apply(in: ObjectInput): BlockManagerId = {
    val obj = new BlockManagerId()
    obj.readExternal(in)
    getCachedBlockManagerId(obj)
  }

  val blockManagerIdCache = new ConcurrentHashMap[BlockManagerId, BlockManagerId]()

  def getCachedBlockManagerId(id: BlockManagerId): BlockManagerId = {
    blockManagerIdCache.putIfAbsent(id, id)
    blockManagerIdCache.get(id)
  }
}
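The caching above is a simple interning pattern: two BlockManagerIds that are equal by value end up sharing one canonical instance. A minimal self-contained Scala sketch of the same idea, using a plain case class instead of BlockManagerId:

import java.util.concurrent.ConcurrentHashMap

// Stand-in for BlockManagerId: equality is by value, as with case classes.
case class Id(execId: String, host: String, port: Int)

object Id {
  private val cache = new ConcurrentHashMap[Id, Id]()

  // putIfAbsent stores the id only when no equal key is already present;
  // get then returns the canonical instance, so equal ids are deduplicated.
  def intern(id: Id): Id = {
    cache.putIfAbsent(id, id)
    cache.get(id)
  }
}

// Usage: both calls return the very same cached object.
val a = Id.intern(Id("1", "worker-1", 43233))
val b = Id.intern(Id("1", "worker-1", 43233))
assert(a eq b)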

2. Next, look at BlockManagerMaster.registerBlockManager

def initialize(appId: String): Unit = {
  // blockTransferService was created in SparkEnv.create as a NettyBlockTransferService; it is the block transfer service
  blockTransferService.init(this)
  // By default dynamic resource allocation is disabled, so shuffleClient is the NettyBlockTransferService itself;
  // NettyBlockTransferService extends BlockTransferService, which extends ShuffleClient, whose init(appId) does nothing
  shuffleClient.init(appId)

  /**
    * Initializes BlockManagerId, the unique identifier of this BlockManager; every BlockManagerId carries
    * the executorId and the IP/port of the CoarseGrainedExecutorBackend it belongs to.
    * BlockManagerId.apply instantiates a BlockManagerId and interns it in a ConcurrentHashMap
    * whose keys and values are both of type BlockManagerId.
    *
    * executorId: "driver" when SparkEnv is created on the driver; the executor's numeric id for CoarseGrainedExecutorBackend
    * NettyBlockTransferService.hostName: the driver's hostname on the driver; the worker's IP for CoarseGrainedExecutorBackend
    * NettyBlockTransferService.port: the port of the Netty server
    */
  blockManagerId = BlockManagerId(
    executorId, blockTransferService.hostName, blockTransferService.port)
  // A tuning point: the external shuffle service is very useful when the cluster is shared by several
  // applications; it is disabled by default. Enabling it in standalone mode is simple:
  // set spark.dynamicAllocation.enabled to true, then set spark.shuffle.service.enabled to true.
  shuffleServerId = if (externalShuffleServiceEnabled) {
    logInfo(s"external shuffle service port = $externalShuffleServicePort")
    BlockManagerId(executorId, blockTransferService.hostName, externalShuffleServicePort)
  } else {
    blockManagerId
  }
  /** Arguments:
    *   master is a BlockManagerMaster; one exists on the driver and on every executor. The driver's
    *   BlockManagerMaster manages the BlockManagers on the executors; it holds a reference to
    *   BlockManagerMasterEndpoint, and executors obtain that reference and send it messages to talk to the driver.
    *   slaveEndpoint: a BlockManagerSlaveEndpoint, which receives commands from the master to carry out
    *   operations such as removing a block from this slave's BlockManager.
    *   blockManagerId: BlockManagerId("driver" or the executor's numeric id, the driver's or worker's host, the Netty server's port)
    */
  master.registerBlockManager(blockManagerId, maxMemory, slaveEndpoint)

  // Register Executors' configuration with the local shuffle service, if one should exist.
  if (externalShuffleServiceEnabled && !blockManagerId.isDriver) {
    registerWithExternalShuffleServer()
  }
}
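As the comment in the method notes, turning the externalShuffleServiceEnabled branch on only takes two settings. A minimal sketch of the corresponding SparkConf (the application name is a placeholder):

import org.apache.spark.SparkConf

// Example configuration: enables dynamic allocation plus the external shuffle service,
// so that externalShuffleServiceEnabled is true inside BlockManager.initialize.
val conf = new SparkConf()
  .setAppName("example-app")                       // placeholder name
  .set("spark.dynamicAllocation.enabled", "true")  // turn on dynamic executor allocation
  .set("spark.shuffle.service.enabled", "true")    // serve shuffle files from the external service

With these set, shuffleServerId points at the external shuffle service's port instead of the executor's own Netty server.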

3. This call ultimately creates a BlockManagerInfo and stores it in the blockManagerInfo member of BlockManagerMasterEndpoint, a HashMap[BlockManagerId, BlockManagerInfo]. Those BlockManagerInfo entries cover every registered BlockManagerId (the unique identifier of a BlockManager), and each one also keeps the BlockManagerSlaveEndpoint the driver uses to talk to that slave.

/** Register the BlockManager's id with the driver.
  * Registers the blockManagerId with the driver.
  * Parameters:
  * slaveEndpoint: a BlockManagerSlaveEndpoint, which receives commands from the master to carry out
  *   operations such as removing a block from this slave's BlockManager. It holds a mapOutputTracker:
  *     on an executor this is a MapOutputTrackerWorker, which fetches map output information from the driver's MapOutputTrackerMaster;
  *     on the driver it is the MapOutputTrackerMaster itself, which tracks map outputs in a TimeStampedHashMap.
  * blockManagerId: BlockManagerId("driver" or the executor's numeric id, the driver's or worker's host, the Netty server's port)
  *
  * Call flow:
  * 1. blockManager.initialize(_applicationId) is called from SparkContext or during Executor initialization
  * 2. blockManager.initialize calls BlockManagerMaster.registerBlockManager
  * 3. tell invokes BlockManagerMasterEndpoint.receiveAndReply, passing in the RegisterBlockManager case class
  */

def registerBlockManager(
    blockManagerId: BlockManagerId, maxMemSize: Long, slaveEndpoint: RpcEndpointRef): Unit = {
  logInfo("Trying to register BlockManager")
  tell(RegisterBlockManager(blockManagerId, maxMemSize, slaveEndpoint))
  logInfo("Registered BlockManager")
}

4. tell sends the message to BlockManagerMasterEndpoint

/** Send a one-way message to the master endpoint, to which we expect it to reply with true.
  * Sends a message to driverEndpoint (the BlockManagerMasterEndpoint) and requires the reply to be true.
  * message is, for example, RegisterBlockManager(blockManagerId, maxMemSize, slaveEndpoint)
  */

private def tell(message: Any) {
  // driverEndpoint: BlockManagerMasterEndpoint
  if (!driverEndpoint.askWithRetry[Boolean](message)) {
    throw new SparkException("BlockManagerMasterEndpoint returned false, expected true.")
  }
}
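The handshake in tell is Spark's generic RPC ask pattern: the caller blocks on askWithRetry[Boolean] (the older API; newer versions use askSync) while the endpoint answers with context.reply(true). A schematic of the two sides with made-up names; since RpcEndpoint and its relatives are private[spark], this compiles only inside Spark's own source tree and is shown purely for shape:

// Schematic only: a toy endpoint mirroring the handshake used by BlockManagerMaster.tell.
case class Register(name: String)

class ToyMasterEndpoint(override val rpcEnv: RpcEnv) extends ThreadSafeRpcEndpoint {
  override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
    case Register(name) =>
      // ... registration work would go here ...
      context.reply(true)   // the caller demands a Boolean true, otherwise it throws
  }
}

// Caller side, mirroring tell():
// if (!toyEndpointRef.askWithRetry[Boolean](Register("executor-1"))) {
//   throw new SparkException("endpoint returned false, expected true.")
// }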

5. The Executor talks to the driver's BlockManagerMasterEndpoint to register its BlockManagerId

private[spark]
class BlockManagerMasterEndpoint(
    override val rpcEnv: RpcEnv,
    val isLocal: Boolean,
    conf: SparkConf,
    listenerBus: LiveListenerBus)
  extends ThreadSafeRpcEndpoint with Logging {

  // Mapping from block manager id to the block manager's information.
  // Caches every BlockManagerId together with its BlockManagerInfo; the BlockManagerInfo holds the
  // information about all blocks on that BlockManagerId's executor
  private val blockManagerInfo = new mutable.HashMap[BlockManagerId, BlockManagerInfo]

  // Mapping from executor ID to block manager ID.
  // Caches the mapping from an executorId to the BlockManagerId it owns
  private val blockManagerIdByExecutor = new mutable.HashMap[String, BlockManagerId]

  // Mapping from block id to the set of block managers that have the block.
  // Caches the mapping from each block to the BlockManagerIds that hold it
  private val blockLocations = new JHashMap[BlockId, mutable.HashSet[BlockManagerId]]

  private val askThreadPool = ThreadUtils.newDaemonCachedThreadPool("block-manager-ask-thread-pool")
  private implicit val askExecutionContext = ExecutionContext.fromExecutorService(askThreadPool)

  override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
    /**
      * Parameters:
      * slaveEndpoint: a BlockManagerSlaveEndpoint, which receives commands from the master to carry out
      *   operations such as removing a block from this slave's BlockManager
      * Call flow:
      * 1. _env.blockManager.initialize(_applicationId) is called from SparkContext or during Executor initialization
      * 2. blockManager.initialize calls BlockManagerMaster.registerBlockManager
      * 3. BlockManagerMaster's tell method invokes BlockManagerMasterEndpoint.receiveAndReply, passing in the RegisterBlockManager case class
      * 4. BlockManagerMasterEndpoint then calls its own register method to register the blockManagerId into a BlockManagerInfo
      *
      * slaveEndpoint: the BlockManagerSlaveEndpoint that receives master commands such as removing a block from the slave's BlockManager
      * blockManagerId: BlockManagerId("driver", the driver's host, the Netty server's port)
      */
    case RegisterBlockManager(blockManagerId, maxMemSize, slaveEndpoint) =>
      register(blockManagerId, maxMemSize, slaveEndpoint)
      // Must reply true, otherwise the caller of BlockManagerMasterEndpoint raises an error
      context.reply(true)

==> register is called to register the BlockManagerId: this method ultimately creates a BlockManagerInfo and stores it in the blockManagerInfo member of BlockManagerMasterEndpoint, a HashMap[BlockManagerId, BlockManagerInfo]. BlockManagerInfo covers every registered BlockManagerId (the unique identifier of a BlockManager), and also keeps the BlockManagerSlaveEndpoint the driver uses to talk to that slave.

/**
  * Parameters:
  * slaveEndpoint: a BlockManagerSlaveEndpoint, which receives commands from the master to carry out
  *   operations such as removing a block from this slave's BlockManager. It holds a mapOutputTracker:
  *     on an executor this is a MapOutputTrackerWorker, which fetches map output information from the driver's MapOutputTrackerMaster;
  *     on the driver it is the MapOutputTrackerMaster itself, which tracks map outputs in a TimeStampedHashMap.
  * blockManagerId: BlockManagerId("driver" or the executor's numeric id, the driver's or worker's host, the Netty server's port)
  */

private def register(id: BlockManagerId, maxMemSize: Long, slaveEndpoint: RpcEndpointRef) {
  val time = System.currentTimeMillis()
  // blockManagerInfo is the HashMap[BlockManagerId, BlockManagerInfo] caching every BlockManagerId and its BlockManagerInfo;
  // the BlockManagerInfo holds the information about all blocks on that executor
  if (!blockManagerInfo.contains(id)) {
    // blockManagerIdByExecutor is the HashMap[String, BlockManagerId] caching the executorId -> BlockManagerId mapping
    blockManagerIdByExecutor.get(id.executorId) match {
      case Some(oldId) =>
        // A block manager of the same executor already exists, so remove it (assumed dead)
        logError("Got two different block manager registrations on same executor - "
          + s"will replace old one $oldId with new one $id")
        removeExecutor(id.executorId)
      case None =>
    }
    logInfo("Registering block manager %s with %s RAM, %s".format(
      id.hostPort, Utils.bytesToString(maxMemSize), id))
    // Put the BlockManagerId into the HashMap[String, BlockManagerId]; its key is the executor's id
    blockManagerIdByExecutor(id.executorId) = id
    // BlockManagerInfo covers every BlockManagerId (the unique identifier of a BlockManager) and also keeps
    // the BlockManagerSlaveEndpoint the driver uses to talk to that slave
    blockManagerInfo(id) = new BlockManagerInfo(
      id, System.currentTimeMillis(), maxMemSize, slaveEndpoint)
  }

  // Post a SparkListenerBlockManagerAdded event to the LiveListenerBus
  listenerBus.post(SparkListenerBlockManagerAdded(time, id, maxMemSize))
}
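That final SparkListenerBlockManagerAdded event is delivered to every registered SparkListener, which is an easy way to observe the registration from user code. A minimal sketch (the listener class and log text are my own, not part of Spark):

import org.apache.spark.scheduler.{SparkListener, SparkListenerBlockManagerAdded}

// Hypothetical listener that logs every BlockManager registration the driver sees.
class BlockManagerAddedLogger extends SparkListener {
  override def onBlockManagerAdded(event: SparkListenerBlockManagerAdded): Unit = {
    println(s"BlockManager registered: ${event.blockManagerId}, maxMem=${event.maxMem} bytes, time=${event.time}")
  }
}

// Usage: sc.addSparkListener(new BlockManagerAddedLogger())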

This completes the execution of BlockManager.initialize().

