spark-core_28: Executor initialization - analysis of env.blockManager.initialize(conf.getAppId) and blockTransferService.init()

This article takes a close look at how the BlockManager is initialized in Spark, including how the BlockManager talks to the Driver, how NettyBlockTransferService works, and how its internal components are configured and interact.

See also (spark-core_25: source analysis of the Master telling the Worker to launch the CoarseGrainedExecutorBackend process, and of CoarseGrainedExecutorBackend initialization)

// SparkContext also calls _env.blockManager.initialize(_applicationId) during its own initialization; the execution path is essentially the same.
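For orientation, on the executor side the call is made from the Executor constructor. The snippet below is a paraphrased sketch of that spot, not a verbatim copy of the source:

// Sketch (paraphrased): in org.apache.spark.executor.Executor's constructor, a non-local
// executor registers its BlockManager with the driver once the application id is known.
if (!isLocal) {
  env.metricsSystem.registerSource(executorSource)
  env.blockManager.initialize(conf.getAppId)
}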

 

private[spark] class BlockManager(
    executorId: String,             // "driver" on the driver; on a CoarseGrainedExecutorBackend it is a numeric string
    rpcEnv: RpcEnv,
    val master: BlockManagerMaster, // lives on the driver and manages the BlockManagers on the executors;
                                    // it holds a reference to BlockManagerMasterEndpoint, and an executor obtains that
                                    // reference and sends messages to the endpoint to talk to the driver
    defaultSerializer: Serializer,
    val conf: SparkConf,
    memoryManager: MemoryManager,       // UnifiedMemoryManager by default
    mapOutputTracker: MapOutputTracker, // on an executor: MapOutputTrackerWorker, which fetches map output info from the driver's MapOutputTrackerMaster;
                                        // on the driver: MapOutputTrackerMaster, which tracks map output info in a TimeStampedHashMap
    shuffleManager: ShuffleManager,
    blockTransferService: BlockTransferService,
    securityManager: SecurityManager,
    numUsableCores: Int)
  extends BlockDataManager with Logging {

  // ... code not relevant to this walkthrough has been removed; follow along with the full source.
   
  /**
   * Initializes the BlockManager with the given appId. This is not performed in the constructor as
   * the appId may not be known at BlockManager instantiation time (in particular for the driver,
   * where it is only learned after registration with the TaskScheduler).
   *
   * This method initializes the BlockTransferService and ShuffleClient, registers with the
   * BlockManagerMaster, starts the BlockManagerWorker endpoint, and registers with a local shuffle
   * service if configured.
   *
   * It is called when SparkContext or the Executor is initialized: _env.blockManager.initialize(_applicationId)
   * What it does:
   * 1. Initializes the BlockManager with the given appId (for the driver in particular, this happens only after registering with the TaskScheduler).
   * 2. blockTransferService.init(this) creates a Netty server.
   * 3. Builds BlockManagerId("driver", the driver's host, the Netty server's port), the unique identifier of every BlockManager.
   * 3.1. Instantiates BlockManagerSlaveEndpoint, whose job is to receive commands from the master, e.g. removing a block from a slave's BlockManager.
   * 4. master.registerBlockManager(): creates a BlockManagerInfo and puts it into BlockManagerMasterEndpoint's blockManagerInfo member, a HashMap[BlockManagerId, BlockManagerInfo].
   *    BlockManagerInfo manages each BlockManagerId (the unique identifier of a BlockManager) and also holds the BlockManagerSlaveEndpoint used for driver-to-slave interaction.
   *
   * The appId argument looks like: app-20180404172558-0000
   */

 
  def initialize(appId: String): Unit = {
    // blockTransferService is the NettyBlockTransferService created in SparkEnv.create; it is the block transfer service
    /** NettyBlockTransferService.init(this) does the following:
      1. Creates the RpcServer NettyBlockRpcServer, which serves each request to open or upload any block registered in the BlockManager; each chunk transfer corresponds to one shuffle.
      2. Builds a TransportContext: the context for creating the TransportServer (the Netty server) and the TransportClientFactory (which creates TransportClients), and for setting up the Netty channel pipeline with TransportChannelHandler.
      3. Creates the client factory TransportClientFactory: this factory creates TransportClients via createClient; it maintains a connection pool to other hosts and should return the same TransportClient for the same remote host, and it shares a single worker thread pool across all TransportClients.
      4. Creates the Netty server TransportServer, wiring the encoder/decoder and the inbound handlers into it (the classes above all exist to serve this Netty server).
      */
    blockTransferService.init(this)
    ...
  }
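For completeness, the rest of initialize roughly follows the steps listed in the comment above. The snippet below is a paraphrased sketch of those steps, not the exact source (the real method also initializes the shuffle client and handles the external shuffle service); member names such as maxMemory and slaveEndpoint come from the surrounding class:

// Sketch only (paraphrased): the remaining steps of BlockManager.initialize.
// Build this BlockManager's identity from the Netty server that blockTransferService.init just started.
blockManagerId = BlockManagerId(
  executorId, blockTransferService.hostName, blockTransferService.port)

// slaveEndpoint is a BlockManagerSlaveEndpoint registered on this node's RpcEnv; the driver uses it
// to send commands such as removing blocks. Register with the driver's BlockManagerMasterEndpoint,
// which stores a BlockManagerInfo entry for this BlockManagerId.
master.registerBlockManager(blockManagerId, maxMemory, slaveEndpoint)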

1. How the block transfer service NettyBlockTransferService initializes the Netty server: NettyBlockTransferService.init() does the four things listed in the comment above.

/**
 * A BlockTransferService that uses Netty to fetch a set of blocks at a time.
 * It is created by SparkEnv.create().
 * blockTransferService defaults to NettyBlockTransferService, which provides both the server and the client for fetching sets of blocks from remote nodes.
 * numCores: in local mode this is the number of CPU threads of the driver's node; in cluster mode it is 0.
 * numCores: if the SparkEnv is created by a CoarseGrainedExecutorBackend, the value comes from
 *     SparkConf's "spark.executor.cores" (I set it to 1, so it is 1 here); if that is not set, only one CoarseGrainedExecutorBackend is started and all of the worker's usable cores are given to it.
 */

class NettyBlockTransferService(conf: SparkConf, securityManager: SecurityManager, numCores: Int) extends BlockTransferService {

  // The transfer service is initialized with a BlockDataManager (a parent type of BlockManager), through which
  // it can read local blocks (getBlockData) and put local blocks (putBlockData).
  // This method is called by BlockManager.initialize().
  override def init(blockDataManager: BlockDataManager): Unit = {
    /** conf.getAppId: app-20180508234845-0000
      * serializer:    JavaSerializer()
      * blockDataManager: the BlockManager instance
      */
    val rpcHandler = new NettyBlockRpcServer(conf.getAppId, serializer, blockDataManager)
    ...
  }

2. Initializing NettyBlockRpcServer: it serves each request to open or upload any block registered in the BlockManager; each chunk transfer corresponds to one shuffle.

class NettyBlockRpcServer(
    appId: String,                  // app-20180508234845-0000
    serializer: Serializer,         // JavaSerializer
    blockManager: BlockDataManager) // the BlockManager instance
  extends RpcHandler with Logging {

  // The StreamManager allows registering an Iterator<ManagedBuffer>; a TransportClient can then fetch the
  // individual chunks, each registered buffer being one chunk.
  private val streamManager = new OneForOneStreamManager()

  // openBlocks and uploadBlock open and upload blocks registered in the BlockManager
  override def receive(
      client: TransportClient,
      rpcMessage: ByteBuffer,
      responseContext: RpcResponseCallback): Unit = {
    val message = BlockTransferMessage.Decoder.fromByteBuffer(rpcMessage)
    logTrace(s"Received request: $message")

    message match {
      case openBlocks: OpenBlocks =>
        val blocks: Seq[ManagedBuffer] =
          ...

      case uploadBlock: UploadBlock =>
        // StorageLevel is serialized as bytes using our JavaSerializer.
        ...
    }
  }

  override def getStreamManager(): StreamManager = streamManager
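For reference, the OpenBlocks branch essentially looks up each requested block through the BlockDataManager and registers the resulting buffers as one stream. A simplified sketch of that branch (logging and error handling left out, the usual JavaConverters import assumed):

      case openBlocks: OpenBlocks =>
        // Look up every requested block locally; each ManagedBuffer becomes one chunk.
        val blocks: Seq[ManagedBuffer] =
          openBlocks.blockIds.map(BlockId.apply).map(blockManager.getBlockData)
        // Register the buffers with the OneForOneStreamManager and return the streamId to the client,
        // which will then fetch the chunks one by one.
        val streamId = streamManager.registerStream(appId, blocks.iterator.asJava)
        responseContext.onSuccess(new StreamHandle(streamId, blocks.size).toByteBuffer)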

// Next, a look at OneForOneStreamManager, which is returned by getStreamManager()

/**
 * StreamManager which allows registration of an Iterator<ManagedBuffer>, which are individually fetched as
 * chunks by the client. Each registered buffer is one chunk.
 */

public class OneForOneStreamManager extends StreamManager {
  private final Logger logger = LoggerFactory.getLogger(OneForOneStreamManager.class);

  private final AtomicLong nextStreamId;
  private final ConcurrentHashMap<Long, StreamState> streams;

  /** State of a single stream. */
  private static class StreamState {
    ...
  }

  // Called from BlockManager.initialize ==> NettyBlockTransferService.init() ==> new NettyBlockRpcServer.
  // Sets the member nextStreamId (an AtomicLong) to a value below Integer.MAX_VALUE * 1000,
  // and streams to a new ConcurrentHashMap<Long, StreamState>().
  public OneForOneStreamManager() {
    // For debugging purposes, start with a random stream id to help identifying different streams.
    // This does not need to be globally unique, only unique to this class.
    nextStreamId = new AtomicLong((long) new Random().nextInt(Integer.MAX_VALUE) * 1000);
    streams = new ConcurrentHashMap<Long, StreamState>();
  }

  ...
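The two methods NettyBlockRpcServer relies on are registerStream and getChunk: registering hands back a streamId, and the transport layer later pulls each registered ManagedBuffer back out as one chunk. A small usage sketch (signatures as used in the walkthrough above):

import scala.collection.JavaConverters._
import org.apache.spark.network.buffer.ManagedBuffer
import org.apache.spark.network.server.OneForOneStreamManager

// Usage sketch: register a set of buffers, then fetch them back chunk by chunk.
def registerAndFetchFirst(appId: String, buffers: Seq[ManagedBuffer]): ManagedBuffer = {
  val streamManager = new OneForOneStreamManager()
  val streamId: Long = streamManager.registerStream(appId, buffers.iterator.asJava)
  // Each registered ManagedBuffer is one chunk; chunk indices run from 0 to buffers.size - 1.
  streamManager.getChunk(streamId, 0)
}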

3. Back to the NettyBlockTransferService.init method

override def init(blockDataManager: BlockDataManager): Unit = {
  /** conf.getAppId: app-20180508234845-0000
    * serializer:    JavaSerializer()
    * blockDataManager: the BlockManager instance
    * NettyBlockRpcServer: serves each request to open or upload any block registered in the BlockManager; each chunk transfer corresponds to one shuffle
    */
  val rpcHandler = new NettyBlockRpcServer(conf.getAppId, serializer, blockDataManager)
  var serverBootstrap: Option[TransportServerBootstrap] = None
  var clientBootstrap: Option[TransportClientBootstrap] = None
  if (authEnabled) { // false by default, so authentication is not enabled
    serverBootstrap = Some(new SaslServerBootstrap(transportConf, securityManager))
    clientBootstrap = Some(new SaslClientBootstrap(transportConf, conf.getAppId, securityManager,
      securityManager.isSaslEncryptionEnabled()))
  }
  // TransportContext: the context for creating the TransportServer (the Netty server) and the TransportClientFactory
  // (which creates TransportClients), and for setting up the Netty channel pipeline with TransportChannelHandler.
  // Instantiating TransportContext assigns its members: conf (a TransportConf tied to SparkConf through a ConfigProvider
  // subclass), rpcHandler (NettyBlockRpcServer), closeIdleConnections (false), plus concrete instances of the
  // outbound encoder and the inbound decoder.
  transportContext = new TransportContext(transportConf, rpcHandler)

==> First, a look at NettyBlockTransferService's transportConf member

// numCores: in local mode this is the number of CPU threads of the driver's node; in cluster mode it is 0
/** SparkTransportConf.fromSparkConf(): sets spark.shuffle.io.serverThreads / spark.shuffle.io.clientThreads on the
    SparkConf according to numCores; they are used by the Netty server and client. If the SparkConf does not set them,
    the value is at most 8.
    fromSparkConf returns a TransportConf instance whose config keys are derived from the second argument (the module),
    e.g. SPARK_NETWORK_IO_MODE_KEY: spark.shuffle.io.mode, SPARK_NETWORK_IO_SERVERTHREADS_KEY: spark.shuffle.io.serverThreads
  */

private val transportConf = SparkTransportConf.fromSparkConf(conf, "shuffle", numCores)

===> From the source, SparkTransportConf ties TransportConf to SparkConf through a subclass of TransportConf's ConfigProvider member, and uses the module argument to decide the key names of its member variables.

==> It also sets the thread counts for spark.shuffle.io.serverThreads and spark.shuffle.io.clientThreads from the numUsableCores argument.

/**
 * Provides a utility for transforming from a SparkConf inside a Spark JVM (e.g., Executor,
 * Driver, or a standalone shuffle service) into a TransportConf with details on our environment
 * like the number of cores that are allocated to this JVM.
 */

object SparkTransportConf {
  /**
   * Spark uses at most 8 Netty threads by default. In practice, 2-4 cores are enough to transfer roughly 10 Gb/s,
   * and each core initially needs more than 32 MB of off-heap memory; the value can be overridden by setting
   * serverThreads and clientThreads manually.
   */
  private val MAX_DEFAULT_NETTY_THREADS = 8

 
  /**
   * Utility for creating a [[TransportConf]] from a [[SparkConf]].
   * @param _conf the [[SparkConf]]
   * @param module the module name, e.g. "shuffle"
   * @param numUsableCores if nonzero, this will restrict the server and client threads to only use the given
   *                       number of cores, rather than all of the machine's cores.
   *                       This restriction will only occur if these properties are not already set.
   *
   * Creates a TransportConf from the SparkConf.
   * numUsableCores: in local mode this is the number of CPU threads of the driver's node; in cluster mode it is 0.
   * numUsableCores: if the SparkEnv is created by a CoarseGrainedExecutorBackend, the value comes from
   *     SparkConf's "spark.executor.cores" (I set it to 1, so it is 1 here); if that is not set, only one
   *     CoarseGrainedExecutorBackend is started and all of the worker's usable cores are given to it.
   */

 
  def fromSparkConf(_conf: SparkConf, module: String, numUsableCores: Int = 0): TransportConf = {
    val conf = _conf.clone

    // Specify thread configuration based on our JVM's allocation of cores (rather than necessarily
    // assuming we have all the machine's cores).
    // NB: Only set if serverThreads/clientThreads not already set.
    // defaultNumThreads: returns the number of threads for the Netty client and server thread pools;
    // if numUsableCores is 0, it returns a value of at most 8.
    val numThreads = defaultNumThreads(numUsableCores)
    // I set spark.executor.cores to 1, so numThreads is 1, i.e. spark.shuffle.io.serverThreads and
    // spark.shuffle.io.clientThreads both end up as 1.
    conf.setIfMissing(s"spark.$module.io.serverThreads", numThreads.toString)
    conf.setIfMissing(s"spark.$module.io.clientThreads", numThreads.toString)

    // ConfigProvider is an abstract class whose abstract method has to be implemented; its job is to help
    // instantiate TransportConf. The module value determines the key names of TransportConf's member variables,
    // e.g. SPARK_NETWORK_IO_MODE_KEY: spark.shuffle.io.mode, SPARK_NETWORK_IO_SERVERTHREADS_KEY: spark.shuffle.io.serverThreads
    new TransportConf(module, new ConfigProvider {
      override def get(name: String): String = conf.get(name)
    })
  }

  /**
   * Returns the default number of threads for both the Netty client and server thread pools.
   * If numUsableCores is 0, we will use Runtime to get an approximate number of available cores;
   * the result is capped at 8.
   */
  private def defaultNumThreads(numUsableCores: Int): Int = {
    val availableCores =
      if (numUsableCores > 0) numUsableCores else Runtime.getRuntime.availableProcessors()
    math.min(availableCores, MAX_DEFAULT_NETTY_THREADS)
  }
}
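Putting the pieces together for the setup used in this walkthrough (spark.executor.cores = 1), the derived TransportConf ends up with one Netty server thread and one Netty client thread for the shuffle module. A small usage sketch:

import org.apache.spark.SparkConf
import org.apache.spark.network.netty.SparkTransportConf

// Usage sketch: derive a TransportConf for the "shuffle" module on an executor with 1 usable core.
val sparkConf = new SparkConf()
val transportConf = SparkTransportConf.fromSparkConf(sparkConf, "shuffle", numUsableCores = 1)
// fromSparkConf sets these on its cloned SparkConf (if not already set), so the Netty server and
// client thread pools each get a single thread here:
//   spark.shuffle.io.serverThreads = 1
//   spark.shuffle.io.clientThreads = 1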

===> Back in NettyBlockTransferService.init, a look at new TransportContext(transportConf, rpcHandler)

override def init(blockDataManager: BlockDataManager): Unit = {
  ...
  // TransportContext: the context for creating the TransportServer (the Netty server) and the TransportClientFactory
  // (which creates TransportClients), and for setting up the Netty channel pipeline with TransportChannelHandler.
  // Instantiating TransportContext assigns its members: conf (a TransportConf tied to SparkConf through a ConfigProvider
  // subclass), rpcHandler (NettyBlockRpcServer), closeIdleConnections (false), plus concrete instances of the
  // outbound encoder and the inbound decoder.
  transportContext = new TransportContext(transportConf, rpcHandler)

===> Initializing TransportContext

/**
 * TransportContext contains the context to create a TransportServer and a TransportClientFactory, and to set up
 * the Netty Channel pipeline with a TransportChannelHandler.
 *
 * The TransportClient provides two communication protocols: control-plane RPCs and data-plane "chunk fetching".
 * The handling of the RPCs is performed outside the scope of the TransportContext (i.e., by a user-provided handler),
 * and it is responsible for setting up the streams that can be streamed through the data plane in chunks using
 * zero-copy IO.
 *
 * The TransportServer and TransportClientFactory both create a TransportChannelHandler for each channel.
 * As each TransportChannelHandler contains a TransportClient, this enables server processes to send messages back
 * to the client on an existing channel.
 */

public class TransportContext {
  private final Logger logger = LoggerFactory.getLogger(TransportContext.class);

  // see the constructors below for how these members are assigned
  private final TransportConf conf;
  private final RpcHandler rpcHandler;
  private final boolean closeIdleConnections;

  private final MessageEncoder encoder;
  private final MessageDecoder decoder;

  public TransportContext(TransportConf conf, RpcHandler rpcHandler) {
    this(conf, rpcHandler, false);
  }

  /**
   * @param conf the TransportConf instance; the second argument of fromSparkConf (the module) determines the key
   *             names of its member variables, e.g. SPARK_NETWORK_IO_MODE_KEY: spark.shuffle.io.mode,
   *             SPARK_NETWORK_IO_SERVERTHREADS_KEY: spark.shuffle.io.serverThreads
   * @param rpcHandler NettyBlockRpcServer: serves each request to open or upload any block registered in the
   *                   BlockManager; each chunk transfer corresponds to one shuffle
   * @param closeIdleConnections false when the two-argument constructor above is used
   */
  public TransportContext(
      TransportConf conf,
      RpcHandler rpcHandler,
      boolean closeIdleConnections) {
    this.conf = conf;             // TransportConf, tied to SparkConf through its ConfigProvider subclass
    this.rpcHandler = rpcHandler; // NettyBlockRpcServer
    // a MessageToMessageEncoder used for outbound events
    this.encoder = new MessageEncoder();
    // a MessageToMessageDecoder used for inbound events
    this.decoder = new MessageDecoder();
    // false by default
    this.closeIdleConnections = closeIdleConnections;
  }
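TransportContext is then used in two directions: the client side via createClientFactory (step 4 below) and the server side via createServer (covered in spark-core_29). A usage sketch, assuming the bootstrap lists are empty as in the non-authenticated case above:

// Usage sketch: the two roles of the TransportContext built from transportConf and rpcHandler.
val context = new TransportContext(transportConf, rpcHandler)

// Client side: a factory that pools TransportClients per remote host (see step 4 below).
val clientFactory = context.createClientFactory(
  java.util.Collections.emptyList[TransportClientBootstrap]())

// Server side: the Netty server that NettyBlockTransferService binds to (analyzed in spark-core_29).
val server = context.createServer(
  java.util.Collections.emptyList[TransportServerBootstrap]())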

4. Back once more in NettyBlockTransferService.init, a look at TransportContext.createClientFactory() creating the TransportClientFactory

override def init(blockDataManager: BlockDataManager): Unit = {
  /** conf.getAppId: app-20180508234845-0000
    * serializer:    JavaSerializer()
    * blockDataManager: the BlockManager instance
    * NettyBlockRpcServer: serves each request to open or upload any block registered in the BlockManager; each chunk transfer corresponds to one shuffle
    */
  val rpcHandler = new NettyBlockRpcServer(conf.getAppId, serializer, blockDataManager)
  var serverBootstrap: Option[TransportServerBootstrap] = None
  var clientBootstrap: Option[TransportClientBootstrap] = None
  if (authEnabled) { // false by default, so authentication is not enabled
    serverBootstrap = Some(new SaslServerBootstrap(transportConf, securityManager))
    clientBootstrap = Some(new SaslClientBootstrap(transportConf, conf.getAppId, securityManager,
      securityManager.isSaslEncryptionEnabled()))
  }
  // TransportContext: the context for creating the TransportServer (the Netty server) and the TransportClientFactory
  // (which creates TransportClients), and for setting up the Netty channel pipeline with TransportChannelHandler.
  // Instantiating TransportContext assigns its members: conf (a TransportConf tied to SparkConf through a ConfigProvider
  // subclass), rpcHandler (NettyBlockRpcServer), closeIdleConnections (false), plus concrete instances of the
  // outbound encoder and the inbound decoder.
  transportContext = new TransportContext(transportConf, rpcHandler)

  /** Authentication (SASL) is not enabled, so clientBootstrap is empty.
    *
    * TransportClientFactory: this factory creates TransportClients via createClient. It maintains a connection pool
    * to other hosts and should return the same TransportClient for the same remote host. It also shares a single
    * worker thread pool across all TransportClients.
    *
    * The factory runs the given TransportClientBootstraps before returning a new client; the bootstraps are executed
    * synchronously and must run successfully for a client to be created.
    * The TransportClientFactory instance is given these members:
    * context: the TransportContext
    * conf: the TransportConf, tied to SparkConf through its ConfigProvider subclass
    * plus Netty's NioSocketChannel.class, a NioEventLoopGroup thread group, and the ByteBuf allocator PooledByteBufAllocator
    */
  clientFactory = transportContext.createClientFactory(clientBootstrap.toSeq.asJava)

==> TransportContext.createClientFactory simply instantiates the TransportClientFactory

/**
 * Initializes a ClientFactory which runs the given TransportClientBootstraps prior to returning a new Client.
 * Bootstraps will be executed synchronously, and must run successfully in order to create a Client.
 * The new TransportClientFactory is given the TransportContext, the TransportConf (tied to SparkConf through its
 * ConfigProvider subclass), plus Netty's NioSocketChannel.class, a NioEventLoopGroup thread group, and the ByteBuf
 * allocator PooledByteBufAllocator.
 **/
public TransportClientFactory createClientFactory(List<TransportClientBootstrap> bootstraps) {
  return new TransportClientFactory(this, bootstraps);
}

===> The constructor stores the TransportContext and TransportConf, resolves the socket channel class from the IOMode enum (NIO by default, giving NioSocketChannel.class), creates Netty's EventLoopGroup thread group based on that IOMode, and creates a pooled ByteBuf allocator (PooledByteBufAllocator), assigning all of them to member variables.

public TransportClientFactory(
    TransportContext context,
    List<TransportClientBootstrap> clientBootstraps) {
  // make sure the TransportContext is not null
  this.context = Preconditions.checkNotNull(context);
  // the TransportConf instance; the second argument of SparkTransportConf.fromSparkConf (the module) determines
  // the key names of its member variables, e.g. SPARK_NETWORK_IO_MODE_KEY: spark.shuffle.io.mode,
  // SPARK_NETWORK_IO_SERVERTHREADS_KEY: spark.shuffle.io.serverThreads
  this.conf = context.getConf();
  // an empty ArrayList here
  this.clientBootstraps = Lists.newArrayList(Preconditions.checkNotNull(clientBootstraps));
  // a ConcurrentHashMap
  this.connectionPool = new ConcurrentHashMap<SocketAddress, ClientPool>();
  // numConnectionsPerPeer: spark.shuffle.io.numConnectionsPerPeer is not set in the SparkConf, so the default of 1 is used
  this.numConnectionsPerPeer = conf.numConnectionsPerPeer();
  this.rand = new Random();
  // conf.ioMode(): spark.shuffle.io.mode, which returns either NIO or EPOLL; here it returns "NIO",
  // which is turned into the NIO enum value
  IOMode ioMode = IOMode.valueOf(conf.ioMode());
  // for NIO this returns NioSocketChannel.class
  this.socketChannelClass = NettyUtils.getClientChannelClass(ioMode);
  // TODO: Make thread pool name configurable.
  /**
   * conf.clientThreads() maps to spark.shuffle.io.clientThreads, which was set when NettyBlockTransferService was
   * initialized ==> SparkTransportConf.fromSparkConf; the corresponding ConfigProvider.get implementation is
   * SparkConf.get(SPARK_NETWORK_IO_CLIENTTHREADS_KEY).
   * spark.shuffle.io.clientThreads equals the CoarseGrainedExecutorBackend's core count, which is 1 in my setup.
   *
   * NettyUtils.createEventLoop: creates Netty's EventLoopGroup thread group based on the IOMode enum.
   */
  this.workerGroup = NettyUtils.createEventLoop(ioMode, conf.clientThreads(), "shuffle-client");
  /**
   * conf.preferDirectBufs(): looks up spark.shuffle.io.preferDirectBufs; the SparkConf does not set it, so it returns true.
   * conf.clientThreads() is 1 here.
   * NettyUtils.createPooledByteBufAllocator(): creates a pooled ByteBuf allocator, PooledByteBufAllocator.
   */
  this.pooledAllocator = NettyUtils.createPooledByteBufAllocator(
      conf.preferDirectBufs(), false /* allowCache */, conf.clientThreads());
}
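What the factory buys later is connection reuse: createClient keys its connectionPool by remote address, keeps up to numConnectionsPerPeer clients per peer, picks one slot at random, and only dials a new connection when that slot is empty or no longer active. A simplified sketch of that lookup logic (the real createClient also handles locking per slot, timeouts, and running the client bootstraps; dialRemote below is a hypothetical stand-in for its internal connection setup):

// Simplified sketch of TransportClientFactory.createClient's pooling logic (not the exact source);
// connectionPool, numConnectionsPerPeer and rand are the factory members assigned above.
def createClient(remoteHost: String, remotePort: Int): TransportClient = {
  val address = new InetSocketAddress(remoteHost, remotePort)
  // one ClientPool per remote address, holding numConnectionsPerPeer slots
  val pool = connectionPool.computeIfAbsent(address, _ => new ClientPool(numConnectionsPerPeer))
  val index = rand.nextInt(numConnectionsPerPeer)
  val cached = pool.clients(index)
  if (cached != null && cached.isActive) {
    cached                                   // reuse the pooled connection
  } else {
    val fresh = dialRemote(address)          // hypothetical helper: actually connects to the remote TransportServer
    pool.clients(index) = fresh
    fresh
  }
}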

 

Back in NettyBlockTransferService.init, the next step is creating the Netty server.

(See: spark-core_29: Executor initialization - env.blockManager.initialize(conf.getAppId) - NettyBlockTransferService.init() - source analysis of creating the NettyServer)

