spark-core_29: blockManager.initialize() => NettyBlockTransferService.init() - source analysis of NettyServer creation

This article walks through the NettyServer startup flow in Spark, covering the key steps: TransportConf configuration, TransportContext creation and initialization, and TransportServer port binding.

The previous article (spark-core_28: Executor initialization, env.blockManager.initialize(conf.getAppId) - NettyBlockTransferService.init() source analysis) covered:

a. NettyBlockRpcServer: serves each request to open or to upload and register any block held by the BlockManager.

b. TransportConf: tied to SparkConf through its ConfigProvider member. SparkTransportConf.fromSparkConf(conf, "shuffle", numCores) sizes the Netty server and client thread pools, and the second argument of fromSparkConf sets the module name used in the configuration keys.

c. TransportContext: the context for creating the TransportServer (the NettyServer) and the TransportClientFactory (which creates TransportClients), and for wiring the Netty channel pipeline with a TransportChannelHandler. Instantiating it assigns the members conf (a TransportConf linked to SparkConf through its ConfigProvider member), rpcHandler (NettyBlockRpcServer), closeIdleConnections (false), and concrete instances of the outbound encoder and inbound decoder.

d. TransportClientFactory: constructed from the TransportContext and TransportConf. Based on the IOMode enum (NIO by default) it picks NioSocketChannel.class, creates a Netty EventLoopGroup, and assigns a pooled ByteBuf allocator (PooledByteBufAllocator) to its member fields.
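The per-host pooling behaviour of TransportClientFactory described in (d) can be sketched in plain Java. This is an illustration only (the `ClientPool` and `Client` names are hypothetical, not Spark's actual classes): the same remote host always yields the same client instance.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a per-host client pool: repeated requests for the same host reuse
// one client, mirroring how TransportClientFactory reuses remote connections.
class ClientPool {
    static final class Client {
        final String host;
        Client(String host) { this.host = host; }
    }

    private final Map<String, Client> pool = new ConcurrentHashMap<>();
    final AtomicInteger created = new AtomicInteger();

    Client createClient(String host) {
        // computeIfAbsent creates the connection only on the first request for a host
        return pool.computeIfAbsent(host, h -> {
            created.incrementAndGet();
            return new Client(h);
        });
    }
}
```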

override def init(blockDataManager: BlockDataManager): Unit = {
  /** conf.getAppId: e.g. app-20180508234845-0000
    * serializer:      JavaSerializer()
    * blockDataManager: the BlockManager instance
    * NettyBlockRpcServer: opens or uploads/registers any block held by the BlockManager for each request; each chunk transfer corresponds to one shuffle
    */
  val rpcHandler = new NettyBlockRpcServer(conf.getAppId, serializer, blockDataManager)

  var serverBootstrap: Option[TransportServerBootstrap] = None
  var clientBootstrap: Option[TransportClientBootstrap] = None

  if (authEnabled) { // false by default: authentication is not enabled
    serverBootstrap = Some(new SaslServerBootstrap(transportConf, securityManager))
    clientBootstrap = Some(new SaslClientBootstrap(transportConf, conf.getAppId, securityManager,
      securityManager.isSaslEncryptionEnabled()))
  }

  transportContext = new TransportContext(transportConf, rpcHandler)

  /** SSL is not enabled, so clientBootstrap is empty.
    *
    * TransportClientFactory: creates TransportClients via createClient().
    * The factory maintains a connection pool to other hosts and returns the same TransportClient for the same remote host. It also shares a single worker thread pool among all TransportClients.
    *
    * Before returning a new client, the factory runs the given TransportClientBootstraps. They execute synchronously and must all succeed for the client to be created.
    * Members assigned on the TransportClientFactory instance:
    * context: the TransportContext
    * conf: TransportConf, linked to SparkConf through its ConfigProvider member
    * plus Netty's NioSocketChannel.class, a NioEventLoopGroup, and a PooledByteBufAllocator
    */
  clientFactory = transportContext.createClientFactory(clientBootstrap.toSeq.asJava)

  // Create a NettyServer; the codec and the inbound handlers are all installed into this TransportServer
  server = createServer(serverBootstrap.toList)

  appId = conf.getAppId // e.g. app-20180404172558-0000
  logInfo("Server created on " + server.getPort)
}

1. First, NettyBlockTransferService.createServer():

/** Creates and binds the TransportServer, possibly trying multiple ports. */
private def createServer(bootstraps: List[TransportServerBootstrap]): TransportServer = {
  def startService(port: Int): (TransportServer, Int) = {
    // Instantiating TransportContext assigns conf (TransportConf, linked to SparkConf through its ConfigProvider member),
    // rpcHandler (NettyBlockRpcServer), closeIdleConnections (false), plus concrete instances of the outbound encoder and inbound decoder
    val server = transportContext.createServer(port, bootstraps.asJava)
    (server, server.getPort)
  }

  val portToTry = conf.getInt("spark.blockManager.port", 0)
  // All CoarseGrainedExecutorBackend logs live under %SPARK_HOME%/work, e.g.:
  // 18/05/16 16:33:47 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 57010
  // Allocate a port and start the NettyServer at the same time
  Utils.startServiceOnPort(portToTry, startService, conf, getClass.getName)._1
}
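Utils.startServiceOnPort retries on successive ports when a bind fails. The pattern can be sketched with the JDK's ServerSocket; this is a simplified stand-in (the `PortRetry` class is hypothetical), not Spark's implementation:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Simplified retry-on-port-conflict: try startPort, startPort+1, ... until a
// bind succeeds. Port 0 asks the OS for any free ephemeral port, so no retry
// is needed in that case.
class PortRetry {
    static ServerSocket startServiceOnPort(int startPort, int maxRetries) throws IOException {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            int tryPort = (startPort == 0) ? 0 : startPort + attempt;
            try {
                return new ServerSocket(tryPort); // bind happens here
            } catch (IOException e) {
                if (attempt == maxRetries) throw e; // give up after maxRetries
            }
        }
        throw new IllegalStateException("unreachable");
    }
}
```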

2. TransportContext.createServer() first instantiates TransportServer, which is the NettyServer:

/** Create a server which will attempt to bind to a specific port. */
public TransportServer createServer(int port, List<TransportServerBootstrap> bootstraps) {
  /**
   * context: this TransportContext, the context for creating the TransportServer and the TransportClientFactory, and for wiring the Netty channel pipeline with a TransportChannelHandler
   * hostToBind: null
   * port: randomly assigned by Utils.startServiceOnPort
   * rpcHandler: NettyBlockRpcServer, which opens or uploads/registers any block held by the BlockManager for each request; each chunk transfer corresponds to one shuffle
   * bootstraps: an empty ArrayList
   */
  return new TransportServer(this, null, port, rpcHandler, bootstraps);
}

3. Assigning TransportServer's member fields also binds the port on the current node:

/**
 * Creates a TransportServer that binds to the given host and the given port, or to any
 * available port if 0. If you don't want to bind to any special host, set "hostToBind" to null.
 *
 * context: this TransportContext, the context for creating the TransportServer and the TransportClientFactory, and for wiring the Netty channel pipeline with a TransportChannelHandler
 * hostToBind: null
 * portToBind: randomly assigned by Utils.startServiceOnPort
 * appRpcHandler: NettyBlockRpcServer, which opens or uploads/registers any block held by the BlockManager for each request; each chunk transfer corresponds to one shuffle
 * bootstraps: an empty list, since authentication is not enabled
 */
public TransportServer(
    TransportContext context,
    String hostToBind,
    int portToBind,
    RpcHandler appRpcHandler, // NettyBlockRpcServer
    List<TransportServerBootstrap> bootstraps) {
  this.context = context;
  // conf: TransportConf, linked to SparkConf through its ConfigProvider member
  this.conf = context.getConf();
  this.appRpcHandler = appRpcHandler;
  this.bootstraps = Lists.newArrayList(Preconditions.checkNotNull(bootstraps));

  try {
    init(hostToBind, portToBind);
  } catch (RuntimeException e) {
    JavaUtils.closeQuietly(this);
    throw e;
  }
}

4. The NettyServer bootstrap; see the Netty documentation for background:

// hostToBind: null; portToBind: randomly assigned by Utils.startServiceOnPort
private void init(String hostToBind, int portToBind) {
  // NIO is the default enum value
  IOMode ioMode = IOMode.valueOf(conf.ioMode());
  // conf.serverThreads(): same as the CoarseGrainedExecutorBackend's core count
  // createEventLoop builds a Netty NioEventLoopGroup
  EventLoopGroup bossGroup =
      NettyUtils.createEventLoop(ioMode, conf.serverThreads(), "shuffle-server");
  EventLoopGroup workerGroup = bossGroup;

  /**
   * conf.preferDirectBufs(): looks up the key spark.shuffle.io.preferDirectBufs; returns true if SparkConf does not set it
   * conf.clientThreads() is 1
   * NettyUtils.createPooledByteBufAllocator(): creates a pooled ByteBuf allocator (PooledByteBufAllocator)
   */
  PooledByteBufAllocator allocator = NettyUtils.createPooledByteBufAllocator(
      conf.preferDirectBufs(), true /* allowCache */, conf.serverThreads());

  // ServerBootstrap is Netty's helper class for bootstrapping an NIO server; it reduces server-side boilerplate
  bootstrap = new ServerBootstrap()
      .group(bossGroup, workerGroup)
      .channel(NettyUtils.getServerChannelClass(ioMode)) // NioServerSocketChannel.class
      .option(ChannelOption.ALLOCATOR, allocator)
      .childOption(ChannelOption.ALLOCATOR, allocator);

  if (conf.backLog() > 0) {
    // size of the TCP accept queue
    bootstrap.option(ChannelOption.SO_BACKLOG, conf.backLog());
  }

  if (conf.receiveBuf() > 0) {
    bootstrap.childOption(ChannelOption.SO_RCVBUF, conf.receiveBuf());
  }

  if (conf.sendBuf() > 0) {
    bootstrap.childOption(ChannelOption.SO_SNDBUF, conf.sendBuf());
  }

  bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
    // ChannelInitializer.initChannel() is called after the Channel is registered;
    // once it returns, the ChannelInitializer removes itself from the ChannelPipeline
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
      // NettyBlockRpcServer: opens or uploads/registers any block held by the BlockManager for each request; each chunk transfer corresponds to one shuffle
      RpcHandler rpcHandler = appRpcHandler;
      for (TransportServerBootstrap bootstrap : bootstraps) {
        rpcHandler = bootstrap.doBootstrap(ch, rpcHandler);
      }
      // context (the TransportContext) wires the SocketChannel and NettyBlockRpcServer into the ChannelPipeline's handlers.
      // It returns a TransportChannelHandler, a SimpleChannelInboundHandler subclass that handles both
      // RequestMessages and ResponseMessages (both are inbound) on the server and client side.
      context.initializePipeline(ch, rpcHandler);
    }
  });

  // In other words, every CoarseGrainedExecutorBackend on a Worker starts its own Netty server when creating the Executor.
  // Since hostToBind is null, the bind address is the current worker node, on portToBind.
  InetSocketAddress address = hostToBind == null ?
      new InetSocketAddress(portToBind) : new InetSocketAddress(hostToBind, portToBind);
  channelFuture = bootstrap.bind(address);
  channelFuture.syncUninterruptibly();

  port = ((InetSocketAddress) channelFuture.channel().localAddress()).getPort();
  logger.debug("Shuffle server started on port: " + port);
}
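Reading the real port back from the bound channel's localAddress() (needed when binding to port 0) works the same way with the JDK's own NIO channels. A minimal stdlib sketch (the `BindSketch` class is hypothetical):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

// Bind to port 0 (OS-assigned ephemeral port) and recover the actual port
// from the channel's local address, as TransportServer.init() does.
class BindSketch {
    static int bindAndGetPort() throws IOException {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress(0)); // port 0 => OS picks a free port
            return ((InetSocketAddress) server.getLocalAddress()).getPort();
        }
    }
}
```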

 

5. TransportContext.initializePipeline(ch, rpcHandler) installs the ChannelHandlers on the ChannelPipeline:

 

/**
 * Initializes a client or server Netty Channel Pipeline which encodes/decodes messages and
 * has a {@link org.apache.spark.network.server.TransportChannelHandler} to handle request or
 * response messages.
 *
 * @param channel The channel to initialize.
 * @param channelRpcHandler The RPC handler to use for the channel.
 *
 * @return Returns the created TransportChannelHandler, which includes a TransportClient that can
 * be used to communicate on this channel. The TransportClient is directly associated with a
 * ChannelHandler to ensure all users of the same channel get the same TransportClient object.
 *
 * The returned TransportChannelHandler is a SimpleChannelInboundHandler subclass that handles both request and response messages.
 *
 * ==> TransportClient's role: it fetches consecutive chunks of a pre-negotiated stream. The API is intended for efficient transfer of large amounts of data, broken into chunks ranging from a few hundred KB to a few MB.
 *     The TransportClient class sends requests to the server, while TransportResponseHandler handles the server's responses to requests issued by the TransportClient.
 * Parameters:
 * channel: the SocketChannel, whose pipeline() yields the ChannelPipeline
 * channelRpcHandler: NettyBlockRpcServer, which opens or uploads/registers any block held by the BlockManager for each request; each chunk transfer corresponds to one shuffle
 */
public TransportChannelHandler initializePipeline(
    SocketChannel channel,
    RpcHandler channelRpcHandler) {
  try {
    // TransportChannelHandler is a SimpleChannelInboundHandler subclass handling RequestMessages and ResponseMessages (both inbound) on the server and client side
    TransportChannelHandler channelHandler = createChannelHandler(channel, channelRpcHandler);
    channel.pipeline()
      .addLast("encoder", encoder)
      // Spark's custom TCP frame (half-packet) decoder
      .addLast(TransportFrameDecoder.HANDLER_NAME, NettyUtils.createFrameDecoder())
      .addLast("decoder", decoder)
      .addLast("idleStateHandler", new IdleStateHandler(0, 0, conf.connectionTimeoutMs() / 1000))
      // NOTE: Chunks are currently guaranteed to be returned in the order of request, but this
      // would require more logic to guarantee if this were not part of the same event loop.
      .addLast("handler", channelHandler);
    return channelHandler;
  } catch (RuntimeException e) {
    logger.error("Error while initializing Netty pipeline", e);
    throw e;
  }
}
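The frame decoder's job above (reassembling messages when TCP delivers partial or merged packets) can be illustrated with plain length-prefixed framing. This is a simplified stand-in for TransportFrameDecoder, not its actual implementation; the `FrameDecoder` class here is hypothetical:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Simplified length-prefixed framing: each frame is [4-byte length][payload].
// Bytes may arrive in arbitrary fragments; a complete frame is emitted only
// once all of its bytes have been buffered, which is what a TCP frame decoder does.
class FrameDecoder {
    private ByteBuffer buf = ByteBuffer.allocate(0);

    List<byte[]> feed(byte[] chunk) {
        // append the new fragment to whatever is still buffered
        ByteBuffer merged = ByteBuffer.allocate(buf.remaining() + chunk.length);
        merged.put(buf).put(chunk).flip();
        buf = merged;

        List<byte[]> frames = new ArrayList<>();
        // emit every complete [length][payload] frame currently buffered
        while (buf.remaining() >= 4) {
            buf.mark();
            int len = buf.getInt();
            if (buf.remaining() < len) { buf.reset(); break; } // half packet: wait for more bytes
            byte[] frame = new byte[len];
            buf.get(frame);
            frames.add(frame);
        }
        return frames;
    }

    static byte[] encode(byte[] payload) {
        return ByteBuffer.allocate(4 + payload.length).putInt(payload.length).put(payload).array();
    }
}
```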

 

===> TransportContext.createChannelHandler() creates the server- and client-side handler for both RequestMessages and ResponseMessages (both inbound):

/**
 * Creates the server- and client-side handler which is used to handle both RequestMessages and ResponseMessages. The channel is expected to have been successfully created, though certain properties (such as the remoteAddress()) may not be available yet.
 * Parameters:
 * channel: the SocketChannel, whose pipeline() yields the ChannelPipeline
 * rpcHandler: NettyBlockRpcServer, which opens or uploads/registers any block held by the BlockManager for each request; each chunk transfer corresponds to one shuffle
 */

private TransportChannelHandler createChannelHandler(Channel channel, RpcHandler rpcHandler) {
  // TransportResponseHandler handles the server's responses to requests issued by the TransportClient
  TransportResponseHandler responseHandler = new TransportResponseHandler(channel);
  // TransportClient sends requests to the server; TransportResponseHandler processes what comes back
  TransportClient client = new TransportClient(channel, responseHandler);
  // TransportRequestHandler handles requests from clients and writes block data back
  TransportRequestHandler requestHandler = new TransportRequestHandler(channel, client,
      rpcHandler);
  // TransportChannelHandler (a SimpleChannelInboundHandler subclass) handles both request and response messages
  return new TransportChannelHandler(client, responseHandler, requestHandler,
      conf.connectionTimeoutMs(), closeIdleConnections);
}

===> Back in TransportServer.init(), serverBootstrap.bind() binds the NettyServer's port:

// hostToBind: null; portToBind: randomly assigned by Utils.startServiceOnPort
private void init(String hostToBind, int portToBind) {
  // ... (bootstrap and childHandler setup as shown in step 4) ...

  // Every CoarseGrainedExecutorBackend on a Worker starts its own Netty server when creating the Executor.
  // Since hostToBind is null, the bind address is the current worker node, on portToBind.
  InetSocketAddress address = hostToBind == null ?
      new InetSocketAddress(portToBind) : new InetSocketAddress(hostToBind, portToBind);
  channelFuture = bootstrap.bind(address);
  channelFuture.syncUninterruptibly();

  port = ((InetSocketAddress) channelFuture.channel().localAddress()).getPort();
  logger.debug("Shuffle server started on port: " + port);
}

==> At this point the NettyServer has started successfully, and the whole NettyBlockTransferService.init() method body has finished executing.


> exit 1 > fi > > # 测试 DISPLAY 是否可用(需要 xorg-x11-utils 包) > if command -v xdpyinfo >/dev/null; then > if xdpyinfo -display :99 >/dev/null 2>&1; then > echo "🟢 xdpyinfo OK: DISPLAY :99 is accessible" > else > echo "🔴 xdpyinfo FAILED: Cannot connect to DISPLAY :99" > echo "💡 Check Xvfb logs or missing fonts" > fi > else > echo "🟡 Tip: Install xorg-x11-utils for better debugging:" > echo " yum install -y xorg-x11-utils" > fi > > # 进入 Spark 目录并启动 > cd /opt/openfire/enterprise/spark/Spark || { echo "❌ Spark directory not found!"; exit 1; } > > echo "🎮 Launching Spark client..." > exec ./Spark & > EOF 🚀 Starting Xvfb on :99... Unrecognized option: -novtswitch use: X [:<display>] [option] -a # default pointer acceleration (factor) -ac disable access control restrictions -audit int set audit trail level -auth file select authorization file -br create root window with black background +bs enable any backing store support -bs disable any backing store support -c turns off key-click c # key-click volume (0-100) -cc int default color visual class -nocursor disable the cursor -core generate core dump on fatal error -displayfd fd file descriptor to write display number to when ready to connect -dpi int screen resolution in dots per inch -dpms disables VESA DPMS monitor control -deferglyphs [none|all|16] defer loading of [no|all|16-bit] glyphs -f # bell base (0-100) -fc string cursor font -fn string default font name -fp string default font path -help prints message with these options +iglx Allow creating indirect GLX contexts -iglx Prohibit creating indirect GLX contexts (default) -I ignore all remaining arguments -ld int limit data space to N Kb -lf int limit number of open files to N -ls int limit stack space to N Kb -nolock disable the locking mechanism -maxclients n set maximum number of clients (power of two) -nolisten string don't listen on protocol -listen string listen on protocol -noreset don't reset after last client exists -background [none] create root window with no 
background -reset reset after last client exists -p # screen-saver pattern duration (minutes) -pn accept failure to listen on all ports -nopn reject failure to listen on all ports -r turns off auto-repeat r turns on auto-repeat -render [default|mono|gray|color] set render color alloc policy -retro start with classic stipple and cursor -s # screen-saver timeout (minutes) -seat string seat to run on -t # default pointer threshold (pixels/t) -terminate terminate at server reset -to # connection time out -tst disable testing extensions ttyxx server started from init on /dev/ttyxx v video blanking for screen-saver -v screen-saver without video blanking -wm WhenMapped default backing-store -wr create root window with white background -maxbigreqsize set maximal bigrequest size +xinerama Enable XINERAMA extension -xinerama Disable XINERAMA extension -dumbSched Disable smart scheduling and threaded input, enable old behavior -schedInterval int Set scheduler interval in msec -sigstop Enable SIGSTOP based startup +extension name Enable extension -extension name Disable extension -query host-name contact named host for XDMCP -broadcast broadcast for XDMCP -multicast [addr [hops]] IPv6 multicast for XDMCP -indirect host-name contact named host for indirect XDMCP -port port-num UDP port number to send messages to -from local-address specify the local address to connect from -once Terminate server after one session -class display-class specify display class to send in manage -cookie xdm-auth-bits specify the magic cookie for XDMCP -displayID display-id manufacturer display ID for request [+-]accessx [ timeout [ timeout_mask [ feedback [ options_mask] ] ] ] enable/disable accessx key sequences -ardelay set XKB autorepeat delay -arinterval set XKB autorepeat interval -screen scrn WxHxD set screen's width, height, depth -pixdepths list-of-int support given pixmap depths +/-render turn on/off RENDER extension support(default on) -linebias n adjust thin line pixelization -blackpixel n 
pixel value for black -whitepixel n pixel value for white -fbdir directory put framebuffers in mmap'ed files in directory -shmem put framebuffers in shared memory (EE) Fatal server error: (EE) Unrecognized option: -novtswitch (EE) ❌ Failed to start Xvfb! [root@yfw xvfb-extract]# [root@yfw xvfb-extract]# ps aux | grep java | grep openfire openfire 1328 0.2 8.5 3934888 325616 ? Sl Oct16 41:51 /usr/lib/jvm/java-11/bin/java -server -Djdk.tls.ephemeralDHKeySize=matched -Djsse.SSLEngine.acceptLargeFragments=true -Dlog4j.configurationFile=/opt/openfire/bin/../lib/log4j2.xml -Dlog4j2.formatMsgNoLookups=true -Dlog4j.skipJansi=false -DopenfireHome=/opt/openfire/bin/../ -Dopenfire.lib.dir=/opt/openfire/lib -classpath /opt/openfire/.install4j/i4jruntime.jar:/opt/openfire/.install4j/launchere44106de.jar:/opt/openfire/lib/* install4j.org.jivesoftware.openfire.starter.ServerStarter start openfire 4101 0.1 5.0 3746016 193364 ? Sl Oct16 28:13 /usr/lib/jvm/java-11-openjdk-11.0.13.0.8-4.el8_5.x86_64/bin/java -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -Dconfig.file=/opt/openfire/bin/../plugins/pade/classes/jvb/application.conf -Dnet.java.sip.communicator.SC_HOME_DIR_LOCATION=/opt/openfire/bin/../plugins/pade/classes/jvb -Dnet.java.sip.communicator.SC_HOME_DIR_NAME=config -Djava.util.logging.config.file=./logging.properties -Djdk.tls.ephemeralDHKeySize=2048 -cp ./jitsi-videobridge-2.1-SNAPSHOT.jar:./jitsi-videobridge-2.1-SNAPSHOT-jar-with-dependencies.jar org.jitsi.videobridge.MainKt --apis=rest openfire 4124 0.0 2.8 3737208 107344 ? 
Sl Oct16 14:17 /usr/lib/jvm/java-11-openjdk-11.0.13.0.8-4.el8_5.x86_64/bin/java -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -Dconfig.file=/opt/openfire/bin/../plugins/pade/classes/jicofo/application.conf -Dnet.java.sip.communicator.SC_HOME_DIR_LOCATION=/opt/openfire/bin/../plugins/pade/classes/jicofo -Dnet.java.sip.communicator.SC_HOME_DIR_NAME=config -Djava.util.logging.config.file=./logging.properties -Djdk.tls.ephemeralDHKeySize=2048 -cp ./jicofo-1.1-SNAPSHOT.jar:./jicofo-1.1-SNAPSHOT-jar-with-dependencies.jar org.jitsi.jicofo.Main --host=localhost --port=5275 --domain=localhost --secret=ElaQPVPKh05Zy4pb3Rgrvmm5XFoXVadRneWHyRqr --user_domain=localhost --user_name=focus --user_password=yra84zlHMIbfpL8CQpJDYDz5ODmtQXPpoRVds1WN [root@yfw xvfb-extract]#