Unified Memory Configuration for TaskExecutors (a translation of Flink FLIP-49)

This post presents a proposal to improve the memory configuration of Flink's TaskExecutor, aiming to address the large configuration differences between streaming and batch, the complexity of configuring RocksDB, and configuration that is hard to understand. The proposal includes unifying memory management for streaming and batch, unifying explicit and implicit memory allocation, and separating the managed on-heap and off-heap memory pools. With these changes, users can manage and control memory more conveniently, and the configuration process is simplified.

Motivation:

This proposal aims to address several shortcomings of the TaskExecutor memory configuration in Flink 1.9.

(1) Large configuration differences between streaming and batch

Currently, TaskExecutor memory is configured differently for streaming and batch jobs.

  • Streaming
    • Memory is consumed implicitly, either on-heap by the MemoryStateBackend or off-heap by RocksDB.
    • Users must manually adjust the heap size and manually choose the state backend.
    • Users must manually configure RocksDB to use enough memory for good performance, without exceeding the budget.
    • Memory consumption is unpredictable, for both the on-heap memory state backend and the off-heap RocksDB backend.
  • Batch
    • Users manually configure the total memory size, and whether operators use on-heap or off-heap memory.
    • Flink reserves a fraction of the total memory as managed memory. It automatically adjusts the heap size and the "max direct memory" parameter to accommodate on-heap or off-heap memory management.
    • Flink allocates managed MemorySegments for the operators, and guarantees that the pool of available MemorySegments is never exceeded.

(2) Complex RocksDB configuration for streaming

  • Users must manually reduce the JVM heap size, or configure Flink to use off-heap memory.
  • Users must manually configure the RocksDB memory.
  • Users cannot use as much of the available memory as possible, because the RocksDB memory size must be configured low enough to guarantee that the memory budget is not exceeded.

(3) Remove complex, non-deterministic, hard-to-understand configuration logic

  • There is some "magic" in how container and process memory sizes are configured. Some of it is hard to reason about, e.g. the memory "cutoff" reserved for YARN containers.
  • Configuring an off-heap state backend like RocksDB means either setting managed memory to off-heap, or adjusting the cutoff ratio and thereby reducing the memory given to the JVM heap.
  • The TaskExecutor relies on instantaneous JVM memory usage to determine the sizes of the different memory pools: it first triggers a GC and then reads the JVM's free memory size. This introduces non-determinism into the pool sizes.

Public Interfaces

TaskExecutor memory configuration options, and backwards compatibility.

Proposed Changes

Unify memory management for streaming and batch

The basic idea is to treat the memory used by state backends as part of managed memory, and to extend the MemoryManager so that state backends can simply reserve a certain amount of memory from it, without necessarily allocating that memory through the MemoryManager.

This way, users can switch between streaming and batch jobs without modifying the cluster configuration.

Memory usage scenarios and their characteristics

  • Streaming jobs using the Memory/FsStateBackend:
    • JVM heap memory
    • Implicitly allocated by the state backend
    • No control over the overall memory consumption
  • Streaming jobs using the RocksDBStateBackend:
    • Off-heap memory
    • Implicitly allocated by the state backend
    • Cannot exceed the total memory size configured during initialization
  • Batch jobs:
    • Heap memory
    • Explicitly allocated from the MemoryManager
    • Cannot exceed the total memory allocated from the MemoryManager

Unify explicit and implicit memory allocation

  • Memory consumers can obtain memory in two ways:
    • Explicitly acquired from the MemoryManager, in the form of MemorySegments.
    • Reserved from the MemoryManager up front and used later; in this case the MemoryManager should return a budget of "use up to X bytes", and the memory consumer allocates the memory itself, implicitly (see the sketch below).
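
To make the two acquisition modes concrete, here is a minimal, hypothetical sketch of such an interface; the trait and method names are illustrative and do not reflect Flink's actual MemoryManager API:

// Hypothetical sketch of the two acquisition modes (names are illustrative).
final case class MemorySegment(sizeBytes: Int) // stand-in for Flink's MemorySegment

trait ManagedMemoryManager {
  // Explicit acquisition: hand out actual MemorySegments (e.g. to batch operators).
  def allocateSegments(owner: AnyRef, numSegments: Int): Seq[MemorySegment]

  // Implicit acquisition: only book-keep a reservation of `bytes` without
  // allocating anything; the consumer (e.g. RocksDB) allocates for itself
  // but must stay within the reserved budget.
  def reserveMemory(owner: AnyRef, bytes: Long): Unit

  // Return a reservation so the budget becomes available to other consumers.
  def releaseMemory(owner: AnyRef, bytes: Long): Unit
}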

Separate the managed on-heap and off-heap memory pools

Currently (Flink 1.9), all managed memory is allocated with a single type, either all on-heap or all off-heap. This works for the current use cases, in which on-heap and off-heap managed memory are never needed in the same TaskExecutor at the same time.

In the proposed design, memory used by state backends is also considered managed memory, which means that jobs in the same cluster may need different types of managed memory. For example, one streaming job may use the MemoryStateBackend while another uses the RocksDBStateBackend.

Therefore, we split the managed memory pool into an on-heap pool and an off-heap pool. An off-heap fraction decides how much of the managed memory goes into the off-heap pool, leaving the rest to the on-heap pool. Users can still configure a cluster to use all on-heap / all off-heap managed memory by setting the off-heap fraction to 0 / 1.

Memory Pools and How to Configure Them

(Figure: overview of the TaskExecutor memory pools)

Framework Heap Memory

  • On-heap memory used by the Flink TaskManager framework itself. It is not part of the slot resource profiles.
    (taskmanager.memory.framework.heap)
    (default: 128 MB)

Task Heap Memory

On-heap memory used by user code.
(taskmanager.memory.task.heap)

Task Off-Heap Memory

Off-heap memory used by user code.
(taskmanager.memory.task.offheap)
(default: 0 B)

Shuffle Memory

Off-heap memory used for shuffling.
(taskmanager.memory.shuffle.[min/max/fraction])
(defaults: min = 64 MB, max = 1 GB, fraction = 0.1)

Managed Memory

Flink managed memory, split into an on-heap part and an off-heap part.

  • Configuration options:
    (taskmanager.memory.managed.[size|fraction])
    (taskmanager.memory.managed.offheap-fraction)
    (defaults: fraction = 0.5, offheap-fraction = 0.0)

  • Derivation:
    On-Heap Managed Memory = Managed Memory * (1 - offheap-fraction)
    Off-Heap Managed Memory = Managed Memory * offheap-fraction
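
For illustration, a small snippet computing this split under assumed values (not Flink code):

// Illustrative computation of the managed memory split (not Flink code).
val managedMemory   = 512L << 20  // assume 512 MiB of managed memory
val offHeapFraction = 0.0         // taskmanager.memory.managed.offheap-fraction (default)

val offHeapManaged = (managedMemory * offHeapFraction).toLong // 0 MiB with the default
val onHeapManaged  = managedMemory - offHeapManaged           // all 512 MiB stay on-heap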

JVM Metaspace

Off-heap memory used for the JVM metaspace.
(taskmanager.memory.jvm-metaspace)
(default: 192 MB)

JVM Overhead

Off-heap memory used for thread stack space, I/O direct memory, compile cache, etc.
(taskmanager.memory.jvm-overhead.[min/max/fraction])
(defaults: min = 128 MB, max = 1 GB, fraction = 0.1)

Total Flink Memory

  • A coarse-grained option for the total Flink memory, which makes configuration easier for users.
    It includes the Framework Heap Memory, Task Heap Memory, Task Off-Heap Memory, Shuffle Memory, and Managed Memory described above,
    but excludes JVM Metaspace and JVM Overhead.

  • Configuration option: (taskmanager.memory.total-flink.size)

Total Process Memory

  • A coarse-grained option for the total process memory, which makes configuration easier for users.
    It includes the Total Flink Memory described above, plus JVM Metaspace and JVM Overhead.

  • Configuration option: (taskmanager.memory.total-process.size)

JVM Parameters

JVM heap memory

Includes Framework Heap Memory, Task Heap Memory, and On-Heap Managed Memory.
-Xmx and -Xms are explicitly set to this value.

JVM direct memory

Includes Task Off-Heap Memory and Shuffle Memory.
-XX:MaxDirectMemorySize is explicitly set to this value.
For off-heap managed memory, we always use Unsafe.allocateMemory(), which is not limited by this parameter.

JVM metaspace

-XX:MaxMetaspaceSize is set to the configured JVM Metaspace.
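
A minimal sketch of how these flags could be derived from the computed pool sizes (illustrative only; the actual derivation lives in Flink's startup scripts and ResourceManager code):

// Illustrative derivation of the JVM flags from the computed pool sizes (in bytes).
def jvmArgs(frameworkHeap: Long, taskHeap: Long, onHeapManaged: Long,
            taskOffHeap: Long, shuffle: Long, metaspace: Long): Seq[String] = {
  val heap   = frameworkHeap + taskHeap + onHeapManaged // -Xmx / -Xms
  // Direct memory limit; off-heap managed memory is allocated via
  // Unsafe.allocateMemory and is NOT counted against this limit.
  val direct = taskOffHeap + shuffle
  Seq(s"-Xms$heap", s"-Xmx$heap",
      s"-XX:MaxDirectMemorySize=$direct",
      s"-XX:MaxMetaspaceSize=$metaspace")
}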

Memory Calculations

  • All memory/pool sizes are calculated before the TaskExecutor JVM is started. Once the JVM has started, no further calculation or derivation is needed inside the Flink TaskExecutor.

  • The calculations should be performed in only two places:

    • Standalone mode: in the startup shell scripts.
    • YARN/Mesos/K8s: on the ResourceManager side.
  • The startup scripts can actually call Flink runtime Java code to execute the calculation logic. This way, we ensure that standalone clusters and the other cluster modes share consistent memory calculation logic.

  • The calculated memory/pool sizes are passed to the TaskExecutor as dynamic configuration options (via '-D'), as sketched below.
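
A small sketch of how such dynamic options could be rendered (illustrative; the helper is hypothetical):

// Illustrative generation of '-D' dynamic properties for the TaskExecutor.
def dynamicConfigs(sizes: Map[String, Long]): String =
  sizes.map { case (key, bytes) => s"-D$key=${bytes}b" }.mkString(" ")

// dynamicConfigs(Map("taskmanager.memory.task.heap" -> (512L << 20)))
// => "-Dtaskmanager.memory.task.heap=536870912b"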

Calculation logic

Exactly one of the following three alternatives needs to be configured:

  • Task Heap Memory and Managed Memory
  • Total Flink Memory
  • Total Process Memory

The following logic describes how the remaining values are derived from the configured one:

  • If both Task Heap Memory and Managed Memory are configured, we use them to derive the Total Flink Memory:

    • If Shuffle Memory is configured explicitly, we use that value.
    • Otherwise, we compute it such that it makes up the configured fraction of the final Total Flink Memory (see getAbsoluteOrInverseFraction()).
  • If Total Flink Memory is configured, but not Task Heap Memory and Managed Memory, then we derive Shuffle Memory and Managed Memory, and leave the rest (excluding Framework Heap Memory and Task Off-Heap Memory) as Task Heap Memory:

    • If Shuffle Memory is configured explicitly, we use that value.
    • Otherwise, we compute it as the configured fraction of the Total Flink Memory (see getAbsoluteOrFraction()).
    • If Managed Memory is configured explicitly, we use that value.
    • Otherwise, we compute it as the configured fraction of the Total Flink Memory (see getAbsoluteOrFraction()).
  • If only Total Process Memory is configured, we derive the Total Flink Memory as follows:

    • We take the JVM Overhead (configured absolutely, or computed as a fraction) and subtract it from the Total Process Memory (see getAbsoluteOrFraction()).
    • From the remainder, we subtract the JVM Metaspace.
    • What is left is the Total Flink Memory.

Sketch of the helper functions:

def getAbsoluteOrFraction(key: ConfigOption, base: Long): Long = {
    // Use the explicitly configured absolute size if present ...
    conf.getOrElse(key) {
        // ... otherwise derive it as a fraction of `base`, clamped to [min, max].
        val (min, max, fraction) = getRange(conf, key)
        val relative = (fraction * base).toLong
        Math.max(min, Math.min(relative, max))
    }
}

def getAbsoluteOrInverseFraction(key: ConfigOption, base: Long): Long = {
    // Use the explicitly configured absolute size if present ...
    conf.getOrElse(key) {
        // ... otherwise derive it such that the result makes up `fraction` of the
        // total (base + result), i.e. scale `base` by fraction / (1 - fraction),
        // clamped to [min, max].
        val (min, max, fraction) = getRange(conf, key)
        val relative = (fraction / (1 - fraction) * base).toLong
        Math.max(min, Math.min(relative, max))
    }
}
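
As a usage sketch (assuming the conf object and getRange above, plus a hypothetical SHUFFLE_MEMORY_KEY), deriving Shuffle Memory from a configured Total Flink Memory might look like this:

// Illustrative use of the helper: derive Shuffle Memory from Total Flink Memory.
val totalFlinkMemory = 1024L << 20 // assume 1 GiB Total Flink Memory
val shuffleMemory    = getAbsoluteOrFraction(SHUFFLE_MEMORY_KEY, totalFlinkMemory)
// With the defaults (min = 64 MB, max = 1 GB, fraction = 0.1) and no explicit
// value configured, this yields 0.1 * 1 GiB ≈ 102 MiB, within [64 MB, 1 GB].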

Implementation Steps

Step 1: Introduce a switch to enable the new TaskExecutor memory configuration

Introduce a temporary configuration option that switches between the current and the new TaskExecutor memory configuration (in code). This allows us to implement and test the new code paths without affecting the behavior of the existing code.

Step 2: Implement the memory calculation logic

  • Introduce the new configuration options.
  • Introduce the new data structures and logic:
    • Data structures that store the TaskExecutor's memory/pool sizes
    • Logic that calculates the memory/pool sizes from the configuration
    • Logic that generates the dynamic configuration options
    • Logic that generates the JVM parameters

This step should not introduce any behavior changes.

Step 3: Launch the TaskExecutor with the new memory calculation logic

  • Invoke the data structures and utilities introduced in Step 2 to generate the JVM parameters and dynamic configuration options for launching new TaskExecutors:
    • In the startup scripts (standalone mode)
    • In the ResourceManager (YARN, Mesos, K8s)
  • The TaskExecutor uses the data structures and utilities introduced in Step 2 to set the memory pool sizes and the slot resource profiles:
    • MemoryManager
    • ShuffleEnvironment
    • TaskSlotTable

Implement the above in separate code paths, used only in the new mode.

Step 4: Separate the on-heap and off-heap managed memory pools

  • Update the MemoryManager to have two separate pools.
  • Extend the MemoryManager interface to specify which pool an allocation should come from.

Implement this step in the code paths shared between the legacy and new modes.

  • In legacy mode, depending on the configured memory type, we can set one of the two pools to the total managed memory size, always allocate from that pool, and leave the other pool empty (see the sketch below).
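
A hypothetical sketch of such a two-pool manager; the class and method names are illustrative, not Flink's actual API:

// Hypothetical sketch of a MemoryManager with two separate pools.
sealed trait MemoryType
case object OnHeap  extends MemoryType
case object OffHeap extends MemoryType

final class TwoPoolMemoryManager(onHeapBytes: Long, offHeapBytes: Long) {
  private var available: Map[MemoryType, Long] =
    Map(OnHeap -> onHeapBytes, OffHeap -> offHeapBytes)

  // Reserve `bytes` from the requested pool; fails if the pool is exhausted.
  def reserve(memoryType: MemoryType, bytes: Long): Unit = synchronized {
    val left = available(memoryType) - bytes
    require(left >= 0, s"$memoryType managed memory pool exhausted")
    available += memoryType -> left
  }

  // Return `bytes` to the pool.
  def release(memoryType: MemoryType, bytes: Long): Unit = synchronized {
    available += memoryType -> (available(memoryType) + bytes)
  }
}

// Legacy mode: put all managed memory into one pool and leave the other empty,
// e.g. new TwoPoolMemoryManager(totalManaged, 0) for an on-heap-only setup.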

Step 5: Use native memory for managed memory

  • Use Unsafe.allocateMemory to allocate the memory:
    • MemoryManager

Implement this in the code paths shared between the legacy and new modes. It only affects GC behavior. A minimal sketch of this allocation pattern follows.
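
Illustrative only: native allocation via sun.misc.Unsafe, obtained reflectively because the class is not publicly constructible. Memory allocated this way is not counted against -XX:MaxDirectMemorySize and is never reclaimed by GC, so it must be freed explicitly:

import sun.misc.Unsafe

// Obtain the Unsafe singleton reflectively.
val unsafe: Unsafe = {
  val field = classOf[Unsafe].getDeclaredField("theUnsafe")
  field.setAccessible(true)
  field.get(null).asInstanceOf[Unsafe]
}

val size    = 64L << 20                     // 64 MiB of native memory
val address = unsafe.allocateMemory(size)   // not limited by MaxDirectMemorySize
try {
  unsafe.setMemory(address, size, 0.toByte) // e.g. zero-initialize the region
} finally {
  unsafe.freeMemory(address)                // must be released manually
}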

Step 6: Clean up the legacy mode

  • Fix/update/remove the test cases of the legacy mode
  • Deprecate/remove the legacy configuration options
  • Remove the legacy code paths
  • Remove the legacy/new mode switch

Compatibility, Deprecation, and Migration Plan

This FLIP changes how users configure cluster resources; in some cases, clusters migrating from previous versions may need to be reconfigured.
The deprecated configuration keys are as follows:

(Table: deprecated configuration keys)

Test Plan

  • We need to update existing integration tests and add new ones to verify that the new memory configuration behaves correctly.
  • Regressions in existing behavior should be caught by the other regular integration and end-to-end tests.

Limitations

  • The proposed design uses Unsafe.allocateMemory() to allocate managed memory, which is no longer supported in Java 12. We need to find an alternative solution in the future.

Follow-ups

  • This FLIP needs very detailed documentation to help users understand how to configure Flink processes correctly, and which keys to use in which scenarios.
  • It would be good to display the configured memory pool sizes in the web UI, so that users can immediately see how much memory their TMs use.

Alternatives

For the JVM direct memory, we have the following alternatives:
1. Let GC release the MemorySegments, and trigger GC by setting a proper JVM max direct memory size parameter.
2. Let GC release the MemorySegments, and trigger GC by tracking the JVM's max direct memory usage.
3. Allocate and release the MemorySegments manually.
We decided to go with option 3, but depending on how safely segments can be released this way, we could easily switch to one of the other alternatives after the implementation.

"C:\Program Files\Java\jdk1.8.0_241\bin\java.exe" "-javaagent:D:\IntelliJ IDEA 2023.3.4\lib\idea_rt.jar=49897:D:\IntelliJ IDEA 2023.3.4\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_241\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\resources.jar;C:\Program Files\Java\jdk1.8.0_241\jre\lib\rt.jar;C:\Users\18795\IdeaProjects\demo11\target\classes;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-scala_2.12\1.14.0\flink-scala_2.12-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-core\1.14.0\flink-core-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-annotations\1.14.0\flink-annotations-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-metrics-core\1.14.0\flink-metrics-core-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\commons\commons-lang3\3.3.2\commons-lang3-3.3.2.jar;D:\Maven安装包\Maven_Repository\com\esotericsoftware\kryo\kryo\2.24.0\kryo-2.24.0.jar;D:\Maven安装包\Maven_Repository\com\esotericsoftware\minlog\minlog\1.2\minlog-1.2.jar;D:\Maven安装包\Maven_Repository\org\objenesis\objenesis\2.1\objenesis-2.1.jar;D:\Maven安装包\Maven_Repository\commons-collections\commons-collections\3.2.2\commons-collections-3.2.2.jar;D:\Maven安装包\Maven_Repository\org\apache\commons\commons-compress\1.21\commons-compress-1.21.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-shaded-guava\30.1.1-jre-14.0\flink-shaded-guava-30.1.1-jre-14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-java\1.14.0\flink-java-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\commons\commons-math3\3.5\commons-math3-3.5.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-shaded-asm-7\7.1-14.0\flink-shaded-asm-7-7.1-14.0.jar;D:\Maven安装包\Maven_Repository\org\scala-lang\scala-reflect\2.12.7\scala-reflect-2.12.7.jar;D:\Maven安装包\Maven_Repository\org\scala-lang\scala-library\2.12.7\scala-library-2.12.7.jar;D:\Maven安装包\Maven_Repository\org\scala-lang\scala-compiler\2.12.7\scala-compiler-2.12.7.jar;D:\Maven安装包\Maven_Repository\org\scala-lang\modules\scala-xml_2.12\1.0.6\scala-xml_2.12-1.0.6.jar;D:\Maven安装包\Maven_Repository\com\twitter\chill_2.12\0.7.6\chill_2.12-0.7.6.jar;D:\Maven安装包\Maven_Repository\com\twitter\chill-java\0.7.6\chill-java-0.7.6.jar;D:\Maven安装包\Maven_Repository\org\slf4j\slf4j-api\1.7.15\slf4j-api-1.7.15.jar;D:\Maven安装包\Maven_Repository\com\google\code\findbugs\jsr305\1.3.9\jsr305-1.3.9.jar
;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-shaded-force-shading\14.0\flink-shaded-force-shading-14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-streaming-scala_2.12\1.14.0\flink-streaming-scala_2.12-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-streaming-java_2.12\1.14.0\flink-streaming-java_2.12-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-file-sink-common\1.14.0\flink-file-sink-common-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-clients_2.12\1.14.0\flink-clients_2.12-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-runtime\1.14.0\flink-runtime-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-rpc-core\1.14.0\flink-rpc-core-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-rpc-akka-loader\1.14.0\flink-rpc-akka-loader-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-queryable-state-client-java\1.14.0\flink-queryable-state-client-java-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-hadoop-fs\1.14.0\flink-hadoop-fs-1.14.0.jar;D:\Maven安装包\Maven_Repository\commons-io\commons-io\2.8.0\commons-io-2.8.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-shaded-netty\4.1.65.Final-14.0\flink-shaded-netty-4.1.65.Final-14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-shaded-jackson\2.12.4-14.0\flink-shaded-jackson-2.12.4-14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-shaded-zookeeper-3\3.4.14-14.0\flink-shaded-zookeeper-3-3.4.14-14.0.jar;D:\Maven安装包\Maven_Repository\org\javassist\javassist\3.24.0-GA\javassist-3.24.0-GA.jar;D:\Maven安装包\Maven_Repository\org\xerial\snappy\snappy-java\1.1.8.3\snappy-java-1.1.8.3.jar;D:\Maven安装包\Maven_Repository\org\lz4\lz4-java\1.8.0\lz4-java-1.8.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-optimizer\1.14.0\flink-optimizer-1.14.0.jar;D:\Maven安装包\Maven_Repository\commons-cli\commons-cli\1.3.1\commons-cli-1.3.1.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\hadoop-client\3.3.1\hadoop-client-3.3.1.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\hadoop-common\3.3.1\hadoop-common-3.3.1.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\thirdparty\hadoop-shaded-protobuf_3_7\1.1.1\hadoop-shaded-protobuf_3_7-1.1.1.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\thirdparty\hadoop-shaded-guava\1.1.1\hadoop-shaded-guava-1.1.1.jar;D:\Maven安装包\Maven_Repository\com\google\guava\guava\27.0-jre\guava-27.0-jre.jar;D:\Maven安装包\Maven_Repository\com\google\guava\failureaccess\1.0\failureaccess-1.0.jar;D:\Maven安装包\Maven_Repository\com\google\guava\listenablefuture\9999.0-empty-to-avoid-conflict-with-guava\listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar;D:\Maven安装包\Maven_Repository\org\checkerframework\checker-qual\2.5.2\checker-qual-2.5.2.jar;D:\Maven安装包\Maven_Repository\com\google\j2objc\j2objc-annotations\1.1\j2objc-annotations-1.1.jar;D:\Maven安装包\Maven_Repository\org\codehaus\mojo\animal-sniffer-annotations\1.17\animal-sniffer-annotations-1.17.jar;D:\Maven安装包\Maven_Repository\org\apache\httpcomponents\httpclient\4.5.13\httpclient-4.5.13.jar;D:\Maven安装包\Maven_Repository\org\apache\httpcomponents\httpcore\4.4.13\httpcore-4.4.13.jar;D:\Maven安装包\Maven_Repository\commons-codec\commons-codec\1.11\commons-codec-1.11.jar;D:\Maven安装包\Maven_Repository\commons-net\commons-net\3.6\commons-net-3.6.jar;D:\Maven安装包\Maven_Repository\jakarta\activation\jakarta.activation-api\1.2.1\jakarta.activation-api-1.2.1.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\jetty-servlet\9.4.40.v20210413\jetty-
servlet-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\jetty-security\9.4.40.v20210413\jetty-security-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\jetty-util-ajax\9.4.40.v20210413\jetty-util-ajax-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\jetty-webapp\9.4.40.v20210413\jetty-webapp-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\jetty-xml\9.4.40.v20210413\jetty-xml-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\javax\servlet\jsp\jsp-api\2.1\jsp-api-2.1.jar;D:\Maven安装包\Maven_Repository\com\sun\jersey\jersey-servlet\1.19\jersey-servlet-1.19.jar;D:\Maven安装包\Maven_Repository\commons-logging\commons-logging\1.1.3\commons-logging-1.1.3.jar;D:\Maven安装包\Maven_Repository\log4j\log4j\1.2.17\log4j-1.2.17.jar;D:\Maven安装包\Maven_Repository\commons-beanutils\commons-beanutils\1.9.4\commons-beanutils-1.9.4.jar;D:\Maven安装包\Maven_Repository\org\apache\commons\commons-configuration2\2.1.1\commons-configuration2-2.1.1.jar;D:\Maven安装包\Maven_Repository\org\apache\commons\commons-text\1.4\commons-text-1.4.jar;D:\Maven安装包\Maven_Repository\org\apache\avro\avro\1.7.7\avro-1.7.7.jar;D:\Maven安装包\Maven_Repository\org\codehaus\jackson\jackson-core-asl\1.9.13\jackson-core-asl-1.9.13.jar;D:\Maven安装包\Maven_Repository\org\codehaus\jackson\jackson-mapper-asl\1.9.13\jackson-mapper-asl-1.9.13.jar;D:\Maven安装包\Maven_Repository\com\thoughtworks\paranamer\paranamer\2.3\paranamer-2.3.jar;D:\Maven安装包\Maven_Repository\com\google\re2j\re2j\1.1\re2j-1.1.jar;D:\Maven安装包\Maven_Repository\com\google\protobuf\protobuf-java\2.5.0\protobuf-java-2.5.0.jar;D:\Maven安装包\Maven_Repository\com\google\code\gson\gson\2.2.4\gson-2.2.4.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\hadoop-auth\3.3.1\hadoop-auth-3.3.1.jar;D:\Maven安装包\Maven_Repository\com\nimbusds\nimbus-jose-jwt\9.8.1\nimbus-jose-jwt-9.8.1.jar;D:\Maven安装包\Maven_Repository\com\github\stephenc\jcip\jcip-annotations\1.0-1\jcip-annotations-1.0-1.jar;D:\Maven安装包\Maven_Repository\net\minidev\json-smart\2.4.2\json-smart-2.4.2.jar;D:\Maven安装包\Maven_Repository\net\minidev\accessors-smart\2.4.2\accessors-smart-2.4.2.jar;D:\Maven安装包\Maven_Repository\org\ow2\asm\asm\8.0.1\asm-8.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\curator\curator-framework\4.2.0\curator-framework-4.2.0.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerb-simplekdc\1.0.1\kerb-simplekdc-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerb-client\1.0.1\kerb-client-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerby-config\1.0.1\kerby-config-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerb-common\1.0.1\kerb-common-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerb-crypto\1.0.1\kerb-crypto-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerb-util\1.0.1\kerb-util-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\token-provider\1.0.1\token-provider-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerb-admin\1.0.1\kerb-admin-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerb-server\1.0.1\kerb-server-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerb-identity\1.0.1\kerb-identity-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerby-xdr\1.0.1\kerby-xdr-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\curator\curator-client\4.2.0\curator-client-4.2.0.jar;D:\Maven安装包\Maven_Repository\org\apache\curator\curator-recipes\4.2.0\curator-recipes-4.2.0.jar;D:\Maven安装包\Maven_Repository\org\apache\htrace\htrace-core4\4.1.0-incubating\htrace-core4-4.1.0-incub
ating.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerb-core\1.0.1\kerb-core-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerby-pkix\1.0.1\kerby-pkix-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerby-asn1\1.0.1\kerby-asn1-1.0.1.jar;D:\Maven安装包\Maven_Repository\org\apache\kerby\kerby-util\1.0.1\kerby-util-1.0.1.jar;D:\Maven安装包\Maven_Repository\com\fasterxml\jackson\core\jackson-databind\2.10.5.1\jackson-databind-2.10.5.1.jar;D:\Maven安装包\Maven_Repository\com\fasterxml\jackson\core\jackson-core\2.10.5\jackson-core-2.10.5.jar;D:\Maven安装包\Maven_Repository\org\codehaus\woodstox\stax2-api\4.2.1\stax2-api-4.2.1.jar;D:\Maven安装包\Maven_Repository\com\fasterxml\woodstox\woodstox-core\5.3.0\woodstox-core-5.3.0.jar;D:\Maven安装包\Maven_Repository\dnsjava\dnsjava\2.1.7\dnsjava-2.1.7.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\hadoop-hdfs-client\3.3.1\hadoop-hdfs-client-3.3.1.jar;D:\Maven安装包\Maven_Repository\com\squareup\okhttp\okhttp\2.7.5\okhttp-2.7.5.jar;D:\Maven安装包\Maven_Repository\com\squareup\okio\okio\1.6.0\okio-1.6.0.jar;D:\Maven安装包\Maven_Repository\com\fasterxml\jackson\core\jackson-annotations\2.10.5\jackson-annotations-2.10.5.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\hadoop-yarn-api\3.3.1\hadoop-yarn-api-3.3.1.jar;D:\Maven安装包\Maven_Repository\javax\xml\bind\jaxb-api\2.2.11\jaxb-api-2.2.11.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\hadoop-yarn-client\3.3.1\hadoop-yarn-client-3.3.1.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\websocket\websocket-client\9.4.40.v20210413\websocket-client-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\jetty-client\9.4.40.v20210413\jetty-client-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\jetty-http\9.4.40.v20210413\jetty-http-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\jetty-util\9.4.40.v20210413\jetty-util-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\jetty-io\9.4.40.v20210413\jetty-io-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\websocket\websocket-common\9.4.40.v20210413\websocket-common-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\org\eclipse\jetty\websocket\websocket-api\9.4.40.v20210413\websocket-api-9.4.40.v20210413.jar;D:\Maven安装包\Maven_Repository\org\jline\jline\3.9.0\jline-3.9.0.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\hadoop-mapreduce-client-core\3.3.1\hadoop-mapreduce-client-core-3.3.1.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\hadoop-yarn-common\3.3.1\hadoop-yarn-common-3.3.1.jar;D:\Maven安装包\Maven_Repository\javax\servlet\javax.servlet-api\3.1.0\javax.servlet-api-3.1.0.jar;D:\Maven安装包\Maven_Repository\com\sun\jersey\jersey-core\1.19\jersey-core-1.19.jar;D:\Maven安装包\Maven_Repository\javax\ws\rs\jsr311-api\1.1.1\jsr311-api-1.1.1.jar;D:\Maven安装包\Maven_Repository\com\sun\jersey\jersey-client\1.19\jersey-client-1.19.jar;D:\Maven安装包\Maven_Repository\com\fasterxml\jackson\module\jackson-module-jaxb-annotations\2.10.5\jackson-module-jaxb-annotations-2.10.5.jar;D:\Maven安装包\Maven_Repository\jakarta\xml\bind\jakarta.xml.bind-api\2.3.2\jakarta.xml.bind-api-2.3.2.jar;D:\Maven安装包\Maven_Repository\com\fasterxml\jackson\jaxrs\jackson-jaxrs-json-provider\2.10.5\jackson-jaxrs-json-provider-2.10.5.jar;D:\Maven安装包\Maven_Repository\com\fasterxml\jackson\jaxrs\jackson-jaxrs-base\2.10.5\jackson-jaxrs-base-2.10.5.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\hadoop-mapreduce-client-jobclient\3.3.1\hadoop-mapreduce-client-jobclient-3.3.1.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoo
p\hadoop-mapreduce-client-common\3.3.1\hadoop-mapreduce-client-common-3.3.1.jar;D:\Maven安装包\Maven_Repository\org\apache\hadoop\hadoop-annotations\3.3.1\hadoop-annotations-3.3.1.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-connector-kafka_2.12\1.14.0\flink-connector-kafka_2.12-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-connector-base\1.14.0\flink-connector-base-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\apache\kafka\kafka-clients\2.4.1\kafka-clients-2.4.1.jar;D:\Maven安装包\Maven_Repository\com\github\luben\zstd-jni\1.4.3-1\zstd-jni-1.4.3-1.jar;D:\Maven安装包\Maven_Repository\org\apache\flink\flink-json\1.14.0\flink-json-1.14.0.jar;D:\Maven安装包\Maven_Repository\org\slf4j\slf4j-simple\1.7.36\slf4j-simple-1.7.36.jar" KafkaToKafkaJob [main] WARN org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Property [transaction.timeout.ms] not specified. Setting it to 3600000 ms [main] INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils - The configuration option taskmanager.cpu.cores required for local execution is not set, setting it to the maximal possible value. [main] INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils - The configuration option taskmanager.memory.task.heap.size required for local execution is not set, setting it to the maximal possible value. [main] INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils - The configuration option taskmanager.memory.task.off-heap.size required for local execution is not set, setting it to the maximal possible value. [main] INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils - The configuration option taskmanager.memory.network.min required for local execution is not set, setting it to its default value 64 mb. [main] INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils - The configuration option taskmanager.memory.network.max required for local execution is not set, setting it to its default value 64 mb. [main] INFO org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils - The configuration option taskmanager.memory.managed.size required for local execution is not set, setting it to its default value 128 mb. [main] INFO org.apache.flink.runtime.minicluster.MiniCluster - Starting Flink Mini Cluster [main] INFO org.apache.flink.runtime.minicluster.MiniCluster - Starting Metrics Registry [main] INFO org.apache.flink.runtime.metrics.MetricRegistryImpl - No metrics reporter configured, no metrics will be exposed/reported. [main] INFO org.apache.flink.runtime.minicluster.MiniCluster - Starting RPC Service(s) [main] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Trying to start local actor system [flink-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started [main] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Actor system started at akka://flink [main] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Trying to start local actor system [flink-metrics-4] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started [main] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Actor system started at akka://flink-metrics [main] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.metrics.dump.MetricQueryService at akka://flink-metrics/user/rpc/MetricQueryService . 
[main] INFO org.apache.flink.runtime.minicluster.MiniCluster - Starting high-availability services [main] INFO org.apache.flink.runtime.blob.BlobServer - Created BLOB server storage directory C:\Users\18795\AppData\Local\Temp\blobStore-395858f9-6e79-4996-b985-dabf8fa8d8ea [main] INFO org.apache.flink.runtime.blob.BlobServer - Started BLOB server at 0.0.0.0:49901 - max concurrent requests: 50 - max backlog: 1000 [main] INFO org.apache.flink.runtime.blob.PermanentBlobCache - Created BLOB cache storage directory C:\Users\18795\AppData\Local\Temp\blobStore-16c800cc-b3b0-4b70-ac5c-7861ba41fdfd [main] INFO org.apache.flink.runtime.blob.TransientBlobCache - Created BLOB cache storage directory C:\Users\18795\AppData\Local\Temp\blobStore-e7acd0f0-084b-485b-809f-b82728bd7695 [main] INFO org.apache.flink.runtime.minicluster.MiniCluster - Starting 1 TaskManger(s) [main] INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - Starting TaskManager with ResourceID: 4afd33fe-2e15-4a98-a576-d013345b4b7c [main] INFO org.apache.flink.runtime.taskexecutor.TaskManagerServices - Temporary file directory 'C:\Users\18795\AppData\Local\Temp': total 198 GB, usable 88 GB (44.44% usable) [main] INFO org.apache.flink.runtime.io.disk.iomanager.IOManager - Created a new FileChannelManager for spilling of task related data to disk (joins, sorting, ...). Used directories: C:\Users\18795\AppData\Local\Temp\flink-io-66b5f27e-b3b0-463f-a1bd-ea29f5390c52 [main] INFO org.apache.flink.runtime.io.network.NettyShuffleServiceFactory - Created a new FileChannelManager for storing result partitions of BLOCKING shuffles. Used directories: C:\Users\18795\AppData\Local\Temp\flink-netty-shuffle-a407813a-f186-4386-baf3-ec9244c38fcb [main] INFO org.apache.flink.runtime.io.network.buffer.NetworkBufferPool - Allocated 64 MB for network buffer pool (number of memory segments: 2048, bytes per segment: 32768). [main] INFO org.apache.flink.runtime.io.network.NettyShuffleEnvironment - Starting the network environment and its components. [main] INFO org.apache.flink.runtime.taskexecutor.KvStateService - Starting the kvState service and its components. [main] INFO org.apache.flink.configuration.Configuration - Config uses fallback configuration key 'akka.ask.timeout' instead of key 'taskmanager.slot.timeout' [main] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.taskexecutor.TaskExecutor at akka://flink/user/rpc/taskmanager_0 . [flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService - Start job leader service. [flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.filecache.FileCache - User file cache uses directory C:\Users\18795\AppData\Local\Temp\flink-dist-cache-2a4c4b41-bb2f-4c1e-88a0-cf5ef1549b97 [main] INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Starting rest endpoint. [main] INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Failed to load web based job submission extension. Probable reason: flink-runtime-web is not in the classpath. [main] WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - Log file environment variable 'log.file' is not set. [main] WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - JobManager log files are unavailable in the web dashboard. Log file location not found in environment variable 'log.file' or configuration key 'web.log.path'. 
[main] INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Rest endpoint listening at localhost:49954 [main] INFO org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService - Proposing leadership to contender http://localhost:49954 [mini-cluster-io-thread-1] INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - http://localhost:49954 was granted leadership with leaderSessionID=1242e32c-a913-428b-916a-1672c2a1bbd1 [mini-cluster-io-thread-1] INFO org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService - Received confirmation of leadership for leader http://localhost:49954 , session=1242e32c-a913-428b-916a-1672c2a1bbd1 [main] INFO org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService - Proposing leadership to contender LeaderContender: DefaultDispatcherRunner [main] INFO org.apache.flink.runtime.resourcemanager.ResourceManagerServiceImpl - Starting resource manager service. [main] INFO org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService - Proposing leadership to contender LeaderContender: ResourceManagerServiceImpl [mini-cluster-io-thread-2] INFO org.apache.flink.runtime.dispatcher.runner.DefaultDispatcherRunner - DefaultDispatcherRunner was granted leadership with leader id 3494d5a0-1484-46c6-af3d-11f76998217f. Creating new DispatcherLeaderProcess. [pool-2-thread-1] INFO org.apache.flink.runtime.resourcemanager.ResourceManagerServiceImpl - Resource manager service is granted leadership with session id 0a684d7f-95f7-4df1-964c-b4caa47cab32. [main] INFO org.apache.flink.runtime.minicluster.MiniCluster - Flink Mini Cluster started successfully [mini-cluster-io-thread-2] INFO org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess - Start SessionDispatcherLeaderProcess. [mini-cluster-io-thread-1] INFO org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess - Recover all persisted job graphs. [mini-cluster-io-thread-1] INFO org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess - Successfully recovered 0 persisted job graphs. [mini-cluster-io-thread-1] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.dispatcher.StandaloneDispatcher at akka://flink/user/rpc/dispatcher_1 . [pool-2-thread-1] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at akka://flink/user/rpc/resourcemanager_2 . [mini-cluster-io-thread-1] INFO org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService - Received confirmation of leadership for leader akka://flink/user/rpc/dispatcher_1 , session=3494d5a0-1484-46c6-af3d-11f76998217f [flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Starting the resource manager. [mini-cluster-io-thread-2] INFO org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService - Received confirmation of leadership for leader akka://flink/user/rpc/resourcemanager_2 , session=0a684d7f-95f7-4df1-964c-b4caa47cab32 [flink-akka.actor.default-dispatcher-6] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Connecting to ResourceManager akka://flink/user/rpc/resourcemanager_2(964cb4caa47cab320a684d7f95f74df1). 
[flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Resolved ResourceManager address, beginning registration [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Received JobGraph submission 'Kafka Forward: ExamTopic01 → ExamTopic02' (df2dea9c36325aded3c724b0720bd6c2). [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Submitting job 'Kafka Forward: ExamTopic01 → ExamTopic02' (df2dea9c36325aded3c724b0720bd6c2). [flink-akka.actor.default-dispatcher-6] INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registering TaskManager with ResourceID 4afd33fe-2e15-4a98-a576-d013345b4b7c (akka://flink/user/rpc/taskmanager_0) at ResourceManager [flink-akka.actor.default-dispatcher-6] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Successful registration at resource manager akka://flink/user/rpc/resourcemanager_2 under registration id b2e86c78c25728134d6c4ecb271aa712. [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService - Proposing leadership to contender LeaderContender: JobMasterServiceLeadershipRunner [jobmanager-io-thread-1] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/rpc/jobmanager_3 . [jobmanager-io-thread-1] INFO org.apache.flink.runtime.jobmaster.JobMaster - Initializing job 'Kafka Forward: ExamTopic01 → ExamTopic02' (df2dea9c36325aded3c724b0720bd6c2). [jobmanager-io-thread-1] INFO org.apache.flink.runtime.jobmaster.JobMaster - Using restart back off time strategy NoRestartBackoffTimeStrategy for Kafka Forward: ExamTopic01 → ExamTopic02 (df2dea9c36325aded3c724b0720bd6c2). [jobmanager-io-thread-1] INFO org.apache.flink.runtime.jobmaster.JobMaster - Running initialization on master for job Kafka Forward: ExamTopic01 → ExamTopic02 (df2dea9c36325aded3c724b0720bd6c2). [jobmanager-io-thread-1] INFO org.apache.flink.runtime.jobmaster.JobMaster - Successfully ran initialization on master in 0 ms. [jobmanager-io-thread-1] INFO org.apache.flink.runtime.scheduler.adapter.DefaultExecutionTopology - Built 1 pipelined regions in 2 ms [jobmanager-io-thread-1] INFO org.apache.flink.runtime.jobmaster.JobMaster - No state backend has been configured, using default (HashMap) org.apache.flink.runtime.state.hashmap.HashMapStateBackend@1e4cd263 [jobmanager-io-thread-1] INFO org.apache.flink.runtime.state.StateBackendLoader - State backend loader loads the state backend as HashMapStateBackend [jobmanager-io-thread-1] INFO org.apache.flink.runtime.jobmaster.JobMaster - Checkpoint storage is set to 'jobmanager' [jobmanager-io-thread-1] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - No checkpoint found during restore. [jobmanager-io-thread-1] INFO org.apache.flink.runtime.jobmaster.JobMaster - Using failover strategy org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy@44c902ad for Kafka Forward: ExamTopic01 → ExamTopic02 (df2dea9c36325aded3c724b0720bd6c2). 
[jobmanager-io-thread-1] INFO org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService - Received confirmation of leadership for leader akka://flink/user/rpc/jobmanager_3 , session=fa6c5f3c-c29b-4667-9a9e-4f126e184889 [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 'Kafka Forward: ExamTopic01 → ExamTopic02' (df2dea9c36325aded3c724b0720bd6c2) under job master id 9a9e4f126e184889fa6c5f3cc29b4667. [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.jobmaster.JobMaster - Starting scheduling with scheduling strategy [org.apache.flink.runtime.scheduler.strategy.PipelinedRegionSchedulingStrategy] [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Job Kafka Forward: ExamTopic01 → ExamTopic02 (df2dea9c36325aded3c724b0720bd6c2) switched from state CREATED to RUNNING. [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source -> Sink: Unnamed (1/1) (6c5fc0fa56d9b0b02eeab15e4ab8a649) switched from CREATED to SCHEDULED. [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.jobmaster.JobMaster - Connecting to ResourceManager akka://flink/user/rpc/resourcemanager_2(964cb4caa47cab320a684d7f95f74df1) [flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.jobmaster.JobMaster - Resolved ResourceManager address, beginning registration [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registering job manager 9a9e4f126e184889fa6c5f3cc29b4667@akka://flink/user/rpc/jobmanager_3 for job df2dea9c36325aded3c724b0720bd6c2. [flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registered job manager 9a9e4f126e184889fa6c5f3cc29b4667@akka://flink/user/rpc/jobmanager_3 for job df2dea9c36325aded3c724b0720bd6c2. [flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.jobmaster.JobMaster - JobManager successfully registered at ResourceManager, leader id: 964cb4caa47cab320a684d7f95f74df1. [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager - Received resource requirements from job df2dea9c36325aded3c724b0720bd6c2: [ResourceRequirement{resourceProfile=ResourceProfile{UNKNOWN}, numberOfRequiredSlots=1}] [flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Receive slot request 0628cd6703b3b400e296bfd92281fe07 for job df2dea9c36325aded3c724b0720bd6c2 from resource manager with leader id 964cb4caa47cab320a684d7f95f74df1. [flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Allocated slot for 0628cd6703b3b400e296bfd92281fe07. [flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService - Add job df2dea9c36325aded3c724b0720bd6c2 for job leader monitoring. [mini-cluster-io-thread-1] INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService - Try to register at job manager akka://flink/user/rpc/jobmanager_3 with leader id fa6c5f3c-c29b-4667-9a9e-4f126e184889. 
[flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService - Resolved JobManager address, beginning registration [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService - Successful registration at job manager akka://flink/user/rpc/jobmanager_3 for job df2dea9c36325aded3c724b0720bd6c2. [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Establish JobManager connection for job df2dea9c36325aded3c724b0720bd6c2. [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Offer reserved slots to the leader of job df2dea9c36325aded3c724b0720bd6c2. [flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source -> Sink: Unnamed (1/1) (6c5fc0fa56d9b0b02eeab15e4ab8a649) switched from SCHEDULED to DEPLOYING. [flink-akka.actor.default-dispatcher-4] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Source: Custom Source -> Sink: Unnamed (1/1) (attempt #0) with attempt id 6c5fc0fa56d9b0b02eeab15e4ab8a649 to 4afd33fe-2e15-4a98-a576-d013345b4b7c @ www.Brenz.pl (dataPort=-1) with allocation id 0628cd6703b3b400e296bfd92281fe07 [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl - Activate slot 0628cd6703b3b400e296bfd92281fe07. [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.state.changelog.StateChangelogStorageLoader - StateChangelogStorageLoader initialized with shortcut names {memory}. [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.state.changelog.StateChangelogStorageLoader - Creating a changelog storage with name 'memory'. [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Received task Source: Custom Source -> Sink: Unnamed (1/1)#0 (6c5fc0fa56d9b0b02eeab15e4ab8a649), deploy into slot with allocation id 0628cd6703b3b400e296bfd92281fe07. [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - Source: Custom Source -> Sink: Unnamed (1/1)#0 (6c5fc0fa56d9b0b02eeab15e4ab8a649) switched from CREATED to DEPLOYING. [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl - Activate slot 0628cd6703b3b400e296bfd92281fe07. [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Source: Custom Source -> Sink: Unnamed (1/1)#0 (6c5fc0fa56d9b0b02eeab15e4ab8a649) [DEPLOYING]. [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.streaming.runtime.tasks.StreamTask - No state backend has been configured, using default (HashMap) org.apache.flink.runtime.state.hashmap.HashMapStateBackend@536aad7a [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.runtime.state.StateBackendLoader - State backend loader loads the state backend as HashMapStateBackend [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Checkpoint storage is set to 'jobmanager' [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - Source: Custom Source -> Sink: Unnamed (1/1)#0 (6c5fc0fa56d9b0b02eeab15e4ab8a649) switched from DEPLOYING to INITIALIZING. 
[flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source -> Sink: Unnamed (1/1) (6c5fc0fa56d9b0b02eeab15e4ab8a649) switched from DEPLOYING to INITIALIZING. [Source: Custom Source -> Sink: Unnamed (1/1)#0] WARN org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Using AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE semantic. [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction - FlinkKafkaProducer 1/1 - no state to restore [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [node01:9092, node02:9092, node03:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 3600000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [Source: Custom Source -> Sink: Unnamed (1/1)#0] WARN org.apache.kafka.clients.producer.ProducerConfig - The configuration 'key.deserializer' was supplied but isn't a known config. [Source: Custom Source -> Sink: Unnamed (1/1)#0] WARN org.apache.kafka.clients.producer.ProducerConfig - The configuration 'value.deserializer' was supplied but isn't a known config. [Source: Custom Source -> Sink: Unnamed (1/1)#0] WARN org.apache.kafka.clients.producer.ProducerConfig - The configuration 'group.id' was supplied but isn't a known config. 
[Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.4.1 [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: c57222ae8cd7866b [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758282681035 [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Starting FlinkKafkaInternalProducer (1/1) to produce into default topic ExamTopic02 [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 has no restore state. [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = latest bootstrap.servers = [node01:9092, node02:9092, node03:9092] check.crcs = true client.dns.lookup = default client.id = client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = exam-consumer-group group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [Source: Custom Source -> Sink: Unnamed (1/1)#0] WARN org.apache.kafka.clients.consumer.ConsumerConfig - The configuration 'value.serializer' was supplied but isn't a known config. 
[Source: Custom Source -> Sink: Unnamed (1/1)#0] WARN org.apache.kafka.clients.consumer.ConsumerConfig - The configuration 'transaction.timeout.ms' was supplied but isn't a known config. [Source: Custom Source -> Sink: Unnamed (1/1)#0] WARN org.apache.kafka.clients.consumer.ConsumerConfig - The configuration 'key.serializer' was supplied but isn't a known config. [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.4.1 [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: c57222ae8cd7866b [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758282681119 [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-exam-consumer-group-1, groupId=exam-consumer-group] Cluster ID: B0bBZVzzQ9GkwwAzlHzH9A [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Cluster ID: B0bBZVzzQ9GkwwAzlHzH9A [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will start reading the following 1 partitions from the committed group offsets in Kafka: [KafkaTopicPartition{topic='ExamTopic01', partition=0}] [Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - Source: Custom Source -> Sink: Unnamed (1/1)#0 (6c5fc0fa56d9b0b02eeab15e4ab8a649) switched from INITIALIZING to RUNNING. [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source -> Sink: Unnamed (1/1) (6c5fc0fa56d9b0b02eeab15e4ab8a649) switched from INITIALIZING to RUNNING. [Legacy Source Thread - Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 creating fetcher with offsets {KafkaTopicPartition{topic='ExamTopic01', partition=0}=-915623761773}. 
[Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = latest bootstrap.servers = [node01:9092, node02:9092, node03:9092] check.crcs = true client.dns.lookup = default client.id = client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = exam-consumer-group group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] WARN org.apache.kafka.clients.consumer.ConsumerConfig - The configuration 'value.serializer' was supplied but isn't a known config. [Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] WARN org.apache.kafka.clients.consumer.ConsumerConfig - The configuration 'transaction.timeout.ms' was supplied but isn't a known config. [Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] WARN org.apache.kafka.clients.consumer.ConsumerConfig - The configuration 'key.serializer' was supplied but isn't a known config. 
[Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.4.1 [Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: c57222ae8cd7866b [Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1758282681381 [Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-exam-consumer-group-2, groupId=exam-consumer-group] Subscribed to partition(s): ExamTopic01-0 [Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-exam-consumer-group-2, groupId=exam-consumer-group] Cluster ID: B0bBZVzzQ9GkwwAzlHzH9A [Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-exam-consumer-group-2, groupId=exam-consumer-group] Discovered group coordinator node01:9092 (id: 2147483646 rack: null) [Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-exam-consumer-group-2, groupId=exam-consumer-group] Found no committed offset for partition ExamTopic01-0 [Kafka Fetcher for Source: Custom Source -> Sink: Unnamed (1/1)#0] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-exam-consumer-group-2, groupId=exam-consumer-group] Resetting offset for partition ExamTopic01-0 to offset 0. 对吗
09-20
评论
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值