Background
Flink version: 1.10.1 (<flink.version>1.10.1</flink.version>)
The following error occurred while testing the Table API and Flink SQL.
Error log
Exception in thread "main" org.apache.flink.table.api.TableException: Could not instantiate the executor. Make sure a planner module is on the classpath
at org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImpl$.lookupExecutor(StreamTableEnvironmentImpl.scala:366)
at org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImpl$.create(StreamTableEnvironmentImpl.scala:321)
at org.apache.flink.table.api.scala.StreamTableEnvironment$.create(StreamTableEnvironment.scala:425)
at com.zjl.api.tableapi.Example$.main(Example.scala:42)
at com.zjl.api.tableapi.Example.main(Example.scala)
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.delegation.ExecutorFactory' in
the classpath.
Reason: No factory supports the additional filters.
The following properties are requested:
class-name=org.apache.flink.table.executor.StreamExecutorFactory
streaming-mode=true
The following factories have been considered:
org.apache.flink.table.planner.delegation.BlinkExecutorFactory
at org.apache.flink.table.factories.ComponentFactoryService.find(ComponentFactoryService.java:71)
at org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImpl$.lookupExecutor(StreamTableEnvironmentImpl.scala:350)
... 4 more
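According to the log, the legacy planner's executor factory (class-name=org.apache.flink.table.executor.StreamExecutorFactory) was requested, but only the Blink planner's BlinkExecutorFactory was found on the classpath. The failing call at Example.scala:42 therefore presumably looked roughly like the following sketch (a hedged reconstruction, not the original source; variable names are illustrative):

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.scala.StreamTableEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Requesting the legacy ("old") planner asks for StreamExecutorFactory,
// which the factory lookup cannot satisfy here.
val settings = EnvironmentSettings.newInstance()
  .useOldPlanner()
  .inStreamingMode()
  .build()

// Throws: TableException: Could not instantiate the executor.
val tableEnv = StreamTableEnvironment.create(env, settings)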
POM
<properties>
    <maven.compiler.source>8</maven.compiler.source>
    <maven.compiler.target>8</maven.compiler.target>
    <flink.version>1.10.1</flink.version>
    <scala.binary.version>2.12</scala.binary.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-scala_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka-0.11_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.bahir</groupId>
        <artifactId>flink-connector-redis_2.11</artifactId>
        <version>1.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-elasticsearch6_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-statebackend-rocksdb_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-api-scala-bridge_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-planner_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
</dependencies>
Solution
As shown in the snippet below, building the EnvironmentSettings with .useBlinkPlanner() instead of .useOldPlanner() resolves the problem: according to the error log, the Blink planner's BlinkExecutorFactory is the only executor factory available on the classpath.
// Create a table execution environment based on env
val settings: EnvironmentSettings = EnvironmentSettings
  .newInstance()
  // .useOldPlanner()  // legacy Flink planner: triggers the error above
  .useBlinkPlanner()   // Blink planner
  .inStreamingMode().build()
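The settings object is then passed to StreamTableEnvironment.create, the call that previously threw at Example.scala:42. A minimal sketch, assuming settings is the value built above and the usual Scala-bridge imports (variable names are illustrative):

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.scala.StreamTableEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment

// With the Blink planner requested, the BlinkExecutorFactory provided by
// flink-table-planner-blink is matched and the table environment is created.
val tableEnv: StreamTableEnvironment = StreamTableEnvironment.create(env, settings)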