Environment: CentOS 7 64-bit, at least 4 GB of RAM
Maven 3.6.3
Java 8
Flink 1.12.0 source code
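Before the build, it is worth confirming that the toolchain matches the versions above (plain JDK/Maven/Linux commands, nothing Flink-specific):
[root@izbp196t897o8rgu9nmaqtz ~]# java -version    # expect 1.8.x
[root@izbp196t897o8rgu9nmaqtz ~]# mvn -version     # expect Apache Maven 3.6.3
[root@izbp196t897o8rgu9nmaqtz ~]# free -h          # expect at least 4G of memory
[root@izbp196t897o8rgu9nmaqtz ~]# nproc            # more cores shorten the build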
1. Configure Maven and add repository mirrors
[root@izbp196t897o8rgu9nmaqtz ~]# vi /root/apache-maven-3.6.3/conf/settings.xml
<mirrors>
  <!-- mirror
   | Specifies a repository mirror site to use instead of a given repository. The repository that
   | this mirror serves has an ID that matches the mirrorOf element of this mirror. IDs are used
   | for inheritance and direct lookup purposes, and must be unique across the set of mirrors.
   |
  <mirror>
    <id>mirrorId</id>
    <mirrorOf>repositoryId</mirrorOf>
    <name>Human Readable Name for this Mirror.</name>
    <url>http://my.repository.com/repo/path</url>
  </mirror>
  -->
  <mirror>
    <id>nexus-hortonworks</id>
    <mirrorOf>central</mirrorOf>
    <name>Nexus hortonworks</name>
    <url>https://repo.hortonworks.com/content/groups/public/</url>
  </mirror>
  <mirror>
    <id>central</id>
    <name>Maven Repository Switchboard</name>
    <url>https://repo1.maven.org/maven2/</url>
    <mirrorOf>central</mirrorOf>
  </mirror>
  <mirror>
    <id>central2</id>
    <name>Maven Repository Switchboard</name>
    <url>https://repo1.maven.apache.org/maven2/</url>
    <mirrorOf>central</mirrorOf>
  </mirror>
</mirrors>
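Maven only uses the first mirror whose mirrorOf matches a repository, so with the configuration above requests to central go through nexus-hortonworks; the two entries below it only take effect if that one is removed. To confirm which settings Maven actually picks up, the standard help plugin can dump the effective settings:
[root@izbp196t897o8rgu9nmaqtz ~]# mvn help:effective-settings | grep -A 4 '<mirror>'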
2. Download the source code and build
[root@izbp196t897o8rgu9nmaqtz ~]# wget http://mirror.bit.edu.cn/apache/flink/flink-1.12.0/flink-1.12.0-src.tgz
[root@izbp196t897o8rgu9nmaqtz ~]# tar -xzvf flink-1.12.0-src.tgz
[root@izbp196t897o8rgu9nmaqtz ~]# cd flink-1.12.0
[root@izbp196t897o8rgu9nmaqtz flink-1.12.0]# mvn clean install -DskipTests -Drat.skip=true -Dcheckstyle.skip=true -Dscala=2.11.12
After a long wait...
Output of a successful build:
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Flink : 1.12.0:
[INFO]
[INFO] Flink : Tools : Force Shading … SUCCESS [ 1.994 s]
[INFO] Flink : … SUCCESS [ 1.252 s]
[INFO] Flink : Annotations … SUCCESS [ 1.189 s]
[INFO] Flink : Test utils : … SUCCESS [ 0.065 s]
[INFO] Flink : Test utils : Junit … SUCCESS [ 1.417 s]
[INFO] Flink : Metrics : … SUCCESS [ 0.135 s]
[INFO] Flink : Metrics : Core … SUCCESS [ 1.399 s]
[INFO] Flink : Core … SUCCESS [ 17.905 s]
[INFO] Flink : Java … SUCCESS [ 3.250 s]
[INFO] Flink : Queryable state : … SUCCESS [ 0.052 s]
[INFO] Flink : Queryable state : Client Java … SUCCESS [ 0.461 s]
[INFO] Flink : FileSystems : … SUCCESS [ 0.050 s]
[INFO] Flink : FileSystems : Hadoop FS … SUCCESS [ 1.470 s]
[INFO] Flink : Runtime … SUCCESS [04:43 min]
[INFO] Flink : Scala … SUCCESS [ 45.503 s]
[INFO] Flink : FileSystems : Mapr FS … SUCCESS [ 0.670 s]
[INFO] Flink : FileSystems : Hadoop FS shaded … SUCCESS [ 4.096 s]
[INFO] Flink : FileSystems : S3 FS Base … SUCCESS [ 1.116 s]
[INFO] Flink : FileSystems : S3 FS Hadoop … SUCCESS [ 5.485 s]
[INFO] Flink : FileSystems : S3 FS Presto … SUCCESS [ 7.530 s]
[INFO] Flink : FileSystems : Swift FS Hadoop … SUCCESS [ 17.777 s]
[INFO] Flink : FileSystems : OSS FS … SUCCESS [ 5.369 s]
[INFO] Flink : FileSystems : Azure FS Hadoop … SUCCESS [ 7.982 s]
[INFO] Flink : Optimizer … SUCCESS [ 1.275 s]
[INFO] Flink : Connectors : … SUCCESS [ 0.114 s]
[INFO] Flink : Connectors : File Sink Common … SUCCESS [ 0.253 s]
[INFO] Flink : Streaming Java … SUCCESS [ 8.001 s]
[INFO] Flink : Clients … SUCCESS [ 1.139 s]
[INFO] Flink : Test utils : Utils … SUCCESS [ 1.458 s]
[INFO] Flink : Runtime web … SUCCESS [01:22 min]
[INFO] Flink : Examples : … SUCCESS [ 0.064 s]
[INFO] Flink : Examples : Batch … SUCCESS [ 12.365 s]
[INFO] Flink : Connectors : Hadoop compatibility … SUCCESS [ 5.114 s]
[INFO] Flink : State backends : … SUCCESS [ 0.039 s]
[INFO] Flink : State backends : RocksDB … SUCCESS [ 0.722 s]
[INFO] Flink : Tests … SUCCESS [ 34.831 s]
[INFO] Flink : Streaming Scala … SUCCESS [ 29.168 s]
[INFO] Flink : Connectors : HCatalog … SUCCESS [ 4.865 s]
[INFO] Flink : Test utils : Connectors … SUCCESS [ 0.171 s]
[INFO] Flink : Connectors : Base … SUCCESS [ 0.380 s]
[INFO] Flink : Connectors : Files … SUCCESS [ 0.744 s]
[INFO] Flink : Table : … SUCCESS [ 0.028 s]
[INFO] Flink : Table : Common … SUCCESS [ 2.099 s]
[INFO] Flink : Table : API Java … SUCCESS [ 0.955 s]
[INFO] Flink : Table : API Java bridge … SUCCESS [ 0.639 s]
[INFO] Flink : Table : API Scala … SUCCESS [ 10.516 s]
[INFO] Flink : Table : API Scala bridge … SUCCESS [ 8.991 s]
[INFO] Flink : Table : SQL Parser … SUCCESS [ 8.732 s]
[INFO] Flink : Libraries : … SUCCESS [ 0.049 s]
[INFO] Flink : Libraries : CEP … SUCCESS [ 2.481 s]
[INFO] Flink : Table : Planner … SUCCESS [03:08 min]
[INFO] Flink : Table : SQL Parser Hive … SUCCESS [ 3.098 s]
[INFO] Flink : Table : Runtime Blink … SUCCESS [ 3.379 s]
[INFO] Flink : Table : Planner Blink … SUCCESS [02:54 min]
[INFO] Flink : Formats : … SUCCESS [ 0.100 s]
[INFO] Flink : Formats : Json … SUCCESS [ 0.671 s]
[INFO] Flink : Connectors : Elasticsearch base … SUCCESS [ 1.547 s]
[INFO] Flink : Connectors : Elasticsearch 5 … SUCCESS [ 11.596 s]
[INFO] Flink : Connectors : Elasticsearch 6 … SUCCESS [ 1.132 s]
[INFO] Flink : Connectors : Elasticsearch 7 … SUCCESS [ 0.867 s]
[INFO] Flink : Connectors : HBase base … SUCCESS [ 1.951 s]
[INFO] Flink : Connectors : HBase 1.4 … SUCCESS [ 2.191 s]
[INFO] Flink : Connectors : HBase 2.2 … SUCCESS [ 2.261 s]
[INFO] Flink : Formats : Hadoop bulk … SUCCESS [ 0.522 s]
[INFO] Flink : Formats : Orc … SUCCESS [ 0.938 s]
[INFO] Flink : Formats : Orc nohive … SUCCESS [ 0.481 s]
[INFO] Flink : Formats : Avro … SUCCESS [ 2.278 s]
[INFO] Flink : Formats : Parquet … SUCCESS [ 5.077 s]
[INFO] Flink : Formats : Csv … SUCCESS [ 0.628 s]
[INFO] Flink : Connectors : Hive … SUCCESS [ 6.487 s]
[INFO] Flink : Connectors : JDBC … SUCCESS [ 1.379 s]
[INFO] Flink : Connectors : RabbitMQ … SUCCESS [ 0.474 s]
[INFO] Flink : Connectors : Twitter … SUCCESS [ 2.230 s]
[INFO] Flink : Connectors : Nifi … SUCCESS [ 0.497 s]
[INFO] Flink : Connectors : Cassandra … SUCCESS [ 4.003 s]
[INFO] Flink : Metrics : JMX … SUCCESS [ 0.351 s]
[INFO] Flink : Connectors : Kafka … SUCCESS [ 2.601 s]
[INFO] Flink : Connectors : Google PubSub … SUCCESS [ 0.724 s]
[INFO] Flink : Connectors : Kinesis … SUCCESS [ 11.477 s]
[INFO] Flink : Connectors : SQL : Elasticsearch 6 … SUCCESS [ 5.483 s]
[INFO] Flink : Connectors : SQL : Elasticsearch 7 … SUCCESS [ 7.128 s]
[INFO] Flink : Connectors : SQL : HBase 1.4 … SUCCESS [ 7.316 s]
[INFO] Flink : Connectors : SQL : HBase 2.2 … SUCCESS [ 13.217 s]
[INFO] Flink : Connectors : SQL : Hive 1.2.2 … SUCCESS [ 4.366 s]
[INFO] Flink : Connectors : SQL : Hive 2.2.0 … SUCCESS [ 5.100 s]
[INFO] Flink : Connectors : SQL : Hive 2.3.6 … SUCCESS [ 5.053 s]
[INFO] Flink : Connectors : SQL : Hive 3.1.2 … SUCCESS [ 6.533 s]
[INFO] Flink : Connectors : SQL : Kafka … SUCCESS [ 0.960 s]
[INFO] Flink : Connectors : SQL : Kinesis … SUCCESS [ 7.769 s]
[INFO] Flink : Formats : Avro confluent registry … SUCCESS [ 0.549 s]
[INFO] Flink : Formats : Sequence file … SUCCESS [ 0.328 s]
[INFO] Flink : Formats : Compress … SUCCESS [ 0.314 s]
[INFO] Flink : Formats : SQL Orc … SUCCESS [ 0.286 s]
[INFO] Flink : Formats : SQL Parquet … SUCCESS [ 0.661 s]
[INFO] Flink : Formats : SQL Avro … SUCCESS [ 0.954 s]
[INFO] Flink : Formats : SQL Avro Confluent Registry … SUCCESS [ 3.694 s]
[INFO] Flink : Examples : Streaming … SUCCESS [ 12.468 s]
[INFO] Flink : Examples : Table … SUCCESS [ 9.610 s]
[INFO] Flink : Examples : Build Helper : … SUCCESS [ 0.066 s]
[INFO] Flink : Examples : Build Helper : Streaming Twitter SUCCESS [ 0.486 s]
[INFO] Flink : Examples : Build Helper : Streaming State machine SUCCESS [ 0.504 s]
[INFO] Flink : Examples : Build Helper : Streaming Google PubSub SUCCESS [ 4.234 s]
[INFO] Flink : Container … SUCCESS [ 0.207 s]
[INFO] Flink : Queryable state : Runtime … SUCCESS [ 0.443 s]
[INFO] Flink : Mesos … SUCCESS [ 22.370 s]
[INFO] Flink : Kubernetes … SUCCESS [ 4.624 s]
[INFO] Flink : Yarn … SUCCESS [ 1.077 s]
[INFO] Flink : Libraries : Gelly … SUCCESS [ 1.361 s]
[INFO] Flink : Libraries : Gelly scala … SUCCESS [ 16.105 s]
[INFO] Flink : Libraries : Gelly Examples … SUCCESS [ 8.161 s]
[INFO] Flink : External resources : … SUCCESS [ 0.035 s]
[INFO] Flink : External resources : GPU … SUCCESS [ 0.123 s]
[INFO] Flink : Metrics : Dropwizard … SUCCESS [ 0.165 s]
[INFO] Flink : Metrics : Graphite … SUCCESS [ 0.112 s]
[INFO] Flink : Metrics : InfluxDB … SUCCESS [ 0.453 s]
[INFO] Flink : Metrics : Prometheus … SUCCESS [ 0.246 s]
[INFO] Flink : Metrics : StatsD … SUCCESS [ 0.208 s]
[INFO] Flink : Metrics : Datadog … SUCCESS [ 0.179 s]
[INFO] Flink : Metrics : Slf4j … SUCCESS [ 0.150 s]
[INFO] Flink : Libraries : CEP Scala … SUCCESS [ 11.613 s]
[INFO] Flink : Table : Uber … SUCCESS [ 6.545 s]
[INFO] Flink : Table : Uber Blink … SUCCESS [ 7.147 s]
[INFO] Flink : Python … SUCCESS [ 11.469 s]
[INFO] Flink : Table : SQL Client … SUCCESS [ 1.507 s]
[INFO] Flink : Libraries : State processor API … SUCCESS [ 0.739 s]
[INFO] Flink : ML : … SUCCESS [ 0.028 s]
[INFO] Flink : ML : API … SUCCESS [ 0.139 s]
[INFO] Flink : ML : Lib … SUCCESS [ 0.436 s]
[INFO] Flink : ML : Uber … SUCCESS [ 0.124 s]
[INFO] Flink : Scala shell … SUCCESS [ 10.957 s]
[INFO] Flink : Dist … SUCCESS [ 56.323 s]
[INFO] Flink : Yarn Tests … SUCCESS [ 37.550 s]
[INFO] Flink : E2E Tests : … SUCCESS [09:53 min]
[INFO] Flink : E2E Tests : CLI … SUCCESS [ 0.233 s]
[INFO] Flink : E2E Tests : Parent Child classloading program SUCCESS [ 0.179 s]
[INFO] Flink : E2E Tests : Parent Child classloading lib-package SUCCESS [ 0.120 s]
[INFO] Flink : E2E Tests : Dataset allround … SUCCESS [ 0.152 s]
[INFO] Flink : E2E Tests : Dataset Fine-grained recovery … SUCCESS [ 0.156 s]
[INFO] Flink : E2E Tests : Datastream allround … SUCCESS [ 0.719 s]
[INFO] Flink : E2E Tests : Batch SQL … SUCCESS [ 0.207 s]
[INFO] Flink : E2E Tests : Stream SQL … SUCCESS [ 0.164 s]
[INFO] Flink : E2E Tests : Distributed cache via blob … SUCCESS [ 0.135 s]
[INFO] Flink : E2E Tests : High parallelism iterations … SUCCESS [ 7.111 s]
[INFO] Flink : E2E Tests : Stream stateful job upgrade … SUCCESS [ 0.600 s]
[INFO] Flink : E2E Tests : Queryable state … SUCCESS [ 1.428 s]
[INFO] Flink : E2E Tests : Local recovery and allocation … SUCCESS [ 0.148 s]
[INFO] Flink : E2E Tests : Elasticsearch 5 … SUCCESS [02:04 min]
[INFO] Flink : E2E Tests : Elasticsearch 6 … SUCCESS [ 2.790 s]
[INFO] Flink : Quickstart : … SUCCESS [ 0.537 s]
[INFO] Flink : Quickstart : Java … SUCCESS [ 0.419 s]
[INFO] Flink : Quickstart : Scala … SUCCESS [ 0.110 s]
[INFO] Flink : E2E Tests : Quickstart … SUCCESS [ 0.374 s]
[INFO] Flink : E2E Tests : Confluent schema registry … SUCCESS [ 2.039 s]
[INFO] Flink : E2E Tests : Stream state TTL … SUCCESS [ 3.229 s]
[INFO] Flink : E2E Tests : SQL client … SUCCESS [05:30 min]
[INFO] Flink : E2E Tests : File sink … SUCCESS [ 0.874 s]
[INFO] Flink : E2E Tests : State evolution … SUCCESS [ 0.471 s]
[INFO] Flink : E2E Tests : RocksDB state memory control … SUCCESS [ 0.608 s]
[INFO] Flink : E2E Tests : Common … SUCCESS [ 0.613 s]
[INFO] Flink : E2E Tests : Metrics availability … SUCCESS [ 0.148 s]
[INFO] Flink : E2E Tests : Metrics reporter prometheus … SUCCESS [ 0.167 s]
[INFO] Flink : E2E Tests : Heavy deployment … SUCCESS [ 6.807 s]
[INFO] Flink : E2E Tests : Connectors : Google PubSub … SUCCESS [07:57 min]
[INFO] Flink : E2E Tests : Streaming Kafka base … SUCCESS [ 0.172 s]
[INFO] Flink : E2E Tests : Streaming Kafka … SUCCESS [ 6.343 s]
[INFO] Flink : E2E Tests : Plugins : … SUCCESS [ 0.056 s]
[INFO] Flink : E2E Tests : Plugins : Dummy fs … SUCCESS [ 0.114 s]
[INFO] Flink : E2E Tests : Plugins : Another dummy fs … SUCCESS [ 0.101 s]
[INFO] Flink : E2E Tests : TPCH … SUCCESS [ 20.120 s]
[INFO] Flink : E2E Tests : Streaming Kinesis … SUCCESS [ 11.819 s]
[INFO] Flink : E2E Tests : Elasticsearch 7 … SUCCESS [ 2.995 s]
[INFO] Flink : E2E Tests : Common Kafka … SUCCESS [ 8.173 s]
[INFO] Flink : E2E Tests : TPCDS … SUCCESS [ 1.901 s]
[INFO] Flink : E2E Tests : Netty shuffle memory control … SUCCESS [ 0.121 s]
[INFO] Flink : E2E Tests : Python … SUCCESS [ 6.835 s]
[INFO] Flink : E2E Tests : HBase … SUCCESS [ 2.052 s]
[INFO] Flink : State backends : Heap spillable … SUCCESS [ 0.259 s]
[INFO] Flink : Contrib : … SUCCESS [ 0.027 s]
[INFO] Flink : Contrib : Connectors : Wikiedits … SUCCESS [ 0.213 s]
[INFO] Flink : FileSystems : Tests … SUCCESS [ 0.577 s]
[INFO] Flink : Docs … SUCCESS [ 39.850 s]
[INFO] Flink : Walkthrough : … SUCCESS [ 0.037 s]
[INFO] Flink : Walkthrough : Common … SUCCESS [ 0.286 s]
[INFO] Flink : Walkthrough : Datastream Java … SUCCESS [ 0.074 s]
[INFO] Flink : Walkthrough : Datastream Scala … SUCCESS [ 0.191 s]
[INFO] Flink : Tools : CI : Java … SUCCESS [03:46 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 54:04 min
[INFO] Finished at: 2021-01-11T20:20:08+08:00
[INFO] ------------------------------------------------------------------------
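With the build finished, the runnable distribution is assembled by the flink-dist module. On Flink 1.12 it should land under flink-dist/target, and the source root should also get a build-target symlink pointing at it; treat the exact paths below as assumptions and verify them in your tree:
[root@izbp196t897o8rgu9nmaqtz flink-1.12.0]# ls flink-dist/target/flink-1.12.0-bin/flink-1.12.0/
[root@izbp196t897o8rgu9nmaqtz flink-1.12.0]# ls -l build-target
[root@izbp196t897o8rgu9nmaqtz flink-1.12.0]# ./build-target/bin/start-cluster.sh   # smoke test; the Web UI defaults to port 8081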
3. Problems and solutions:
(1) flink-runtime-web fails to build: Failed to execute goal com.github.eirslett:frontend-maven-plugin:1.6:npm (npm install) on project flink-runtime-web_2.11: Failed to run task: 'npm ci --cache-max=0 --no-save' failed. org.apache.commons.exec.ExecuteException: Process exited with an error: -4048 (Exit value: -4048) -> [Help 1]
Solution:
First delete the node_modules folder under flink-runtime-web/web-dashboard.
The Taobao npm registry had been set up with:
npm install -g mirror-config-china --registry=https://registry.npm.taobao.org/
Remove it and verify:
npm config get registry
npm config rm registry
npm info express
Delete the node_modules folder again, then clear the npm cache and update:
npm cache clean --force
npm update
Rebuild (a resume command is given after the plugin configuration below).
In flink-runtime-web's pom.xml, replace
ci --cache-max=0 --no-save
with
install --registry=https://registry.npm.taobao.org --cache-max=0 --no-save
The complete frontend-maven-plugin section then looks like this:
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <version>1.6</version>
  <executions>
    <execution>
      <id>install node and npm</id>
      <goals>
        <goal>install-node-and-npm</goal>
      </goals>
      <configuration>
        <nodeVersion>v10.9.0</nodeVersion>
      </configuration>
    </execution>
    <execution>
      <id>npm install</id>
      <goals>
        <goal>npm</goal>
      </goals>
      <configuration>
        <arguments>install --registry=https://registry.npm.taobao.org --cache-max=0 --no-save</arguments>
        <environmentVariables>
          <HUSKY_SKIP_INSTALL>true</HUSKY_SKIP_INSTALL>
        </environmentVariables>
      </configuration>
    </execution>
    <execution>
      <id>npm run build</id>
      <goals>
        <goal>npm</goal>
      </goals>
      <configuration>
        <arguments>run build</arguments>
      </configuration>
    </execution>
  </executions>
  <configuration>
    <workingDirectory>web-dashboard</workingDirectory>
  </configuration>
</plugin>
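After changing the plugin configuration there is no need to restart the whole build; Maven can resume from the module that failed. The module name below is taken from the error message above (flink-runtime-web_2.11); adjust the Scala suffix if your reactor shows a different one:
[root@izbp196t897o8rgu9nmaqtz flink-1.12.0]# mvn clean install -DskipTests -Drat.skip=true -Dcheckstyle.skip=true -Dscala=2.11.12 -rf :flink-runtime-web_2.11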
(2) flink-table-planner fails to build: Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile (default) on project iteblog: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1) -> [Help 1]
Solution: add the following dependency to the pom.xml under flink-table-planner, and append -Dscala=2.11.12 to the mvn build command:
<dependency>
  <groupId>net.alchim31.maven</groupId>
  <artifactId>scala-maven-plugin</artifactId>
  <version>3.2.2</version>
</dependency>
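As with problem (1), the build can then be resumed from the failing module instead of starting over; the module name below is an assumption, so check it against your reactor summary first:
[root@izbp196t897o8rgu9nmaqtz flink-1.12.0]# mvn clean install -DskipTests -Drat.skip=true -Dcheckstyle.skip=true -Dscala=2.11.12 -rf :flink-table-planner_2.11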
(3) The build sometimes hangs while downloading node-v10.9.0-linux-x64.tar.gz. You can download it in advance and place it at the path printed in the console log:
[root@izbp196t897o8rgu9nmaqtz ~]# wget https://nodejs.org/dist/v10.9.0/node-v10.9.0-linux-x64.tar.gz
[root@izbp196t897o8rgu9nmaqtz ~]# mv node-v10.9.0-linux-x64.tar.gz /root/.m2/repository/com/github/eirslett/node/10.9.0/node-10.9.0-linux-x64.tar.gz
(4) Error: [ERROR] Unable to save binary /root/flink-1.12.0/flink-runtime-web/web-dashboard/node_modules/node-sass/vendor/linux-x64-64 : { Error: EACCES: permission denied, mkdir '/root/flink-1.12.0/flink-runtime-web/web-dashboard/node_modules/node-sass/vendor'
This one can be ignored.
(5) Error: Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.5.1:test-compile (default) on project flink-parquet_2.12: protoc did not exit cleanly. Review output for more information. -> [Help 1]
(6) Error: [ERROR] Failed to execute goal com.diffplug.spotless:spotless-maven-plugin:2.4.2:check (spotless-check) on project flink-annotations: Execution spotless-check of goal com.diffplug.spotless:spotless-maven-plugin:2.4.2:check failed: Unable to resolve dependencies: The following artifacts could not be resolved: com.google.guava:guava:jar:27.0.1-jre, com.google.errorprone:javac-shaded:jar:9+181-r4173-1: Could not transfer artifact com.google.guava:guava:jar:27.0.1-jre from/to nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public): Timeout while waiting for concurrent download of /root/.m2/repository/com/google/guava/guava/27.0.1-jre/guava-27.0.1-jre.jar.part to progress -> [Help 1]
(7) Error: [ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.5.1:test-compile (default) on project flink-parquet_2.11: protoc did not exit cleanly. Review output for more information. -> [Help 1]
(8) Most "[ERROR] Failed to execute ..." failures like these come down to mirror downloads and network access (repositories that are hard to reach from China). The fix in the next item deals with them once and for all.
(9) Pay for a VPN and configure the international mirrors; the build then goes much more smoothly.
Note: if a build fails, kill the previous build process before recompiling.
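Assuming Maven was launched from the command line, the leftover build process can be found by Maven's launcher class and killed (double-check the ps output before killing anything):
[root@izbp196t897o8rgu9nmaqtz ~]# ps -ef | grep org.codehaus.plexus.classworlds.launcher.Launcher | grep -v grep
[root@izbp196t897o8rgu9nmaqtz ~]# pkill -9 -f org.codehaus.plexus.classworlds.launcher.Launcher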