[Flink Script Series] kubernetes-jobmanager.sh: Features, Usage Examples, and Source Code Walkthrough

The Flink Kubernetes JobManager Script and Component Management
This article introduces the script that starts the Flink JobManager on Kubernetes, shows how it is invoked for the Kubernetes session and Kubernetes application deployment modes, and mentions related Flink scripts such as start-cluster.sh and stop-cluster.sh. Flink's official documentation on the native Kubernetes integration covers the deployment details.

Main functionality

The script starts a Flink JobManager on Kubernetes and is part of Flink's native Kubernetes integration.

Note: the script is not meant to be started manually; it is invoked by the native Kubernetes integration itself.
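
In practice you never run kubernetes-jobmanager.sh yourself. The client-side commands below, following the usage documented for Flink's native Kubernetes integration, create the JobManager deployment, and the JobManager pod then runs this script as its entry point with either the kubernetes-session or kubernetes-application argument. The cluster id, image name, and jar path are placeholder values, and the kubernetes.container.image key follows the 1.16-era configuration naming.

```bash
# Session mode: start a long-running session cluster on Kubernetes.
# The JobManager pod created here runs "kubernetes-jobmanager.sh kubernetes-session".
./bin/kubernetes-session.sh \
    -Dkubernetes.cluster-id=my-session-cluster

# Application mode: deploy a single job as its own dedicated cluster.
# The JobManager pod created here runs "kubernetes-jobmanager.sh kubernetes-application".
./bin/flink run-application \
    --target kubernetes-application \
    -Dkubernetes.cluster-id=my-application-cluster \
    -Dkubernetes.container.image=my-custom-flink-image \
    local:///opt/flink/usrlib/my-flink-job.jar
```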

The script, with comments added

```bash
#!/bin/bash

# Start a Flink JobManager for native Kubernetes.
# NOTE: This script is not meant to be started manually. It will be used by native Kubernetes integration.

USAGE="Usage: kubernetes-jobmanager.sh kubernetes-session|kubernetes-application [args]"

ENTRY_POINT_NAME=$1  # Entry point name: kubernetes-session or kubernetes-application
```
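
The excerpt above stops after reading the entry point name. For orientation, the remainder of the script in the Flink distribution follows roughly the shape sketched below: it collects the remaining arguments, loads config.sh, applies JobManager-specific JVM options, and hands the entry point name over to flink-console.sh, which maps kubernetes-session to KubernetesSessionClusterEntrypoint and kubernetes-application to KubernetesApplicationClusterEntrypoint. Treat this as a paraphrased sketch rather than the verbatim source; check bin/kubernetes-jobmanager.sh in your own Flink installation for the exact lines.

```bash
# Remaining arguments after the entry point name are passed through to the entrypoint.
ARGS=("${@:2}")

if [[ -z "${ENTRY_POINT_NAME}" ]]; then
    echo "$USAGE"
    exit 1
fi

# Resolve the bin directory and load the shared configuration helpers.
bin=$(dirname "$0")
bin=$(cd "$bin"; pwd)
. "$bin"/config.sh

# Apply JobManager-specific JVM options and export logging settings.
export FLINK_ENV_JAVA_OPTS="${FLINK_ENV_JAVA_OPTS} ${FLINK_ENV_JAVA_OPTS_JM}"
parseJmArgsAndExportLogs "${ARGS[@]}"

# Run the selected entrypoint in the foreground; flink-console.sh maps the
# name to the matching *ClusterEntrypoint class and execs the JVM.
exec "${FLINK_BIN_DIR}"/flink-console.sh "${ENTRY_POINT_NAME}" "${ARGS[@]}"
```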