Various Flink cluster deployment modes

Flink deployment modes

Standalone (no resource manager)

Native (no platform)
  1. Session Mode
    # (1) Start Cluster
    $ ./bin/start-cluster.sh
    # (2) You can now access the Flink Web Interface on http://localhost:8081
    # (3) Submit example job
    $ ./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
    # (4) Stop the cluster again
    $ ./bin/stop-cluster.sh
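    Between steps (3) and (4), the cluster state can also be verified over the REST API (a sketch; assumes the default REST port 8081 is still in use):
    $ curl -s http://localhost:8081/overview
    $ curl -s http://localhost:8081/taskmanagers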
    
  2. Application Mode
    $ cp ./examples/streaming/TopSpeedWindowing.jar lib/
    Then, we can launch the JobManager:
    $ ./bin/standalone-job.sh start --job-classname org.apache.flink.streaming.examples.windowing.TopSpeedWindowing
    The web interface is now available at localhost:8081. However, the application won’t be able to start, because there are no TaskManagers running yet:
    $ ./bin/taskmanager.sh start
    Note: You can start multiple TaskManagers, if your application needs more resources.
    Stopping the services is also supported via the scripts. Call them multiple times if you want to stop multiple instances, or use stop-all:
    $ ./bin/taskmanager.sh stop
    $ ./bin/standalone-job.sh stop
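    For illustration (same scripts as above), additional TaskManagers can be started with repeated calls, and all locally running TaskManagers stopped at once:
    $ ./bin/taskmanager.sh start
    $ ./bin/taskmanager.sh start
    $ ./bin/taskmanager.sh stop-all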
    
  3. Per-Job Mode
    Not supported
Running on Docker
  1. Session Mode
    #------------------------- Command line ---------------------------
    # Set the config variable and create a Docker network
    $ FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
    $ docker network create flink-network
    # Start the JobManager container
    $ docker run \
      --rm \
      --name=jobmanager \
      --network flink-network \
      --publish 8081:8081 \
      --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
      flink:1.12.2-scala_2.11 jobmanager
    # Start the TaskManager container
    $ docker run \
      --rm \
      --name=taskmanager \
      --network flink-network \
      --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
      flink:1.12.2-scala_2.11 taskmanager
    # Submit the example job from inside the jobmanager container
    $ docker exec -it jobmanager ./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
    #-------------------------docker-compose---------------------------
    version: "2.2"
    services:
      jobmanager:
        image: flink:1.12.2-scala_2.11
        ports:
          - "8081:8081"
        command: jobmanager
        environment:
          - |
            FLINK_PROPERTIES=
            jobmanager.rpc.address: jobmanager

      taskmanager:
        image: flink:1.12.2-scala_2.11
        depends_on:
          - jobmanager
        command: taskmanager
        scale: 1
        environment:
          - |
            FLINK_PROPERTIES=
            jobmanager.rpc.address: jobmanager
            taskmanager.numberOfTaskSlots: 2
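
    Assuming the compose file above is saved as docker-compose.yml, the session cluster can be started and the example job submitted through the jobmanager service (service name taken from the file above):
    $ docker-compose up -d
    $ docker-compose exec jobmanager ./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
    $ docker-compose down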
    
  2. Application Mode
    Running the Flink Docker image with the standalone-job command starts the JobManager container in Application Mode.
    The job artifacts must also be mounted into /opt/flink/usrlib inside the container.
    ----------------------------- Bind-mount via docker run --------------------------
    $ FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
    $ docker network create flink-network
    
    $ docker run \
        --mount type=bind,src=/host/path/to/job/artifacts1,target=/opt/flink/usrlib/artifacts1 \
        --mount type=bind,src=/host/path/to/job/artifacts2,target=/opt/flink/usrlib/artifacts2 \
        --rm \
        --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
        --name=jobmanager \
        --network flink-network \
        flink:1.12.2-scala_2.11 standalone-job \
        --job-classname com.job.ClassName \
        [--job-id <job id>] \
        [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] \
        [job arguments]
    
    $ docker run \
        --mount type=bind,src=/host/path/to/job/artifacts1,target=/opt/flink/usrlib/artifacts1 \
        --mount type=bind,src=/host/path/to/job/artifacts2,target=/opt/flink/usrlib/artifacts2 \
        --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
        flink:1.12.2-scala_2.11 taskmanager
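
    The FLINK_PROPERTIES variable accepts multiple newline-separated flink-conf.yaml entries, so further options can be passed the same way (a sketch; the extra key is only an example):
    $ FLINK_PROPERTIES=$'jobmanager.rpc.address: jobmanager\ntaskmanager.numberOfTaskSlots: 2'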
    
    ----------------------------- Build a custom image with a Dockerfile --------------------------
    FROM flink:1.12.2-scala_2.11
    ADD /host/path/to/job/artifacts/1 /opt/flink/usrlib/artifacts/1
    ADD /host/path/to/job/artifacts/2 /opt/flink/usrlib/artifacts/2
    
    $ docker build --tag flink_with_job_artifacts .
    $ docker run \
        flink_with_job_artifacts standalone-job \
        --job-classname com.job.ClassName \
        [--job-id <job id>] \
        [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] \
        [job arguments]
    
    $ docker run flink_with_job_artifacts taskmanager
    
    Application Mode can also be deployed with docker-compose:
     version: "2.2"
     services:
       jobmanager:
         image: flink:1.12.2-scala_2.11
         ports:
           - "8081:8081"
         command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] [job arguments]
         volumes:
           - /host/path/to/job/artifacts:/opt/flink/usrlib
         environment:
           - |
             FLINK_PROPERTIES=
             jobmanager.rpc.address: jobmanager
             parallelism.default: 2
     
       taskmanager:
         image: flink:1.12.2-scala_2.11
         depends_on:
           - jobmanager
         command: taskmanager
         scale: 1
         volumes:
           - /host/path/to/job/artifacts:/opt/flink/usrlib
         environment:
           - |
             FLINK_PROPERTIES=
             jobmanager.rpc.address: jobmanager
             taskmanager.numberOfTaskSlots: 2
             parallelism.default: 2
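
    The number of TaskManagers can also be set at start-up via compose scaling (assumes the file above is saved as docker-compose.yml):
    $ docker-compose up --scale taskmanager=2 -d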
    
    A fully customized image can also be built; see the advanced customization guide:
    https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/resource-providers/standalone/docker.html#advanced-customization
  3. Per-Job Mode
    Not supported
Running on Kubernetes

Common ConfigMap and Service YAML
https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/resource-providers/standalone/kubernetes.html

#flink-configuration-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-config
  labels:
    app: flink
data:
  flink-conf.yaml: |+
    jobmanager.rpc.address: flink-jobmanager
    taskmanager.numberOfTaskSlots: 2
    blob.server.port: 6124
    jobmanager.rpc.port: 6123
    taskmanager.rpc.port: 6122
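
The usual flow from the Flink standalone-Kubernetes guide is to create the shared resources first and then the deployments (a sketch; the service and deployment manifest file names follow the guide and are assumptions here):
$ kubectl create -f flink-configuration-configmap.yaml
$ kubectl create -f jobmanager-service.yaml
$ kubectl create -f jobmanager-session-deployment.yaml
$ kubectl create -f taskmanager-session-deployment.yaml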