Flink error: File or directory data/output already exists. Existing files and directories are not overwritten in NO_OVERWRITE mode

This post analyzes the "File or directory already exists" error that Flink raises when sinking to a local directory. The cause is that existing files and directories are not overwritten by default, and there are two fixes: delete the target directory, or write in OVERWRITE mode.


1. Problem Description

When a Flink job sinks to a local directory, it fails with the following error:

09:12:19,166 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph        - Job DataSet Sink (5e5507e2029038a82605e0ffd0ff4a14) switched from state FAILING to FAILED.
java.io.IOException: File or directory data/output already exists. Existing files and directories are not overwritten in NO_OVERWRITE mode. Use OVERWRITE mode to overwrite existing files and directories.
    at org.apache.flink.core.fs.FileSystem.initOutPathLocalFS(FileSystem.java:773)
    at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.initOutPathLocalFS(SafetyNetWrapperFileSystem.java:137)
    at org.apache.flink.api.common.io.FileOutputFormat.open(FileOutputFormat.java:227)
    at org.apache.flink.api.java.io.TextOutputFormat.open(TextOutputFormat.java:88)
    at org.apache.flink.runtime.operators.DataSinkTask.invoke(DataSinkTask.java:202)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
    at java.lang.Thread.run(Thread.java:745)

   
2. Cause

The output directory used in the code already exists, and `writeAsText` defaults to `WriteMode.NO_OVERWRITE`, which refuses to replace an existing file or directory, so the sink fails:

val filepath = "data/output"
text.writeAsText(filepath)
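Conceptually, the check that throws this exception behaves like the following Python sketch. This is a simplified analogue of `FileSystem.initOutPathLocalFS`, not Flink's actual implementation, and `init_out_path` is a hypothetical name:

```python
import os
import shutil

def init_out_path(path: str, overwrite: bool) -> bool:
    """Simplified analogue of Flink's output-path initialization."""
    if os.path.exists(path):
        if not overwrite:
            # NO_OVERWRITE mode: refuse to touch existing output
            raise IOError(
                f"File or directory {path} already exists. Existing files "
                "and directories are not overwritten in NO_OVERWRITE mode.")
        # OVERWRITE mode: clear the old output before writing
        if os.path.isdir(path):
            shutil.rmtree(path)
        else:
            os.remove(path)
    os.makedirs(path)
    return True
```

The first run against a fresh path succeeds; a second run with `overwrite=False` raises the same kind of `IOError` the stack trace above shows.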

3. Solutions

1. Option 1: delete the existing target directory before resubmitting the job.
2. Option 2: pass the OVERWRITE write mode to the sink:

import org.apache.flink.core.fs.FileSystem

val filepath = "data/output"
text.writeAsText(filepath, FileSystem.WriteMode.OVERWRITE)
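Option 1 can be automated as part of a job launcher. A minimal Python sketch, using the same `data/output` path as the example above:

```python
import os
import shutil

output_path = "data/output"

# Remove stale results so the default NO_OVERWRITE write succeeds
if os.path.isdir(output_path):
    shutil.rmtree(output_path)
elif os.path.isfile(output_path):
    os.remove(output_path)
```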

 

The `docker-compose.yml` in question (two JobManagers and three TaskManagers on Flink 1.16.3):

```yaml
networks:
  net:
    external: true

services:
  jobmanager1:
    restart: always
    image: apache/flink:1.16.3
    container_name: jobmanager1
    hostname: jobmanager1
    ports:
      - '8081:8081'
    volumes:
      - /etc/localtime:/etc/localtime
      - /home/sumengnan/apache/flink/timezone:/etc/timezone
      - /home/sumengnan/apache/flink/conf/flink-conf-jobmanager1.yaml:/opt/flink/conf/flink-conf.yaml
      - /home/sumengnan/apache/flink/conf/log4j.properties:/opt/flink/conf/log4j.properties
      - /home/sumengnan/apache/flink/conf/logback.xml:/opt/flink/conf/logback.xml
      - /home/sumengnan/apache/flink/conf/log4j-console.properties:/opt/flink/conf/log4j-console.properties
      - /home/sumengnan/apache/flink/conf/logback-console.xml:/opt/flink/conf/logback-console.xml
      - /home/sumengnan/apache/flink/data:/opt/flink/data
      - /home/sumengnan/apache/flink/log:/opt/flink/log
      - /home/sumengnan/apache/flink/tmp:/opt/flink/tmp
    networks:
      - net

  jobmanager2:
    restart: always
    image: apache/flink:1.16.3
    container_name: jobmanager2
    hostname: jobmanager2
    ports:
      - '8082:8081'
    volumes:
      - /etc/localtime:/etc/localtime
      - /home/sumengnan/apache/flink/timezone:/etc/timezone
      - /home/sumengnan/apache/flink/conf/flink-conf-jobmanager2.yaml:/opt/flink/conf/flink-conf.yaml
      - /home/sumengnan/apache/flink/conf/log4j.properties:/opt/flink/conf/log4j.properties
      - /home/sumengnan/apache/flink/conf/logback.xml:/opt/flink/conf/logback.xml
      - /home/sumengnan/apache/flink/conf/log4j-console.properties:/opt/flink/conf/log4j-console.properties
      - /home/sumengnan/apache/flink/conf/logback-console.xml:/opt/flink/conf/logback-console.xml
      - /home/sumengnan/apache/flink/data:/opt/flink/data
      - /home/sumengnan/apache/flink/log:/opt/flink/log
      - /home/sumengnan/apache/flink/tmp:/opt/flink/tmp
    networks:
      - net
    depends_on:
      - jobmanager1

  taskmanager1:
    restart: always
    image: apache/flink:1.16.3
    container_name: taskmanager1
    hostname: taskmanager1
    command: taskmanager
    volumes:
      - /etc/localtime:/etc/localtime
      - /home/sumengnan/apache/flink/timezone:/etc/timezone
      - /home/sumengnan/apache/flink/conf/flink-conf-taskmanager1.yaml:/opt/flink/conf/flink-conf.yaml
      - /home/sumengnan/apache/flink/conf/log4j.properties:/opt/flink/conf/log4j.properties
      - /home/sumengnan/apache/flink/conf/logback.xml:/opt/flink/conf/logback.xml
      - /home/sumengnan/apache/flink/conf/log4j-console.properties:/opt/flink/conf/log4j-console.properties
      - /home/sumengnan/apache/flink/conf/logback-console.xml:/opt/flink/conf/logback-console.xml
      - /home/sumengnan/apache/flink/data:/opt/flink/data
      - /home/sumengnan/apache/flink/log:/opt/flink/log
      - /home/sumengnan/apache/flink/tmp:/opt/flink/tmp
    networks:
      - net
    depends_on:
      - jobmanager1
      - jobmanager2

  taskmanager2:
    restart: always
    image: apache/flink:1.16.3
    container_name: taskmanager2
    hostname: taskmanager2
    command: taskmanager
    volumes:
      - /etc/localtime:/etc/localtime
      - /home/sumengnan/apache/flink/timezone:/etc/timezone
      - /home/sumengnan/apache/flink/conf/flink-conf-taskmanager2.yaml:/opt/flink/conf/flink-conf.yaml
      - /home/sumengnan/apache/flink/conf/log4j.properties:/opt/flink/conf/log4j.properties
      - /home/sumengnan/apache/flink/conf/logback.xml:/opt/flink/conf/logback.xml
      - /home/sumengnan/apache/flink/conf/log4j-console.properties:/opt/flink/conf/log4j-console.properties
      - /home/sumengnan/apache/flink/conf/logback-console.xml:/opt/flink/conf/logback-console.xml
      - /home/sumengnan/apache/flink/data:/opt/flink/data
      - /home/sumengnan/apache/flink/log:/opt/flink/log
      - /home/sumengnan/apache/flink/tmp:/opt/flink/tmp
    networks:
      - net
    depends_on:
      - jobmanager1
      - jobmanager2

  taskmanager3:
    restart: always
    image: apache/flink:1.16.3
    container_name: taskmanager3
    hostname: taskmanager3
    command: taskmanager
    volumes:
      - /etc/localtime:/etc/localtime
      - /home/sumengnan/apache/flink/timezone:/etc/timezone
      - /home/sumengnan/apache/flink/conf/flink-conf-taskmanager3.yaml:/opt/flink/conf/flink-conf.yaml
      - /home/sumengnan/apache/flink/conf/log4j.properties:/opt/flink/conf/log4j.properties
      - /home/sumengnan/apache/flink/conf/logback.xml:/opt/flink/conf/logback.xml
      - /home/sumengnan/apache/flink/conf/log4j-console.properties:/opt/flink/conf/log4j-console.properties
      - /home/sumengnan/apache/flink/conf/logback-console.xml:/opt/flink/conf/logback-console.xml
      - /home/sumengnan/apache/flink/data:/opt/flink/data
      - /home/sumengnan/apache/flink/log:/opt/flink/log
      - /home/sumengnan/apache/flink/tmp:/opt/flink/tmp
    networks:
      - net
    depends_on:
      - jobmanager1
      - jobmanager2
```
### Apache Flink 1.16.3 Docker Compose Configuration

To configure a Docker Compose setup for Apache Flink 1.16.3 with `jobmanager` and `taskmanager` services:

#### 1. Create the base directory structure

Create a working directory to hold all the necessary files, for example:

```bash
mkdir flink-docker && cd flink-docker
```

Place the following files in this directory:

- `docker-compose.yml`
- `flink-conf.yaml`
- custom scripts (e.g. `start.sh`)

---

#### 2. Write `docker-compose.yml`

Below is a standard `docker-compose.yml` example for Apache Flink 1.16.3.

```yaml
version: '3'
services:
  jobmanager:
    image: flink:1.16.3-scala_2.12-java8
    container_name: flink_jobmanager
    ports:
      - "8081:8081"
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
    command: jobmanager
    volumes:
      - ./config/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml
      - ./lib/:/opt/flink/lib/
      - ./scripts/start.sh:/start.sh

  taskmanager:
    image: flink:1.16.3-scala_2.12-java8
    depends_on:
      - jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
    command: taskmanager
    deploy:
      replicas: 2
    volumes:
      - ./config/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml
      - ./lib/:/opt/flink/lib/

volumes:
  data:
```

Notes on this configuration:

- **JobManager**: starts from the official image and exposes the Web UI port (8081 by default).
- **TaskManager**: also based on the official image; the number of instances is set via `replicas`. Note that `deploy.replicas` only takes effect under Docker Swarm (`docker stack deploy`); with plain `docker compose up`, use `--scale taskmanager=2` instead, which also requires that no fixed `container_name` is set.
- **Shared volumes**: maps the local `flink-conf.yaml` and custom libraries (`./lib`) to `/opt/flink/conf` and `/opt/flink/lib` inside the containers.

---

#### 3. Configure `flink-conf.yaml`

Flink's core configuration lives in `flink-conf.yaml`. Some essential settings:

```yaml
jobmanager.rpc.address: jobmanager
jobmanager.memory.process.size: 1600m
taskmanager.numberOfTaskSlots: 4
taskmanager.memory.process.size: 1g
parallelism.default: 4

restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10s

high-availability: zookeeper
high-availability.zookeeper.quorum: zookeeper:2181
high-availability.storageDir: hdfs:///recovery/

state.backend: filesystem
state.checkpoints.dir: hdfs:///checkpoints/
state.savepoints.dir: hdfs:///savepoints/
```

This configuration:

- sets the JobManager address to `jobmanager`;
- enables high-availability mode backed by ZooKeeper;
- selects the state backend and the storage locations for checkpoints and savepoints.

Note: if you are not using HDFS or ZooKeeper, adjust or remove the corresponding sections.

---

#### 4. Copy additional resources into the container

Copy the required JAR files and other dependencies into place, for example:

```bash
docker cp flink-docker-1.0.jar jobmanager:/flink-docker-1.0.jar
docker cp lib/ jobmanager:/flink-docker-lib/
docker cp scripts/start.sh jobmanager:/start.sh
```

This step ensures that all external dependencies can be loaded.

---

#### 5. Test the Kafka integration (optional)

For scenarios involving Kafka, you can enter the Kafka container to debug:

```bash
docker exec -it kafka /bin/bash
cd /opt/bitnami/kafka/bin
```

Note that Kafka must already be running as a service on the same network before executing this.

---

### Summary

The approach above provides a standardized way to build a distributed Flink cluster with HA support. It covers basic component deployment as well as scalability and fault-tolerance requirements.