expected '<document start>', but found Scalar in 'reader', line 51, column 1

This post works through a configuration error hit when starting Apache Storm. The root cause is bad formatting in the storm.yaml file, namely missing spaces after "-" and ":". A correct configuration example is included so readers can avoid the same mistake.


 

Starting Storm failed with the following exception:

Exception in thread "main" java.lang.ExceptionInInitializerError

at org.apache.storm.config$read_storm_config.invoke(config.clj:78)

at org.apache.storm.config$fn__972.invoke(config.clj:100)

at org.apache.storm.config__init.load(Unknown Source)

at org.apache.storm.config__init.<clinit>(Unknown Source)

at java.lang.Class.forName0(Native Method)

at java.lang.Class.forName(Class.java:348)

at clojure.lang.RT.classForName(RT.java:2154)

at clojure.lang.RT.classForName(RT.java:2163)

at clojure.lang.RT.loadClassForName(RT.java:2182)

at clojure.lang.RT.load(RT.java:436)

at clojure.lang.RT.load(RT.java:412)

at clojure.core$load$fn__5448.invoke(core.clj:5866)

at clojure.core$load.doInvoke(core.clj:5865)

at clojure.lang.RestFn.invoke(RestFn.java:408)

at clojure.core$load_one.invoke(core.clj:5671)

at clojure.core$load_lib$fn__5397.invoke(core.clj:5711)

at clojure.core$load_lib.doInvoke(core.clj:5710)

at clojure.lang.RestFn.applyTo(RestFn.java:142)

at clojure.core$apply.invoke(core.clj:632)

in thread "main" java.lang.ExceptionInInitializerErrorat clojure.core$load_libs.doInvoke(core.clj:5753)

at clojure.lang.RestFn.applyTo(RestFn.java:137)

at clojure.core$apply.invoke(core.clj:634)

at clojure.core$use.doInvoke(core.clj:5843)

at clojure.lang.RestFn.invoke(RestFn.java:408)

at org.apache.storm.command.config_value$loading__5340__auto____11328.invoke(config_value.clj:16)

at org.apache.storm.command.config_value__init.load(Unknown Source)

at org.apache.storm.command.config_value__init.<clinit>(Unknown Source)

at java.lang.Class.forName0(Native Method)

at java.lang.Class.forName(Class.java:348)

at clojure.lang.RT.classForName(RT.java:2154)

at clojure.lang.RT.classForName(RT.java:2163)

at clojure.lang.RT.loadClassForName(RT.java:2182)

at clojure.lang.RT.load(RT.java:436)

at clojure.lang.RT.load(RT.java:412)

at clojure.core$load$fn__5448.invoke(core.clj:5866)


at clojure.core$load.doInvoke(core.clj:5865)

at clojure.lang.RestFn.invoke(RestFn.java:408)

at clojure.lang.Var.invoke(Var.java:379)

at org.apache.storm.command.config_value.<clinit>(Unknown Source)

Caused by: expected '<document start>', but found Scalar
 in 'reader', line 51, column 1:
     - "localhost"
    ^
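
The message format comes from SnakeYAML, the library Storm uses to parse storm.yaml, so "line 51, column 1" points into storm.yaml itself, not into Storm's code: the parser reached the - "localhost" entry and could no longer read the file as a key/value mapping. As a rough illustration of how a missing space derails the parser, here is a minimal, self-contained sketch (the fragment and class name are made up for the demo; the exact error text depends on the surrounding lines):

import org.yaml.snakeyaml.Yaml;

public class MissingSpaceDemo {
    public static void main(String[] args) {
        // Hypothetical fragment: no space after ':' on the first line, so
        // SnakeYAML no longer sees a key/value pair there and the load fails.
        String bad = "nimbus.host:\"localhost\"\n"
                   + "storm.zookeeper.port: 2181\n";
        try {
            new Yaml().load(bad);
            System.out.println("parsed OK (not expected here)");
        } catch (Exception e) {
            // Prints a parser error pointing at a line/column in the fragment,
            // the same kind of message as the "Caused by" above.
            System.out.println(e.getMessage());
        }
    }
}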

 

The problem is in storm.yaml: a space must always be left after every "-" and ":". With the required spaces in place, the configuration looks like this:

 

storm.zookeeper.servers:
 - "localhost"
storm.zookeeper.port: 2181
nimbus.host: "localhost"
supervisor.slots.ports:
 - 6700
 - 6701
 - 6702
 - 6703
storm.local.dir: "/home/injavawetrust/program/storm/workdir"
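
After fixing the file, it can be sanity-checked without restarting Storm by loading it with SnakeYAML directly. A minimal sketch, assuming SnakeYAML is on the classpath; both the class name and the default path are placeholders, so adjust them to your installation:

import java.io.FileInputStream;
import java.io.InputStream;
import org.yaml.snakeyaml.Yaml;

public class ValidateStormYaml {
    public static void main(String[] args) throws Exception {
        // Default path is an assumption; pass the real location as an argument.
        String path = args.length > 0 ? args[0] : "conf/storm.yaml";
        try (InputStream in = new FileInputStream(path)) {
            // load() throws on malformed YAML and reports line/column,
            // the same diagnostics shown in the stack trace above.
            Object conf = new Yaml().load(in);
            System.out.println("storm.yaml parsed OK: " + conf);
        }
    }
}

If the file parses, the configuration mapping is printed; if not, the line/column diagnostics appear immediately, which is much faster than restarting Storm to find out.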
