flume-ng Summary / Caveats

This post collects some common problems encountered when using Flume-ng for log collection, and their fixes: data loss when sending with the avro client, out-of-memory errors caused by the heap setting in the launch script, and errors caused by an undersized memory channel on the server side.

1. On the client side, flume-ng's avro client can lose data in transit, for example when sending a file with:

$bin/flume-ng avro-client -H localhost -p 41414 -F /usr/logs/log.10
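The `avro-client` mode is a one-shot sending tool with no channel in between, so a failure mid-transfer drops the remaining events. A more robust client-side setup is to run a full agent with a durable file channel feeding an avro sink. A minimal sketch (the agent name `a1`, directories, and host/port below are illustrative; the port matches the command above):

```properties
# Client-side agent sketch: spooldir source -> file channel -> avro sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /usr/logs
a1.sources.r1.channels = c1

# File channel persists events to disk, surviving crashes and restarts
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /var/flume/checkpoint
a1.channels.c1.dataDirs = /var/flume/data

a1.sinks.k1.type = avro
a1.sinks.k1.hostname = localhost
a1.sinks.k1.port = 41414
a1.sinks.k1.channel = c1
```

Unlike the one-shot client, the avro sink retries failed batches, and unconsumed events stay in the file channel.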


2. The JAVA_OPTS setting in the flume-ng launch script under /bin should be increased, otherwise the agent throws an out-of-memory error. The default is only 20 MB:

JAVA_OPTS="-Xmx20m"
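Rather than editing the launch script itself, the heap can also be raised in conf/flume-env.sh, which the flume-ng script sources at startup (the 512m/256m sizes below are illustrative; tune them to your event volume):

```shell
# conf/flume-env.sh -- sourced by bin/flume-ng at startup; sizes are illustrative
export JAVA_OPTS="-Xmx512m -Xms256m"
```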

 

3. The capacity and transactionCapacity of the server-side memory channel must be set larger than the client's, otherwise the server reports errors such as:

13 June 2013 17:51:57,546 ERROR [pool-7-thread-1] (org.apache.flume.source.AvroSource.appendBatch:261)  - Avro source r1: Unable to process event batch. Exception follows.
org.apache.flume.ChannelException: Unable to put batch on required channel: org.apache.flume.channel.MemoryChannel{name: ..}
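Concretely, if the client sends batches of, say, 1000 events, the server-side channel must accept at least that many events per transaction, or every appendBatch fails as above. A sketch of a suitably sized server-side memory channel (agent/channel names and sizes are illustrative):

```properties
# Server-side agent sketch.
# capacity: total events the channel can buffer.
a1.channels.c1.type = memory
a1.channels.c1.capacity = 100000
# transactionCapacity: max events per put/take transaction;
# must be at least as large as the client's batch size.
a1.channels.c1.transactionCapacity = 10000
```

Comments are kept on their own lines because Flume reads this as a Java properties file, where a trailing `#` would become part of the value.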



 

A Dockerfile that packages Flume 1.11.0 with JDK 8 on AlmaLinux 9:

# Base image: AlmaLinux 9 (non-minimal variant, uses the dnf package manager)
FROM almalinux:9

# 1. Fix the environment variables (make sure PATH includes the Flume and Java bin dirs)
ENV JAVA_HOME=/opt/jdk8 \
    FLUME_HOME=/opt/flume \
    PATH="${PATH}:${JAVA_HOME}/bin:${FLUME_HOME}/bin"
ENV LANG=en_US.UTF-8

# 2. Install base tools (fix the RUN line continuations: the backslash must end the line)
RUN dnf install -y tar gzip --nogpgcheck && \
    dnf clean all

# 3. Install Java (invoke java -version via its absolute path)
COPY jdk-8u202-linux-x64.tar.gz /tmp/
RUN mkdir -p $JAVA_HOME && \
    tar -xz --strip-components=1 -C $JAVA_HOME -f /tmp/jdk-8u202-linux-x64.tar.gz && \
    rm -f /tmp/jdk-8u202-linux-x64.tar.gz && \
    $JAVA_HOME/bin/java -version  # absolute path, does not rely on PATH

# 4. Fix the Flume installation step (same line-continuation rules)
COPY apache-flume-1.11.0-bin.tar.gz /tmp/
RUN echo "Installing Flume..." && \
    tar -xzvf /tmp/apache-flume-1.11.0-bin.tar.gz -C /opt && \
    mv /opt/apache-flume-1.11.0-bin /opt/flume && \
    chmod +x /opt/flume/bin/flume-ng && \
    if [ ! -f "/opt/flume/bin/flume-ng" ]; then \
        echo "ERROR: flume-ng missing!" && exit 1; \
    else \
        echo "Flume installed; flume-ng is at /opt/flume/bin/flume-ng"; \
    fi && \
    rm -f /tmp/apache-flume-1.11.0-bin.tar.gz && \
    rm -rf /opt/flume/docs /opt/flume/examples

# 5. Configure flume-env.sh (unchanged)
RUN echo "export JAVA_HOME=$JAVA_HOME" > $FLUME_HOME/conf/flume-env.sh && \
    echo "export JAVA_OPTS=\"-Xmx512m -Xms256m\"" >> $FLUME_HOME/conf/flume-env.sh && \
    chmod +x $FLUME_HOME/conf/flume-env.sh

# 6. Configure log4j (unchanged; guarantees console log output)
RUN echo "log4j.rootLogger=INFO, console" > $FLUME_HOME/conf/log4j.properties && \
    echo "log4j.appender.console=org.apache.log4j.ConsoleAppender" >> $FLUME_HOME/conf/log4j.properties && \
    echo "log4j.appender.console.target=System.out" >> $FLUME_HOME/conf/log4j.properties && \
    echo "log4j.appender.console.layout=org.apache.log4j.PatternLayout" >> $FLUME_HOME/conf/log4j.properties && \
    echo "log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] - %m%n" >> $FLUME_HOME/conf/log4j.properties && \
    echo "log4j.logger.org.apache.flume=INFO" >> $FLUME_HOME/conf/log4j.properties && \
    echo "log4j.logger.org.apache.flume.sink.LoggerSink=INFO" >> $FLUME_HOME/conf/log4j.properties

# 7. Expose the agent port (unchanged)
EXPOSE 55555

# 8. Start via a shell so the environment variables are resolved (unchanged)
CMD ["/bin/sh", "-c", "$FLUME_HOME/bin/flume-ng agent -n agent -c $FLUME_HOME/conf -f $FLUME_HOME/conf/flume.conf"]
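The image expects a flume.conf defining an agent named `agent` (the -n value in the CMD). A minimal sketch, assuming an avro source listening on the exposed port 55555 and the logger sink that the log4j config enables:

```properties
# Minimal flume.conf sketch: agent name must match the -n flag ("agent")
agent.sources = r1
agent.channels = c1
agent.sinks = k1

agent.sources.r1.type = avro
agent.sources.r1.bind = 0.0.0.0
agent.sources.r1.port = 55555
agent.sources.r1.channels = c1

agent.channels.c1.type = memory
agent.channels.c1.capacity = 10000
agent.channels.c1.transactionCapacity = 1000

# Logger sink writes received events to the console log
agent.sinks.k1.type = logger
agent.sinks.k1.channel = c1
```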
[+] Building 123.6s (15/15) FINISHED                                    docker:default
 => [internal] load build definition from dockerfile  0.0s
 => => transferring dockerfile: 1.89kB  0.0s
 => WARN: JSONArgsRecommended: JSON arguments recommended for ENTRYPOINT to prevent unintended behavior related t  0.0s
 => [internal] load metadata for docker.io/library/almalinux:9  0.0s
 => [internal] load .dockerignore  0.0s
 => => transferring context: 2B  0.0s
 => CACHED [ 1/10] FROM docker.io/library/almalinux:9  0.0s
 => [internal] load build context  0.0s
 => => transferring context: 132B  0.0s
 => [ 2/10] RUN dnf update -y && dnf install -y tar gzip procps vim && dnf clean all  112.1s
 => [ 3/10] COPY jdk-8u202-linux-x64.tar.gz /tmp/  0.6s
 => [ 4/10] RUN mkdir -p "/opt/jdk8" && tar -xzf /tmp/jdk-*.tar.gz -C "/opt/jdk8" --strip-components=1 &&  3.3s
 => [ 5/10] COPY apache-flume-1.11.0-bin.tar.gz /tmp/  0.3s
 => [ 6/10] RUN mkdir -p "/opt/flume" && tar -xzf /tmp/apache-flume-*.tar.gz -C "/opt/flume" --strip-componen  1.1s
 => [ 7/10] RUN echo "flume.root.logger=INFO,console" > "/opt/flume/conf/log4j.properties" && echo "log4j.app  0.3s
 => [ 8/10] COPY flume.conf /opt/flume/conf/  0.3s
 => [ 9/10] RUN groupadd -r flume && useradd -r -g flume -d /flume flume && chown -R flume:flume "/opt/fl  3.4s
 => [10/10] WORKDIR /opt/flume  0.1s
 => exporting to image  2.0s
 => => exporting layers  2.0s
 => => writing image sha256:9e3bfbb15c04ee4fd55f93dce462c44c36d962dc35fc86a45d98d4c8d128939d  0.0s
 => => naming to docker.io/library/flume-fixed  0.0s

1 warning found (use docker --debug to expand):
 - JSONArgsRecommended: JSON arguments recommended for ENTRYPOINT to prevent unintended behavior related to OS signals (line 47)

trcao@Trong:~/almaflume$ docker run -d --name flume-test \
    -v $(pwd)/flume.conf:/opt/flume/conf/flume.conf \
    flume-fixed
35781133c17bdf07c1a3d3f5a0f67b9f94503cf837873d96151a85a601784da1
trcao@Trong:~/almaflume$ docker logs -f flume-test
/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/bin/sh: line 1: [bin/flume-ng,: No such file or directory
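The `[bin/flume-ng,: No such file or directory` message (together with the JSONArgsRecommended build warning) suggests the ENTRYPOINT was written in shell form but still contains JSON-array syntax, so /bin/sh tries to execute the literal token `[bin/flume-ng,`. A possible fix (a sketch; paths assume FLUME_HOME=/opt/flume as in the Dockerfile) is the exec form, where the tokens are passed to flume-ng unparsed:

```dockerfile
# Exec (JSON-array) form with absolute paths: no shell parsing involved,
# and the agent receives OS signals directly (clean container shutdown)
ENTRYPOINT ["/opt/flume/bin/flume-ng", "agent", "-n", "agent", "-c", "/opt/flume/conf", "-f", "/opt/flume/conf/flume.conf"]
```

Note that environment variables like $FLUME_HOME are not expanded in the exec form, which is why absolute paths are spelled out here.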
A startup trace showing the default -Xmx20m being passed to the JVM (note also the duplicate SLF4J bindings when Hadoop libraries are on the classpath):

[root@hadoop1 flume-1.9.0]# bin/flume-ng agent -n a3 -c conf/ -f dir-flume-hdfs.conf
Info: Sourcing environment configuration script /export/servers/flume-1.9.0/conf/flume-env.sh
Info: Including Hadoop libraries found via (/export/servers/hadoop-3.3.0/bin/hadoop) for HDFS access
Info: Including Hive libraries found via () for Hive access
+ exec /export/servers/jdk1.8.0_381/bin/java -Xmx20m -cp '/export/servers/flume-1.9.0/conf:/export/servers/flume-1.9.0/lib/*:/export/servers/hadoop-3.3.0/etc/hadoop:/export/servers/hadoop-3.3.0/share/hadoop/common/lib/*:/export/servers/hadoop-3.3.0/share/hadoop/common/*:/export/servers/hadoop-3.3.0/share/hadoop/hdfs:/export/servers/hadoop-3.3.0/share/hadoop/hdfs/lib/*:/export/servers/hadoop-3.3.0/share/hadoop/hdfs/*:/export/servers/hadoop-3.3.0/share/hadoop/mapreduce/*:/export/servers/hadoop-3.3.0/share/hadoop/yarn:/export/servers/hadoop-3.3.0/share/hadoop/yarn/lib/*:/export/servers/hadoop-3.3.0/share/hadoop/yarn/*:/lib/*' -Djava.library.path=:/export/servers/hadoop-3.3.0/lib/native org.apache.flume.node.Application -n a3 -f dir-flume-hdfs.conf
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/export/servers/flume-1.9.0/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/export/servers/hadoop-3.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]