Flume Zero-Point Drift: Solution Approach

Scenarios in which Flume zero-point drift occurs:

The essence of zero-point drift is an inconsistent time reference: the log's "actual generation time" deviates from the "decision time" Flume uses when processing/storing the event, so data near midnight ends up assigned to the wrong day's partition.
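
For concreteness, here is a minimal, self-contained Java sketch (class name and epoch values are purely illustrative) of how an event generated one second before midnight lands in the next day's partition when the partitioning clock is processing time rather than event time:

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ZeroPointDriftDemo {
    public static void main(String[] args) {
        SimpleDateFormat day = new SimpleDateFormat("yyyy-MM-dd");
        day.setTimeZone(TimeZone.getTimeZone("UTC"));

        long eventTime   = 1704067199000L; // 2023-12-31 23:59:59 UTC: when the log was generated
        long processTime = 1704067201000L; // 2024-01-01 00:00:01 UTC: when Flume handles/stores it

        // Partitioning by processing time drops a Dec 31 log into the Jan 1 directory
        System.out.println("event-time partition:   " + day.format(new Date(eventTime)));   // 2023-12-31
        System.out.println("process-time partition: " + day.format(new Date(processTime))); // 2024-01-01
    }
}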

Solution:

The fix combines a custom interceptor with a matching Flume configuration file.

The interceptor and the Flume configuration must correspond: package the interceptor class into a jar, place it in Flume's lib directory, and reference exactly that fully qualified class name in the configuration.

Custom interceptor

import com.alibaba.fastjson.JSONObject;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;

public class TimestampInterceptor implements Interceptor {

    @Override
    public void initialize() {
    }

    @Override
    public Event intercept(Event event) {
        String log = new String(event.getBody(), StandardCharsets.UTF_8);
        Map<String, String> headers = event.getHeaders();
        try {
            // "ts" is assumed to be an epoch timestamp in milliseconds, which is
            // the unit the HDFS sink expects in the "timestamp" header.
            JSONObject jsonObject = JSONObject.parseObject(log);
            String ts = jsonObject.getString("ts");
            if (ts != null) {
                // Copy the event time into the header the HDFS sink uses to
                // resolve %Y-%m-%d, so partitioning follows event time rather
                // than the time the sink happens to write the data.
                headers.put("timestamp", ts);
            }
        } catch (Exception e) {
            // Malformed log line: pass the event through unchanged rather
            // than failing the whole batch.
        }
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> list) {
        for (Event event : list) {
            intercept(event);
        }
        return list;
    }

    @Override
    public void close() {
    }

    public static class Builder implements Interceptor.Builder {

        @Override
        public Interceptor build() {
            return new TimestampInterceptor();
        }

        @Override
        public void configure(Context context) {
        }
    }
}
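
To sanity-check the interceptor outside a running agent, here is a minimal sketch using Flume's EventBuilder; the sample log line and its ts value are made up for illustration:

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;

import java.nio.charset.StandardCharsets;

public class TimestampInterceptorTest {
    public static void main(String[] args) {
        // Hypothetical log line; "ts" is an epoch timestamp in milliseconds
        String log = "{\"ts\":\"1704067199000\",\"page\":\"home\"}";
        Event event = EventBuilder.withBody(log, StandardCharsets.UTF_8);

        new TimestampInterceptor().intercept(event);

        // Expected output: 1704067199000
        System.out.println(event.getHeaders().get("timestamp"));
    }
}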

Flume configuration file

a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Configure the Kafka source
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.batchSize = 5000
a1.sources.r1.batchDurationMillis = 2000
a1.sources.r1.kafka.bootstrap.servers = kafka01:9092,kafka02:9092,kafka03:9092
a1.sources.r1.kafka.topics = kafka_log

# Register the custom interceptor (the type must be the fully qualified
# name of the Builder inner class in the jar you deployed)
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.fblinux.flume.log.TimestampInterceptor$Builder

# Configure the file channel
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /opt/module/flume/checkpoint/behavior1
a1.channels.c1.dataDirs = /opt/module/flume/data/behavior1
a1.channels.c1.maxFileSize = 2146435071
a1.channels.c1.capacity = 1000000
a1.channels.c1.keep-alive = 6

# Configure the HDFS sink. hdfs.useLocalTimeStamp defaults to false, so
# %Y-%m-%d is resolved from the "timestamp" event header set by the
# interceptor, not from the machine clock at write time.
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /data/log/kafka_log/%Y-%m-%d
a1.sinks.k1.hdfs.filePrefix = log
a1.sinks.k1.hdfs.round = false
a1.sinks.k1.hdfs.rollInterval = 10
a1.sinks.k1.hdfs.rollSize = 134217728
a1.sinks.k1.hdfs.rollCount = 0

# Control the output file type
a1.sinks.k1.hdfs.fileType = CompressedStream
a1.sinks.k1.hdfs.codeC = gzip

# Wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
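
To make the sink's side of the contract explicit, the following rough mimic (plain Java, not Flume's actual BucketPath implementation) shows how %Y-%m-%d in hdfs.path is expanded from the "timestamp" header the interceptor set; the header value is illustrative:

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.TimeZone;

public class PathResolutionDemo {
    public static void main(String[] args) {
        // Header as TimestampInterceptor would have set it (epoch milliseconds)
        Map<String, String> headers = new HashMap<>();
        headers.put("timestamp", "1704067199000");

        // Rough stand-in for the sink's escape-sequence expansion of %Y-%m-%d
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        long ts = Long.parseLong(headers.get("timestamp"));
        String path = "/data/log/kafka_log/" + fmt.format(new Date(ts));

        System.out.println(path); // /data/log/kafka_log/2023-12-31
    }
}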
