Chapter 5: Integrating Flume with Kafka

This chapter shows how to use Flume to collect data from a log file and send it to Kafka, and how to implement data separation in Flume with a custom Interceptor, routing events to different Kafka topics based on the log content.


5.1 Simple Implementation
1) Configure Flume

# define

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# source

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/module/data/flume.log

# sink

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.sinks.k1.kafka.topic = first
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1

# channel

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# bind

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
2) Start a Kafka console consumer
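For example, from the Kafka installation directory (adjust the path to your install), using the console consumer that ships with Kafka:
$ bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic first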
3) From the Flume root directory, start Flume
$ bin/flume-ng agent -c conf/ -n a1 -f jobs/flume-kafka.conf
4) Append data to /opt/module/data/flume.log and check what the Kafka consumer receives
$ echo hello >> /opt/module/data/flume.log
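If everything is wired up correctly, the console consumer should print the appended line (hello).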
5.2 Data Separation
0) Requirement: route the data collected by Flume into different topics by type.
Log data containing "atguigu" goes to the Kafka topic first,
log data containing "shangguigu" goes to the Kafka topic second,
and all other data goes to the Kafka topic third.
This works because KafkaSink routes by the topic event header: when that header is present it overrides the topic configured on the sink, so the interceptor only needs to tag the first two cases and let everything else fall through to the sink's default topic, third.
1) Write a custom Flume Interceptor
package com.atguigu.kafka.flumeInterceptor;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.util.List;
import java.util.Map;

public class FlumeKafkaInterceptor implements Interceptor {

    @Override
    public void initialize() {
    }

    /**
     * Route events by body content:
     * events containing "atguigu" go to the first topic,
     * events containing "shangguigu" go to the second topic,
     * everything else gets no "topic" header and falls through
     * to the sink's default topic (third).
     * @param event the event to inspect
     * @return the same event, possibly with a "topic" header added
     */
    @Override
    public Event intercept(Event event) {
        // 1. Get the event headers
        Map<String, String> headers = event.getHeaders();
        // 2. Get the event body as a string
        String body = new String(event.getBody());
        if (body.contains("atguigu")) {
            headers.put("topic", "first");
        } else if (body.contains("shangguigu")) {
            headers.put("topic", "second");
        }
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        for (Event event : events) {
            intercept(event);
        }
        return events;
    }

    @Override
    public void close() {
    }

    // Flume instantiates interceptors through a Builder, so the agent
    // configuration must reference this inner class, not the interceptor itself.
    public static class MyBuilder implements Interceptor.Builder {

        @Override
        public Interceptor build() {
            return new FlumeKafkaInterceptor();
        }

        @Override
        public void configure(Context context) {
        }
    }
}
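Note that compiling this class only requires the Flume core library (the org.apache.flume:flume-ng-core artifact) on the classpath.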
2) Package the interceptor as a JAR and copy it into the lib directory of the Flume installation
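A minimal sketch of this step, assuming the interceptor is built with Maven and Flume is installed at /opt/module/flume (the JAR name below is illustrative; adjust paths to your layout):
$ mvn clean package
$ cp target/flume-kafka-interceptor-1.0.jar /opt/module/flume/lib/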
3) Configure Flume

# Name the components on this agent

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source

a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 6666

# Describe the sink

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = third
a1.sinks.k1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1

# Interceptor
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.atguigu.kafka.flumeInterceptor.FlumeKafkaInterceptor$MyBuilder

# Use a channel which buffers events in memory

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
4) Start the Kafka consumers
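For example, in three separate terminals from the Kafka installation directory:
$ bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic first
$ bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic second
$ bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic third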
5) From the Flume root directory, start Flume
$ bin/flume-ng agent -c conf/ -n a1 -f jobs/flume-kafka.conf
6) Write data to port 6666 and check what each Kafka consumer receives
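For example, with netcat (assuming it is available and the agent runs on the local machine), type a line per message:
$ nc localhost 6666
atguigu
shangguigu
hello
The line containing atguigu should arrive on the first consumer, the one containing shangguigu on the second, and hello on the third.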
