Kafka----API Operations and Flume + Kafka Integration

Workflow

Start the ZooKeeper server -> start the broker -> create a topic -> start a producer that sends messages to the topic -> start a consumer that reads messages from the topic

Single Node, Single Broker Cluster

Start ZooKeeper:
  zookeeper-server-start.sh /home/wyc/apps/kafka/config/zookeeper.properties
Start the broker:
  kafka-server-start.sh /home/wyc/apps/kafka/config/server.properties
Create the topic:
  kafka-topics.sh --create --topic APIPro --zookeeper localhost:2181 --partitions 1 --replication-factor 1
Start the producer:
  kafka-console-producer.sh --topic APIPro --broker-list localhost:9092
Start the consumer:
  kafka-console-consumer.sh --zookeeper localhost:2181 --topic APIPro --from-beginning

Test that the producer and consumer communicate correctly: a line typed into the producer terminal (for example, hello kafka) should appear in the consumer terminal shortly afterwards.

1. Producer API example
Add the kafka-clients dependency matching the Kafka version you use beforehand (0.10.0.1 is used here).
The dependency coordinates can be found at: https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.0.1</version>
</dependency>

Code:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class KafkaPro {
    public static void main(String[] args){
        Properties props = new Properties();
        props.put("bootstrap.servers", "master:9092"); 
        //需要设置域名IP映射,或者写IP地址
        props.put("acks", "all");  
        //“all”设置将导致阻塞记录的完整提交,这是最慢但最持久的设置。
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String,String>(props);
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<String, String>("APIPro", Integer.toString(i), Integer.toString(i)));
            //producer.send(new ProducerRecord<String, String>("APIPro","fenqu","shuju"));
        }
        producer.close();
    }
}
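producer.send() is asynchronous, so the loop above gives no feedback on whether each record actually reached the broker. If confirmation is wanted, a callback (or the Future returned by send()) can be used. Below is a minimal sketch under the same setup; the class name KafkaProWithCallback is only illustrative:

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.Properties;

public class KafkaProWithCallback {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "master:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        for (int i = 0; i < 10; i++) {
            // the callback fires once the broker has acknowledged (or rejected) the record
            producer.send(new ProducerRecord<String, String>("APIPro", Integer.toString(i), Integer.toString(i)),
                    new Callback() {
                        public void onCompletion(RecordMetadata metadata, Exception exception) {
                            if (exception != null) {
                                exception.printStackTrace();
                            } else {
                                System.out.printf("acked: partition = %d, offset = %d%n",
                                        metadata.partition(), metadata.offset());
                            }
                        }
                    });
        }
        producer.close();
    }
}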


2. Producing data to a specific partition with the Kafka producer
Example: a topic with two partitions; write 0-9 to the first partition and 10-29 to the second.

Create the topic:

kafka-topics.sh --create --topic APIPro01 --zookeeper localhost:2181 --partitions 2 --replication-factor 1
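Before producing, the partition count can be confirmed by describing the topic (the output lists each partition and its leader):

kafka-topics.sh --describe --topic APIPro01 --zookeeper localhost:2181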

Code:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class KafkaPro01 {
    public static void main(String[] args){
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.172.159:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String,String>(props);
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<String, String>("APIPro01",0,Integer.toString(i), Integer.toString(i)));
            //0代表指定的分区,分区默认从0开始
        }
        for (int i = 10; i < 30; i++) {
            producer.send(new ProducerRecord<String, String>("APIPro01", 1,Integer.toString(i), Integer.toString(i)));
        }
        producer.close();
    }
}
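To verify where each record ended up, the topic can be read back and record.partition() printed for every message. A minimal consumer sketch for this check (the class name KafkaCon01 and the group id are illustrative; it follows the same pattern as the consumer in section 3 below):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.util.Arrays;
import java.util.Properties;

public class KafkaCon01 {
    public static void main(String[] args) {
        Properties pro = new Properties();
        pro.setProperty("bootstrap.servers", "192.168.172.159:9092");
        pro.setProperty("group.id", "check-partitions");
        pro.setProperty("enable.auto.commit", "true");
        pro.setProperty("auto.commit.interval.ms", "1000");
        pro.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        pro.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // read from the beginning so records produced before this consumer started are visible too
        pro.setProperty("auto.offset.reset", "earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(pro);
        consumer.subscribe(Arrays.asList("APIPro01"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                // record.partition() shows which partition the producer wrote to
                System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n",
                        record.partition(), record.offset(), record.key(), record.value());
            }
        }
    }
}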

3. Consumer API example

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.util.Arrays;
import java.util.Properties;

public class KafkaCon {
    public static void main(String[] args){
        Properties pro = new Properties();
        pro.setProperty("bootstrap.servers","master:9092");
        pro.setProperty("group.id", "1");
        pro.setProperty("enable.auto.commit","true");
        pro.setProperty("anto.commit.interval.ms","1000");
        pro.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        pro.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String,String> consumer = new KafkaConsumer<String,String>(pro);
        consumer.subscribe(Arrays.asList("APIPro"));
        while(true){
            ConsumerRecords<String,String> records = consumer.poll(100);
            for (ConsumerRecord<String,String> record : records){
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
            }
        }
    }
}
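If automatic offset commits are not wanted, enable.auto.commit can be set to false and the offsets committed explicitly once the records from each poll have been processed. A minimal variation of the consumer above (the class name and group id are illustrative):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.util.Arrays;
import java.util.Properties;

public class KafkaConManualCommit {
    public static void main(String[] args) {
        Properties pro = new Properties();
        pro.setProperty("bootstrap.servers", "master:9092");
        pro.setProperty("group.id", "manual-commit-demo");
        // disable auto commit; offsets are committed only after processing succeeds
        pro.setProperty("enable.auto.commit", "false");
        pro.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        pro.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(pro);
        consumer.subscribe(Arrays.asList("APIPro"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
            // synchronous commit of the offsets returned by the last poll()
            consumer.commitSync();
        }
    }
}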

Flume and Kafka Integration

Flume
Reference: http://flume.apache.org/releases/content/1.6.0/FlumeUserGuide.html#kafka-sink
Note: use the user guide that matches your Flume version (1.6.0 here); the version number can be edited directly in the URL to reach it.

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/wyc/tmp/flume-kafka

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = flume-kafka-topic
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20

# Use a channel which buffers events in memory
a1.channels.c1.type = memory

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
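The memory channel above relies on Flume's default sizing. If the tailed file can produce bursts of events, the channel capacity can also be set explicitly, for example (the values below are illustrative):

a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100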

Kafka

Start ZooKeeper:
  zookeeper-server-start.sh /home/wyc/apps/kafka/config/zookeeper.properties
Start the broker:
  kafka-server-start.sh /home/wyc/apps/kafka/config/server.properties
Create the topic:
  kafka-topics.sh --create --topic flume-kafka-topic --zookeeper localhost:2181 --partitions 1 --replication-factor 1
Start the consumer:
  kafka-console-consumer.sh --zookeeper localhost:2181 --topic flume-kafka-topic --from-beginning

Test

Start Flume (the Kafka services above must already be running):
flume-ng agent --name a1 --conf /home/wyc/apps/flume/conf/ --conf-file /home/wyc/apps/flume/conf/flume-kafka.conf -Dflume.root.logger=INFO,console

Create the test file (under /home/wyc/tmp, the path the Flume source tails):
touch flume-kafka

Write test data into the file and check what the Kafka consumer receives:
echo aaa >> flume-kafka

Flume side and Kafka consumer side: the consumer should print the test line written above (screenshots omitted).
