Kafka Part 4 -- Quick Start (How to Use Kafka)

This article walks you through bringing up and operating a Kafka cluster: downloading the code, starting a Zookeeper instance, creating a topic, sending and consuming messages, and configuring a multi-broker cluster for fault tolerance.

Quick Start

Step 1: Download the code

Download the 0.8 release.
> tar xzf kafka-<VERSION>.tgz
> cd kafka-<VERSION>
> ./sbt update
> ./sbt package
> ./sbt assembly-package-dependency
This tutorial assumes you are starting on a fresh zookeeper instance with no pre-existing data. If you want to migrate from an existing 0.7 installation you will need to follow the migration instructions.

Step 2: Start the server

Kafka uses zookeeper so you need to first start a zookeeper server if you don't already have one. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node zookeeper instance.

> bin/zookeeper-server-start.sh config/zookeeper.properties
[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
...
Now start the Kafka server:
> bin/kafka-server-start.sh config/server.properties
[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
...

Step 3: Create a topic

Let's create a topic named "test" with a single partition and only one replica:
> bin/kafka-create-topic.sh --zookeeper localhost:2181 --replica 1 --partition 1 --topic test
We can now see that topic if we run the list topic command:
> bin/kafka-list-topic.sh --zookeeper localhost:2181
Alternatively, you can also configure your brokers to auto-create topics when a non-existent topic is published to.
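For example, assuming the 0.8 property name auto.create.topics.enable (check your server.properties for the exact key in your release), the broker-side setting would look like:

config/server.properties:
    auto.create.topics.enable=true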

Step 4: Send some messages

Kafka comes with a command line client that will take input from a file or standard in and send it out as messages to the Kafka cluster. By default each line will be sent as a separate message.

Run the producer and then type a few messages to send to the server.

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test 
This is a message
This is another message
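
Under the hood, the console producer is a thin wrapper around the producer API. As a rough sketch of the programmatic equivalent against the 0.8 Java producer API (class and property names as in that release; the request.required.acks setting here is an assumption for illustration, not necessarily what the console tool uses):

```java
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class QuickStartProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker(s) to bootstrap from; matches the --broker-list flag above.
        props.put("metadata.broker.list", "localhost:9092");
        // Encode message values as UTF-8 strings.
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Wait for the leader to acknowledge each write (assumed setting).
        props.put("request.required.acks", "1");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

        // Each send corresponds to one line typed into the console producer.
        producer.send(new KeyedMessage<String, String>("test", "This is a message"));
        producer.send(new KeyedMessage<String, String>("test", "This is another message"));
        producer.close();
    }
}
```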

Step 5: Start a consumer

Kafka also has a command line consumer that will dump out messages to standard out.
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This is a message
This is another message

If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal.
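
Likewise, the console consumer wraps the consumer API. A minimal sketch of the programmatic equivalent using the 0.8 high-level consumer (again, class and property names as in that release; the group.id value is made up for this example):

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class QuickStartConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The 0.8 high-level consumer coordinates through zookeeper,
        // just like the console consumer's --zookeeper flag.
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "quickstart-group"); // hypothetical group name
        // Start from the earliest offset, like --from-beginning.
        props.put("auto.offset.reset", "smallest");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Ask for one stream on the "test" topic.
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("test", 1));

        // Block on the stream and print each message value, like the console consumer.
        for (MessageAndMetadata<byte[], byte[]> msg : streams.get("test").get(0)) {
            System.out.println(new String(msg.message()));
        }
    }
}
```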

All the command line tools have additional options; running the command with no arguments will display usage information documenting them in more detail.

Step 6: Setting up a multi-broker cluster

So far we have been running against a single broker, but that's no fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. But just to get a feel for it, let's expand our cluster to three nodes (still all on our local machine).

First we make a config file for each of the brokers:

> cp config/server.properties config/server-1.properties 
> cp config/server.properties config/server-2.properties
Now edit these new files and set the following properties:
 
config/server-1.properties:
    broker.id=1
    port=9093
    log.dir=/tmp/kafka-logs-1
 
config/server-2.properties:
    broker.id=2
    port=9094
    log.dir=/tmp/kafka-logs-2
The broker.id property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine, and we want to keep the brokers from trying to register on the same port or overwrite each other's data.

We already have Zookeeper and our single node started, so we just need to start the two new nodes. However, this time we also have to override the JMX port used by Java to avoid clashes with the running node:

> JMX_PORT=9997 bin/kafka-server-start.sh config/server-1.properties &
...
> JMX_PORT=9998 bin/kafka-server-start.sh config/server-2.properties &
...
Now create a new topic with a replication factor of three:
> bin/kafka-create-topic.sh --zookeeper localhost:2181 --replica 3 --partition 1 --topic my-replicated-topic
Okay, but now that we have a cluster, how do we know which broker is doing what? To see that, run the "list topics" command:
> bin/kafka-list-topic.sh --zookeeper localhost:2181
topic: my-replicated-topic  partition: 0  leader: 1  replicas: 1,2,0  isr: 1,2,0
topic: test                 partition: 0  leader: 0  replicas: 0      isr: 0
Here is an explanation of the output:
  • "leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
  • "replicas" is the list of nodes that are supposed to serve the log for this partition, regardless of whether they are currently alive.
  • "isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught up to the leader.
Note that both topics we created have only a single partition (partition 0). The original topic has no replicas and so is only present on the leader (node 0), while the replicated topic is present on all three nodes, with node 1 currently acting as leader and all replicas in sync.

As before, let's publish a few messages:

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
...
my test message 1
my test message 2
^C 
Now consume these messages:
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
Now let's test out fault-tolerance. Kill the broker acting as leader for this topic's only partition:
> pkill -9 -f server-1.properties
Leadership should switch to one of the slaves:
> bin/kafka-list-topic.sh --zookeeper localhost:2181
...
topic: my-replicated-topic	partition: 0	leader: 2	replicas: 1,2,0	isr: 2
topic: test	partition: 0	leader: 0	replicas: 0	isr: 0
And the messages should still be available for consumption even though the leader that took the writes originally is down:
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C

Reposted from: http://kafka.apache.org/08/quickstart.html

### 头歌 (Educoder) Kafka Tutorial, Level 2: Producer Simple Mode

A Kafka producer is the client program that publishes messages to a Kafka topic. In level 2 of the 头歌 platform's Kafka introduction, "Producer simple mode" covers how to create a basic producer instance and send messages to a given topic. The core content and implementation are as follows.

#### 1. Basic producer configuration

The producer connects to the Kafka cluster through configuration parameters. The key settings include `bootstrap.servers` and `acks`[^2].

- **`bootstrap.servers`**: the address of the Kafka cluster, e.g. `hadoop102:9092`.
- **`acks`**: controls how the producer waits for server acknowledgement before a send is considered successful. For example, `acks=1` means a send succeeds once the partition leader has written the message[^2].

#### 2. Example code for sending messages

Below is a simple Kafka producer that sends messages to a given topic:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class SimpleProducer {
    public static void main(String[] args) {
        // Producer configuration
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop102:9092"); // Kafka cluster address
        props.put("acks", "1"); // acknowledgement mode
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Create the producer instance
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Build and send messages
        for (int i = 0; i < 10; i++) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("first", Integer.toString(i), "Message " + i);
            producer.send(record);
        }

        // Close the producer
        producer.close();
    }
}
```

#### 3. Running the producer

When running this code on the 头歌 platform, make sure that:

- the Kafka cluster is up and reachable at `hadoop102:9092`;
- the target topic `first` exists. If it does not, create it with:

```bash
sbin/kafka-topics.sh --create --topic first --bootstrap-server hadoop102:9092 --partitions 1 --replication-factor 1
```

#### 4. Verifying the messages

To verify that the producer sent its messages successfully, read the topic with the console consumer:

```bash
sbin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic first --from-beginning
```

This command reads the topic `first` from the beginning[^3].
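
Note that `producer.send(record)` as used above is fire-and-forget from the caller's point of view. If you want per-message confirmation, the modern Java client accepts a callback on send; here is a minimal sketch under the same assumed `hadoop102:9092` setup:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class CallbackProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop102:9092"); // same assumed address as above
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        ProducerRecord<String, String> record = new ProducerRecord<>("first", "key", "hello");

        // The callback fires once the broker acknowledges (or rejects) the write.
        producer.send(record, new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception != null) {
                    exception.printStackTrace(); // the send failed
                } else {
                    System.out.printf("Sent to %s-%d at offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            }
        });

        producer.close(); // flushes and waits for in-flight sends
    }
}
```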