Kafka Single-Node Environment Setup and Testing

This article walks through deploying Kafka together with ZooKeeper on a single node from scratch, with complete step-by-step instructions covering the key stages of downloading, configuring, and starting each service.


1. Download zookeeper-3.4.6.tar.gz

wget http://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
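
The CNNIC mirror above may no longer carry this old release; if the download fails, the same archive should still be available from the Apache release archive (assumed URL layout):

wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz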


2. Extract the archive

tar zxvf zookeeper-3.4.6.tar.gz
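
After extracting, change into the ZooKeeper directory; the conf/ and bin/ paths in the following steps are relative to it:

cd zookeeper-3.4.6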


3. Download and extract Kafka; this guide uses kafka_2.10-0.8.2.0 (commands sketched below)
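
The original steps do not list the download commands for Kafka; a minimal sketch, assuming the release is still available from the Apache release archive:

wget https://archive.apache.org/dist/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz
tar zxvf kafka_2.10-0.8.2.0.tgz
cd kafka_2.10-0.8.2.0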


4. Copy zoo_sample.cfg under zookeeper/conf to zoo.cfg

cp -rf conf/zoo_sample.cfg conf/zoo.cfg

5. Edit the configuration in zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
# create this directory manually before starting ZooKeeper
dataDir=/tmp/zookeeper123
# the port at which the clients will connect
# (change clientPort here to run ZooKeeper on a different port)
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
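The dataDir configured above must exist before ZooKeeper starts (the path /tmp/zookeeper123 is only the example used here):

mkdir -p /tmp/zookeeper123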


6. Start ZooKeeper

bin/zkServer.sh start
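
To confirm ZooKeeper started correctly, check its status; on a single node it should report standalone mode:

bin/zkServer.sh status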


7. Edit server.properties under kafka/config

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk. 
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks. 
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according 
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# ZooKeeper host IP and client port
zookeeper.connect=xx.xx.xxx.xxx:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
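
Only an excerpt of server.properties is shown above. For a single broker the properties below are also worth checking; the values are illustrative defaults for kafka_2.10-0.8.2.0 and are not taken from the original configuration (this Kafka version uses port/host.name rather than the later listeners setting):

# unique numeric id of this broker
broker.id=0

# port the broker listens on for client connections
port=9092

# directory where Kafka stores topic data
log.dirs=/tmp/kafka-logs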

8. Start Kafka

bin/kafka-server-start.sh config/server.properties &
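
Both services should now be running in the background. A quick way to verify this is the JDK's jps tool, which should list the ZooKeeper process (QuorumPeerMain) and the Kafka broker (Kafka):

jps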



Verify that the setup works

1. Create a topic named test

 bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

2. List the topics to confirm the creation succeeded (the output should include test)

bin/kafka-topics.sh --list --zookeeper localhost:2181

3. Start a console producer and send messages

 bin/kafka-console-producer.sh --broker-list xx.xx.xx.xx:9092 --topic test

4. Start a console consumer to receive the messages

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

5. If the consumer prints the messages typed into the producer, the setup is working; see the example below.
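
A minimal illustration of what success looks like (the message text is just an example):

# typed into the producer terminal (step 3)
hello kafka

# printed by the consumer terminal (step 4)
hello kafka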


