Kafka in Practice

This post walks through deploying Kafka: starting ZooKeeper and the Kafka server, creating and listing topics, and producing and consuming messages. It also covers common problems and their fixes, such as JVM memory settings and caveats around cleaning up offsets.


https://gitee.com/abcd_1101/BigData/tree/master/springboot-kafka-demo

The repository's README has the detailed steps; I won't repeat how to download the packages here. The rest is all commands plus code.

 

1. Start ZooKeeper:
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
netstat -tunlp | grep 2181
2. Start Kafka:
nohup bin/kafka-server-start.sh config/server.properties &
netstat -tunlp | grep 9092
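The two netstat checks above can also be scripted. A minimal sketch in Python (the host and port numbers are the defaults used in this post; adjust if yours differ):

```python
import socket

def port_is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Check the default ZooKeeper and Kafka ports on the local machine.
for name, port in [("zookeeper", 2181), ("kafka", 9092)]:
    status = "up" if port_is_listening("127.0.0.1", port) else "down"
    print(f"{name} ({port}): {status}")
```

This is just a connectivity probe: a successful TCP connect means something is listening on the port, not necessarily a healthy broker.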

1. Create a topic:
[root@VM_0_14_centos kafka_2.11-2.1.0]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
[2018-12-26 12:27:52,627] INFO Accepted socket connection from /127.0.0.1:57358 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2018-12-26 12:27:52,633] INFO Client attempting to establish new session at /127.0.0.1:57358 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-12-26 12:27:52,638] INFO Established session 0x10303ac9e570001 with negotiated timeout 30000 for client /127.0.0.1:57358 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-12-26 12:27:53,062] INFO Got user-level KeeperException when processing sessionid:0x10303ac9e570001 type:setData cxid:0x4 zxid:0x1f txntype:-1 reqpath:n/a Error Path:/config/topics/test Error:KeeperErrorCode = NoNode for /config/topics/test (org.apache.zookeeper.server.PrepRequestProcessor)
Created topic "test".
[2018-12-26 12:27:53,186] INFO Processed session termination for sessionid: 0x10303ac9e570001 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-12-26 12:27:53,227] INFO Closed socket connection for client /127.0.0.1:57358 which had sessionid 0x10303ac9e570001 (org.apache.zookeeper.server.NIOServerCnxn)
[2018-12-26 12:27:53,319] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(test-0) (kafka.server.ReplicaFetcherManager)
[2018-12-26 12:27:53,422] INFO [Log partition=test-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2018-12-26 12:27:53,429] INFO [Log partition=test-0, dir=/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 73 ms (kafka.log.Log)
[2018-12-26 12:27:53,431] INFO Created log for partition test-0 in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 2.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-12-26 12:27:53,432] INFO [Partition test-0 broker=0] No checkpointed highwatermark is found for partition test-0 (kafka.cluster.Partition)
[2018-12-26 12:27:53,447] INFO Replica loaded for partition test-0 with initial high watermark0 (kafka.cluster.Replica)
[2018-12-26 12:27:53,450] INFO [Partition test-0 broker=0] test-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)


2. List topics:
bin/kafka-topics.sh --list --zookeeper 119.29.56.220:2181
[root@VM_0_14_centos kafka_2.11-2.1.0]# bin/kafka-topics.sh --list --zookeeper localhost:2181
[2018-12-26 12:28:13,172] INFO Accepted socket connection from /127.0.0.1:57376 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2018-12-26 12:28:13,172] INFO Client attempting to establish new session at /127.0.0.1:57376 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-12-26 12:28:13,180] INFO Established session 0x10303ac9e570002 with negotiated timeout 30000 for client /127.0.0.1:57376 (org.apache.zookeeper.server.ZooKeeperServer)
test
[2018-12-26 12:28:13,233] INFO Processed session termination for sessionid: 0x10303ac9e570002 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-12-26 12:28:13,249] INFO Closed socket connection for client /127.0.0.1:57376 which had sessionid 0x10303ac9e570002 (org.apache.zookeeper.server.NIOServerCnxn)
The extra log lines above appeared because ZooKeeper had hit an error and needed a restart; normally the command only prints `test`.

3. Produce messages:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>This is a message
>This is another message
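For keyed messages, the producer picks a partition by hashing the key; the test topic above has only one partition, so everything lands in partition 0. The selection logic can be sketched as follows. This is a hand-ported approximation of Kafka's murmur2-based default partitioner, written for illustration and not guaranteed to match the Java implementation byte for byte:

```python
def murmur2(data: bytes) -> int:
    """32-bit MurmurHash2, ported by hand from Kafka's Java utility (sketch)."""
    m = 0x5BD1E995
    mask = 0xFFFFFFFF
    h = (0x9747B28C ^ len(data)) & mask
    n4 = len(data) // 4 * 4
    # Mix 4-byte little-endian chunks into the hash.
    for i in range(0, n4, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * m) & mask
        k ^= k >> 24
        k = (k * m) & mask
        h = (h * m) & mask
        h ^= k
    # Fold in the trailing 1-3 bytes (mirrors the Java switch fallthrough).
    tail = len(data) - n4
    if tail == 3:
        h ^= data[n4 + 2] << 16
    if tail >= 2:
        h ^= data[n4 + 1] << 8
    if tail >= 1:
        h ^= data[n4]
        h = (h * m) & mask
    # Final avalanche.
    h ^= h >> 13
    h = (h * m) & mask
    h ^= h >> 15
    return h

def partition_for(key: bytes, num_partitions: int) -> int:
    """Keyed-record partition choice: positive hash modulo partition count."""
    return (murmur2(key) & 0x7FFFFFFF) % num_partitions
```

The important property is that the mapping is deterministic: the same key always goes to the same partition, which is what gives Kafka per-key ordering.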

4. Consume messages:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

Notes:
1. Kafka's default JVM heap is 1 GB; edit bin/kafka-server-start.sh to change the JVM settings.
2. To connect from a Spring client, update config/server.properties:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://119.29.56.220:9092
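On the Spring side, the matching client configuration might look like this (a hypothetical application.properties for the spring-boot-kafka-demo project; the group id and offset-reset policy are assumptions, not taken from the repository):

```properties
# Point the client at the broker's advertised listener
spring.kafka.bootstrap-servers=119.29.56.220:9092
# Consumer group id (assumed name for illustration)
spring.kafka.consumer.group-id=demo-group
# Read from the start of the topic when no committed offset exists
spring.kafka.consumer.auto-offset-reset=earliest
```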
3. Error while reading checkpoint file /tmp/kafka-logs/cleaner-offset-checkpoint
NoSuchFileException: /tmp/kafka-logs/cleaner-offset-checkpoint
After each cleaning pass, the log cleaner records the position it has cleaned up to in the cleaner-offset-checkpoint file; that value is the reference for computing firstDirtyOffset on the next cleaning run, so deleting it out from under a running broker triggers the error above.
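The checkpoint files under /tmp/kafka-logs are plain text: a version line, an entry count, then one `topic partition offset` line per partition. A minimal parser sketch (the format is as observed in Kafka 2.x; this code is illustrative, not Kafka's own implementation):

```python
from pathlib import Path

def read_checkpoint(path: str):
    """Parse a Kafka offset checkpoint file into {(topic, partition): offset}."""
    lines = Path(path).read_text().splitlines()
    version = int(lines[0])       # format version (0 in Kafka 2.x)
    expected = int(lines[1])      # number of entries that follow
    entries = {}
    for line in lines[2:2 + expected]:
        # rsplit handles topic names that themselves contain no spaces;
        # partition and offset are always the last two fields.
        topic, partition, offset = line.rsplit(" ", 2)
        entries[(topic, int(partition))] = int(offset)
    return version, entries
```

For example, a file containing `0`, `1`, `test 0 42` on three lines parses to version 0 and `{("test", 0): 42}`.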

### Best Practices for Deploying Kafka with Docker

#### 1. Prerequisites

Before starting, make sure Docker and Docker Compose are installed and configured correctly; every step that follows depends on these two components.

```bash
# Install Docker
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker
systemctl enable docker && systemctl start docker
docker --version

# Install Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.28.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
```

#### 2. Create the `docker-compose.yml` file

A well-written `docker-compose.yml` greatly simplifies managing multiple containers. Here is a simple example that starts a single-node Kafka service:

```yaml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

This configuration defines a ZooKeeper instance and a Kafka broker that depends on it. The environment variable `KAFKA_ZOOKEEPER_CONNECT` tells the broker which ZooKeeper server to connect to, while `depends_on` makes Compose start the ZooKeeper container before Kafka (note that it only orders startup; it does not wait until ZooKeeper is actually ready).

#### 3. Run the Kafka cluster

With the above in place, a single command brings up the whole stack:

```bash
docker-compose up -d
```

This pulls the images if they are not present locally, then starts the containers in the declared order; the `-d` flag runs them in the background.

#### 4. Verify the deployment

You can test the setup through the exposed ports. For instance, the official console consumer can inspect a topic as a first sanity check:

```bash
kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic __consumer_offsets \
  --partition 48 \
  --from-beginning \
  --formatter 'kafka.coordinator.group.GroupMetadataManager$OffsetsMessageFormatter'
```

That is the complete workflow for deploying Kafka with Docker. Keep in mind that real production environments involve additional concerns, such as choosing a persistent storage scheme, so it is worth digging further into the relevant documentation for your specific needs.
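The closing note mentions persistent storage. One way to sketch that in the same docker-compose.yml (the volume name is an assumption, and /kafka is where the wurstmeister image keeps its data by default, so verify the path against the image you actually use):

```yaml
services:
  kafka:
    volumes:
      - kafka-data:/kafka   # persist log segments across container restarts
volumes:
  kafka-data:
```

Without a named volume, the broker's log segments live in the container's writable layer and are lost when the container is removed.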