Section I: File List
- kafka_2.12-0.10.2.1.tgz
Section II: Download Link
[Kafka download link]: https://kafka.apache.org/downloads
Section III: Kafka Distributed Deployment
Overview of the cluster:
Role | Master | Slave1 | Slave2 |
---|---|---|---|
IP | 192.168.137.128 | 192.168.137.129 | 192.168.137.130 |
HostName | BlogMaster | BlogSlave1 | BlogSlave2 |
Hadoop | BlogMaster-YES | BlogSlave1-YES | BlogSlave2-YES |
Zookeeper | BlogMaster-YES | BlogSlave1-YES | BlogSlave2-YES |
Kafka | BlogMaster-YES | BlogSlave1-YES | BlogSlave2-YES |
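If the hostnames above do not resolve yet, a common first step is to map them to their IPs in /etc/hosts on all three nodes (this assumes no DNS is in place; the mappings come straight from the table above):

```
192.168.137.128 BlogMaster
192.168.137.129 BlogSlave1
192.168.137.130 BlogSlave2
```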
Step 1: On the master node, extract the Kafka archive to the target directory
[root@BlogMaster ~]# tar -zxvf kafka_2.12-0.10.2.1.tgz -C /opt/cluster/
Step 2: On the master node, create a folder named "logs" in the Kafka installation directory; Kafka will store its log data there
[root@BlogMaster kafka_2.12-0.10.2.1]# mkdir logs
Step 3: On the master node, edit the server.properties configuration file (located in /opt/cluster/kafka_2.12-0.10.2.1/config)
Modify the following four options in this file:
- broker.id
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
- delete.topic.enable
# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true
- log.dirs
# A comma seperated list of directories under which to store log files
log.dirs=/opt/cluster/kafka_2.12-0.10.2.1/logs
- zookeeper.connect
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can
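Putting the four edits together, the master's server.properties could look like the sketch below. This assumes ZooKeeper listens on its default client port 2181 on all three nodes (verify against your zoo.cfg); on BlogSlave1 and BlogSlave2, only broker.id changes (to 1 and 2 respectively):

```properties
# Unique per node: 0 on BlogMaster, 1 on BlogSlave1, 2 on BlogSlave2
broker.id=0

# Allow topics to be deleted via the admin tools (default is false)
delete.topic.enable=true

# Directory created in Step 2; Kafka stores partition data here
log.dirs=/opt/cluster/kafka_2.12-0.10.2.1/logs

# ZooKeeper ensemble; 2181 is ZooKeeper's default client port (an assumption here)
zookeeper.connect=BlogMaster:2181,BlogSlave1:2181,BlogSlave2:2181
```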