Installing Kafka 2.11

Table of Contents

I. Kafka Cluster Deployment

1. Start the Hadoop (HDFS) cluster

2. Start the YARN cluster

3. Start ZooKeeper

4. Check the ZooKeeper status

5. Upload the installation package to /opt/software/ and list the directory to confirm

6. Extract the archive

7. Rename the directory

8. Create a logs folder under /opt/module/kafka

9. Edit the configuration file

10. Configure environment variables (do the same on the other nodes)

11. Sync the Kafka directory to the other machines

12. Set a unique broker.id on each broker (it must not repeat)

13. Start the cluster (do the same on the other nodes)

14. Check the processes; a Kafka process means the broker is up


I. Kafka Cluster Deployment

1. Start the Hadoop (HDFS) cluster

[root@hadoop01 ~]# cd /opt/module/hadoop-2.7.3/
[root@hadoop01 hadoop-2.7.3]# sbin/start-dfs.sh 

2. Start the YARN cluster

[root@hadoop01 hadoop-2.7.3]# sbin/start-yarn.sh

3. Start ZooKeeper

Start it on hadoop01, hadoop02, and hadoop03:

[root@hadoop01 ~]# cd /opt/module/zookeeper/
[root@hadoop01 zookeeper]# bin/zkServer.sh start

4. Check the ZooKeeper status

There should be two followers and one leader.

On hadoop01:

[root@hadoop01 zookeeper]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... already running as process 32710.
[root@hadoop01 zookeeper]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
Mode: follower

On hadoop02:

[root@hadoop02 zookeeper]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... already running as process 5633.
[root@hadoop02 zookeeper]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
Mode: leader

On hadoop03:

[root@hadoop03 zookeeper]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... already running as process 5355.
[root@hadoop03 zookeeper]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
Mode: follower

5. Upload the installation package to /opt/software/ and list the directory to confirm

[root@hadoop01 ~]# cd /opt/software/
[root@hadoop01 software]# ll
total 968736
-rw-r--r--. 1 root root  41414555 Sep 21 17:03 kafka_2.11-0.11.0.0.tgz

6. Extract the archive

[root@hadoop01 software]# tar -zxvf kafka_2.11-0.11.0.0.tgz -C /opt/module/

7. Rename the directory

[root@hadoop01 software]# cd /opt/module/
[root@hadoop01 module]# mv kafka_2.11-0.11.0.0/ kafka

8. Create a logs folder under /opt/module/kafka

[root@hadoop01 module]# cd kafka/
[root@hadoop01 kafka]# mkdir logs

9. Edit the configuration file

[root@hadoop01 kafka]# cd config/
[root@hadoop01 config]# vim server.properties

The settings on lines 24, 63, and 126 need to be modified. The trailing # annotations below are explanatory only; do not copy them into the file, because a Java properties file treats everything after the = as part of the value.

 18 ############################# Server Basics #############################
 19 
 20 # The id of the broker. This must be set to a unique integer for each broker.
 21 broker.id=0
 22 
 23 # Switch to enable topic deletion or not, default value is false
 24 delete.topic.enable=true    # enable topic deletion

 60 ############################# Log Basics #############################
 61 
 62 # A comma seperated list of directories under which to store log files
 63 log.dirs=/opt/module/kafka/logs    # directory where Kafka stores its log data

126 zookeeper.connect=hadoop01:2181,hadoop02:2181,hadoop03:2181    # ZooKeeper cluster connection string
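
As a quick sanity check (a minimal sketch, assuming the same file path as above), the changed settings can be printed back out:

# show only the properties that were modified
grep -E '^(broker\.id|delete\.topic\.enable|log\.dirs|zookeeper\.connect)=' /opt/module/kafka/config/server.properties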

10. Configure environment variables (do the same on the other nodes)

[root@hadoop01 config]# vim /etc/profile
#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka
export PATH=$PATH:$KAFKA_HOME/bin
[root@hadoop01 config]# source /etc/profile

11. Sync the Kafka directory to the other machines

How to create the xsync script is covered in the cluster-sync-script section of the Hadoop installation guide at the link below:

http://t.csdn.cn/rB6H5

[root@hadoop01 module]# /root/bin/xsync /opt/module/kafka/
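
If the xsync script is not available, a plain rsync loop does the same job. This is only a sketch; it assumes passwordless SSH from hadoop01 and that /opt/module/ already exists on hadoop02 and hadoop03:

# copy the Kafka directory to the other two nodes
for host in hadoop02 hadoop03; do
    rsync -av /opt/module/kafka/ ${host}:/opt/module/kafka/
done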

12. Set a unique broker.id on each broker (it must not repeat)

[root@hadoop02 ~]# cd /opt/module/kafka/config/
[root@hadoop02 config]# vim server.properties 
 21 broker.id=1
[root@hadoop03 ~]# cd /opt/module/kafka/config/
[root@hadoop03 config]# vim server.properties 
 21 broker.id=2
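
The same edit can also be scripted from hadoop01 instead of opening vim on each node. A sketch, assuming passwordless SSH and that the synced files still contain broker.id=0:

# assign broker.id=1 to hadoop02 and broker.id=2 to hadoop03
ssh hadoop02 "sed -i 's/^broker.id=0$/broker.id=1/' /opt/module/kafka/config/server.properties"
ssh hadoop03 "sed -i 's/^broker.id=0$/broker.id=2/' /opt/module/kafka/config/server.properties"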

13. Start the cluster (do the same on the other nodes)

[root@hadoop01 kafka]# bin/kafka-server-start.sh /opt/module/kafka/config/server.properties 1>/dev/null 2>&1 &
[1] 65369
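
To start all three brokers in one pass, kafka-server-start.sh also accepts a -daemon flag, which backgrounds the process without the manual redirection. A sketch, assuming passwordless SSH and the directory layout used above:

# start the broker as a daemon on every node
for host in hadoop01 hadoop02 hadoop03; do
    ssh ${host} "/opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties"
done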

14. Check the processes; a Kafka process means the broker is up

[root@hadoop01 kafka]# jps

44097 Worker
49155 NodeManager
28469 SecondaryNameNode
44037 Master
48917 ResourceManager
48038 HistoryServer
32710 QuorumPeerMain
438 Jps
65369 Kafka
28189 NameNode
28286 DataNode
49311 JobHistoryServer
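
Beyond jps, the cluster can be exercised end to end by creating and listing a test topic. A minimal sketch; the topic name test is arbitrary, and in this Kafka version (0.11) kafka-topics.sh connects to ZooKeeper rather than to the brokers:

cd /opt/module/kafka
# create a 3-replica test topic and list it back
bin/kafka-topics.sh --zookeeper hadoop01:2181 --create --topic test --partitions 1 --replication-factor 3
bin/kafka-topics.sh --zookeeper hadoop01:2181 --list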
