1. Press Alt+P to open sftp and upload the kafka tarball to linux; here it is uploaded to hdp-4, under /root/apps.
sftp> cd apps
sftp> put -r "C:\Users\ThinkPad\Documents\Tencent Files\840657524\FileRecv\kafka_2.12-2.2.0.tgz"
Uploading kafka_2.12-2.2.0.tgz to /root/apps/kafka_2.12-2.2.0.tgz
100% 55691KB 55691KB/s 00:00:01
C:/Users/ThinkPad/Documents/Tencent Files/840657524/FileRecv/kafka_2.12-2.2.0.tgz: 57028557 bytes transferred in 1 seconds (55691 KB/s)
sftp>
Upload complete. Check under apps:
drwxr-xr-x. 10 1001 1002 161 3月 16 2019 hadoop-3.1.2
drwxr-xr-x. 7 10 143 245 12月 16 2018 jdk1.8.0_201
-rw-r--r--. 1 root root 57028557 10月 13 12:03 kafka_2.12-2.2.0.tgz
2. Extract kafka into apps
[root@hdp-4 apps]# tar -zxvf kafka_2.12-2.2.0.tgz
3. Set environment variables
[root@hdp-4 apps]# vi /etc/profile
Append at the bottom:
export KAFKA_HOME=/root/apps/kafka_2.12-2.2.0
export PATH=$PATH:$KAFKA_HOME/bin
Save and quit.
4. Run source so the environment variables take effect
[root@hdp-1 ~]# source /etc/profile
[root@hdp-1 ~]#
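The two profile lines can be sanity-checked in the same shell after sourcing; a minimal sketch (assuming kafka was extracted to /root/apps/kafka_2.12-2.2.0 as above):

```shell
# Reproduce the two /etc/profile lines, then confirm PATH picked up kafka's bin
export KAFKA_HOME=/root/apps/kafka_2.12-2.2.0
export PATH=$PATH:$KAFKA_HOME/bin

# PATH should now contain kafka's bin directory
case ":$PATH:" in
  *":$KAFKA_HOME/bin:"*) echo "KAFKA_HOME on PATH" ;;
  *)                     echo "PATH not updated"   ;;
esac
```

If the check prints "PATH not updated", re-check /etc/profile for typos (a misspelled `export` fails silently) and run `source /etc/profile` again.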
5. Edit the kafka config file
cd apps/kafka_2.12-2.2.0/config/
Back up the original server.properties first:
cp server.properties server.properties.bak
Edit the config file: vi server.properties
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://hdp-4:9092
#listeners=PLAINTEXT://:9092
Uncomment the listeners line and change host_name to this machine's hostname, e.g. listeners=PLAINTEXT://hdp-4:9092
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
Just make sure broker.id does not clash with any other machine in the cluster.
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/root/kafkadata/kafka-logs
Change log.dirs to the directory where this broker should store its log data.
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=hdp-4:2181
Set the zookeeper connection address; separate multiple addresses with commas.
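The four edits above can also be applied non-interactively with sed; a sketch using this node's example values (hostname hdp-4, broker.id 1):

```shell
# Run inside the config directory, after making the .bak copy
CONF=server.properties

# Uncomment the listener line and fill in this machine's hostname
sed -i 's|^#listeners=PLAINTEXT://:9092|listeners=PLAINTEXT://hdp-4:9092|' "$CONF"
# Unique broker id for this node
sed -i 's|^broker\.id=.*|broker.id=1|' "$CONF"
# Directory for this broker's log data
sed -i 's|^log\.dirs=.*|log.dirs=/root/kafkadata/kafka-logs|' "$CONF"
# Zookeeper connection string (comma-separate multiple host:port pairs)
sed -i 's|^zookeeper\.connect=.*|zookeeper.connect=hdp-4:2181|' "$CONF"
```

This does exactly what the vi session does; the advantage is that the same four lines can be reused on every node with different values.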
6. Distribute: copy the kafka installation from hdp-4 to the other linux machines
scp -r apps/kafka_2.12-2.2.0 hdp-3:/root/apps
Note: edit config/server.properties separately on each machine (at least broker.id and listeners).
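The copy-then-edit in step 6 can be scripted per node; a hypothetical sketch, assuming passwordless ssh from hdp-4 and that hdp-1/hdp-2/hdp-3 get broker ids 2, 3, 4:

```shell
#!/bin/bash
# Hypothetical distribution script (not from the original setup): copy kafka
# to each node, then give every node a unique broker.id and its own listener.
id=2
for host in hdp-1 hdp-2 hdp-3
do
  scp -r /root/apps/kafka_2.12-2.2.0 ${host}:/root/apps
  ssh $host "sed -i 's|^broker\.id=.*|broker.id=${id}|' /root/apps/kafka_2.12-2.2.0/config/server.properties;
             sed -i 's|^listeners=.*|listeners=PLAINTEXT://${host}:9092|' /root/apps/kafka_2.12-2.2.0/config/server.properties"
  id=$((id+1))
done
```

The id counter just increments per host, so re-running the loop always assigns the same, non-clashing broker ids.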
7. Start the zookeeper cluster
Start it with the script:
./zkmanager.sh start
The script lives in /root; its contents:
#!/bin/bash
for host in hdp-1 hdp-2 hdp-3
do
  echo "${host}: ${1}ing..."
  ssh $host "source /etc/profile; /root/apps/zookeeper-3.4.6/bin/zkServer.sh $1"
done
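Because the script passes its first argument straight through to zkServer.sh, the same file covers the other actions too:

```shell
./zkmanager.sh start    # bring all three zookeeper nodes up
./zkmanager.sh status   # one node should report Mode: leader, the others Mode: follower
./zkmanager.sh stop     # shut the ensemble down
```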
8. Start kafka
[root@hdp-1 ~]# cd apps/kafka_2.12-2.2.0/bin
[root@hdp-1 bin]# ./kafka-server-start.sh ../config/server.properties
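Started this way, the broker occupies the terminal. A sketch of running it detached and smoke-testing it; the -daemon flag and kafka-topics.sh ship with kafka (its --bootstrap-server option exists as of 2.2), and hdp-4:9092 is this cluster's example listener:

```shell
# start the broker in the background instead of blocking the terminal
./kafka-server-start.sh -daemon ../config/server.properties

# a Kafka process should appear in the JVM process list
jps | grep Kafka

# create and list a test topic against this broker
./kafka-topics.sh --create --bootstrap-server hdp-4:9092 \
  --replication-factor 1 --partitions 1 --topic test
./kafka-topics.sh --list --bootstrap-server hdp-4:9092
```

Repeat the start command on every broker node; once all brokers are up, the replication factor can go up to the number of brokers.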