Kafka 2.12 Installation and Configuration / Message Production and Consumption
1. Kafka Installation and Configuration
1.1 Prerequisite: Java Environment
JDK download link: jdk1.8 (extraction code: 9plz)
Zookeeper download link: zookeeper3.4.14 (extraction code: zkvq)
Kafka download link: kafka2.12 (extraction code: oroq)
1. Upload jdk-8u261-linux-x64.rpm to the server and install it
# install command
rpm -ivh jdk-8u261-linux-x64.rpm
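
Optionally, a quick check that rpm registered the JDK package:
# any installed jdk package should show up here
rpm -qa | grep jdk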

2. Configure the environment variables
# Edit the profile file; adding the JDK bin directory to /etc/profile makes it effective for every user's shell
vim /etc/profile

export JAVA_HOME=/usr/java/jdk1.8.0_261-amd64
export PATH=$PATH:$JAVA_HOME/bin

# Reload the profile so the changes take effect
source /etc/profile
# Verify
java -version

At this point, the JDK is installed.
1.2 Installing and Configuring Zookeeper
1. Upload zookeeper-3.4.14.tar.gz to the server and extract it to /opt
# Extract Zookeeper to the target directory
tar -zxf zookeeper-3.4.14.tar.gz -C /opt

2. Change the directory where Zookeeper stores its data (dataDir)
# Enter the conf directory
cd /opt/zookeeper-3.4.14/conf
# Copy zoo_sample.cfg to zoo.cfg
cp zoo_sample.cfg zoo.cfg
# Edit zoo.cfg and set dataDir
vim zoo.cfg
dataDir=/var/dabing/zookeeper/data

3. Edit /etc/profile and apply the configuration
Set the environment variable ZOO_LOG_DIR, the location where Zookeeper writes its logs;
point ZOOKEEPER_PREFIX at the Zookeeper extraction directory;
and add Zookeeper's bin directory to PATH:
export ZOOKEEPER_PREFIX=/opt/zookeeper-3.4.14
export PATH=$PATH:$ZOOKEEPER_PREFIX/bin
export ZOO_LOG_DIR=/var/dabing/zookeeper/log
# Reload the profile
source /etc/profile
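
If the data and log paths configured above do not exist yet, it is safest to create them up front (paths assumed from the settings shown):
mkdir -p /var/dabing/zookeeper/data
mkdir -p /var/dabing/zookeeper/log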

4. Start Zookeeper and confirm its status
zkServer.sh start
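# Confirm the status; a single node should report standalone mode
zkServer.sh status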

At this point, Zookeeper is installed.
1.3 Installing and Configuring Kafka
1. Upload kafka_2.12-1.0.2.tgz to the server and extract it
tar -zxf kafka_2.12-1.0.2.tgz -C /opt

2. Configure the environment variables and apply them
vim /etc/profile
#kafka
export KAFKA=/opt/kafka_2.12-1.0.2
export PATH=$PATH:$KAFKA/bin
source /etc/profile

3. Edit the server.properties file in /opt/kafka_2.12-1.0.2/config
vi /opt/kafka_2.12-1.0.2/config/server.properties

zookeeper.connect sets the address Kafka uses to reach Zookeeper; here it is the locally started Zookeeper instance,
so the connection address is localhost:2181.
The trailing myKafka is the root (chroot) node that Kafka uses inside Zookeeper.
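Put together, the entry in server.properties reads:
zookeeper.connect=localhost:2181/myKafka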

Configure the directory where Kafka stores its persistent data.
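This is the entry that also shows up in the startup log further below:
log.dirs=/var/dabing/kafka/kafka-logs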

Create the persistence data directory configured above.
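For example:
mkdir -p /var/dabing/kafka/kafka-logs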

4. Start Kafka:
From the root of the Kafka installation, run:
kafka-server-start.sh $KAFKA/config/server.properties

On a successful start, the last line of the console output shows a started status; at this point Kafka is installed.
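To keep the broker running in the background instead of holding the terminal, the same script accepts a -daemon flag:
kafka-server-start.sh -daemon $KAFKA/config/server.properties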

Detailed startup log:
[root@kafka bin]# kafka-server-start.sh ../config/server.properties
[2020-11-03 03:30:56,120] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 1.0-IV0
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/dabing/kafka/kafka-logs    <-- the value modified above
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.format.version = 1.0-IV0
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 1440
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.enabled.mechanisms = [GSSAPI]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism.inter.broker.protocol
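
To round out the production and consumption side named in the title, here is a minimal sketch using the console tools that ship with Kafka 1.0.2; the topic name test is hypothetical, and the /myKafka chroot matches the zookeeper.connect value configured earlier:
# create a test topic (kafka-topics.sh in this version talks to Zookeeper directly)
kafka-topics.sh --zookeeper localhost:2181/myKafka --create --topic test --partitions 1 --replication-factor 1
# produce: type messages one per line, Ctrl+C to stop
kafka-console-producer.sh --broker-list localhost:9092 --topic test
# consume from the beginning, in a second terminal
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning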
