Three Kafka deployment modes

/*************
* Installing and deploying Kafka 0.8.1.1
* blog: www.r66r.net
* qq: 26571864
**************/

Related deployment video: http://edu.51cto.com/course/course_id-2374.html




Kafka can be deployed in three modes:
1) Single broker

2) Multiple brokers on a single machine (pseudo-cluster)

3) Multiple brokers across multiple machines (a true cluster)


Installing the first mode

1. Upload the Kafka tarball to the hadoopdn2 machine and extract it under /opt/hadoop/kafka

2. Edit the server.properties file under /opt/hadoop/kafka/kafka_2.9.2-0.8.1.1/config:
broker.id=0                         (default, no change needed)
log.dirs=/opt/hadoop/kafka/kafka-logs       (change this)
log.flush.interval.messages=10000   (default, no change needed)
log.flush.interval.ms=1000          (default, no change needed)
zookeeper.connect=hadoopdn2:2181    (change this)
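The two edits above can be scripted. A minimal sketch using sed, working on a scratch copy so it can be run anywhere (a real run would target config/server.properties inside the Kafka install directory):

```shell
# Demo: apply the mode-1 edits to a properties file.
# Works on a scratch copy; point KAFKA_CONF at the real
# config/server.properties for an actual deployment.
KAFKA_CONF=demo-server.properties
printf 'broker.id=0\nlog.dirs=/tmp/kafka-logs\n' > "$KAFKA_CONF"

# Replace a key=value pair if present, append it otherwise.
set_prop() {
  key=$1; value=$2
  if grep -q "^${key}=" "$KAFKA_CONF"; then
    sed -i "s|^${key}=.*|${key}=${value}|" "$KAFKA_CONF"
  else
    echo "${key}=${value}" >> "$KAFKA_CONF"
  fi
}

set_prop log.dirs /opt/hadoop/kafka/kafka-logs
set_prop zookeeper.connect hadoopdn2:2181
```

Note that `sed -i` without a backup suffix is GNU-specific, which matches the Linux environment shown in the startup log.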

3. Start the Kafka broker

> bin/kafka-server-start.sh config/server.properties

A normal startup looks like this:
[2014-11-18 10:36:32,196] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:os.version=2.6.32-220.el6.x86_64 (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:user.name=hadoop (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:user.home=/home/hadoop (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,196] INFO Client environment:user.dir=/opt/hadoop/kafka/kafka_2.9.2-0.8.1.1 (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,197] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@c2f8b5a (org.apache.zookeeper.ZooKeeper)
[2014-11-18 10:36:32,231] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2014-11-18 10:36:32,238] INFO Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2014-11-18 10:36:32,262] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x349c07dcd7a0002, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2014-11-18 10:36:32,266] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2014-11-18 10:36:32,415] INFO Starting log cleanup with a period of 60000 ms. (kafka.log.LogManager)
[2014-11-18 10:36:32,422] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[2014-11-18 10:36:32,502] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2014-11-18 10:36:32,503] INFO [Socket Server on Broker 0], Started (kafka.network.SocketServer)
[2014-11-18 10:36:32,634] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2014-11-18 10:36:32,716] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2014-11-18 10:36:32,887] INFO Registered broker 0 at path /brokers/ids/0 with address JobTracker:9092. (kafka.utils.ZkUtils$)
[2014-11-18 10:36:32,941] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2014-11-18 10:36:33,034] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)


4. Create a topic
> bin/kafka-topics.sh --create --zookeeper hadoopdn2:2181 --replication-factor 1 --partitions 1 --topic test

List all topics:
> bin/kafka-topics.sh --list --zookeeper hadoopdn2:2181

Describe a topic:
> bin/kafka-topics.sh --describe  --zookeeper hadoopdn2:2181 --topic test

Topic:test    PartitionCount:1    ReplicationFactor:1    Configs:
    Topic: test    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
The first line is a summary across all partitions; each subsequent line describes one partition.
    
Show the help text:
> bin/kafka-topics.sh --help     (lists all topic-related options)
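With the topic created, the whole pipeline can be smoke-tested using the console producer and consumer that ship with 0.8.1.1. The commands below need the broker from step 3 to be running, so they are shown commented out; the hostnames and topic name follow this guide:

```shell
# Smoke test (requires a running broker), commands shown commented out.
BROKER_LIST="hadoopdn2:9092"   # broker started in step 3
ZK="hadoopdn2:2181"            # same ZooKeeper as zookeeper.connect
TOPIC="test"

# Send one message (the producer reads stdin):
# echo "hello kafka" | bin/kafka-console-producer.sh --broker-list "$BROKER_LIST" --topic "$TOPIC"

# Read everything back from offset 0 (Ctrl-C to stop):
# bin/kafka-console-consumer.sh --zookeeper "$ZK" --topic "$TOPIC" --from-beginning
```

In 0.8.x the console producer talks to brokers directly (--broker-list) while the console consumer still goes through ZooKeeper (--zookeeper).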


Deploying the second mode:

1. Create a server config file for the second broker
> cp server.properties server1.properties

2. Edit server1.properties

broker.id=1
port=9093   
log.dirs=/opt/hadoop/kafka/kafka-logs-server1
zookeeper.connect=hadoopdn2:2181
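The copy-then-edit step can be done in one go with sed. A sketch working on a scratch copy (a real run would cd into /opt/hadoop/kafka/kafka_2.9.2-0.8.1.1/config first):

```shell
# Demo: derive the second broker's config from the first.
# A scratch server.properties stands in for the real one here.
printf 'broker.id=0\nport=9092\nlog.dirs=/opt/hadoop/kafka/kafka-logs\nzookeeper.connect=hadoopdn2:2181\n' > server.properties

cp server.properties server1.properties
sed -i \
  -e 's|^broker.id=.*|broker.id=1|' \
  -e 's|^port=.*|port=9093|' \
  -e 's|^log.dirs=.*|log.dirs=/opt/hadoop/kafka/kafka-logs-server1|' \
  server1.properties
```

Only broker.id, port, and log.dirs need to differ between brokers on one machine; zookeeper.connect stays the same so both register in the same cluster.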


3. Start the second Kafka broker

> nohup bin/kafka-server-start.sh config/server1.properties &

4. The current brokers can be viewed through the ZooKeeper client

[zk: hadoopdn2:2181(CONNECTED) 7] ls /                              
[zookeeper, admin, consumers, config, controller, brokers, controller_epoch]
[zk: hadoopdn2:2181(CONNECTED) 8] ls /brokers
[topics, ids]
[zk: hadoopdn2:2181(CONNECTED) 9] ls /brokers/ids
[1, 0]

5. Check the topic

$ bin/kafka-topics.sh --describe --topic test --zookeeper hadoopdn2:2181
Topic:test    PartitionCount:1    ReplicationFactor:1    Configs:
    Topic: test    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
    
6. Increase the partition count of the test topic (the partition count can only be increased, never decreased)
$ bin/kafka-topics.sh --zookeeper hadoopdn2:2181 --alter --topic test --partitions 3

$ bin/kafka-topics.sh --describe --topic test --zookeeper hadoopdn2:2181
Topic:test    PartitionCount:3    ReplicationFactor:1    Configs:
    Topic: test    Partition: 0    Leader: 0    Replicas: 0 [on broker 0]    Isr: 0
    Topic: test    Partition: 1    Leader: 1    Replicas: 1 [on broker 1]    Isr: 1
    Topic: test    Partition: 2    Leader: 0    Replicas: 0 [on broker 0]    Isr: 0
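The placement above follows Kafka's round-robin replica assignment: with two brokers and replication factor 1, partition p lands on broker (p + s) mod 2, where s is a starting offset Kafka randomizes at assignment time (here s happens to be 0). A tiny sketch of the pattern:

```shell
# Round-robin placement sketch; START is randomized by Kafka,
# 0 is assumed here to match the output above.
NUM_BROKERS=2
START=0
for p in 0 1 2; do
  echo "partition $p -> broker $(((p + START) % NUM_BROKERS))"
done
```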
    
    
Deploying the third mode:

1. Upload the Kafka tarball to the hadoopdn3 machine and extract it under /opt/hadoop/kafka

2. Edit the server.properties file under /opt/hadoop/kafka/kafka_2.9.2-0.8.1.1/config:
broker.id=2                         (must be changed; every broker's ID has to be unique)
log.dirs=/opt/hadoop/kafka/kafka-logs       (change this)
log.flush.interval.messages=10000   (default, no change needed)
log.flush.interval.ms=1000          (default, no change needed)
zookeeper.connect=hadoopdn2:2181

3. Start the broker as before, then check through the ZooKeeper client

[zk: hadoopdn2:2181(CONNECTED) 10] ls /brokers/ids
[2, 1, 0]

The broker with id 2 has now registered with ZooKeeper.

At this point, all three Kafka deployment modes have been covered.
