Kafka Tooling Series (1): Kafka's Native Scripts

The utility scripts that ship with native Kafka:

1. Where all the utility scripts live:
[root@master my_bin]# cd $KAFKA_HOME 
[root@master kafka]# cd bin/
[root@master bin]# ll
total 116
-rwxr-xr-x. 1 root root 1052 Aug  4 2016 connect-distributed.sh
-rwxr-xr-x. 1 root root 1051 Aug  4 2016 connect-standalone.sh
-rwxr-xr-x. 1 root root  861 Aug  4 2016 kafka-acls.sh
-rwxr-xr-x. 1 root root  864 Aug  4 2016 kafka-configs.sh
-rwxr-xr-x. 1 root root  945 Aug  4 2016 kafka-console-consumer.sh
-rwxr-xr-x. 1 root root  944 Aug  4 2016 kafka-console-producer.sh
-rwxr-xr-x. 1 root root  871 Aug  4 2016 kafka-consumer-groups.sh
-rwxr-xr-x. 1 root root  872 Aug  4 2016 kafka-consumer-offset-checker.sh
-rwxr-xr-x. 1 root root  948 Aug  4 2016 kafka-consumer-perf-test.sh
-rwxr-xr-x. 1 root root  862 Aug  4 2016 kafka-mirror-maker.sh
-rwxr-xr-x. 1 root root  886 Aug  4 2016 kafka-preferred-replica-election.sh
-rwxr-xr-x. 1 root root  959 Aug  4 2016 kafka-producer-perf-test.sh
-rwxr-xr-x. 1 root root  874 Aug  4 2016 kafka-reassign-partitions.sh
-rwxr-xr-x. 1 root root  868 Aug  4 2016 kafka-replay-log-producer.sh
-rwxr-xr-x. 1 root root  874 Aug  4 2016 kafka-replica-verification.sh
-rwxr-xr-x. 1 root root 6358 Aug  4 2016 kafka-run-class.sh
-rwxr-xr-x. 1 root root 1364 Aug  4 2016 kafka-server-start.sh
-rwxr-xr-x. 1 root root  975 Aug  4 2016 kafka-server-stop.sh
-rwxr-xr-x. 1 root root  870 Aug  4 2016 kafka-simple-consumer-shell.sh
-rwxr-xr-x. 1 root root  945 Aug  4 2016 kafka-streams-application-reset.sh
-rwxr-xr-x. 1 root root  863 Aug  4 2016 kafka-topics.sh
-rwxr-xr-x. 1 root root  958 Aug  4 2016 kafka-verifiable-consumer.sh
-rwxr-xr-x. 1 root root  958 Aug  4 2016 kafka-verifiable-producer.sh
drwxr-xr-x. 2 root root 4096 Aug  4 2016 windows
-rwxr-xr-x. 1 root root  867 Aug  4 2016 zookeeper-security-migration.sh
-rwxr-xr-x. 1 root root 1381 Aug  4 2016 zookeeper-server-start.sh
-rwxr-xr-x. 1 root root  978 Aug  4 2016 zookeeper-server-stop.sh
-rwxr-xr-x. 1 root root  968 Aug  4 2016 zookeeper-shell.sh
2. Commands actually used in production

1) Display each consumer Group, Topic, partition ID, the offset already consumed per partition, the logSize, the Lag, the Owner, and so on:

$KAFKA_HOME/bin/kafka-consumer-offset-checker.sh --zookeeper <zk_master_ip>:2181 --topic stable-test --group test-consumer-group

Example:
[root@localhost data]# kafka-consumer-offset-checker --zookeeper localhost:2181 --group test-consumer-group --topic stable-test
[2017-08-22 19:24:24,222] WARN WARNING: ConsumerOffsetChecker is deprecated and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. (kafka.tools.ConsumerOffsetChecker$)
Group           Topic                          Pid Offset          logSize         Lag             Owner
test-consumer-group stable-test                    0   601808          601808          0               none
test-consumer-group stable-test                    1   602826          602828          2               none
test-consumer-group stable-test                    2   602136          602136          0               none
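The WARN line in the output already names the replacement: ConsumerGroupCommand, exposed through the `kafka-consumer-groups.sh` script from the listing in section one. On brokers new enough to support `--bootstrap-server` (0.10 and later), the equivalent query looks like the sketch below. This assumes a broker reachable at localhost:9092 and a live cluster, so it cannot run standalone; adjust host and port to your environment.

```shell
# Modern replacement for kafka-consumer-offset-checker.sh (requires a running broker)
$KAFKA_HOME/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group test-consumer-group
```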

/home/migu/kafka/kafka_2.11-0.8.2.2/bin/kafka-consumer-offset-checker.sh --zookeeper zk_master_ip:2181 --topic filter_session_start --group logProcessor-product | awk '{s+=$4; e+=$5; t+=$6} END {print s"\t"e"\t"t}'
# Appending the awk stage sums the per-partition numbers, giving the group's total consumed offset, total logSize, and total lag

How the awk command works: columns 4, 5, and 6 of the checker's output are Offset, logSize, and Lag. The program accumulates each of them across every partition row (`s += $4; e += $5; t += $6`) and prints the three totals, tab-separated, in the `END` block. Non-numeric header fields such as `Offset` evaluate to 0 in awk arithmetic, so the header line does not skew the sums.
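As a quick sanity check, the same awk program can be fed the three partition rows from the example above (sample data typed in by hand, not live checker output):

```shell
# Feed the example's three partition rows into the same awk program.
# Columns: Group Topic Pid Offset logSize Lag Owner
printf '%s\n' \
  'test-consumer-group stable-test 0 601808 601808 0 none' \
  'test-consumer-group stable-test 1 602826 602828 2 none' \
  'test-consumer-group stable-test 2 602136 602136 0 none' \
  | awk '{s+=$4; e+=$5; t+=$6} END {print s"\t"e"\t"t}'
# → 1806770	1806772	2   (total offset, total logSize, total lag)
```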

### Automated deployment of a native Kafka cluster

#### Deploying with a shell script

To simplify the work and speed things up, a shell script can drive the installation and configuration of a Kafka cluster. This approach suits quickly standing up a test environment.

```bash
#!/bin/bash
# Variables
KAFKA_VERSION="3.0.0"
SCALA_VERSION="2.13"
DOWNLOAD_URL=https://archive.apache.org/dist/kafka/${KAFKA_VERSION}/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz
INSTALL_DIR=/opt/kafka
DATA_DIRS=("/data/kafka1" "/data/kafka2" "/data/kafka3")

# Download the package and unpack it into the install directory
mkdir -p ${INSTALL_DIR}
wget $DOWNLOAD_URL -O /tmp/kafka.tgz && tar zxvf /tmp/kafka.tgz -C ${INSTALL_DIR} --strip-components 1

# Create the data storage paths
for dir in "${DATA_DIRS[@]}"; do
  mkdir -p "$dir"
done

# Adjust the key settings in the configuration file
sed -i 's/#advertised.listeners=PLAINTEXT:\/\/your.host.name:9092/advertised.listeners=PLAINTEXT:\/\/localhost:9092/g' \
  ${INSTALL_DIR}/config/server.properties

# Start the ZooKeeper and Kafka broker instances
nohup ${INSTALL_DIR}/bin/zookeeper-server-start.sh ${INSTALL_DIR}/config/zookeeper.properties &
sleep 5
nohup ${INSTALL_DIR}/bin/kafka-server-start.sh ${INSTALL_DIR}/config/server.properties &

echo "Kafka cluster has been successfully deployed."
```

The script above downloads and unpacks the Apache Kafka distribution, creates the directories meant to hold the message logs, adjusts the key `server.properties` settings needed for network communication, and finally starts the ZooKeeper and broker processes. (Note that it creates the `DATA_DIRS` but never points `log.dirs` at them; add a matching `sed` line if the logs should actually live there.)

#### Ansible Playbook approach

For the more complex scenarios of a production environment, an orchestrated, batch deployment with an Ansible Playbook is recommended instead:

```yaml
---
- hosts: kafka_nodes
  become: yes
  vars:
    kafka_version: "3.0.0"
    scala_version: "2.13"
    install_dir: "/opt/kafka"
  tasks:
    - name: Install Java Runtime Environment (JRE)
      yum:
        name: java-latest-openjdk-headless
        state: present

    - name: Download and extract Apache Kafka package
      unarchive:
        src: https://archive.apache.org/dist/kafka/{{ kafka_version }}/kafka_{{ scala_version }}-{{ kafka_version }}.tgz
        dest: "{{ install_dir }}"
        remote_src: true
        creates: "{{ install_dir }}/bin/kafka-topics.sh"

    - name: Create directories for storing logs
      file:
        path: "{{ item }}"
        state: directory
        owner: kafka
        group: kafka
        mode: '0755'
      with_items:
        - /var/lib/kafka/data
        - /var/log/kafka

    - name: Configure server properties
      template:
        src: templates/server.properties.j2
        dest: "{{ install_dir }}/config/server.properties"
        owner: kafka
        group: kafka
        mode: '0644'

    - name: Start Zookeeper service
      systemd:
        name: zookeeper
        enabled: yes
        state: started

    - name: Start Kafka broker services on all nodes
      systemd:
        name: kafka
        enabled: yes
        state: started
```

This Playbook defines a sequence of tasks: install the Java runtime (JRE), fetch the official binary release, prepare the persistent storage locations, render a customized configuration template, and finally enable the daemon services so they stay online long-term. Note that the last two tasks assume `zookeeper` and `kafka` systemd units are already installed on the target hosts, and that the `kafka` user and group referenced in the file tasks exist.
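One fragile spot in the shell-script variant above is the fixed `sleep 5` between starting ZooKeeper and the broker. A more robust sketch, assuming bash (it relies on bash's `/dev/tcp` redirection), polls the port until it accepts connections or a timeout expires:

```shell
# Wait until host:port accepts TCP connections, or give up after $3 seconds.
# Uses bash's /dev/tcp pseudo-device; on other shells the connect attempt
# simply fails and the function times out.
wait_for_port() {
  host=$1; port=$2; timeout=${3:-30}
  i=0
  while [ "$i" -lt "$timeout" ]; do
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0          # port is open
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for ${host}:${port}" >&2
  return 1
}
```

With this in place, the `sleep 5` can be replaced by `wait_for_port localhost 2181 30` before launching `kafka-server-start.sh`.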