Kafka error when creating a consumer: zookeeper is not a recognized option

While setting up Kafka, running the console consumer command produced the error "zookeeper is not a recognized option". The error is caused by an invalid command-line option: the correct consumer command drops the zookeeper option and specifies the broker address with --bootstrap-server instead.



Recently, while setting up Kafka on a Hadoop cluster, I ran

bin/kafka-console-producer.sh --broker-list hadoop132:9092,hadoop133:9092,hadoop134:9092 --topic first

to start a console producer, i.e. to produce messages to the topic first through the three brokers hadoop132:9092, hadoop133:9092 and hadoop134:9092.
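
Once the producer starts (a quick sketch, assuming the three brokers are reachable and the topic first exists), it shows a > prompt, and each line typed there is sent to the topic as one message:

>hello kafka
>this is a test message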

Then, on hadoop132, I used

 bin/kafka-console-consumer.sh --zookeeper hadoop132:2181 --from-beginning --topic first

to start a consumer, and it kept failing with

zookeeper is not a recognized option

which means that bin/kafka-console-consumer.sh itself does not accept a zookeeper option. In the usage help printed below the error:

Option                                   Description                            
------                                   -----------                            
--bootstrap-server <String: server to    REQUIRED: The server(s) to connect to. 
  connect to>                                                                   
--consumer-property <String:             A mechanism to pass user-defined       
  consumer_prop>                           properties in the form key=value to  
                                           the consumer.                        
--consumer.config <String: config file>  Consumer config properties file. Note  
                                           that [consumer-property] takes       
                                           precedence over this config.         
--enable-systest-events                  Log lifecycle events of the consumer   
                                           in addition to logging consumed      
                                           messages. (This is specific for      
                                           system tests.)                       
--formatter <String: class>              The name of a class to use for         
                                           formatting kafka messages for        
                                           display. (default: kafka.tools.      
                                           DefaultMessageFormatter)             
--from-beginning                         If the consumer does not already have  
                                           an established offset to consume     
                                           from, start with the earliest        
                                           message present in the log rather    
                                           than the latest message.             
--group <String: consumer group id>      The consumer group id of the consumer. 
--isolation-level <String>               Set to read_committed in order to      
                                           filter out transactional messages    
                                           which are not committed. Set to      
                                           read_uncommittedto read all          
                                           messages. (default: read_uncommitted)
--key-deserializer <String:                                                     
  deserializer for key>                                                         
--max-messages <Integer: num_messages>   The maximum number of messages to      
                                           consume before exiting. If not set,  
                                           consumption is continual.            
--offset <String: consume offset>        The offset id to consume from (a non-  
                                           negative number), or 'earliest'      
                                           which means from beginning, or       
                                           'latest' which means from end        
                                           (default: latest)                    
--partition <Integer: partition>         The partition to consume from.         
                                           Consumption starts from the end of   
                                           the partition unless '--offset' is   
                                           specified.                           
--property <String: prop>                The properties to initialize the       
                                           message formatter. Default           
                                           properties include:                  
                                         	print.timestamp=true|false            
                                         	print.key=true|false                  
                                         	print.value=true|false                
                                         	key.separator=<key.separator>         
                                         	line.separator=<line.separator>       
                                         	key.deserializer=<key.deserializer>   
                                         	value.deserializer=<value.            
                                           deserializer>                        
                                         Users can also pass in customized      
                                           properties for their formatter; more 
                                           specifically, users can pass in      
                                           properties keyed with 'key.          
                                           deserializer.' and 'value.           
                                           deserializer.' prefixes to configure 
                                           their deserializers.                 
--skip-message-on-error                  If there is an error when processing a 
                                           message, skip it instead of halt.    
--timeout-ms <Integer: timeout_ms>       If specified, exit if no message is    
                                           available for consumption for the    
                                           specified interval.                  
--topic <String: topic>                  The topic id to consume on.            
--value-deserializer <String:                                                   
  deserializer for values>                                                      
--whitelist <String: whitelist>          Whitelist of topics to include for     
                                           consumption.   

So the consumer command needs to be changed to

bin/kafka-console-consumer.sh --bootstrap-server hadoop132:9092 --topic first

which means: consume, from the broker at hadoop132:9092, the messages that the producer above writes to the topic first.
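
To verify the fix, a small sketch (assuming the producer from the beginning of this post already sent the two example lines shown earlier): adding --from-beginning makes the consumer replay messages that were produced before it started, so those earlier messages are printed right away.

bin/kafka-console-consumer.sh --bootstrap-server hadoop132:9092 --from-beginning --topic first
hello kafka
this is a test message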

This error appears because newer Kafka versions no longer recognize the --zookeeper option. [2] Recent Kafka releases no longer go through ZooKeeper to view or create topics; the tools take a --bootstrap-server option instead, so --zookeeper in the command has to be replaced with --bootstrap-server. For example, to list topics: bin/kafka-topics.sh --list --bootstrap-server kafkahost:9092. [3] With that change, the "joptsimple.UnrecognizedOptionException: zookeeper is not a recognized option" error no longer appears.

References:
1. Kafka error: Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper is not a ... (https://blog.youkuaiyun.com/zhangphil/article/details/123093909)
2, 3. Kafka error: Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper is not a ... (https://blog.youkuaiyun.com/xp871038951/article/details/127334561)
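
As a closing note, here is a rough old-vs-new mapping, built from the commands used in this post and the kafka-topics.sh example above (hostnames and ports are from this cluster: 2181 is ZooKeeper, 9092 is a Kafka broker; treat them as placeholders for your own setup):

# old, ZooKeeper-based options (rejected by newer releases)
bin/kafka-topics.sh --list --zookeeper hadoop132:2181
bin/kafka-console-consumer.sh --zookeeper hadoop132:2181 --from-beginning --topic first

# new, broker-based options
bin/kafka-topics.sh --list --bootstrap-server hadoop132:9092
bin/kafka-console-consumer.sh --bootstrap-server hadoop132:9092 --from-beginning --topic first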