Flink 1.14 Kafka connector issues

Versions

Flink 1.14.5
Kafka 2.2.1

KafkaSource

When specifying starting offsets for KafkaSource, only earliest and directly specified offsets behave as expected; the latest and timestamp initializers do not take effect, and the consumer log keeps resetting the offset to the current one. I could not find the cause in the source code, so as a fallback I fetch the consumer group's offsets ahead of time and pass them in explicitly.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

// `prop` is a pre-configured Properties field (bootstrap servers, key/value
// deserializers); OffsetUtils and OffestEnum are this project's own helpers.
public static OffsetsInitializer getTopicPartitions(OffsetUtils offsetUtil, String groupId, String... topics) {
    prop.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    if (OffestEnum.EARLIEST.equals(offsetUtil.getOffestEnum())) {
        return OffsetsInitializer.earliest();
    } else if (OffestEnum.LATEST.equals(offsetUtil.getOffestEnum())) {
        // Resume from the group's committed offsets; if any partition has no
        // committed offset yet, fall back to the end offsets.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(prop)) {
            Map<TopicPartition, Long> startingOffsets = new HashMap<>();
            for (String topic : topics) {
                Set<TopicPartition> partitions = new HashSet<>();
                for (PartitionInfo partitionInfo : consumer.partitionsFor(topic)) {
                    partitions.add(new TopicPartition(topic, partitionInfo.partition()));
                }
                Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(partitions);
                if (committed.values().stream().noneMatch(Objects::isNull)) {
                    committed.forEach((tp, meta) -> startingOffsets.put(tp, meta.offset()));
                } else {
                    System.out.println("No committed offset found for this group; consider passing a timestamp instead.");
                    startingOffsets.putAll(consumer.endOffsets(partitions));
                }
            }
            startingOffsets.forEach((tp, offset) ->
                    System.out.println(tp.topic() + ":" + tp.partition() + ":" + offset));
            return OffsetsInitializer.offsets(startingOffsets);
        }
    } else if (OffestEnum.TIMESTAMP.equals(offsetUtil.getOffestEnum())) {
        // Look up each partition's offset at the requested timestamp; if the
        // timestamp is out of range for any partition, fall back to the end offsets.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(prop)) {
            Map<TopicPartition, Long> startingOffsets = new HashMap<>();
            for (String topic : topics) {
                List<TopicPartition> partitions = new ArrayList<>();
                Map<TopicPartition, Long> timestamps = new HashMap<>();
                for (PartitionInfo partitionInfo : consumer.partitionsFor(topic)) {
                    TopicPartition tp = new TopicPartition(topic, partitionInfo.partition());
                    partitions.add(tp);
                    timestamps.put(tp, offsetUtil.getDateO());
                }
                Map<TopicPartition, OffsetAndTimestamp> byTime = consumer.offsetsForTimes(timestamps);
                if (byTime.values().stream().anyMatch(Objects::isNull)) {
                    System.out.println("No offset found for the timestamp; consuming from the latest offsets instead.");
                    startingOffsets.putAll(consumer.endOffsets(partitions));
                } else {
                    byTime.forEach((tp, oat) -> startingOffsets.put(tp, oat.offset()));
                }
            }
            startingOffsets.forEach((tp, offset) ->
                    System.out.println(tp.topic() + ":" + tp.partition() + ":" + offset));
            return OffsetsInitializer.offsets(startingOffsets);
        }
    }
    throw new IllegalArgumentException("Unsupported offset mode: " + offsetUtil.getOffestEnum());
}

Passing the Kafka offsets to the source this way solves the problem.
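For reference, here is a minimal sketch of wiring the returned initializer into a KafkaSource on the Flink 1.14 API; the topic, bootstrap servers, group id, and the `offsetUtil` instance are illustrative placeholders:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Build the source with our precomputed starting offsets instead of the
// built-in latest()/timestamp() initializers.
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("broker1:9092")   // placeholder
        .setTopics("my-topic")                 // placeholder
        .setGroupId("my-group")                // placeholder
        .setStartingOffsets(getTopicPartitions(offsetUtil, "my-group", "my-topic"))
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();

DataStream<String> stream =
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");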

KafkaSink

When we used the KafkaSource shown above without fetching the offsets up front, the Kafka sink wrote only some of the messages, and when the same message was written at different times its timestamp never changed: it always stayed the time that record first entered Kafka. This was strange, and in the end we had to hand-write a KafkaProducer and set the timestamp manually. After switching to fetching the offsets ourselves and passing them via KafkaSource's setStartingOffsets, this behavior has not reappeared so far.
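For reference, a minimal sketch of the manual workaround, assuming a plain KafkaProducer and an explicit per-record timestamp; the topic, servers, key, and value are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");  // placeholder
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    // The five-argument ProducerRecord constructor takes an explicit timestamp
    // (ms), so each write carries the current wall-clock time rather than a
    // stale one.
    producer.send(new ProducerRecord<>("my-topic", null, System.currentTimeMillis(), "key", "value"));
}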
