I created a Kafka engine table in ClickHouse with the format
kafka_format = 'JSONEachRow',
then started sending JSON messages through Kafka. The first send failed because the JSON was malformed; I then resent the message several times, but no records ever appeared in the table, even though the Kafka service itself was running normally.
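For context, here is a minimal sketch of the kind of table involved. The table, column, topic, and broker names are all hypothetical; only kafka_format = 'JSONEachRow' comes from the original setup:

-- hypothetical names; only kafka_format matches the original post
CREATE TABLE kafka_events
(
    id UInt64,
    message String
)
ENGINE = Kafka
SETTINGS
    kafka_broker_list = 'localhost:9092',
    kafka_topic_list = 'events',
    kafka_group_name = 'clickhouse_events_group',
    kafka_format = 'JSONEachRow';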
Later, checking the ClickHouse error log showed that it was repeatedly reporting JSON parsing exceptions.
The likely cause is that the first, malformed message could never be processed, which blocked all the messages behind it from being consumed.
Solution:
Deleted the topic and recreated it, which fixed the problem.
There should be a setting that can avoid this kind of issue; I'll look into it when I have time.
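One likely candidate is the Kafka engine's kafka_skip_broken_messages setting, which tells ClickHouse to skip up to N unparseable messages per block instead of retrying the bad one forever. A sketch, reusing the hypothetical table above:

CREATE TABLE kafka_events
(
    id UInt64,
    message String
)
ENGINE = Kafka
SETTINGS
    kafka_broker_list = 'localhost:9092',
    kafka_topic_list = 'events',
    kafka_group_name = 'clickhouse_events_group',
    kafka_format = 'JSONEachRow',
    -- skip up to 10 malformed messages per block rather than blocking
    kafka_skip_broken_messages = 10;

The stack trace from the ClickHouse error log: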
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xd3493fc in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x5d02aa9 in /usr/bin/clickhouse
2. ? @ 0x599a2a8 in /usr/bin/clickhouse
3. DB::JSONEachRowRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xaa82545 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xaf75aa9 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xa9d0c97 in /usr/bin/clickhouse
6. ? @ 0xafce755 in /usr/bin/clickhouse
7. DB::KafkaBlockInputStream::readImpl() @ 0xafcf61c in /usr/bin/clickhouse
8. DB::IBlockInputStream::read() @ 0xa1ee01d in /usr/bin/clickhouse
9. DB::copyData(DB::IBlockInputStream&, DB::IBlockOutputStream&, std::__1::atomic<bool>*) @ 0xa20b78a in /usr/bin/clickhouse
10. DB::StorageKafka::streamToViews() @ 0xabe139d in /usr/bin/clickhouse
11. DB::StorageKafka::threadFunc() @ 0xabe1d48 in /usr/bin/clickhouse
12. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0xac0a2a2 in /usr/bin/clickhouse
13. DB::BackgroundSchedulePool::threadFunction() @ 0xac0a6ca in /usr/bin/clickhouse
14. ? @ 0xac0a7cf in /usr/bin/clickhouse
15. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x5d0c60d in /usr/bin/clickhouse
16. ? @ 0x5d0acdf in /usr/bin/clickhouse
17. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
18. __clone @ 0xfe02d in /usr/lib64/libc-2.17.so
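The DB::StorageKafka::streamToViews frame in the trace suggests the usual consumption pipeline: a materialized view pulls rows from the Kafka engine table into a regular MergeTree table, and background consumption only runs while such a view is attached. A sketch of that setup, again with hypothetical names:

-- destination table that actually stores the consumed rows
CREATE TABLE events
(
    id UInt64,
    message String
)
ENGINE = MergeTree
ORDER BY id;

-- the materialized view acts as the consumer, copying each block
-- read from kafka_events into the events table
CREATE MATERIALIZED VIEW events_consumer TO events
AS SELECT id, message FROM kafka_events;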
Later I saw that someone else had run into this problem before and described it in more detail:
Clickhouse Kafka引擎表使用进阶