- ES version is 2.x; the same reasoning applies to other versions.
- Data is shipped to ES with Logstash, and the fields defined in the `_template` default to not_analyzed. Every log entry is a request/response pair, but the request log entries turned out to be missing in the application.
- The ES logs showed the following:
```
java.lang.IllegalArgumentException: Document contains at least one immense term in field="message" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[60, 63, 120, 109, 108, 32, 118, 101, 114, 115, 105, 111, 110, 61, 34, 49, 46, 48, 34, 32, 101, 110, 99, 111, 100, 105, 110, 103, 61, 34]...', original message: bytes can be at most 32766 in length; got 60465
    at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:692)
    at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:365)
    at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:321)
    at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:234)
    at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:450)
    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1477)
    at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1256)
    at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:434)
    at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:375)
    at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:346)
    at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:545)
    at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:810)
    at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:236)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:327)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:120)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:68)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:648)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:271)
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 60465
    at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)
    at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:150)
    at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:682)
    ... 25 more
```
`bytes can be at most 32766 in length`: a single indexed term can be at most 32766 *bytes* of UTF-8, and a not_analyzed (keyword) field indexes the entire value as one term, so long values hit this limit. An analyzed (text) field has no such limit on the overall value, because it is broken into smaller tokens. With `ignore_above` set, values longer than the configured length are simply not indexed, so an exact `term` query cannot match them.
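Note that the limit counts UTF-8 bytes, not characters, so multi-byte text (such as Chinese) reaches it with far fewer characters than ASCII does. A quick illustration in Python:

```python
# Lucene rejects any single indexed term whose UTF-8 encoding
# exceeds 32766 bytes.
LUCENE_MAX_TERM_BYTES = 32766

ascii_term = "a" * 33000   # 33000 characters -> 33000 bytes
cjk_term = "中" * 11000    # 11000 characters -> 33000 bytes (3 bytes each)

for term in (ascii_term, cjk_term):
    size = len(term.encode("utf-8"))
    print(len(term), size, size > LUCENE_MAX_TERM_BYTES)
```

Both values blow past the limit, even though the CJK string has only a third as many characters.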
- Modify the `_template` so that this field is analyzed; that resolves the problem.
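A minimal sketch of such a template change, using ES 2.x syntax (the host, template name, index pattern, and mapping type here are illustrative, not the author's actual values):

```shell
# Sketch (ES 2.x syntax; names are illustrative). Mapping "message"
# as an analyzed string splits the value into tokens at index time,
# so no single indexed term can exceed Lucene's 32766-byte limit.
curl -XPUT 'http://localhost:9200/_template/logstash' -d '
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "message": {
          "type": "string",
          "index": "analyzed"
        }
      }
    }
  }
}'
```

Index templates only apply to indices created after the change, so with daily Logstash indices the fix takes effect from the next index onward.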
Checking the logs again, the error no longer appears, and the data can be queried from the application.
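If the field must stay not_analyzed (for exact matching), the `ignore_above` setting is an alternative: oversized values are kept in `_source` but not indexed, instead of failing the whole bulk request. A sketch in ES 2.x syntax (names and threshold are illustrative):

```shell
# Alternative sketch (ES 2.x syntax; names are illustrative): keep
# the field not_analyzed but skip indexing values longer than
# ignore_above. The setting counts characters, and a UTF-8
# character can take up to 4 bytes, so 32766 / 4 = 8191 is a
# conservative threshold.
curl -XPUT 'http://localhost:9200/_template/logstash' -d '
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "message": {
          "type": "string",
          "index": "not_analyzed",
          "ignore_above": 8191
        }
      }
    }
  }
}'
```

Values above the threshold are still stored and returned in search hits, but exact `term` queries will not match them.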
Reference:
https://www.elastic.co/guide/en/elasticsearch/reference/6.5/ignore-above.html