Symptom
When writing data to Elasticsearch with the default global index template, string fields are mapped as keyword (i.e. not_analyzed) so they can be used for aggregations, and a `smart` sub-field is added for analyzed, full-text search. A non-analyzed field, however, is limited by the UTF-8 byte length of its value: a single term may be at most 32766 bytes. When a field value exceeds this limit, the write fails with the following error:
java.lang.IllegalArgumentException: Document contains at least one immense term in field="groupTemplateValue"
(whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer
to not produce such terms. The prefix of the first immense term is: '[102, 114, 111, 109, 32, 58, 32, 42, 32, 44, 10,
42, 32, 58, 32, 36, 78, 85, 77, 32, 44, 10, 98, 111, 111, 108, 32, 58, 32, 123]...', original message: bytes can be at most
32766 in length; got 102464
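Note that the limit applies to the UTF-8 encoded byte length of a single term, not the character count, so multi-byte characters hit it sooner. A minimal sketch of the check (the 102464-byte value mirrors the error log above):

```python
# Lucene rejects any single non-analyzed (keyword) term whose
# UTF-8 encoding exceeds 32766 bytes.
MAX_TERM_BYTES = 32766

def exceeds_term_limit(value: str) -> bool:
    """Return True if `value`, indexed as one keyword term, would be rejected."""
    return len(value.encode("utf-8")) > MAX_TERM_BYTES

big_value = "x" * 102464          # same size as reported in the log
print(exceeds_term_limit(big_value))   # → True
print(exceeds_term_limit("short"))     # → False
# CJK characters are 3 bytes each in UTF-8, so far fewer of them
# are needed to cross the limit:
print(exceeds_term_limit("汉" * 11000))  # 33000 bytes → True
```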
Solution
Since the maximum-length limit only applies when a field is mapped as not_analyzed, the fix is to map the oversized fields as analyzed text instead. The template is modified as follows:
PUT _template/chameleon02_templatelog04tmplate
{
  "template": "chameleon02_templatelog04*",
  "settings": {
    "index": {
      "routing": {
        "allocation": {
          "require": {
            "box_type": "warm"
          }
        }
      },
      "refresh_interval": "2s",
      "number_of_shards": "5",
      "translog": {
        "sync_interval": "60s",
        "durability": "async"
      },
      "number_of_replicas": "1"
    }
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "message_field": {
            "path_match": "@message",
            "mapping": {
              "norms": false,
              "type": "text"
            },
            "match_mapping_type": "string"
          }
        },
        {
          "message_field": {
            "path_match": "groupTemplateValue",
            "mapping": {
              "norms": false,
              "type": "text"
            },
            "match_mapping_type": "string"
          }
        },
        {
          "message_field": {
            "path_match": "groupTemplateSeq",
            "mapping": {
              "norms": false,
              "type": "text"
            },
            "match_mapping_type": "string"
          }
        },
        {
          "message_field": {
            "path_match": "groupValue",
            "mapping": {
              "norms": false,
              "type": "text"
            },
            "match_mapping_type": "string"
          }
        },
        {
          "message_field": {
            "path_match": "content",
            "mapping": {
              "norms": false,
              "type": "text"
            },
            "match_mapping_type": "string"
          }
        },
        {
          "message_field": {
            "path_match": "rawLog",
            "mapping": {
              "norms": false,
              "type": "text"
            },
            "match_mapping_type": "string"
          }
        },
        {
          "message_field": {
            "path_match": "templateValue",
            "mapping": {
              "norms": false,
              "type": "text"
            },
            "match_mapping_type": "string"
          }
        },
        {
          "string_fields": {
            "mapping": {
              "type": "keyword",
              "fields": {
                "smart": {
                  "norms": false,
                  "type": "text"
                }
              }
            },
            "match_mapping_type": "string",
            "match": "*"
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        }
      }
    }
  },
  "aliases": {}
}
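An alternative, if keyword semantics (exact-match filtering and aggregation) must be kept for a field that only occasionally carries oversized values, is the `ignore_above` mapping parameter: Elasticsearch then skips indexing keyword values longer than the given length instead of failing the whole document. A sketch of the catch-all dynamic template with this setting (the value 8191 is an assumption: `ignore_above` counts characters while Lucene counts bytes, so 32766 / 4 bytes per worst-case UTF-8 character is a safe bound):

```json
{
  "string_fields": {
    "mapping": {
      "type": "keyword",
      "ignore_above": 8191,
      "fields": {
        "smart": {
          "norms": false,
          "type": "text"
        }
      }
    },
    "match_mapping_type": "string",
    "match": "*"
  }
}
```

With this variant, overlong values remain searchable through the analyzed `smart` sub-field but are simply absent from keyword-based aggregations, rather than rejecting the write.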
This article covered the Elasticsearch write failure caused by a single field value exceeding 32766 bytes, and resolved it by changing the affected fields' mappings from non-analyzed keyword to analyzed text, so that overly long values no longer fail indexing.