Notes on upgrading ELK from 5.6.10 to 6.7.0


The product I work on was running ELK 5.6.10. With ELK 7 already released, that version was quite old and some of its serious vulnerabilities were no longer being patched, so the company required an upgrade and I looked into what an ELK upgrade involves.
The original plan was to go straight to 7, but Spring Data Elasticsearch only supported up to ELK 6.8 at the time, so version 7 was not usable for us; in the end we settled on 6.7.0.
The architecture I use is filebeat + logstash + elasticsearch. Kibana was upgraded as well, but the project does not currently use it, so I did not look into it in depth.

Upgrade order

elasticsearch -> logstash -> filebeat
The most important piece is Elasticsearch, because it holds the existing (old) indices, so upgrade it first. The official docs also state that Elasticsearch 6 accepts input from lower-version Logstash, which means you can upgrade Elasticsearch on its own without upgrading Logstash at the same time. Upgrading in this order therefore neither loses in-flight data nor breaks the old indices.
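
Before starting, it is worth confirming the current version and that the cluster is healthy. A minimal check, assuming the default local address and port:
curl -X GET "localhost:9200/?pretty"                  #prints the running Elasticsearch version
curl -X GET "localhost:9200/_cluster/health?pretty"   #the status should not be red before you begin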

Elasticsearch upgrade

According to the official docs, you can upgrade directly from 5.x to 6.x; upgrading from 5.x to 7.x, however, requires going to 6 first and then to 7, which is worth keeping in mind.
The official docs cover the upgrade steps in full, so I will only list what I actually ran:

  1. Disable shard allocation
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
'
  2. Stop non-essential indexing and perform a synced flush
curl -X POST "localhost:9200/_flush/synced?pretty"
  3. Stop all running machine learning jobs (I have no machine learning jobs, so this was not needed in my case)
curl -X POST "localhost:9200/_ml/set_upgrade_mode?enabled=true&pretty"
  4. Stop the service (non-root users need sudo)
service elasticsearch stop #AS6
systemctl stop elasticsearch.service #AS7
  5. Install the upgrade package
rpm -Uvh elasticsearch-6.7.0.rpm
  6. If you have plugins, upgrade them to the matching version (I use the IK analyzer)
rm -rf ...../elasticsearch/plugins/ik #delete the old plugin contents under the install directory's plugins folder
unzip -o elasticsearch-analysis-ik-6.7.0.zip -d ..../elasticsearch/plugins/ik
  7. Re-enable shard allocation (restart Elasticsearch first and wait until it is fully up before running this; see the health check after this list)
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
'
  8. Re-enable machine learning jobs
curl -X POST "localhost:9200/_ml/set_upgrade_mode?enabled=false&pretty"

At this point Elasticsearch itself is upgraded. So far things were fairly smooth and the old indices still worked, but new indices could not be created, because the Logstash index templates used for ingestion were still the ones written for 5.6.10. Adjusting them took quite a while; here are the errors I ran into:

[2019-10-14T15:29:25,208][DEBUG][o.e.a.b.TransportShardBulkAction] [Centos6x64] [riskip_2019-10-14][2] failed to execute bulk item (index) index {[riskip_2019-10-14][imap][SooryW0BpHApaHjh-P3R]
...
java.lang.IllegalArgumentException: Rejecting mapping update to [riskip_2019-10-14] as the final mapping would have more than 1 type: [imap, pop3]
        at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:459) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:403) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:338) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:330) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:231) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:643) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:270) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:200) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:135) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-6.7.0.jar:6.7.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]

This happens because Elasticsearch 6 removed support for multiple mapping types: every document now goes under a single type, doc. If the template still contains a _default_ mapping, indexing fails with this error because more than one type would end up in the same index, so the writes are rejected. The fix is to change _default_ in the template to doc.
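
For illustration, a corrected template declares its mappings under the single type doc. This is only a sketch: the template name riskip_tmpl, the index pattern and the field are placeholders, not the actual template from my setup.
curl -X PUT "localhost:9200/_template/riskip_tmpl?pretty" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["riskip_*"],
  "mappings": {
    "doc": {
      "properties": {
        "message": { "type": "text" }
      }
    }
  }
}
'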

[2019-10-14T15:35:44,523][DEBUG][o.e.a.a.i.t.p.TransportPutIndexTemplateAction] [Centos6x64] failed to put template [mta_tmpl]
org.elasticsearch.index.mapper.MapperParsingException: Failed to parse mapping [properties]: Root mapping definition has unsupported parameters: 
.....
        at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:399) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:330) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService.validateAndAddTemplate(MetaDataIndexTemplateService.java:253) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService.access$300(MetaDataIndexTemplateService.java:65) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService$2.execute(MetaDataIndexTemplateService.java:176) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:643) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:270) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:200) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:135) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-6.7.0.jar:6.7.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]

This is again a template problem. Note that 6.x no longer accepts the string field type: string was split into text (analyzed) and keyword (not analyzed), so every string in the template has to be converted; in my case I changed them to text.
It is convenient to make these template adjustments in Kibana, iterating on the error messages that come back.
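
As a sketch of the string conversion (the field names below are made up for illustration): unanalyzed strings become keyword and analyzed strings become text.
5.x:  "client_ip": { "type": "string", "index": "not_analyzed" }
6.x:  "client_ip": { "type": "keyword" }
5.x:  "message":   { "type": "string" }
6.x:  "message":   { "type": "text" }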

Logstash upgrade

Logstash is easier than Elasticsearch, with far fewer steps; as long as the templates above have been fixed, the upgrade usually goes through smoothly.

  1. Stop the service (check that the process has really exited; you can kill it directly if needed, see the sketch after this list)
  2. Install the upgrade package
rpm -Uvh logstash-6.7.0.rpm
  3. Upgrade the corresponding plugins
  4. Restart the Logstash service
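
For step 1, a quick way to make sure the old process is really gone (a shell sketch; the service name assumes the RPM install):
service logstash stop                      #or: systemctl stop logstash.service
ps -ef | grep -v grep | grep logstash      #nothing should be listed; if a stray process remains, kill it
#kill -9 <pid>                             #last resort for a process that will not exit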

After the restart, keep an eye on the Logstash log for errors and fix anything it reports promptly.
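
For example (assuming the default log location of the RPM install):
tail -f /var/log/logstash/logstash-plain.log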

Filebeat upgrade

Same as the Logstash upgrade: stop the service and install the upgrade package.
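The commands mirror the Logstash steps (a sketch, assuming the RPM distribution):
service filebeat stop            #or: systemctl stop filebeat.service
rpm -Uvh filebeat-6.7.0.rpm
service filebeat start           #or: systemctl start filebeat.service
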
A few things worth noting:

  1. After upgrading Logstash, Filebeat connections kept being closed due to inactivity timeouts. I am not sure whether this always happens, but I recommend adding this setting:
input {
    beats {
        #add this setting to increase the idle timeout
        client_inactivity_timeout => 1200
        port => "${TCP_PORT:5043}"
    }
}

  2. After upgrading Filebeat, indexing failed because of changes to its default index naming and event fields (see the official docs for details); the error looks like this:
[2019-10-15T13:48:45,277][DEBUG][o.e.a.b.TransportShardBulkAction] [Centos6x64] [da_2019-10-15][2] failed to execute bulk item (index) index {[da_2019-10-15][doc][Utv2zW0BViIDd0UUKzBa],
......
org.elasticsearch.index.mapper.MapperParsingException: failed to parse field [host] of type [text] in document with id 'Utv2zW0BViIDd0UUKzBa'
        at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:303) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:488) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:505) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:395) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:384) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:96) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:69) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:281) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:799) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:775) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:744) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.lambda$executeIndexRequestOnPrimary$3(TransportShardBulkAction.java:454) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.executeOnPrimaryWhileHandlingMappingUpdates(TransportShardBulkAction.java:477) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:452) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:216) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:159) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:151) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:139) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:79) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:1050) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:1028) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:105) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.runWithPrimaryShardReference(TransportReplicationAction.java:424) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.lambda$doRun$0(TransportReplicationAction.java:370) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:61) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:273) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:240) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationPermit(IndexShard.java:2561) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryOperationPermit(TransportReplicationAction.java:987) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:369) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:324) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:311) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:250) [x-pack-security-6.7.0.jar:6.7.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:308) [x-pack-security-6.7.0.jar:6.7.0]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:686) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) [elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.7.0.jar:6.7.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
Caused by: java.lang.IllegalStateException: Can't get text on a START_OBJECT at 1:595
        at org.elasticsearch.common.xcontent.json.JsonXContentParser.text(JsonXContentParser.java:85) ~[elasticsearch-x-content-6.7.0.jar:6.7.0]
        at org.elasticsearch.common.xcontent.support.AbstractXContentParser.textOrNull(AbstractXContentParser.java:269) ~[elasticsearch-x-content-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.TextFieldMapper.parseCreateField(TextFieldMapper.java:828) ~[elasticsearch-6.7.0.jar:6.7.0]
        at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:297) ~[elasticsearch-6.7.0.jar:6.7.0]
        ... 42 more

Either change the index naming scheme, or add the following to the logstash.conf configuration:

mutate {
        #drop the structured host object that Filebeat 6.x sends
        remove_field => [ "[host]" ]
}
mutate {
      #re-add host as a plain string taken from beat.hostname
      add_field => {
      "host" => "%{[beat][hostname]}"
      }
}

With everything restarted, data is ingested and can be queried normally again. One caveat: queries that previously filtered results by mapping type need to be rewritten, because the new indices contain only the single type doc. The documents do, however, carry a "type": "log" field, so you can restrict results with a terms clause on that field instead (see the example below).
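
For example, a search that used to rely on the mapping type can filter on that data field instead (a sketch; depending on how the field is mapped you may need type.keyword rather than type):
curl -X GET "localhost:9200/da_2019-10-15/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "terms": { "type": [ "log" ] }
  }
}
'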
