When an Elasticsearch Node Runs Out of Disk Space

This post describes the serious consequences of a data node in an Elasticsearch cluster running out of disk space: data corruption and a RED cluster state. The logs show that the lack of disk space broke the background segment merge and corrupted the translog, which can lead to data loss. According to a community discussion, manually deleting the .recovering file may be one way to recover, though this was not verified here. Because no replica was available, there was no backup to fall back on when the primary failed. Elasticsearch relies on the local file system to store Lucene files, so disk space management is critical; it can be controlled with the cluster.routing.allocation.disk.watermark.low setting.
We recently ran into an unusual situation: a data node in one of our Elasticsearch clusters ran out of disk space. What happens then? Data gets corrupted and the cluster goes RED. In the logs quoted in this post, ES-Data_IN_11 was the master node at the time, ES-Data_IN_12 was the data node whose disk filled up, and the affected index was raw_v3.2017_03_22; I have called out the key entries. We were still running Elasticsearch 1.7.2.
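For context, the cluster state during such an incident can be checked with the standard health and cat APIs; a minimal sketch, with localhost:9200 standing in for any node of the cluster:

curl 'http://localhost:9200/_cluster/health?pretty'          # overall status: green / yellow / red
curl 'http://localhost:9200/_cat/shards/raw_v3.2017_03_22'   # per-shard state, e.g. UNASSIGNED

The relevant log entries follow.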

[2017-03-22 11:57:30,503][WARN ][index.merge.scheduler ] [ES-Data_IN_12] [raw_v3.2017_03_22][1] failed to merge

[2017-03-22 11:57:30,646][WARN ][index.engine ] [ES-Data_IN_12] [raw_v3.2017_03_22][1] failed engine [merge exception]

[2017-03-22 11:57:30,663][WARN ][indices.cluster ] [ES-Data_IN_12] [[raw_v3.2017_03_22][1]] marking and sending shard failed due to [engine failure, reason [merge exception]]

[2017-03-22 11:57:31,677][WARN ][cluster.action.shard ] [ES-Data_IN_11] [raw_v3.2017_03_22][1] received shard failed for [raw_v3.2017_03_22][1], node[h9d1tKJFRtmg1aG-VMkA4A], [P], s[STARTED], indexUUID [2Z-UtAr8Qx2h6Xe9BkxvAA], reason [shard failure [engine failure, reason [merge exception ]] [MergeException[java.io.IOException: There is not enough space on the disk]; nested: IOException[There is not enough space on the disk]; ]]

[2017-03-22 11:57:36,883][WARN ][indices.cluster          ] [ES-Data_IN_12] [[raws_v3.2017_03_22][1]] marking and sending shard failed due to [failed to create shard]
org.elasticsearch.index.shard.IndexShardCreationException: [raw_v3.2017_03_22][1] failed to create shard
Caused by: org.apache.lucene.store.LockObtainFailedException:
Can't lock shard [raw_v3.2017_03_22][1] , timed out after 5000ms

[2017-03-22 11:57:37,883][WARN ][cluster.action.shard     ] [ES-Data_IN_11] [raw_v3.2017_03_22][1] received shard failed for [raw_v3.2017_03_22][1], node[h9d1tKJFRtmg1aG-VMkA4A], [P], s[INITIALIZING], unassigned_info[[reason=ALLOCATION_FAILED], at[2017-03-22T11:57:31.677Z], details[shard failure [engine failure, reason [merge exception]][MergeException[java.io.IOException: There is not enough space on the disk]; nested: IOException[There is not enough space on the disk]; ]]], indexUUID [2Z-UtAr8Qx2h6Xe9BkxvAA], reason [shard failure [failed to create shard][IndexShardCreationException[[raw_v3.2017_03_22][1] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [raw_v3.2017_03_22][1], timed out after 5000ms]; ]]

[2017-03-22 11:58:36,308][WARN ][cluster.action.shard     ] [ES-Data_IN_11] [raw_v3.2017_03_22][1] received shard failed for [raw_v3.2017_03_22][1], node[h9d1tKJFRtmg1aG-VMkA4A], [P], s[INITIALIZING], unassigned_info[[reason=ALLOCATION_FAILED], at[2017-03-22T11:58:20.607Z], details[shard failure [failed to create shard][IndexShardCreationException[[raw_v3.2017_03_22][1] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [raw_v3.2017_03_22][1], timed out after 5000ms]; ]]], indexUUID [2Z-UtAr8Qx2h6Xe9BkxvAA], reason [shard failure [failed to create shard][IndexShardCreationException[[raw_v3.2017_03_22][1] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [raw_v3.2017_03_22][1], timed out after 5000ms]; ]]

[2017-03-22 11:59:57,286][WARN ][cluster.action.shard     ] [ES-Data_IN_11] [raw_v3.2017_03_22][1] received shard failed for [raw_v3.2017_03_22][1], node[h9d1tKJFRtmg1aG-VMkA4A], [P], s[INITIALIZING], unassigned_info[[reason=ALLOCATION_FAILED], at[2017-03-22T11:59:41.200Z], details[shard failure [failed to create shard][IndexShardCreationException[[raw_v3.2017_03_22][1] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [raw_v3.2017_03_22][1], timed out after 5000ms]; ]]], indexUUID [2Z-UtAr8Qx2h6Xe9BkxvAA], reason [master [ES-Data_IN_11][1_-exjNlSsi7h_0lJYCoRA][RD0003FF7D543F][inet[/100.117.132.72:9300]]{fault_domain=2, update_domain=11, data=false, master=true}
marked shard as initializing, but shard is marked as failed, resend shard failure ]

[2017-03-22 12:32:00,716][WARN ][index.engine             ] [ES-Data_IN_12] [raw_v3.2017_03_22][0] failed to sync translog

[2017-03-22 12:32:00,872][WARN ][indices.cluster          ] [ES-Data_IN_12] [[raw_v3.2017_03_22][0]] marking and sending shard failed due to [failed recovery] org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [raw_v3.2017_03_22][0] failed to recover shard

[2017-03-22 12:33:55,854][WARN ][cluster.action.shard     ] [ES-Data_IN_11] [raw_v3.2017_03_22][0] received shard failed for [raw_v3.2017_03_22][0], node[h9d1tKJFRtmg1aG-VMkA4A], [P], s[INITIALIZING], unassigned_info[[reason=ALLOCATION_FAILED], at[2017-03-22T12:32:01.410Z], details[shard failure [failed recovery][IndexShardGatewayRecoveryException[[raw_v3.2017_03_22][0] failed to recover shard]; nested: TranslogCorruptedException[translog corruption while reading from stream]; nested: ElasticsearchIllegalArgumentException[No version type match [46]]; ]]], indexUUID [2Z-UtAr8Qx2h6Xe9BkxvAA], reason [shard failure [failed recovery][IndexShardGatewayRecoveryException[[raw_v3.2017_03_22][0] failed to recover shard]; nested: TranslogCorruptedException[translog corruption while reading from stream]; nested: ElasticsearchIllegalArgumentException[No version type match [46]]; ]]

From the logs above it is not hard to see that when node ES-Data_IN_12 ran out of disk space, the first thing to fail was the background segment merge, because a merge needs extra space to write its merged result. The exhausted disk also corrupted the translog, and Elasticsearch apparently could not repair it on its own. According to the discussion in #12055, manually deleting the .recovering file can resolve this, but I have not tried it myself. Finally, with a replica available the outcome might well have been different. The reason our index went RED is that its only replica had never been assigned to any node since the index was created and had stayed in the unassigned state, so when the primary also failed there was no usable copy left.
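For completeness, on the ES 1.x on-disk layout that workaround from #12055 would look roughly like the sketch below. It is untested, as said above; the data path, cluster name, and node ordinal are assumptions to be adapted to the actual deployment; and deleting translog files is destructive, since any operations still sitting in that translog are lost. The node should be stopped first.

# Untested sketch of the #12055 workaround; all concrete paths are assumptions.
# Stop the Elasticsearch node before touching anything under its data directory.
DATA_DIR=/var/lib/elasticsearch                  # path.data (assumed)
SHARD_DIR="$DATA_DIR/mycluster/nodes/0/indices/raw_v3.2017_03_22/0"

ls -l "$SHARD_DIR/translog/"                     # look for leftover *.recovering files
# rm "$SHARD_DIR/translog/"*.recovering          # destructive; kept commented out on purpose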
To sum up briefly: Elasticsearch is a distributed indexing system built on Lucene, and it relies on the local file system to store the Lucene files, so storage capacity is critical to Elasticsearch. Elasticsearch provides cluster.routing.allocation.disk.watermark.low to control how full a node's disk may get before shards stop being allocated to it; the default is 85%.
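As a sketch of the corresponding configuration (the values shown are the documented 1.x defaults, and localhost:9200 is a placeholder), the watermarks can be set statically in elasticsearch.yml or changed at runtime through the cluster settings API:

# elasticsearch.yml: stop allocating new shards to a node above the low watermark,
# start moving shards away once it crosses the high watermark
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%

# Equivalent runtime update via the cluster settings API:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%"
  }
}'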




