Batch processing:
When working with a large volume of data, inserting or deleting everything in a single statement puts pressure on the database (among other problems), so we process it in batches:
private static final int DELETE_COUNT_LIMIT = 2000;

if (!CollectionUtils.isEmpty(existTopicIds)) {
    if (topicIds.size() > DELETE_COUNT_LIMIT) {
        int size = topicIds.size() - 1;
        // number of batches: full batches plus one more if there is a remainder
        int subIndex = size / DELETE_COUNT_LIMIT;
        subIndex = subIndex + (size % DELETE_COUNT_LIMIT > 0 ? 1 : 0);
        for (int x = 0; x < subIndex; x++) {
            int startIndex = x * DELETE_COUNT_LIMIT;
            int endIndex = (x + 1) * DELETE_COUNT_LIMIT;
            // clamp the last batch so subList never runs past the end
            List<Integer> list = topicIds.subList(startIndex, endIndex > size ? size + 1 : endIndex);
            regionHiddenConfigWriteMapper.deleteByTopicIds(list);
            topicHiddenForPurchasedWriteMapper.deleteByTopicIds(list);
            log.info("Adding data by source: deleting existing rows first, size={} in total, delete pass x={}, topic list={}",
                    topicIds.size(), x + 1, list);
        }
    } else {
        regionHiddenConfigWriteMapper.deleteByTopicIds(topicIds);
        topicHiddenForPurchasedWriteMapper.deleteByTopicIds(topicIds);
        log.info("Adding data by source: deleting existing rows first, size={} in total, topicIds={}",
                topicIds.size(), topicIds);
    }
}
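The index arithmetic above is easy to get wrong. A minimal sketch of the same batching idea, pulled out into a reusable helper (the class and method names here are hypothetical, not from the original code):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDelete {
    private static final int DELETE_COUNT_LIMIT = 2000;

    /** Split a list into consecutive sublists of at most batchSize elements. */
    static <T> List<List<T>> partition(List<T> list, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int start = 0; start < list.size(); start += batchSize) {
            // Math.min clamps the final batch instead of manual index checks
            int end = Math.min(start + batchSize, list.size());
            batches.add(new ArrayList<>(list.subList(start, end)));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 4500; i++) ids.add(i);
        List<List<Integer>> batches = partition(ids, DELETE_COUNT_LIMIT);
        // 4500 ids at a limit of 2000 -> batches of 2000, 2000, 500
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 500
    }
}
```

Each sublist can then be passed to the mapper's delete method in a loop, which keeps the clamping logic in one tested place. (Guava's Lists.partition provides the same behavior if the project already depends on it.)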
Other approaches:
https://blog.youkuaiyun.com/lxxc11/article/details/52817817?winzoom=1
https://www.cnblogs.com/lewisat/p/4339748.html