How to make Lucene indexing faster

This article collects a number of ways to speed up Lucene indexing, including using the latest version, indexing on a local filesystem, giving the writer more RAM, and setting the merge factor sensibly, and walks through the concrete steps for each.

Here are some things to try to speed up indexing in your Lucene application. Please see ImproveSearchingSpeed for how to speed up searching.

  • Be sure you really need to speed things up. Many of the ideas here are simple to try, but others will necessarily add some complexity to your application. So be sure your indexing speed is indeed too slow and the slowness is indeed within Lucene.

  • Make sure you are using the latest version of Lucene.

  • Use a local filesystem. Remote filesystems are typically quite a bit slower for indexing. If your index needs to be on a remote filesystem, consider building it first on the local filesystem and then copying it up to the remote filesystem.

  • Get faster hardware, especially a faster IO system. If possible, use a solid-state disk (SSD). These devices have come down substantially in price recently, and their much lower seek cost can yield a very sizable speedup in cases where the index cannot fit entirely in the OS's IO cache.

  • Open a single writer and re-use it for the duration of your indexing session.

  • Flush by RAM usage instead of document count.

    For Lucene <= 2.2: call writer.ramSizeInBytes() after every added doc, then call flush() when the writer is using too much RAM. This is especially good if you have small docs or highly variable doc sizes. You need to first set maxBufferedDocs large enough to prevent the writer from flushing based on document count. However, don't set it too large, otherwise you may hit LUCENE-845. Somewhere around 2-3X your "typical" flush count should be OK.

    For Lucene >= 2.3: IndexWriter can flush according to RAM usage itself. Call writer.setRAMBufferSizeMB() to set the buffer size. Be sure you don't also have any leftover calls to setMaxBufferedDocs, since the writer will flush on "either or" (whichever comes first).
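
    For example, a minimal sketch of both approaches on an already-opened writer; the 48 MB figure is just the LUCENE-843 sweet spot, not a universal value:

      // Lucene >= 2.3: let the writer flush by RAM usage itself
      writer.setMaxBufferedDocs(IndexWriter.DISABLE_AUTO_FLUSH); // no flushing by doc count
      writer.setRAMBufferSizeMB(48.0); // tune for your own content

      // Lucene <= 2.2: flush manually after each added document
      writer.setMaxBufferedDocs(10000); // large, but only ~2-3X your typical flush count (LUCENE-845)
      writer.addDocument(doc);
      if (writer.ramSizeInBytes() > 48 * 1024 * 1024) // assumed RAM budget
        writer.flush();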

  • Use as much RAM as you can afford.

    More RAM before flushing means Lucene writes larger segments to begin with, which means less merging later. Testing in LUCENE-843 found that around 48 MB is the sweet spot for that content set, but your application could have a different sweet spot.

  • Turn off compound file format.

    Call setUseCompoundFile(false). Building the compound file format takes time during indexing (7-33% in testing for LUCENE-888). However, note that doing this will greatly increase the number of file descriptors used by indexing and by searching, so you could run out of file descriptors if mergeFactor is also large.
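
    The call itself is a one-liner on an open writer (a 2.3-era sketch); if you do this, keep mergeFactor modest or raise your process's file descriptor limit:

      writer.setUseCompoundFile(false); // faster indexing, but many more file descriptors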

  • Re-use Document and Field instances

    As of Lucene 2.3 there are new setValue(...) methods that allow you to change the value of a Field. This allows you to re-use a single Field instance across many added documents, which can save substantial GC cost. It's best to create a single Document instance, then add multiple Field instances to it, but hold onto these Field instances and re-use them by changing their values for each added document. For example you might have an idField, bodyField, nameField, storedField1, etc. After the document is added, you then directly change the Field values (idField.setValue(...), etc.), and then re-add your Document instance.

    Note that you cannot re-use a single Field instance within a Document, and you should not change a Field's value until the Document containing that Field has been added to the index. See Field for details.
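
    A minimal sketch of the pattern, assuming the 2.3-era API; the field names and the moreDocs()/nextId()/nextBody() sources are illustrative:

      Document doc = new Document();
      Field idField   = new Field("id", "", Field.Store.YES, Field.Index.UN_TOKENIZED);
      Field bodyField = new Field("body", "", Field.Store.NO, Field.Index.TOKENIZED);
      doc.add(idField);
      doc.add(bodyField);
      while (moreDocs()) {            // hypothetical document source
        idField.setValue(nextId());   // safe: any previous addDocument has completed
        bodyField.setValue(nextBody());
        writer.addDocument(doc);      // same Document, same field order every time
      }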

  • Always add fields in the same order to your Document, when using stored fields or term vectors

    Lucene's merging has an optimization whereby stored fields and term vectors can be bulk-byte-copied, but the optimization only applies if the field name -> number mapping is the same across segments. Future Lucene versions may attempt to assign the same mapping automatically (see LUCENE-1737), but until then the only way to get the same mapping is to always add the same fields in the same order to each document you index.

  • Re-use a single Token instance in your analyzer

    Analyzers often create a new Token for each term in sequence that needs to be indexed from a Field. You can save substantial GC cost by re-using a single Token instance instead.

  • Use the char[] API in Token instead of the String API to represent token text

    As of Lucene 2.3, a Token can represent its text as a slice into a char array, which saves the GC cost of new'ing and then reclaiming String instances. By re-using a single Token instance and using the char[] API you can avoid new'ing any objects for each term. See Token for details.
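
    A sketch of a 2.3-style TokenFilter that follows both of these tips, reusing the caller's Token and mutating its term buffer in place (the class name is made up for illustration):

      import java.io.IOException;
      import org.apache.lucene.analysis.Token;
      import org.apache.lucene.analysis.TokenFilter;
      import org.apache.lucene.analysis.TokenStream;

      public final class ReusingLowerCaseFilter extends TokenFilter {
        public ReusingLowerCaseFilter(TokenStream in) {
          super(in);
        }
        public Token next(Token reusableToken) throws IOException {
          Token t = input.next(reusableToken); // fills the caller's Token when possible
          if (t == null)
            return null;
          char[] buf = t.termBuffer();         // work on the char[] directly,
          int len = t.termLength();            // no String is ever created
          for (int i = 0; i < len; i++)
            buf[i] = Character.toLowerCase(buf[i]);
          return t;
        }
      }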

  • Use autoCommit=false when you open your IndexWriter

    In Lucene 2.3 there are substantial optimizations for Documents that use stored fields and term vectors, to save merging of these very large index files. You should see the best gains by using autoCommit=false for a single long-running session of IndexWriter. Note however that searchers will not see any of the changes flushed by this IndexWriter until it is closed; if that is important you should stick with autoCommit=true instead or periodically close and re-open the writer.
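
    A sketch using the 2.3-era constructor that takes autoCommit (dir and analyzer are assumed to exist already):

      IndexWriter writer = new IndexWriter(dir, false, analyzer, true); // autoCommit=false, create=true
      // ... one long-running indexing session: many addDocument(...) calls ...
      writer.close(); // only now do searchers see the changes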

  • Instead of indexing many small text fields, aggregate the text into a single "contents" field and index only that (you can still store the other fields).
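
    For instance, a sketch with hypothetical title/author/body strings:

      StringBuffer contents = new StringBuffer();
      contents.append(title).append(' ').append(author).append(' ').append(body);
      doc.add(new Field("contents", contents.toString(), Field.Store.NO, Field.Index.TOKENIZED));
      doc.add(new Field("title", title, Field.Store.YES, Field.Index.NO)); // still stored, just not indexed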

  • Increase mergeFactor, but not too much.

    A larger mergeFactor defers merging of segments until later, thus speeding up indexing because merging is a large part of indexing. However, this will slow down searching, and you will run out of file descriptors if you make it too large. Values that are too large may even slow down indexing, since merging more segments at once means much more seeking on the hard drive.
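
    A sketch; 10 is the default, and the right value is something to find empirically:

      writer.setMergeFactor(20); // defer merging; watch search speed and file descriptors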

  • Turn off any features you are not in fact using. If you are storing fields but not using them at query time, don't store them. Likewise for term vectors. If you are indexing many fields, turning off norms for those fields may help performance.
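
    For example, in the 2.3-era API a field can be indexed without norms via Field.Index.NO_NORMS (which in that API also skips tokenization):

      doc.add(new Field("category", category, Field.Store.NO, Field.Index.NO_NORMS));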

  • Use a faster analyzer.

    Sometimes analysis of a document takes a lot of time. For example, StandardAnalyzer is quite time consuming, especially in Lucene versions <= 2.2. If you can get by with a simpler analyzer, then try it.

  • Speed up document construction. Often the process of retrieving a document from somewhere external (database, filesystem, crawled from a Web site, etc.) is very time consuming.

  • Don't optimize... ever.

  • Use multiple threads with one IndexWriter. Modern hardware is highly concurrent (multi-core CPUs, multi-channel memory architectures, native command queuing in hard drives, etc.) so using more than one thread to add documents can give good gains overall. Even on older machines there is often still concurrency to be gained between IO and CPU. Test the number of threads to find the best performance point.
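
    A minimal sketch, assuming a thread-safe, hypothetical nextDocument() supplying your content:

      Thread[] threads = new Thread[4]; // tune the thread count empirically
      for (int i = 0; i < threads.length; i++) {
        threads[i] = new Thread() {
          public void run() {
            try {
              Document doc;
              while ((doc = nextDocument()) != null) // hypothetical document source
                writer.addDocument(doc);             // the shared writer needs no external locking
            } catch (IOException e) {
              throw new RuntimeException(e);
            }
          }
        };
        threads[i].start();
      }
      for (int i = 0; i < threads.length; i++) {
        try {
          threads[i].join(); // wait for all indexing threads to finish
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }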

  • Index into separate indices then merge. If you have a very large amount of content to index, you can break your content into N "silos", index each silo on a separate machine, then use writer.addIndexesNoOptimize(...) to merge them all into one final index.
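
    A sketch of the final merge, assuming the silo indexes have already been copied to local paths (the paths and finalDir are illustrative):

      Directory[] silos = new Directory[] {
        FSDirectory.getDirectory("/path/to/silo1"),
        FSDirectory.getDirectory("/path/to/silo2")
      };
      IndexWriter writer = new IndexWriter(finalDir, false, analyzer, true); // autoCommit=false
      writer.addIndexesNoOptimize(silos); // merge without a full optimize
      writer.close();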

  • Run a Java profiler.

    If all else fails, profile your application to figure out where the time is going. I've had success with a very simple profiler called JMP. There are many others. Often you will be pleasantly surprised to find some silly, unexpected method is taking far too much time.
