Solr: reading Word and PDF files


Reposted from: http://blog.youkuaiyun.com/aidayei/article/details/6535898

 

A comparison of Lucene and Solr:

1.http://www.blogjava.net/luopeizhong/articles/321732.html

2. Apache Solr: a scalable, Lucene-based cluster search server

 

Updating an index is more cumbersome in Lucene than in Solr. In Solr, a single call to UpdateRequest.setAction(AbstractUpdateRequest.ACTION.COMMIT, false, false) commits the update. In Lucene you have to delete the old document first and then re-add it; a plain add only appends, so instead of an update you end up with duplicate documents in the index.

Updating a Lucene index: http://langhua9527.iteye.com/blog/582347
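To make the delete-then-add point concrete, here is a toy, stdlib-only sketch (deliberately not the Lucene API) of the two behaviors: a plain add appends blindly, while an update must first delete every document with the same key. This pairing is exactly what Lucene's IndexWriter.updateDocument(Term, Document) performs for you atomically.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/** Toy model of an index as a list of (id, content) pairs — illustration only. */
public class IndexUpdateSketch {

    public static class Doc {
        final String id;
        final String content;
        public Doc(String id, String content) { this.id = id; this.content = content; }
    }

    /** Plain add: like IndexWriter.addDocument, it never checks for an existing id. */
    public static void add(List<Doc> index, Doc doc) {
        index.add(doc);
    }

    /** Update: delete every document with the same id, then add the new version. */
    public static void update(List<Doc> index, Doc doc) {
        for (Iterator<Doc> it = index.iterator(); it.hasNext();) {
            if (it.next().id.equals(doc.id)) it.remove();
        }
        index.add(doc);
    }

    public static void main(String[] args) {
        List<Doc> index = new ArrayList<Doc>();
        add(index, new Doc("12", "v1"));
        add(index, new Doc("12", "v2"));    // duplicate: the index now holds two copies
        System.out.println(index.size());   // 2

        update(index, new Doc("12", "v3")); // delete-then-add: one copy remains
        System.out.println(index.size());   // 1
    }
}
```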

 

Solr installation and basic usage were covered earlier. Now let's look at how to build an index and run queries with the SolrJ client.

 

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class SolrjTest {
    public static void main(String[] args) throws IOException, SolrServerException {
        // No leading whitespace in the URL, or CommonsHttpSolrServer will reject it.
        String urlString = "http://localhost:8080/solr";
        SolrServer server = new CommonsHttpSolrServer(urlString);

        SolrInputDocument doc1 = new SolrInputDocument();
        doc1.addField("id", 12);
        doc1.addField("content", "my test is easy,测试solr");

        SolrInputDocument doc2 = new SolrInputDocument();
        doc2.addField("id", 13); // each document needs its own unique id
        doc2.addField("content", "solrj简单测试");

        Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
        docs.add(doc1);
        docs.add(doc2);

        // Send the documents and commit them in a single request.
        UpdateRequest req = new UpdateRequest();
        req.setAction(AbstractUpdateRequest.ACTION.COMMIT, false, false);
        req.add(docs);
        req.process(server);

        // Query with highlighting enabled on the content field.
        SolrQuery query = new SolrQuery();
        query.setQuery("test");
        query.setHighlight(true).setHighlightSnippets(1);
        query.setParam("hl.fl", "content");
        QueryResponse ret = server.query(query);
        System.out.println(ret);
    }
}

 

For SolrJ to run successfully, the following jars must be on the classpath:

From /dist:

apache-solr-solrj-3.1.0.jar

From /dist/solrj-lib:
commons-codec-1.4.jar
commons-httpclient-3.1.jar
jcl-over-slf4j-1.5.5.jar
slf4j-api-1.5.5.jar

The jar below has to be downloaded separately from the official site; I could not find it in the Solr 3.1 distribution (it was presumably shipped with earlier versions):
slf4j-jdk14-1.5.5.jar

Starting with version 1.4, Solr bundles Apache Tika, a toolkit for text extraction. Tika integrates POI and PDFBox and exposes a unified interface for content extraction, so Solr can use it to extract text from rich documents such as PDF and Word files with very little effort.

 

I am on version 3.1. Getting this working involved a few wrong turns, but I eventually solved it myself; here is what I found.

 

  

package test;

import java.io.File;
import java.io.IOException;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

/**
 * @author aidy 2011.6.9
 */
public class SolrExampleTests {

    public static void main(String[] args) {
        try {
            // Solr Cell can also index MS Office files (2003 and 2007 formats).
            String fileName = "D:/test/luceneTest/1.pdf";
            // Unique id used by Solr to index the file contents.
            String solrId = "1.pdf";
            indexFilesSolrCell(fileName, solrId);
        } catch (Exception ex) {
            System.out.println(ex.toString());
        }
    }

    /**
     * Index a rich-text file (PDF, Word, ...) into Solr via the
     * ExtractingRequestHandler mounted at /update/extract.
     */
    public static void indexFilesSolrCell(String fileName, String solrId)
            throws IOException, SolrServerException {
        String urlString = "http://localhost:8080/solr";
        SolrServer solr = new CommonsHttpSolrServer(urlString);

        ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
        up.addFile(new File(fileName));
        up.setParam("literal.id", solrId);
        // Map the Tika-generated "content" field onto the dynamic field "attr_content".
        up.setParam("fmap.content", "attr_content");
        up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
        solr.request(up);

        QueryResponse rsp = solr.query(new SolrQuery("*:*"));
        System.out.println(rsp);
    }
}

 

At first, solr.request(up) kept failing; the Tomcat log said there was no ignored_meta type. I didn't understand this at first, because my schema.xml contained no such type, and I suspected a version problem. I downloaded Solr 1.4 specifically to test, and it indeed ran without errors.

Then I realized what had happened: in the earlier getting-started example I had modified schema.xml, while solrconfig.xml still referenced the ignored_ types in its /update/extract handler configuration. After adding the ignored_ type back into schema.xml, everything ran fine.
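For anyone hitting the same error: the stock Solr example schema declares the ignored type and a matching dynamic field roughly as below. This is a sketch from memory of the 3.1 example schema; verify the exact attributes against your own schema.xml, and check the uprefix setting under the /update/extract handler in solrconfig.xml.

```xml
<!-- Silently drop unknown fields instead of failing: not indexed, not stored. -->
<fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField"/>

<!-- Any field prefixed with ignored_ (e.g. via uprefix=ignored_) matches this. -->
<dynamicField name="ignored_*" type="ignored" multiValued="true"/>
```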

 

Next I will look into how to run queries with SolrJ and render the results on a web page, since the query results come back as XML.
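Since the response is XML, pulling field values out for a web page needs nothing beyond the JDK's DOM parser. The sketch below parses a hypothetical, minimal response; real responses carry more wrapper elements (responseHeader, highlighting, and so on), but each matched document arrives as a <doc> element whose child elements carry the field name in a name attribute.

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class SolrXmlResponse {

    /** Collect the values of the named field from every <doc> in a Solr XML response. */
    public static List<String> fieldValues(String xml, String fieldName) throws Exception {
        Document dom = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        List<String> values = new ArrayList<String>();
        NodeList docs = dom.getElementsByTagName("doc");
        for (int i = 0; i < docs.getLength(); i++) {
            NodeList fields = docs.item(i).getChildNodes();
            for (int j = 0; j < fields.getLength(); j++) {
                if (!(fields.item(j) instanceof Element)) continue;
                Element field = (Element) fields.item(j);
                if (fieldName.equals(field.getAttribute("name"))) {
                    values.add(field.getTextContent());
                }
            }
        }
        return values;
    }

    public static void main(String[] args) throws Exception {
        // A made-up, minimal Solr XML response, for illustration only.
        String xml = "<response><result numFound=\"2\" start=\"0\">"
                + "<doc><str name=\"id\">12</str></doc>"
                + "<doc><str name=\"id\">1.pdf</str></doc>"
                + "</result></response>";
        System.out.println(fieldValues(xml, "id")); // prints [12, 1.pdf]
    }
}
```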

 

If your Solr is version 1.3 or earlier, see: http://wiki.apache.org/solr/UpdateRichDocuments

(or see: http://lfzhs.iteye.com/blog/770446)

 

 

References:

1.http://wiki.apache.org/solr/ExtractingRequestHandler
2.http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Content-Extraction-Tika

 

 

Indexing a PDF:

$ curl "http://localhost:8080/solr/update/extract?literal.id=3.pdf&ext.idx.attr=true&ext.def.fl=text" -F "myfile=@d:/pdf/3.pdf"

 

Committing the index:

$ curl "http://localhost:8080/solr/update/" -H "Content-Type: text/xml" --data-binary '<commit waitFlush="false"/>'

 

 

Input Parameters

 

  • fmap.<source_field>=<target_field> - Maps (moves) one field name to another. Example: fmap.content=text will cause the content field normally generated by Tika to be moved to the "text" field.

  • boost.<fieldname>=<float> - Boost the specified field.

  • literal.<fieldname>=<value> - Create a field with the specified value. May be multivalued if the Field is multivalued.

  • uprefix=<prefix> - Prefix all fields that are not defined in the schema with the given prefix. This is very useful when combined with dynamic field definitions. Example: uprefix=ignored_ would effectively ignore all unknown fields generated by Tika given the example schema contains <dynamicField name="ignored_*" type="ignored"/>

  • defaultField=<Field Name> - If uprefix is not specified and a Field cannot be determined, the default field will be used.

  • extractOnly=true|false - Default is false. If true, return the extracted content from Tika without indexing the document. This literally includes the extracted XHTML as a string in the response. When viewing manually, it may be useful to use a response format other than XML to aid in viewing the embedded XHTML tags. See TikaExtractOnlyExampleOutput.

  • resource.name=<File Name> - The optional name of the file. Tika can use it as a hint for detecting mime type.

  • capture=<Tika XHTML NAME> - Capture XHTML elements with the name separately for adding to the Solr document. This can be useful for grabbing chunks of the XHTML into a separate field. For instance, it could be used to grab paragraphs (<p>) and index them into a separate field. Note that content is also still captured into the overall "content" field.

  • captureAttr=true|false - Index attributes of the Tika XHTML elements into separate fields, named after the element. For example, when extracting from HTML, Tika can return the href attributes in <a> tags as fields named "a". See the examples below.

  • xpath=<XPath expression> - When extracting, only return Tika XHTML content that satisfies the XPath expression. See http://lucene.apache.org/tika/documentation.html for details on the format of Tika XHTML. See also TikaExtractOnlyExampleOutput.

  • lowernames=true|false - Map all field names to lowercase with underscores. For example, Content-Type would be mapped to content_type.

If extractOnly is true, additional input parameters:

  • extractFormat=xml|text - Default is xml. Controls the serialization format of the extract content. xml format is actually XHTML, like passing the -x command to the tika command line application, while text is like the -t command.
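The lowernames normalization above is easy to illustrate in isolation. The only documented example is Content-Type → content_type; the exact rule sketched below (lowercase, runs of non-alphanumerics collapsed to a single underscore) is my assumption about the mapping, not taken from the Solr source.

```java
public class LowernamesSketch {
    /**
     * Assumed lowernames=true mapping: lowercase the field name and replace
     * runs of non-alphanumeric characters with a single underscore.
     */
    public static String lowernames(String field) {
        return field.toLowerCase().replaceAll("[^a-z0-9]+", "_");
    }

    public static void main(String[] args) {
        System.out.println(lowernames("Content-Type")); // content_type
    }
}
```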