Generating a Lucene index by parsing HTML by div id: how I used it in my project (part 1)

Here I note down the important pieces.

1. Parsing HTML with htmlparser

       I modified a class built on htmlparser's NodeVisitor so that the HTML can be parsed by div id and an index built from the result.

        Note: all of my static pages are generated by the site itself, and the content to be indexed is placed in divs with distinct ids.

The modified visitor class:
package com.berheley.components.lucene;

import java.util.HashMap;

import org.htmlparser.Node;
import org.htmlparser.Tag;
import org.htmlparser.tags.Div;
import org.htmlparser.util.NodeList;
import org.htmlparser.visitors.NodeVisitor;

public class DivVisitor extends NodeVisitor
{

 private String         divId;

 private String         url;

 private String         abs;

 private final NodeList      nodeList = new NodeList();

 private final HashMap<String, String> divMap = new HashMap<String, String>();

 public DivVisitor()
 {
 }

 public DivVisitor(String divId, String url, String abs)
 {
  this.divId = divId;
  this.url = url;
  this.abs = abs;
 }

 public String getDivId()
 {
  return divId;
 }

 public void setDivId(String divId)
 {
  this.divId = divId;
 }

 public String getUrl()
 {
  return url;
 }

 public void setUrl(String url)
 {
  this.url = url;
 }

 public String getAbs()
 {
  return abs;
 }

 public void setAbs(String abs)
 {
  this.abs = abs;
 }

 public NodeList getNodeList()
 {
  return nodeList;
 }

 public String toHtml()
 {
  String html = "";
  Node[] nodes = nodeList.toNodeArray();
  for (int i = 0; i < nodes.length; i++)
  {
   Tag tag = (Tag) nodes[i];
   if (tag.getText().equals("html"))
   {
    html = tag.toHtml();
    break;
   }
  }
  return html;
 }

 @Override
 public void visitTag(Tag tag)
 {
  if (tag instanceof Div)
  {
   nodeList.add(tag);
   String id = tag.getAttribute("id");
   if (id != null)
   {
    // The static pages mark the content to be indexed with a fixed set
    // of div ids (n_tit, n_link, ...); collect the plain text of each.
    for (String indexedId : new String[] { "n_tit", "n_link", "n_time",
      "n_con", "n_author", "n_key", "n_subTitle", "n_from" })
    {
     if (id.equals(indexedId))
     {
      divMap.put(id, tag.toPlainTextString());
      break;
     }
    }
   }
  }
 }

 public HashMap<String, String> getDivMap()
 {
  return divMap;
 }
}
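For leaf divs like n_tit and n_link, the same id-based extraction could even be sketched with a plain regex. The class below is a hypothetical stand-in of mine, only to illustrate the idea; it assumes the target divs contain no nested divs, which is exactly why the real code above uses htmlparser's DOM-aware visitor (needed for n_con).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDivExtractor
{
    // Matches <div ... id="someId" ...>body</div> for divs with no nested
    // divs; group 1 is the id, group 2 the raw inner HTML.
    private static final Pattern DIV = Pattern.compile(
        "<div[^>]*\\bid=\"([^\"]+)\"[^>]*>(((?!<div)[\\s\\S])*?)</div>");

    public static Map<String, String> extract(String html)
    {
        Map<String, String> divMap = new HashMap<String, String>();
        Matcher m = DIV.matcher(html);
        while (m.find())
        {
            // Strip any remaining tags to approximate toPlainTextString()
            divMap.put(m.group(1),
                m.group(2).replaceAll("<[^>]+>", "").trim());
        }
        return divMap;
    }

    public static void main(String[] args)
    {
        String page = "<div id=\"n_tit\">Example title</div>"
            + "<div id=\"n_time\">Date: 2008-09-07</div>";
        System.out.println(extract(page));
    }
}
```

This breaks down as soon as an indexed div wraps another div, so it is a fallback for simple pages only, not a replacement for DivVisitor.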

 

An example static page:

 

<div id="tit"><img src="/common/Image/columnimage/default.gif"/><a href='/tggl/ztzl/index.html'>Special Topics</a>-><a href='/tggl/ztzl/zxswsyq/index.html'>Central Business and Commerce District</a></div>
<div id="list">
 <div id="n_tit">Tianjin Binhai Central Business and Commerce District</div>
 <div id="n_link" style="display:none">/tggl/ztzl/zxswsyq/20080906111411076sHn.html</div>
 <div>
  <div id="n_from">Source:</div>
  <div id="n_author">Author: Tanggu Government Website</div>
  <div id="n_time">Date: 2008-09-07 21:47:55</div>
 </div>
 <div id="n_author" style="display:none">Tanggu Government Website</div>
 <div id="n_key" style="display:none"></div>
 <div id="n_subTitle" style="display:none"></div>
 <div id="n_con">
  <p>The Central Business and Commerce District of Binhai New Area lies in the middle of the Tanggu urban area: Beihai Road to the east, the Haihe River to the south, Heping Road and the Jingshan Railway to the west, and TEDA Avenue and the Jintang Highway to the north, covering 8.58 square kilometers in total. Its layout can be summed up as "one axis, three zones". The axis is the Central Avenue formed by Nanhai Road and Shengli Road; the three zones are the TEDA central business district, the Yujiapu and Xiangluowan central business district, and the Jiefang Road and Tianjin Alkali Plant central commerce district. The district is one of the seven functional zones of Binhai New Area and a key area for completing its core functions and presenting the image of a modern international port city. It aims to become the financial, international trade, and information service center of the Bohai Rim, emphasizing core functions such as financial services, modern business, and high-end commerce, developing service industries including finance and insurance, trade, exhibitions and tourism, intermediary services, and culture and entertainment, and building itself into the hub of modern services in Binhai New Area.</p>
<p>  <strong>Central Avenue</strong> is 53.5 kilometers long, with a 60-meter-wide, eight-lane main carriageway. It runs from the southern outer ring of Hangu in the north to Haijing Avenue in Dagang in the south, where it joins Century Avenue, and is the main north-south artery of Binhai New Area. Preparations for phase one are under way: an underpass from Beitang along Nanhai Road in TEDA crossing Road No. 4, and an undersea tunnel along Shengli Road in Tanggu crossing the Haihe River to the Jingu line, 16 kilometers in total.</p>
<p>  <strong>The TEDA business district</strong> is positioned as the business center of Binhai New Area, a comprehensive center combining knowledge-economy industries, exhibitions, producer services, consumer services, and leisure and cultural facilities, providing office, financial, and business support for manufacturing, logistics, and the area's other leading industries.</p>
<p>  <strong>The Yujiapu and Xiangluowan business district</strong> is likewise positioned as the business center of Binhai New Area. It is planned as the financial, international trade, and information service center of the Bohai Rim, concentrating major domestic and foreign financial, insurance, and securities institutions and the headquarters or regional headquarters of multinationals, and attracting modern services such as law, accounting, advertising, consulting, and information services; together with the TEDA business district it forms the central business district of Binhai New Area.</p>
<p>  <strong>The Tianjin Alkali Plant and Jiefang Road area</strong> is positioned as the commercial center of Binhai New Area. After the alkali plant is relocated, a city commercial center will be built around the Jiefang Road pedestrian street, and large commercial facilities and high-end apartments will be developed alongside transport hubs such as the intercity high-speed rail station.</p>
<p>  Following the principle of high-standard planning and high-quality construction, we are further optimizing the functional layout, improving service facilities, and pushing ahead with these key areas. Momentum in the Xiangluowan business district is especially strong: 51 high-rise towers are planned, and to date we have received letters from 40 provincial and municipal governments and major central enterprises and hosted 110 delegations from central-government units in Tianjin, other provinces' offices in Tianjin, and social organizations in Tianjin, all asking to take part. We have signed formal investment agreements with Sinosteel, the Wenzhou Chamber of Commerce, Shenzhen Helifeng Investment, and others covering 50 towers, with total investment of 17.35 billion yuan and total floor area of 2.377 million square meters. Construction began on 20 towers in 2007, 10 of which broke ground the year before.</p>
<!--BodyEnd//-->
 </div>
</div>
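The hidden divs above (n_link, n_author, n_key, ...) exist only to carry metadata for the indexer; visitors never see them. When generating a static page, the generator simply writes each field into a div with the agreed id. A minimal, hypothetical sketch of that step (the class and method names are mine, not from the project):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StaticPageFields
{
    // Writes each field into a hidden div whose id the indexer knows about.
    public static String toHiddenDivs(Map<String, String> fields)
    {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : fields.entrySet())
        {
            sb.append("<div id=\"").append(e.getKey())
              .append("\" style=\"display:none\">")
              .append(e.getValue()).append("</div>\n");
        }
        return sb.toString();
    }

    public static void main(String[] args)
    {
        Map<String, String> fields = new LinkedHashMap<String, String>();
        fields.put("n_link", "/tggl/ztzl/zxswsyq/20080906111411076sHn.html");
        fields.put("n_author", "Tanggu Government Website");
        System.out.print(toHiddenDivs(fields));
    }
}
```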

 

The main class used to create the index:

 


package com.berheley.components.lucene;

/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

import org.apache.lucene.demo.html.HTMLParser;
import org.apache.lucene.document.DateTools;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.htmlparser.Parser;
import org.htmlparser.util.ParserException;

/** A utility for making Lucene Documents for HTML documents. */
public class HTMLDocument
{

 static char dirSep = System.getProperty("file.separator").charAt(0);

 public static String uid(File f)
 {
  // Append path and date into a string in such a way that lexicographic
  // sorting gives the same results as a walk of the file hierarchy. Thus
  // NUL (\u0000) is used both to separate directory components and to
  // separate the path from the date.
  return f.getPath().replace(dirSep, '\u0000')
     + "\u0000"
     + DateTools.timeToString(f.lastModified(),
      DateTools.Resolution.SECOND);
 }

 public static String uid2url(String uid)
 {
  String url = uid.replace('\u0000', '/'); // replace NULs with slashes
  return url.substring(0, url.lastIndexOf('/')); // remove date from end
 }

 public static Document Document(File f) throws IOException,
    InterruptedException
 {
  String content = readTextFile(f.getPath(), "UTF-8");
  String title = "";
  String key = "";
  String link = "";
  String subTitle = "";
  String from = "";
  String author = "";
  String time = "";
  String nr = "";
  Parser myParser;
  myParser = Parser.createParser(content, "UTF-8");
  DivVisitor visitor = new DivVisitor();
  try
  {
   myParser.visitAllNodesWith(visitor);
  } catch (ParserException e)
  {
   // TODO Auto-generated catch block
   e.printStackTrace();
  }
  Set<Map.Entry<String, String>> entrySet = visitor.getDivMap().entrySet();
  for (Map.Entry<String, String> ent : entrySet)
  {
   if (ent.getKey().equals("n_tit"))
   {
    title = ent.getValue();
   } else if (ent.getKey().equals("n_link"))
   {
    link = ent.getValue();
   } else if (ent.getKey().equals("n_time"))
   {
    time = ent.getValue();
   } else if (ent.getKey().equals("n_subTitle"))
   {
    subTitle = ent.getValue();
   } else if (ent.getKey().equals("n_from"))
   {
    from = ent.getValue();
   } else if (ent.getKey().equals("n_author"))
   {
    author = ent.getValue();
   } else if (ent.getKey().equals("n_key"))
   {
    key = ent.getValue();
   } else if (ent.getKey().equals("n_con"))
   {
    nr = ent.getValue();
   }
  }
  // make a new, empty document
  Document doc = new Document();
  // Add the url as a field named "path". Use a field that is
  // indexed (i.e. searchable), but don't tokenize the field into words.
  doc.add(new Field("path", f.getPath().replace(dirSep, '/'), Field.Store.YES, Field.Index.UN_TOKENIZED));
  // Add the last modified date of the file a field named "modified".
  // Use a field that is indexed (i.e. searchable), but don't tokenize
  // the field into words.
  doc.add(new Field("modified", DateTools.timeToString(f.lastModified(),
   DateTools.Resolution.MINUTE), Field.Store.YES, Field.Index.UN_TOKENIZED));
  // Add the uid as a field, so that index can be incrementally maintained.
  // This field is not stored with document, it is indexed, but it is not
  // tokenized prior to indexing.
  doc.add(new Field("uid", uid(f), Field.Store.NO, Field.Index.UN_TOKENIZED));
  // Index the content extracted from the divs; the original Lucene demo
  // instead indexed the tag-stripped text of the whole file.
  doc.add(new Field("link", link, Field.Store.YES, Field.Index.NO));
  doc.add(new Field("content", nr, Field.Store.YES, Field.Index.TOKENIZED));
  // Store the publication time so it can be returned with hits; it is
  // not searched.
  doc.add(new Field("time", time, Field.Store.YES, Field.Index.NO));
  // Add the title as a field that can be searched and that is stored.
  doc.add(new Field("title", title, Field.Store.YES, Field.Index.TOKENIZED));
  doc.add(new Field("subTitle", subTitle, Field.Store.YES, Field.Index.TOKENIZED));
  doc.add(new Field("key", key, Field.Store.YES, Field.Index.TOKENIZED));
  doc.add(new Field("author", author, Field.Store.YES, Field.Index.TOKENIZED));
  doc.add(new Field("from", from, Field.Store.YES, Field.Index.TOKENIZED));
  // Debug output: dump the extracted fields for each indexed page
  System.out.println("Indexed: " + f.getPath().replace(dirSep, '/'));
  System.out.println("  title    = " + title);
  System.out.println("  subTitle = " + subTitle);
  System.out.println("  link     = " + link);
  System.out.println("  time     = " + time);
  System.out.println("  author   = " + author);
  System.out.println("  from     = " + from);
  System.out.println("  key      = " + key);
  System.out.println("  content  = " + nr);
  return doc;
 }

 private HTMLDocument()
 {
 }

 /**
  * Reads a text file into a string.
  *
  * @param sFileName the file name
  * @param sEncode   the character encoding
  * @return the file contents
  */
 public static String readTextFile(String sFileName, String sEncode)
 {
  StringBuffer sbStr = new StringBuffer();
  try
  {
   File ff = new File(sFileName);
   InputStreamReader read = new InputStreamReader(new FileInputStream(ff), sEncode);
   BufferedReader ins = new BufferedReader(read);
   String dataLine = "";
   while (null != (dataLine = ins.readLine()))
   {
    sbStr.append(dataLine);
    sbStr.append("\r\n");
   }
   ins.close();
   } catch (Exception e)
   {
    e.printStackTrace();
   }
  return sbStr.toString();
 }
}
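The uid scheme in HTMLDocument.uid() can be checked in isolation: replacing the path separator with NUL and appending a sortable timestamp makes plain string comparison order the uids the same way a sorted walk of the file tree visits them, which is what lets the Lucene demo's incremental indexing diff the index against the disk. A self-contained sketch (the timestamp argument is a stand-in for DateTools.timeToString):

```java
public class UidDemo
{
    // Same idea as HTMLDocument.uid(): NUL separates directory components
    // and also separates the path from the timestamp, so lexicographic
    // order on uids matches a sorted walk of the file hierarchy.
    public static String uid(String path, char dirSep, String timestamp)
    {
        return path.replace(dirSep, '\u0000') + "\u0000" + timestamp;
    }

    public static void main(String[] args)
    {
        String deep = uid("docs/a/b.html", '/', "20080907214755");
        String flat = uid("docs/a.txt", '/', "20080907214755");
        // Raw paths sort the other way ('.' < '/'); the NUL encoding
        // restores hierarchy order, with entries under a/ coming first.
        System.out.println("docs/a/b.html".compareTo("docs/a.txt") > 0);
        System.out.println(deep.compareTo(flat) < 0);
    }
}
```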
