While working on a web project today, I needed a feature that pulls bibliographic data from an online data source, and the URLConnection class seemed like a good fit.
The basic idea: send a GET request through URLConnection, read the entire HTML page into the Java program, work out the page's structure, and then extract the data with regular expressions.
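In miniature, the flow looks like this (a sketch; getNetStr and regexString are the helper methods defined later in this post):

String html = getNetStr("software", 1);  // 1. fetch page 1 of the search results
List<String> blocks = regexString(html, "<div class=\"record-item\">(.*?)</div></div>");  // 2. cut out the record blocks
// 3. run per-field regexes over each block to fill Article objects (see getArticleList below)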
First, look at how Wanfang Data structures its URLs. Searching for the keyword "software", for example, produces
http://s.wanfangdata.com.cn/Paper.aspx?q=software&f=top&p=1
The parameter q is the keyword and p is the page number, so both fields can be filled in by simple string concatenation.
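For instance, the request URL for page 2 of a "software" search can be assembled like so (a small sketch; URLEncoder.encode declares UnsupportedEncodingException, which the full getNetStr method below catches):

String keyword = URLEncoder.encode("software", "UTF-8");  // encode the query term
String url = "http://s.wanfangdata.com.cn/Paper.aspx?q=" + keyword + "&f=top&p=2";  // p selects the page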
Note that the HTML page reports the total record count, and the page size is fixed at 10 records per page, which makes paginating the content straightforward (see the sketch below).
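Deriving the page count from the record total is then one line of ceiling division (a sketch; rowCount comes from the getRowCount method further down):

int pageSize = 10;  // fixed page size on Wanfang's result pages
int pageCount = (rowCount + pageSize - 1) / pageSize;  // ceiling division, e.g. 95 records -> 10 pages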
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.UnsupportedEncodingException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLEncoder;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/*
 * Wanfang Data pages its results 10 records at a time.
 */
public List<Article> getListByFuzzySearch(String value, Integer curPage) {
    List<Article> articleList = new ArrayList<Article>();
    getArticleList(value, curPage, articleList);
    return articleList;
}

public int getCountByFuzzySearch(String value) {
    return getRowCount(value);
}
public static void getArticleList(String value, int curPage, List<Article> articleList) {
    String text = getNetStr(value, curPage);
    String s;
    // Each search hit sits inside a <div class="record-item"> block.
    List<String> result = regexString(text, "<div class=\"record-item\">(.*?)</div></div>");
    /*
     * Proceed only if at least one record block was found.
     */
    if (result.size() > 0) {
        for (int i = 0; i < result.size(); i++) {
            Article article = new Article();
            // Strip the <em> tags Wanfang wraps around the highlighted keyword.
            s = result.get(i).replace("<em>", "");
            s = s.replace("</em>", "");
            /*
             * Records carrying a "全文" (full text) or "原文" (original text)
             * link have an extra anchor before the title, so the title regex
             * must skip past it first.
             */
            Pattern pattern = Pattern.compile("文</a>");
            Matcher matcher = pattern.matcher(s);
            if (matcher.find()) {
                // Record has a full-text/original-text link.
                s = s.replaceAll("<div class=\"left-record\">(.+?)文</a>(.+?)target=\"_blank\">", "");
            } else {
                s = s.replaceAll("<div class=\"left-record\">(.+?)target=\"_blank\">", "");
            }
            s = s.replaceAll("</a>(.+)", "");
            // trim leading/trailing whitespace; deleting every space would mangle English titles
            article.setTitle(s.trim());
            /*
             * Some records do not list an author at all.
             */
            pattern = Pattern.compile("class=\"creator\"");
            matcher = pattern.matcher(result.get(i));
            if (matcher.find()) {
                s = result.get(i).replaceAll("<div class=\"left-record\">(.*?)ArticleId=(.*?)\">", "");
                s = s.replaceAll("</a>(.+)", "");
                article.setAuthor(s.trim()); // keep spaces between co-author names
            } else {
                article.setAuthor("佚名"); // "anonymous"
            }
            // The record URL is the href of the title anchor.
            s = result.get(i).replaceAll("<div class=\"left-record\">(.*?)<a class=\"title\" href='", "");
            s = s.replaceAll("' target=\"_blank\">(.+)", "");
            article.setUrl(s.replace(" ", ""));
            articleList.add(article);
        }
    }
}
public static String getNetStr(String value, int curPage) {
    try {
        value = URLEncoder.encode(value, "UTF-8");
    } catch (UnsupportedEncodingException e1) {
        e1.printStackTrace();
    }
    String url = "http://s.wanfangdata.com.cn/Paper.aspx?q=" + value
            + "&f=top&p=" + curPage;
    StringBuilder resultSb = new StringBuilder();
    BufferedReader br = null;
    try {
        URL realUrl = new URL(url);
        URLConnection connection = realUrl.openConnection();
        connection.setConnectTimeout(60000); // connect timeout, ms
        connection.setReadTimeout(60000);    // read timeout, ms
        connection.setUseCaches(false);      // skip caching
        connection.connect();
        br = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
        String line;
        while ((line = br.readLine()) != null) {
            resultSb.append(line);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (br != null) {
                br.close();
            }
        } catch (Exception e2) {
            e2.printStackTrace();
        }
    }
    return resultSb.toString();
}
/*
 * Returns every group(1) capture of patternStr found in targetStr.
 */
public static List<String> regexString(String targetStr, String patternStr) {
    List<String> list = new ArrayList<String>();
    Pattern pattern = Pattern.compile(patternStr);
    Matcher matcher = pattern.matcher(targetStr);
    while (matcher.find()) {
        list.add(matcher.group(1));
    }
    return list;
}
public static int getRowCount(String value) {
    int rowCount = 0;
    String text = getNetStr(value, 1);
    // The result page reports the hit total as e.g. 共检索到<span>1,234条</span>记录
    Pattern pattern = Pattern.compile("共检索到<span>(.*?)条</span>记录");
    Matcher matcher = pattern.matcher(text);
    if (matcher.find()) {
        // group(1) is just the number; strip thousands separators before parsing
        String result = matcher.group(1).replace(",", "");
        rowCount = Integer.parseInt(result);
    }
    return rowCount;
}
(My regular expressions here are admittedly pretty crude...)
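For completeness: the code above assumes an Article bean that isn't shown. A minimal version, plus a quick hypothetical sanity check of the two entry points (called from within the same class; output depends on Wanfang's live pages), might look like this:

public class Article {
    private String title;
    private String author;
    private String url;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getAuthor() { return author; }
    public void setAuthor(String author) { this.author = author; }
    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }
}

// Hypothetical usage:
int total = getCountByFuzzySearch("software");              // total matching records
System.out.println(total + " records found");
List<Article> page1 = getListByFuzzySearch("software", 1);  // first 10 records
for (Article a : page1) {
    System.out.println(a.getTitle() + " | " + a.getAuthor() + " | " + a.getUrl());
}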
Below are the scraped results:
[screenshot of scraped results]