Data source: fetched with a web crawler.
1. Import the dependencies: jsoup (to parse the pages) and the Elasticsearch clients
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.6.1</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.plugin</groupId>
    <artifactId>transport-netty4-client</artifactId>
    <version>7.6.1</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>transport</artifactId>
    <version>7.6.1</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.6.1</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>7.6.1</version>
</dependency>
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.15.3</version>
</dependency>
2. JD's search pages now have an anti-crawler mechanism, so a cookie is required to get past the security-verification page.
You need the value of the thor key from that cookie.
Wrap the crawling logic in a utility class
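A minimal sketch of what such a utility class might look like. The class and method names (HtmlParseUtil, parseJd) come from the service code below; the search URL, the CSS selectors, and the "your-thor-value" cookie placeholder are assumptions based on JD's markup and may need adjusting — copy the real thor value from your own browser session.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import java.util.ArrayList;
import java.util.List;

public class HtmlParseUtil {
    // Parse the JD search-result page for the given keyword.
    public List<Content> parseJd(String keyword) throws Exception {
        String url = "https://search.jd.com/Search?keyword=" + keyword + "&enc=utf-8";
        // Send the thor cookie to get past the security-verification page.
        // "your-thor-value" is a placeholder, not a working value.
        Document document = Jsoup.connect(url)
                .cookie("thor", "your-thor-value")
                .timeout(30_000)
                .get();
        // The product list sits in the element with id J_goodsList,
        // one <li> per product (selectors may change as JD updates its markup).
        Element goodsList = document.getElementById("J_goodsList");
        Elements items = goodsList.getElementsByTag("li");
        List<Content> contents = new ArrayList<>();
        for (Element el : items) {
            String img = el.getElementsByTag("img").eq(0).attr("data-lazy-img");
            String price = el.getElementsByClass("p-price").eq(0).text();
            String title = el.getElementsByClass("p-name").eq(0).text();
            contents.add(new Content(title, img, price));
        }
        return contents;
    }
}
```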
POJO layer
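The article does not show the POJO itself; a plausible minimal version, assuming the usual title/img/price fields for a crawled product (the field names become the document fields in the "jdgoods" index):

```java
// One crawled product; serialized to JSON by fastjson before indexing.
public class Content {
    private String title;
    private String img;
    private String price;

    public Content() {}

    public Content(String title, String img, String price) {
        this.title = title;
        this.img = img;
        this.price = price;
    }

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getImg() { return img; }
    public void setImg(String img) { this.img = img; }
    public String getPrice() { return price; }
    public void setPrice(String price) { this.price = price; }
}
```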
Service layer
@Autowired
@Qualifier("restHighLevelClient")
private RestHighLevelClient client;

public Boolean parseJD(String keyword) throws Exception {
    List<Content> contents = new HtmlParseUtil().parseJd(keyword);
    // Batch-index the crawled results into the "jdgoods" index.
    BulkRequest bulkRequest = new BulkRequest();
    bulkRequest.timeout("100s");
    for (int i = 0; i < contents.size(); i++) {
        bulkRequest.add(
                new IndexRequest("jdgoods")
                        .source(JSON.toJSONString(contents.get(i)), XContentType.JSON));
    }
    BulkResponse bulk = client.bulk(bulkRequest, RequestOptions.DEFAULT);
    return !bulk.hasFailures();
}

This article showed how to use the jsoup library to parse web pages and feed the data into Elasticsearch: handling JD's anti-crawler mechanism, pulling the key value out of the cookie, and building a RESTful search interface in Java.