For work I need to scrape Sina Weibo data. I had been doing this in Java, but the page encryption there was painful, so I switched to Python. As a warm-up, I tried out Python crawler writing on Qiushibaike (糗事百科).
Tools
requests
BeautifulSoup
Tool references
"Python 爬虫利器一之 Requests 库的用法" (Python Crawler Tools, Part 1: Using the Requests Library)
"Python 爬虫利器二之 Beautiful Soup 的用法" (Python Crawler Tools, Part 2: Using Beautiful Soup)
There is also PyQuery, which is said to be quite good. I gave it a try and found it painful to use: it chokes as soon as a class attribute contains spaces. In Java I had always parsed pages with Jsoup, which felt natural, and BeautifulSoup feels closest to that. Enough talk, let's get started!
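For the record, BeautifulSoup handles those space-separated class attributes without trouble. A minimal sketch (the sample HTML is made up to mirror the div class used on the real page; `html.parser` is used here so no lxml install is needed):

```python
from bs4 import BeautifulSoup

# Hypothetical one-element sample mimicking the multi-valued class on the page
html = '<div class="article block untagged mb15"><span class="content">hi</span></div>'
doc = BeautifulSoup(html, "html.parser")

# find_all with class_ matches a single class token...
print(len(doc.find_all("div", class_="article")))                      # 1
# ...or the exact full class string:
print(len(doc.find_all("div", class_="article block untagged mb15")))  # 1
# In CSS selectors a space means "descendant"; chain the classes with dots instead:
print(len(doc.select("div.article.block.untagged.mb15")))              # 1
```

The dot-chained selector in the last line is the form that trips people up when they paste a class attribute containing spaces straight into `select()`.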
Page structure
Code
import requests
from bs4 import BeautifulSoup

page = 1
rooturl = 'http://www.qiushibaike.com/hot/page/' + str(page)
# payload = {'key1': 'value1', 'key2': 'value2'}
# r = requests.get(rooturl, params=payload)
pageReq = requests.get(rooturl)
pageString = pageReq.text
doc = BeautifulSoup(pageString, "lxml")

# Each post is a direct child div of the #content-left container
parents = doc.find('div', id='content-left')
for elem in parents.find_all(class_="article block untagged mb15", recursive=False):
    authorName = ""
    # Anonymous posts have only one <a> in the author block, so check first
    if len(elem.find(class_="author clearfix").select('a')) == 2:
        authorName = elem.find(class_="author clearfix").select('a')[1]['title']
    content = elem.find(class_="content").get_text().strip()
    num_laugh = elem.find_all("i", class_="number")[0].get_text()     # vote count
    num_comments = elem.find_all("i", class_="number")[1].get_text()  # comment count
    print("author: " + authorName + "\n" + "content: " + content + "\n"
          + num_laugh + " " + num_comments)
    print("***************************************************")
# target = soup.select('#content-left > .article block untagged mb15')  # spaces break CSS selectors
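The script above only fetches page 1, but the rooturl pattern extends naturally to multiple pages. A minimal sketch of the loop (the URL pattern and page range are assumptions carried over from rooturl; the timeout and status check are added defensively):

```python
BASE = 'http://www.qiushibaike.com/hot/page/'

def page_urls(first, last):
    # Build the hot-list URL for each page number in [first, last]
    return [BASE + str(p) for p in range(first, last + 1)]

def crawl(first=1, last=3):
    # Fetch and parse each page; the caller applies the find_all
    # logic shown above to each returned document.
    import requests
    from bs4 import BeautifulSoup
    for url in page_urls(first, last):
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()  # stop on 4xx/5xx instead of parsing an error page
        yield BeautifulSoup(resp.text, "lxml")

print(page_urls(1, 2))
# ['http://www.qiushibaike.com/hot/page/1', 'http://www.qiushibaike.com/hot/page/2']
```

Keeping the URL construction in its own function makes it easy to test without touching the network.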