A Beginner Learns Web Scraping
Scraping the Douban Top 250 book list; conveniently, I happen to love reading too.
Approach:
https://book.douban.com/top250 is the link to the first page of the Top 250
https://book.douban.com/top250?start=25 is the second page
https://book.douban.com/top250?start=50 is the third page
The first page can also be reached as https://book.douban.com/top250?start=0, so the start number simply grows by 25 per page; that lets us build the URLs for all 10 pages.
The code:
urls = ['https://book.douban.com/top250?start={}'.format(i) for i in range(0, 250, 25)]
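As a quick sanity check (this little print loop is illustrative only, not part of the final script), the first few generated URLs can be inspected like this:

urls = ['https://book.douban.com/top250?start={}'.format(i) for i in range(0, 250, 25)]
for u in urls[:3]:
    print(u)
# https://book.douban.com/top250?start=0
# https://book.douban.com/top250?start=25
# https://book.douban.com/top250?start=50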
The information collected for each book: title, detail-page link, author, publisher, publication date, price, rating, and a one-line comment.
The full code:
import requests
from lxml import etree
import csv

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'
}

f = open('C:/Users/qinhan/Desktop/doubanbook.csv', 'wt', newline='', encoding='utf-8')
writer = csv.writer(f)
writer.writerow(('name', 'url', 'author', 'publisher', 'date', 'price', 'rate', 'comment'))  # write the CSV header

urls = ['https://book.douban.com/top250?start={}'.format(i) for i in range(0, 250, 25)]
for url in urls:
    html = requests.get(url, headers=headers)  # send the request with our headers attached
    selector = etree.HTML(html.text)
    infos = selector.xpath('//tr[@class="item"]')  # grab the big tags first (1)
    for info in infos:
        name = info.xpath('td/div/a/@title')[0]
        url = info.xpath('td/div/a/@href')[0]  # shadows the loop variable; harmless, it is reset next iteration
        book_info = info.xpath('td/p/text()')[0]  # text() extracts the text inside a tag
        author = book_info.split('/')[0]
        publisher = book_info.split('/')[-3]
        date = book_info.split('/')[-2]
        price = book_info.split('/')[-1]  # (2)
        rate = info.xpath('td/div/span[2]/text()')[0]
        comments = info.xpath('td/p/span/text()')
        comment = comments[0] if len(comments) != 0 else 'none'  # some books have no one-line quote
        writer.writerow((name, url, author, publisher, date, price, rate, comment))
f.close()
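To spot-check the result, here is a minimal read-back sketch using the same csv module (it assumes the output path used above):

import csv

with open('C:/Users/qinhan/Desktop/doubanbook.csv', 'rt', encoding='utf-8') as f:
    reader = csv.reader(f)
    for i, row in enumerate(reader):
        print(row)  # header row first, then one row per book
        if i >= 3:
            break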
(1) The big tag
Grab the big elements first, then the small ones inside them; the key is to find the loop point, which here is:
//tr[@class="item"]  # selects all tr elements whose class attribute has the value "item"
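A minimal standalone sketch of this grab-big-then-small pattern; the HTML fragment below is made up to mimic the page structure, not copied from Douban:

from lxml import etree

# Made-up fragment that mimics the Top 250 page layout.
html = '''
<table>
  <tr class="item"><td><div><a title="Book A" href="https://example.com/a"></a></div></td></tr>
  <tr class="item"><td><div><a title="Book B" href="https://example.com/b"></a></div></td></tr>
</table>
'''
selector = etree.HTML(html)
infos = selector.xpath('//tr[@class="item"]')  # big: one node per book
for info in infos:
    print(info.xpath('td/div/a/@title')[0])    # small: details relative to each node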
(2) My first thought was to use the forward indices 0, 1, 2, 3:
author = book_info.split('/')[0]
publisher = book_info.split('/')[1]
date = book_info.split('/')[2]
price = book_info.split('/')[3]
The result saved from that version is shown in the figure:
This was a careless assumption: not every entry is laid out as author / publisher / publication date / price. For translated books, a translator appears after the author, as shown in the figure:
So the forward indices 0, 1, 2, 3 are bound to break on those entries, while 0, -3, -2, -1 is safe: counting from the end, the last three fields are always the publisher, publication date, and price.
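A minimal sketch of why the negative indices hold up; the two sample strings below are illustrative stand-ins, not scraped data:

plain = '[清] 曹雪芹 / 人民文学出版社 / 1996-12 / 59.70元'
with_translator = '[日] 东野圭吾 / 李盈春 / 南海出版公司 / 2014-5 / 39.50元'

for info in (plain, with_translator):
    parts = info.split('/')
    print(parts[1].strip())   # publisher for plain entries, but the translator when one exists
    print(parts[-3].strip(), parts[-2].strip(), parts[-1].strip())  # always publisher, date, price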