I. Extracting the Data
1. Printing to the console
import scrapy

class DoubanSpider(scrapy.Spider):
    name = 'douban'
    allowed_domains = ['douban.com']
    start_urls = [
        'https://movie.douban.com/top250/'
    ]

    def parse(self, response):
        movie_name = response.xpath("//div[@class='item']//a/span[1]/text()").extract()
        movie_core = response.xpath("//div[@class='star']/span[2]/text()").extract()
        yield {
            'movie_name': movie_name,
            'movie_core': movie_core
        }
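You do not need a running crawl to experiment with these XPath expressions. The sketch below uses the standard-library xml.etree.ElementTree (whose XPath support is a limited subset of what Scrapy's selectors accept) on a hypothetical, simplified fragment of the Top 250 markup, just to show what the two selectors pull out:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified fragment of the Douban Top 250 markup.
# The real page has far more nesting; this only mirrors the selector shape.
html = """
<ol>
  <div class="item">
    <a><span>The Shawshank Redemption</span></a>
    <div class="star"><span>rating icon</span><span>9.7</span></div>
  </div>
  <div class="item">
    <a><span>Farewell My Concubine</span></a>
    <div class="star"><span>rating icon</span><span>9.6</span></div>
  </div>
</ol>
"""

root = ET.fromstring(html)
# Same idea as response.xpath(...).extract(): collect matching text nodes.
movie_name = [s.text for s in root.findall(".//div[@class='item']/a/span[1]")]
movie_core = [s.text for s in root.findall(".//div[@class='star']/span[2]")]
print({'movie_name': movie_name, 'movie_core': movie_core})
```

Note that ElementTree uses `.text` instead of a `/text()` step, and its `//` descendant axis is more restricted than the full XPath Scrapy supports.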
2. Writing the results to a file
1) With plain Python
with open("movie.txt", 'wb') as f:
    for n, c in zip(movie_name, movie_core):
        line = n + ":" + c + "\n"  # avoid shadowing the built-in str
        f.write(line.encode())
2) With Scrapy's built-in exporters
Scrapy ships with four main export formats: JSON, JSON lines, CSV, and XML.
To export the results as JSON, the most common choice, run:
scrapy crawl douban -o douban.json -t json
-o sets the output filename, -t the export format.
II. Wrapping Extracted Content in an Item
Scrapy spiders extract data from web pages, and Scrapy uses the Item class to hold the scraped data as structured output objects.
An Item object works like a custom Python dictionary: you can read any field with standard dict syntax.
1. Defining the Item
import scrapy

class InfoItem(scrapy.Item):
    # define the fields for your item here like:
    movie_name = scrapy.Field()
    movie_core = scrapy.Field()
2. Using the Item
def parse(self, response):
    movie_name = response.xpath("//div[@class='item']//a/span[1]/text()").extract()
    movie_core = response.xpath("//div[@class='star']/span[2]/text()").extract()
    for n, c in zip(movie_name, movie_core):
        movie = InfoItem()
        movie['movie_name'] = n
        movie['movie_core'] = c
        yield movie
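What scrapy.Item adds over a plain dict is that only declared fields may be set. The stand-in below is not Scrapy's implementation, just a minimal stdlib sketch of that behavior and of the dict-style access pattern:

```python
# Minimal stand-in for scrapy.Item (illustrative only, not Scrapy's code):
# dict-style access, but assigning an undeclared field raises KeyError.
class InfoItem(dict):
    fields = ('movie_name', 'movie_core')

    def __setitem__(self, key, value):
        if key not in self.fields:
            raise KeyError(f"InfoItem does not support field: {key}")
        super().__setitem__(key, value)

movie = InfoItem()
movie['movie_name'] = 'The Shawshank Redemption'
movie['movie_core'] = '9.7'
print(movie['movie_name'])  # read back with standard dict syntax
```

This field declaration is what lets Scrapy catch typos like `movie['moive_name']` immediately instead of silently exporting a wrong key.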