Getting Started with Scrapy (Part 1): Scraping the Maoyan Rankings

Installing Scrapy

pip3 install scrapy

Creating a project

scrapy startproject maoyan

Directory structure

  • scrapy.cfg: the project configuration file
  • spiders: holds your Spider files, i.e. the .py files that do the actual crawling
  • items.py: defines containers for the scraped data, which behave much like dicts
  • middlewares.py: implementations of the Downloader Middlewares and Spider Middlewares
  • pipelines.py: Item Pipeline implementations, used to clean, store, and validate the data
  • settings.py: global settings
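For reference, a freshly generated project (named maoyan here, matching the command above) looks roughly like this:

```
maoyan/
├── scrapy.cfg
└── maoyan/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        └── __init__.py
```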

Defining the Item

Edit items.py and add the following:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class MaoyanItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    index = scrapy.Field()
    title = scrapy.Field()
    star = scrapy.Field()
    releasetime = scrapy.Field()
    score = scrapy.Field()
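An Item behaves much like a dict, with one difference worth knowing: only declared fields may be assigned. A minimal stdlib sketch of that behavior (FieldItem is a hypothetical stand-in for illustration, not Scrapy's actual implementation):

```python
class FieldItem(dict):
    """Toy stand-in for scrapy.Item: assignment is limited to declared fields."""
    fields = ('index', 'title', 'star', 'releasetime', 'score')

    def __setitem__(self, key, value):
        if key not in self.fields:
            raise KeyError('%s is not a declared field' % key)
        super().__setitem__(key, value)


item = FieldItem()
item['title'] = 'Farewell My Concubine'  # declared field: accepted
# item['director'] = '...'  # undeclared field: would raise KeyError
```

The real scrapy.Item raises a KeyError the same way when you try to set a field that was not declared with scrapy.Field().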

Defining the spider

Create a file named maoyan.py in the spiders folder.

Alternatively, hold Shift and right-click inside that folder, choose "Open command window here", and run: scrapy genspider maoyan <domain to crawl>

# -*- coding: utf-8 -*-
import scrapy
from maoyan.items import MaoyanItem


class MaoyanSpider(scrapy.Spider):
    name = 'maoyan'
    allowed_domains = ['maoyan.com']
    start_urls = ['https://maoyan.com/board/7/']

    def parse(self, response):
        dl = response.css('.board-wrapper dd')
        for dd in dl:
            item = MaoyanItem()
            item['index'] = dd.css('.board-index::text').extract_first()
            item['title'] = dd.css('.name a::text').extract_first()
            item['star'] = dd.css('.star::text').extract_first()
            item['releasetime'] = dd.css('.releasetime::text').extract_first()
            # the score is split across two nodes; default='' keeps the
            # concatenation from raising a TypeError if either part is missing
            item['score'] = (dd.css('.integer::text').extract_first(default='')
                             + dd.css('.fraction::text').extract_first(default=''))
            yield item

A quick note on analyzing the page elements

If you want the details, see the official docs: https://scrapy-chs.readthedocs.io/zh_CN/1.0/topics/selectors.html

Scrapy provides two handy shortcuts: response.xpath() and response.css().

The .xpath() and .css() methods return a SelectorList instance, which is a list of new selectors. This API can be used to quickly extract nested data.

To extract the actual text data, you need to call .extract(), or .extract_first() to get just the first match.
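One point worth spelling out, since the spider above relies on it: .extract() returns a list of strings, while .extract_first() returns only the first one, or a default (None unless you pass default=...). A stdlib sketch of that logic (extract_first here is a hypothetical helper mirroring the SelectorList method):

```python
def extract_first(values, default=None):
    """Mimic SelectorList.extract_first(): first match, or a default."""
    return values[0] if values else default


print(extract_first(['9', '.6']))       # first match wins
print(extract_first([]))                # no match -> None by default
print(extract_first([], default=''))    # explicit default instead of None
```

Passing default='' is the safe choice when two extracted pieces are concatenated with +, as with the integer and fractional score parts: a missing node would otherwise yield None and make the concatenation raise a TypeError.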

Running

From the project's root directory, run scrapy crawl <spider name>. The spider name is the value of name defined in the MaoyanSpider class:

scrapy crawl maoyan

At this point the run returns a 403, which shows that Maoyan has some basic anti-scraping measures in place.

To get past it, open settings.py, uncomment the USER_AGENT line, and change it to:

USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'

Run it again and it should work.

Saving the results

scrapy crawl maoyan -o maoyan.csv

scrapy crawl maoyan -o maoyan.xml

scrapy crawl maoyan -o maoyan.json
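The -o flag uses Scrapy's feed exports to serialize every item the spider yields. A rough stdlib sketch of what the CSV feed produces from the item dicts (the sample row is made up for illustration; this is not Scrapy's actual exporter code):

```python
import csv
import io

# Items as dicts, with the same fields as MaoyanItem
items = [
    {'index': '1', 'title': 'Farewell My Concubine', 'star': 'Leslie Cheung',
     'releasetime': '1993-01-01', 'score': '9.6'},
]

buf = io.StringIO()
writer = csv.DictWriter(
    buf, fieldnames=['index', 'title', 'star', 'releasetime', 'score'])
writer.writeheader()      # one header row with the field names
writer.writerows(items)   # one CSV row per scraped item
print(buf.getvalue())
```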

 
