Notes on Scrapy signals

Scrapy uses signals to notify you when certain events occur, such as the engine starting or stopping. You can catch some of these signals in your Scrapy project (using an extension, for example) to perform additional work or add extra functionality, extending Scrapy.
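
As a minimal sketch of what catching a signal looks like (the spider name and handler method below are illustrative), a spider can connect one of its own methods inside from_crawler:

from scrapy import Spider, signals


class NotesSpider(Spider):
    name = "notes"  # illustrative name
    start_urls = ["http://www.example.com"]

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # connect a bound method as the handler for spider_closed
        crawler.signals.connect(spider.handle_closed, signal=signals.spider_closed)
        return spider

    def handle_closed(self, spider, reason):
        self.logger.info("spider closed, reason=%s", reason)

    def parse(self, response):
        pass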

Scrapy's built-in signals (as defined in scrapy/signals.py):

engine_started = object()
engine_stopped = object()
spider_opened = object()
spider_idle = object()
spider_closed = object()
spider_error = object()
request_scheduled = object()
request_dropped = object()
response_received = object()
response_downloaded = object()
item_scraped = object()
item_dropped = object()

engine_started

Sent when the Scrapy engine starts crawling.

This signal supports returning deferreds from its handlers.

This signal may be fired after the spider_opened signal, depending on how the spider was started, so don't rely on it being sent earlier.

engine_stopped

Sent when the Scrapy engine is stopped (for example, when a crawl finishes).

This signal supports returning deferreds from its handlers.
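
Since handlers may return deferreds, a handler can hand blocking cleanup work off to a thread and the engine will wait for it to finish before shutting down. A minimal sketch, where flush_to_disk is a hypothetical blocking function:

from twisted.internet.threads import deferToThread


def flush_to_disk():
    # hypothetical blocking cleanup (e.g. flushing buffered output)
    pass


def engine_stopped():
    # returning the Deferred makes the engine wait for the thread to finish
    return deferToThread(flush_to_disk)

The handler would be connected like any other, e.g. crawler.signals.connect(engine_stopped, signal=signals.engine_stopped).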

item_scraped

Sent when an item has been scraped, after it has passed all the Item Pipeline stages without being dropped.

This signal supports returning deferreds from its handlers.

scrapy.signals.item_scraped(item, response, spider)

  • item (Item object) – the item scraped
  • spider (Spider object) – the spider which scraped the item
  • response (Response object) – the response from which the item was scraped
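
As an example, a small extension can use this signal to keep a running count of scraped items (a sketch; the class name and log message are illustrative):

from scrapy import signals


class ItemCounterExtension(object):
    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def __init__(self, crawler):
        self.count = 0
        crawler.signals.connect(self.item_scraped, signal=signals.item_scraped)
        crawler.signals.connect(self.spider_closed, signal=signals.spider_closed)

    def item_scraped(self, item, response, spider):
        # called once per item that survived the full pipeline
        self.count += 1

    def spider_closed(self, spider, reason):
        spider.logger.info("items scraped: %d", self.count)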

item_dropped

Sent when an item is dropped from the Item Pipeline because some pipeline stage raised a DropItem exception.

This signal supports returning deferreds from its handlers.

scrapy.signals.item_dropped(item, exception, spider)

  • item (Item object) – the item dropped from the Item Pipeline
  • spider (Spider object) – the spider which scraped the item
  • exception (DropItem exception) – the exception (which must be a DropItem subclass) that caused the item to be dropped
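
The handler receives the DropItem exception itself, so it can, for example, log why each item was discarded (a sketch):

def item_dropped(item, spider, exception):
    # exception is the DropItem instance raised by the pipeline stage
    spider.logger.warning("item dropped: %s", exception)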

spider_closed

Sent after a spider has been closed. This can be used to release per-spider resources reserved on spider_opened.

This signal supports returning deferreds from its handlers.

scrapy.signals.spider_closed(spider, reason)

  • spider (Spider object) – the spider which has been closed
  • reason (str) – a string describing why the spider was closed. If it was closed because the spider completed its crawl, the reason is 'finished'. If it was closed through the engine's close_spider method, the reason is the one passed in that method's reason argument (which defaults to 'cancelled'). If the engine was shut down (for example, by hitting Ctrl-C), the reason is 'shutdown'.
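
A handler can branch on the reason string to tell a normal finish from an interrupted run (a sketch):

def spider_closed(spider, reason):
    if reason == 'finished':
        spider.logger.info("crawl completed normally")
    elif reason == 'shutdown':
        spider.logger.warning("crawl interrupted (e.g. Ctrl-C)")
    else:
        spider.logger.warning("crawl closed early: %s", reason)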

spider_opened

Sent right after a spider has been opened for crawling. This signal is typically used to reserve per-spider resources, but it can be used for anything.

This signal supports returning deferreds from its handlers.

scrapy.signals.spider_opened(spider)

  • spider (Spider object) – the spider which has been opened

spider_idle

Sent when a spider has gone idle, which means the spider has no further:

  • requests waiting to be downloaded
  • requests scheduled
  • items being processed in the item pipeline

If the spider is still idle after all handlers of this signal have been called, the engine starts closing it. Once the spider has finished closing, the spider_closed signal is sent.

You can schedule some requests from a spider_idle handler to prevent the spider from being closed, as shown in the sketch after this section.

This signal does not support returning deferreds from its handlers.
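
A sketch of that keep-alive pattern, assuming a hypothetical next_url() source of extra URLs; note that older Scrapy versions also pass the spider as a second argument to engine.crawl():

from scrapy import Request, signals
from scrapy.exceptions import DontCloseSpider


def next_url():
    # hypothetical source of additional URLs (e.g. a Redis queue)
    return None


class KeepAliveExtension(object):
    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        ext.crawler = crawler
        crawler.signals.connect(ext.spider_idle, signal=signals.spider_idle)
        return ext

    def spider_idle(self, spider):
        url = next_url()
        if url:
            # feed a new request back into the engine (Scrapy >= 2.6 signature)
            self.crawler.engine.crawl(Request(url))
            # DontCloseSpider keeps the engine from closing the idle spider
            raise DontCloseSpider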

spider_error

Sent when a spider callback generates an error (i.e. raises an exception).

scrapy.signals.spider_error(failure, response, spider)

  • failure (Failure object) – the exception raised, as a Twisted Failure object
  • response (Response object) – the response being processed when the exception was raised
  • spider (Spider object) – the spider which raised the exception
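
Because the handler receives a Twisted Failure, the full traceback can be recovered with Failure.getTraceback() (a sketch):

def spider_error(failure, response, spider):
    # getTraceback() returns the formatted traceback as a string
    spider.logger.error("callback failed for %s:\n%s",
                        response.url, failure.getTraceback())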

request_scheduled

Sent when the engine schedules a Request, to be downloaded later.

This signal does not support returning deferreds from its handlers.

scrapy.signals.request_scheduled(request, spider)

  • request (Request object) – the request that reached the scheduler
  • spider (Spider object) – the spider that yielded the request

request_dropped

Sent when a Request, scheduled by the engine to be downloaded later, is rejected by the scheduler.

This signal does not support returning deferreds from its handlers.

scrapy.signals.request_dropped(request, spider)

response_received

Sent when the engine receives a new Response from the downloader.

This signal does not support returning deferreds from its handlers.

scrapy.signals.response_received(response, request, spider)

  • response (Response object) – the response received
  • request (Request object) – the request that generated the response
  • spider (Spider object) – the spider the response is intended for

response_downloaded

Sent by the downloader right after an HTTPResponse is downloaded.

This signal does not support returning deferreds from its handlers.

scrapy.signals.response_downloaded(response, request, spider)

  • response (Response object) – the response downloaded
  • request (Request object) – the request that generated the response
  • spider (Spider object) – the spider the response is intended for

An example:

spider

from scrapy.spiders import Spider
from scrapy.item import Item, Field


class HooksasyncItem(Item):
    name = Field()


class TestSpider(Spider):
    name = "test"
    allowed_domains = ["example.com"]
    start_urls = ('http://www.example.com',)

    def parse(self, response):
        for i in range(2):
            item = HooksasyncItem()
            item['name'] = "Hello %d" % i
            yield item  # each yielded item fires item_scraped (or item_dropped)
        raise Exception("dead")  # deliberately raise so spider_error fires

extensions

import logging

from scrapy import signals
from scrapy.exceptions import DropItem


class HooksasyncExtension(object):
    @classmethod
    def from_crawler(cls, crawler):
        logging.info("HooksasyncExtension from_crawler")
        return cls(crawler)

    def __init__(self, crawler):
        logging.info("HooksasyncExtension Constructor called")

        # connect the extension object to signals
        cs = crawler.signals.connect
        cs(self.engine_started, signal=signals.engine_started)
        cs(self.engine_stopped, signal=signals.engine_stopped)
        cs(self.spider_opened, signal=signals.spider_opened)
        cs(self.spider_idle, signal=signals.spider_idle)
        cs(self.spider_closed, signal=signals.spider_closed)
        cs(self.spider_error, signal=signals.spider_error)
        cs(self.request_scheduled, signal=signals.request_scheduled)
        cs(self.response_received, signal=signals.response_received)
        cs(self.response_downloaded, signal=signals.response_downloaded)
        cs(self.item_scraped, signal=signals.item_scraped)
        cs(self.item_dropped, signal=signals.item_dropped)

    def engine_started(self):
        logging.info("HooksasyncExtension, signals.engine_started fired")

    def engine_stopped(self):
        logging.info("HooksasyncExtension, signals.engine_stopped fired")

    def spider_opened(self, spider):
        logging.info("HooksasyncExtension, signals.spider_opened fired")

    def spider_idle(self, spider):
        logging.info("HooksasyncExtension, signals.spider_idle fired")

    def spider_closed(self, spider, reason):
        logging.info("HooksasyncExtension, signals.spider_closed fired")

    def spider_error(self, failure, response, spider):
        logging.info("HooksasyncExtension, signals.spider_error fired")

    def request_scheduled(self, request, spider):
        logging.info("HooksasyncExtension, signals.request_scheduled fired")

    def response_received(self, response, request, spider):
        logging.info("HooksasyncExtension, signals.response_received fired")

    def response_downloaded(self, response, request, spider):
        logging.info("HooksasyncExtension, signals.response_downloaded fired")

    def item_scraped(self, item, response, spider):
        logging.info("HooksasyncExtension, signals.item_scraped fired")

    def item_dropped(self, item, spider, exception):
        logging.info("HooksasyncExtension, signals.item_dropped fired")

    @classmethod
    def from_settings(cls, settings):
        logging.info("HooksasyncExtension from_settings")
        # This is never called - but would be called if from_crawler()
        # didn't exist. from_crawler() can access the settings via
        # crawler.settings but also has access to everything that
        # crawler object provides like signals, stats and the ability
        # to schedule new requests with crawler.engine.download()


class HooksasyncDownloaderMiddleware(object):
    """ Downloader middlewares *are* middlewares and as a result can do
    everything any middleware can do (see HooksasyncExtension).
    The main thing that makes them different are the process_*() methods"""

    @classmethod
    def from_crawler(cls, crawler):
        logging.info("HooksasyncDownloaderMiddleware from_crawler")
        # Here the constructor is actually called and the class returned
        return cls(crawler)

    def __init__(self, crawler):
        logging.info("HooksasyncDownloaderMiddleware Constructor called")

    def process_request(self, request, spider):
        logging.info(("HooksasyncDownloaderMiddleware process_request "
                      "called for %s") % request.url)

    def process_response(self, request, response, spider):
        logging.info(("HooksasyncDownloaderMiddleware process_response "
                      "called for %s") % request.url)
        return response

    def process_exception(self, request, exception, spider):
        logging.info(("HooksasyncDownloaderMiddleware process_exception "
                      "called for %s") % request.url)


class HooksasyncSpiderMiddleware(object):
    """ Spider middlewares *are* middlewares and as a result can do
    everything any middleware can do (see HooksasyncExtension).
    The main thing that makes them different are the process_*() methods"""

    @classmethod
    def from_crawler(cls, crawler):
        logging.info("HooksasyncSpiderMiddleware from_crawler")
        # Here the constructor is actually called and the class returned
        return cls(crawler)

    def __init__(self, crawler):
        logging.info("HooksasyncSpiderMiddleware Constructor called")

    def process_spider_input(self, response, spider):
        logging.info(("HooksasyncSpiderMiddleware process_spider_input "
                      "called for %s") % response.url)

    def process_spider_output(self, response, result, spider):
        logging.info(("HooksasyncSpiderMiddleware process_spider_output "
                      "called for %s") % response.url)
        return result

    def process_spider_exception(self, response, exception, spider):
        logging.info(("HooksasyncSpiderMiddleware process_spider_exception "
                      "called for %s") % response.url)

    def process_start_requests(self, start_requests, spider):
        logging.info("HooksasyncSpiderMiddleware process_start_requests"
                     " called")
        return start_requests


class HooksasyncPipeline(object):
    """ Pipelines *are* middlewares and as a result can do
    everything any middlewares can do (see HooksasyncExtension).
    The main thing that makes them different is that they have
    the process_item() method"""

    @classmethod
    def from_crawler(cls, crawler):
        logging.info("HooksasyncPipeline from_crawler")
        # Here the constructor is actually called and the class returned
        return cls(crawler)

    def __init__(self, crawler):
        logging.info("HooksasyncPipeline Constructor called")

    def process_item(self, item, spider):
        if item['name'] == "Hello 1":
            raise DropItem("Not good")
        logging.info(("HooksasyncPipeline process_item() called for "
                      "item: %s") % item['name'])
        return item

    # This function overrides the default for Item Pipelines
    def open_spider(self, spider):
        logging.info("HooksasyncPipeline spider_opened")

    # This function overrides the default for Item Pipelines
    def close_spider(self, spider):
        logging.info("HooksasyncPipeline spider_closed")

Reference: https://scrapy-chs.readthedocs.io/zh_CN/latest/topics/signals.html
