【Scrapy】How to use Scrapy's items.py

This article introduces the use of items.py in Scrapy, mainly for data cleaning. After fetching data with ItemLoader, a custom MapComposeCustom handles values that may be empty, avoiding errors caused by missing fields. It also explains the use of Join() to turn the returned list into a string so the data matches the required format.


I previously wrote about pipelines.py: 【Scrapy】Scrapy的pipelines管道使用方法. Pipelines mainly deal with what you do with the data after it has been scraped.

This time the topic is items.py, whose job is to process the scraped data, i.e. data cleaning. It is hard to explain in the abstract, so let's look at the code first.

1. First, fetch the data with ItemLoader

import sys
sys.path.append(r'E:\projects\python-spider')
import scrapy
from scrapy.loader import ItemLoader
from spiderJob.items import SpiderjobItem


def getUrl():
    return 'https://search.51job.com/list/030200,000000,0000,00,9,99,%2520,2,1.html?lang=c&stype=&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&providesalary=99&lonlat=0%2C0&radius=-1&ord_field=0&confirmdate=9&fromType=&dibiaoid=0&address=&line=&specialarea=00&from=&welfare='


class itemSpider(scrapy.Spider):
    name = 'argsSpider'

    def start_requests(self):
        url = getUrl()
        yield scrapy.Request(url, self.parse)  # send the request for the listing page

    def parse(self, response):
        mingyan = response.css('div#resultList>.el')  # extract every result row on the first page
        for index, v in enumerate(mingyan):  # iterate over the rows
            if index > 0:  # skip the first .el, which is the header row
                load = ItemLoader(item=SpiderjobItem(), selector=v)
                load.add_css("t1", ".t1 a::text")
                load.add_css("t2", ".t2 a::attr(title)")
                load.add_css("t3", ".t3::text")
                load.add_css("t4", ".t4::text")
                item = load.load_item()
                yield item

2. Process and return

import scrapy

# ItemLoader is another way to separate data handling:
# it splits extraction and processing into two steps,
# supports css/xpath extraction out of the box,
# and keeps the code cleaner and clearer.
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose, TakeFirst, Join
# Internals used by MapCompose.__call__ below; these import paths match the
# older Scrapy releases this code was written for (recent releases moved the
# processors into the separate itemloaders package and dropped MergeDict).
from scrapy.loader.common import wrap_loader_context
from scrapy.utils.datatypes import MergeDict
from scrapy.utils.misc import arg_to_iter


# If the incoming list is empty, MapCompose does nothing with it,
# so we override its __call__ to handle that case.
class MapComposeCustom(MapCompose):
    # Custom MapCompose: when value has no elements, feed in " " instead
    def __call__(self, value, loader_context=None):
        if not value:
            value.append(" ")
        values = arg_to_iter(value)
        if loader_context:
            context = MergeDict(loader_context, self.default_loader_context)
        else:
            context = self.default_loader_context
        wrapped_funcs = [wrap_loader_context(f, context) for f in self.functions]
        for func in wrapped_funcs:
            next_values = []
            for v in values:
                next_values += arg_to_iter(func(v))
            values = next_values
        return values


# TakeFirst is the usual default output processor, but it filters out empty
# strings, so override its __call__ as well.
class TakeFirstCustom(TakeFirst):
    def __call__(self, values):
        for value in values:
            if value is not None and value != '':
                return value.strip() if isinstance(value, str) else value


# Strip spaces and line breaks
def cutSpace(value):
    result = str(value).replace(" ", "")
    result = result.replace("\n", "")
    result = result.replace("\r", "")
    return result


class SpiderjobItem(scrapy.Item):
    t1 = scrapy.Field(
        input_processor=MapComposeCustom(cutSpace),
        output_processor=Join()
    )  # title
    t2 = scrapy.Field(
        input_processor=MapComposeCustom(cutSpace),
        output_processor=Join()
    )
    t3 = scrapy.Field(
        input_processor=MapComposeCustom(cutSpace),
        output_processor=Join()
    )
    t4 = scrapy.Field(
        input_processor=MapComposeCustom(cutSpace),
        output_processor=Join()
    )
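To see the processors work without running the whole spider, you can feed values into an ItemLoader by hand. Below is a minimal sketch; the import path for SpiderjobItem assumes the project layout used above, and the sample strings are made up:

from scrapy.loader import ItemLoader
from spiderJob.items import SpiderjobItem

loader = ItemLoader(item=SpiderjobItem())
# raw extracted values usually carry stray spaces and line breaks
loader.add_value('t1', ['  Python 开发  \n'])
# pretend the selector for t2 matched nothing on this row
loader.add_value('t2', [])
print(loader.load_item())
# roughly: {'t1': 'Python开发', 't2': ''} -- t2 stays present instead of disappearing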

A few pitfalls I ran into:

1. At first I used MapCompose, not MapComposeCustom. Why did I switch? Because MapCompose cannot handle empty input: when a field's extracted value is an empty list, that field simply drops out of the item, as in the output below, and accessing it afterwards (e.g. item['t1']) raises an error. Overriding MapCompose's __call__ handles this case (see the short sketch after the sample output). I did not actually need TakeFirstCustom here, but it is commonly used, so I listed it as well.

{
    't1': ['1'],
    't2': ['1'],
    't3': ['1'],
    't4': ['1'],
},
{
    't2': ['1'],
    't3': ['1'],
}
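The difference is easy to reproduce outside the spider. A minimal sketch, assuming MapComposeCustom and cutSpace from the items.py above are in scope:

from scrapy.loader.processors import MapCompose

# plain MapCompose: an empty input list yields an empty output list,
# so the field never receives a value and vanishes from the item
print(MapCompose(cutSpace)([]))        # []

# MapComposeCustom: the empty list is replaced with [" "] first,
# so the field always ends up with at least an empty-ish string
print(MapComposeCustom(cutSpace)([]))  # ['']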

2. Why use Join()?

Because fields like t1 come back as lists, while what I need is a string.
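For reference, Join() is one of Scrapy's built-in processors and simply concatenates the list with a separator (a single space by default), e.g.:

from scrapy.loader.processors import Join

print(Join()(['深圳', '南山区']))    # '深圳 南山区'
print(Join('')(['深圳', '南山区']))  # '深圳南山区' -- pass '' if no separator is wanted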
