Scrapy (1): Running multiple spiders

This post shows how to run all of a project's spiders in one go with the Scrapy framework by adding a custom crawlall command, covering the command implementation, its option handling, and the settings needed to register it. It is aimed at Python developers who want to get this working quickly.


1 Create a commands directory at the same level as the spiders directory, inside the project package (see the layout sketch below).
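For reference, assuming the project package is named BaiduSpider (the name used in the settings example further down), the layout would look roughly like this; the __init__.py inside commands is needed so Python treats it as a package:

BaiduSpider/
    scrapy.cfg
    BaiduSpider/
        __init__.py
        settings.py
        spiders/
            ...
        commands/
            __init__.py
            crawlall.py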

crawlall.py

# -*- coding: utf-8 -*-

from scrapy.commands import ScrapyCommand
from scrapy.utils.conf import arglist_to_dict
from scrapy.exceptions import UsageError


class Command(ScrapyCommand):
    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def add_options(self, parser):
        # Note: this uses the optparse-style API of older Scrapy releases;
        # newer Scrapy versions pass an argparse parser here, where
        # parser.add_argument(...) would be used instead.
        ScrapyCommand.add_options(self, parser)
        parser.add_option("-a", dest="spargs", action="append", default=[], metavar="NAME=VALUE",
                          help="set spider argument (may be repeated)")
        parser.add_option("-o", "--output", metavar="FILE",
                          help="dump scraped items into FILE (use - for stdout)")
        parser.add_option("-t", "--output-format", metavar="FORMAT",
                          help="format to use for dumping items with -o")

    def process_options(self, args, opts):
        ScrapyCommand.process_options(self, args, opts)
        try:
            opts.spargs = arglist_to_dict(opts.spargs)
        except ValueError:
            raise UsageError("Invalid -a value, use -a NAME=VALUE", print_help=False)

    def run(self, args, opts):
        # Run the spiders named on the command line, or every spider
        # registered in the project if no names were given.
        spider_loader = self.crawler_process.spider_loader
        for spidername in args or spider_loader.list():
            print("crawlall: scheduling spider " + spidername)
            self.crawler_process.crawl(spidername, **opts.spargs)
        # Start the reactor; this blocks until all scheduled spiders finish.
        self.crawler_process.start()

setup.py

# -*- coding: utf-8 -*-
from setuptools import setup, find_packages

setup(
    name='scrapy-mymodule',
    packages=find_packages(),
    entry_points={
        'scrapy.commands': [
            # The entry point must resolve to the ScrapyCommand subclass;
            # replace BaiduSpider with your own project package name.
            'crawlall=BaiduSpider.commands.crawlall:Command',
        ],
    },
)
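Note that packaging the command via setup.py is an optional, alternative way to register it: the entry point only takes effect once the package is installed (for example with pip install -e . run from the directory containing setup.py). If the COMMANDS_MODULE setting shown next is used, this step can be skipped.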

settings.py

# ------- Launch multiple spiders at once: register the custom commands package
COMMANDS_MODULE = 'BaiduSpider.commands'  # <project package>.commands, not a spider name

Then run scrapy crawlall from the command line.
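For example, an argument can be passed to every spider with the -a option defined in add_options above, e.g. scrapy crawlall -a category=news, where category=news is just a hypothetical spider argument forwarded via **opts.spargs. Note that, as written, the -o and -t options are declared but run() never reads them, so feed export still has to be configured in settings.py.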

To schedule this on Windows, create a run.py and add it as a basic task in Windows Task Scheduler:

# -*- coding: utf-8 -*-
from scrapy import cmdline

# Equivalent to running "scrapy crawlall" from the project directory.
cmdline.execute(['scrapy', 'crawlall'])
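As a side note, if a custom command is not needed, a standalone script along these lines achieves the same effect with CrawlerProcess. This is only a minimal sketch based on Scrapy's documented API, not part of the original post, and it must be run from inside the project directory so get_project_settings() can find scrapy.cfg:

# -*- coding: utf-8 -*-
# Alternative: run every spider in the project from a plain script,
# without registering a custom command.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
process = CrawlerProcess(settings)

# Schedule every spider registered in the project.
for spider_name in process.spider_loader.list():
    process.crawl(spider_name)

# Start the reactor; blocks until all spiders finish.
process.start()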

Reposted from: https://my.oschina.net/u/3771014/blog/1932601
