Running Scrapy inside a thread in Python

This article shows how to run a Scrapy crawler from a thread inside a Python program. By creating a `CrawlerThread` class and using `scrapymanager` together with the Twisted `reactor`, the crawler can be started and stopped from a multi-threaded environment. The example code covers setting the settings environment variable, connecting signals, and starting and stopping the crawler from its thread.


This article demonstrates, with a working example, how to run Scrapy inside a thread in Python. It is shared here for your reference; the details follow.

If you want to call Scrapy from an existing program, the code below lets Scrapy run in a dedicated thread.

```python
"""
Code to run a Scrapy crawler in a thread - works on Scrapy 0.8
(Python 2; the scrapy.core.manager API shown here was removed in later versions)
"""
import threading
import Queue

from twisted.internet import reactor
from scrapy.xlib.pydispatch import dispatcher
from scrapy.core.manager import scrapymanager
from scrapy.core.engine import scrapyengine
from scrapy.core import signals

class CrawlerThread(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.running = False

    def run(self):
        self.running = True
        # This thread drives the reactor; Twisted can only install signal
        # handlers from the main thread, so they are disabled here.
        scrapymanager.configure(control_reactor=False)
        scrapymanager.start()
        reactor.run(installSignalHandlers=False)

    def crawl(self, *args):
        if not self.running:
            raise RuntimeError("CrawlerThread not running")
        self._call_and_block_until_signal(signals.spider_closed,
                                          scrapymanager.crawl, *args)

    def stop(self):
        # Engine methods must be invoked from the reactor thread.
        reactor.callFromThread(scrapyengine.stop)

    def _call_and_block_until_signal(self, signal, f, *a, **kw):
        q = Queue.Queue()
        def unblock():
            q.put(None)
        dispatcher.connect(unblock, signal=signal)
        reactor.callFromThread(f, *a, **kw)
        q.get()  # block the calling thread until the signal fires
```
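The `_call_and_block_until_signal` helper implements a general pattern: hand work to another thread, then block the caller until a completion signal fires. A minimal Python 3 stdlib sketch of that pattern (the names here are illustrative and not part of Scrapy; the `queue.Queue` stands in for the `spider_closed` signal and `run_in_thread` for `reactor.callFromThread`):

```python
import threading
import queue

def call_and_block_until_signal(run_in_background, f, *args):
    """Run f(*args) on another thread; block the caller until it finishes."""
    done = queue.Queue()

    def wrapper():
        f(*args)
        done.put(None)  # plays the role of the spider_closed signal

    run_in_background(wrapper)
    done.get()  # block until the background call signals completion

def run_in_thread(fn):
    threading.Thread(target=fn).start()

results = []
call_and_block_until_signal(run_in_thread, results.append, "crawled")
print(results)  # prints ['crawled']
```

The queue hand-off is what makes `crawl()` a blocking call even though the actual crawling happens on the reactor thread.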

```python
# Usage example below:
import os
os.environ.setdefault('SCRAPY_SETTINGS_MODULE', 'myproject.settings')

from scrapy.xlib.pydispatch import dispatcher
from scrapy.core import signals
from scrapy.conf import settings
from scrapy.crawler import CrawlerThread

settings.overrides['LOG_ENABLED'] = False  # avoid log noise

def item_passed(item):
    print "Just scraped item:", item

dispatcher.connect(item_passed, signal=signals.item_passed)

crawler = CrawlerThread()
print "Starting crawler thread..."
crawler.start()
print "Crawling somedomain.com...."
crawler.crawl('somedomain.com')  # blocking call
print "Crawling anotherdomain.com..."
crawler.crawl('anotherdomain.com')  # blocking call
print "Stopping crawler thread..."
crawler.stop()
```
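Both `crawl()` and `stop()` hand work to the reactor via `reactor.callFromThread`, because the Twisted reactor is single-threaded and its objects must only be touched from the thread running it. The marshalling idea can be sketched with a stdlib queue acting as the "reactor" mailbox (`MiniLoop` below is an illustrative analogue, not Twisted itself):

```python
import queue
import threading

class MiniLoop:
    """A single-threaded event loop: other threads submit callables to it."""
    _STOP = object()

    def __init__(self):
        self._calls = queue.Queue()

    def call_from_thread(self, f, *args):
        self._calls.put((f, args))   # thread-safe hand-off, like callFromThread

    def stop(self):
        self._calls.put(self._STOP)

    def run(self):
        while True:
            item = self._calls.get()
            if item is self._STOP:
                break
            f, args = item
            f(*args)                 # executed only on the loop's own thread

loop = MiniLoop()
seen = []
t = threading.Thread(target=loop.run)
t.start()
# The callable runs on the loop thread, not the submitting thread:
loop.call_from_thread(lambda: seen.append(threading.get_ident()))
loop.stop()
t.join()
print(seen[0] == t.ident)  # prints True
```

This is why `stop()` wraps `scrapyengine.stop` in `callFromThread` instead of calling it directly from the caller's thread.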

I hope this article is helpful to readers doing Python programming.
