Scrapy ships with several built-in downloader middlewares; HttpProxyMiddleware is the one that handles proxies.
Using your own proxy middleware in Scrapy takes two main steps.
1: Write your own proxy middleware:
# -*- coding: utf-8 -*-
import base64
import logging
import random

from dcs.settings import PROXIES


class ProxyMiddleware(object):
    """Replaces Scrapy's built-in HttpProxyMiddleware.

    If 'proxy' is already set in request.meta, the built-in
    HttpProxyMiddleware leaves the request alone.
    """

    def process_request(self, request, spider):
        # Respect a proxy that was already assigned to this request.
        if 'proxy' in request.meta:
            return
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        # Note: base64.encodestring() was removed in Python 3.9;
        # b64encode() takes bytes and returns bytes.
        encoded_user_pass = base64.b64encode(
            proxy['user_pass'].encode('utf-8')).decode('ascii')
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
        logging.info('[ProxyMiddleware] proxy:%s is used', proxy)
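The middleware above expects PROXIES (imported from dcs.settings) to be a list of dicts with 'ip_port' and 'user_pass' keys. A minimal sketch of what that setting might look like, with placeholder addresses and credentials, plus the header the middleware would derive from one entry:

```python
import base64

# Hypothetical proxy pool; the addresses and credentials are placeholders.
PROXIES = [
    {'ip_port': '111.11.228.75:80', 'user_pass': 'user1:pass1'},
    {'ip_port': '120.198.243.22:80', 'user_pass': 'user2:pass2'},
]

# What the middleware builds from one entry:
proxy = PROXIES[0]
proxy_url = "http://%s" % proxy['ip_port']   # -> http://111.11.228.75:80
auth = 'Basic ' + base64.b64encode(proxy['user_pass'].encode()).decode()
print(proxy_url)
print(auth)
```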
2: Enable your proxy middleware in settings.py, and give it an order number lower than HttpProxyMiddleware's so it runs first. (The setting is a dict: the key is the class path, the value is the execution order. Because of the `if 'proxy' in request.meta` check, the built-in proxy middleware then does nothing. Built-in middlewares are enabled by default.)
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,
    'pythontab.middlewares.ProxyMiddleware': 100,
}
Note that in Scrapy 1.0 and later the built-in middleware lives at 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware'; the 'scrapy.contrib' path is the legacy location.
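Scrapy sorts downloader middlewares by their order value, and process_request() is called from the lowest number to the highest, so ProxyMiddleware (100) handles each request before the built-in HttpProxyMiddleware (110). A quick check of that ordering, using the same class paths as above:

```python
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,
    'pythontab.middlewares.ProxyMiddleware': 100,
}

# Sort class paths by their order value, mirroring how Scrapy
# decides the process_request() call order.
request_order = sorted(DOWNLOADER_MIDDLEWARES, key=DOWNLOADER_MIDDLEWARES.get)
print(request_order[0])  # ProxyMiddleware comes first
```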
This article showed how to write and enable a custom proxy middleware in Scrapy in place of the built-in HttpProxyMiddleware. The ProxyMiddleware class picks a random proxy from the PROXIES list in settings, sets the request's `proxy` meta key, and attaches a `Proxy-Authorization` header. The middleware is then registered in settings.py with an order number that places it before the built-in HttpProxyMiddleware.