Last time I crawled JD.com with Selenium + Scrapy, but it was inefficient, slow, and error-prone.
This time I'm using Splash as the JavaScript engine to render the dynamic pages.
1. Install Splash. I'm on Ubuntu, so I install Docker via apt, then pull and start the Splash image:
sudo apt install docker.io
sudo docker pull scrapinghub/splash
sudo docker run -it -p 8050:8050 scrapinghub/splash
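As a quick sanity check (optional), you can hit Splash's render.html endpoint and confirm that rendered HTML comes back. This sketch assumes Splash is listening on localhost:8050 and uses jd.com only as an example URL:

import requests

# Ask Splash to render a page and return the resulting HTML.
resp = requests.get(
    'http://localhost:8050/render.html',
    params={'url': 'https://www.jd.com', 'wait': 2},
)
print(resp.status_code)   # 200 if Splash rendered the page
print(len(resp.text))     # length of the JS-rendered HTML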
2. Install scrapy-splash, which integrates Splash into Scrapy:
pip3 install scrapy-splash
3. Configure Scrapy's settings.py:
# URL of the running Splash instance
SPLASH_URL = 'http://localhost:8050/'
# Register the Splash downloader middlewares so requests go through Splash
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
# Splash-aware request deduplication and HTTP cache
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
4. Replace the Request objects yielded by the spider with SplashRequest, as in the sketch below.
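A minimal sketch of this step, assuming a hypothetical spider; the spider name, search URL, and CSS selector are illustrative, not taken from the original project:

import scrapy
from scrapy_splash import SplashRequest


class JDSpider(scrapy.Spider):
    name = 'jd'

    def start_requests(self):
        url = 'https://search.jd.com/Search?keyword=python'
        # SplashRequest sends the URL through the Splash HTTP API instead of
        # fetching it directly; 'wait' gives the page time to run its JS.
        yield SplashRequest(url, callback=self.parse, args={'wait': 2})

    def parse(self, response):
        # response.text now contains the JS-rendered HTML.
        # The selector below is an assumption about JD's search page markup.
        for title in response.css('div.p-name em::text').getall():
            yield {'title': title.strip()}

Everything else stays as in a normal Scrapy spider; only the request class changes, and the Splash middlewares configured in settings.py take care of routing it through the rendering service.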