Python Crawler Pitfall Log: _pickle.PicklingError: Can't pickle <class>

This post describes a pickle error hit while running a Python crawler on Windows: the process object could not be serialized. The workaround was to run the crawler in an Ubuntu virtual machine instead, where MongoDB at first failed to start because of insufficient free disk space; starting it with the --smallfiles option got MongoDB up and the crawler running normally.

For a course project, the instructor asked me to help their group run a crawler. I downloaded the source code, ran it in Anaconda, and got a strange error.

Traceback (most recent call last):
  File "ccf_crawler.py", line 118, in <module>
    save_dblp_papers()
  File "ccf_crawler.py", line 102, in save_dblp_papers
    watcher_process.start()
  File "E:\jupter\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "E:\jupter\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "E:\jupter\lib\multiprocessing\context.py", line 326, in _Popen
    return Popen(process_obj)
  File "E:\jupter\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "E:\jupter\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'paper_crawler.crawler_manager.CrawlerManager'>: it's not the same object as paper_crawler.crawler_manager.CrawlerManager
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "E:\jupter\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "E:\jupter\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

This is a Python pickle error. pickle is Python's simple persistence mechanism: it serializes an object into a byte stream so that it can be written to a file (or handed to another process) and reconstructed later.
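As a quick illustration of what pickle normally does (a throwaway example, not code from the crawler):

import pickle

record = {"conference": "ICSE", "ccf_rank": "A"}

# Serialize the object to a file...
with open("record.pkl", "wb") as f:
    pickle.dump(record, f)

# ...and load it back later.
with open("record.pkl", "rb") as f:
    print(pickle.load(f))   # {'conference': 'ICSE', 'ccf_rank': 'A'}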

So why does the crawler fail only on Windows? It turns out the crawler is built on multiprocessing. On Windows there is no fork, so new processes are started with the spawn method: the Process object, together with its target and arguments, has to be pickled and sent to the child, and anything unpicklable in that graph (sockets, locks, or a class that ends up imported under a different module path, as the error message here hints) triggers exactly this kind of PicklingError. On Linux, child processes are created with fork and simply inherit the parent's memory, so nothing needs to be pickled.
So there are two possible fixes:

  1. Refactor the code to use threads instead of processes (for an I/O-bound crawler the performance impact is minor), so it can run on Windows; see the sketch after this list.
  2. Just run it on Linux.
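A minimal sketch of option 1, assuming the original code starts its watcher roughly as multiprocessing.Process(target=manager.watch) inside save_dblp_papers (manager and watch are hypothetical names, for illustration only):

import threading

def start_watcher(manager):
    # A thread lives inside the same process as its parent, so the target never
    # has to be pickled; this sidesteps the Windows spawn/pickling issue entirely.
    watcher = threading.Thread(target=manager.watch, daemon=True)
    watcher.start()
    return watcher

Since a crawler spends most of its time waiting on the network, the GIL rarely matters here, which is why switching from processes to threads usually costs little performance.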

So I fired up an Ubuntu virtual machine, set up the environment, installed MongoDB, and ran the code. Success!
And then the VM ran out of space... sigh.

Side note: a MongoDB startup error on Ubuntu 14.04
I installed MongoDB following an online tutorial, and then got this error when starting it:

 mongod --dbpath data/db
Mon Dec  7 19:03:14.143 [initandlisten] MongoDB starting : pid=129519 port=27017 dbpath=data/db 64-bit host=ubuntu
Mon Dec  7 19:03:14.143 [initandlisten] db version v2.4.9
Mon Dec  7 19:03:14.143 [initandlisten] git version: nogitversion
Mon Dec  7 19:03:14.143 [initandlisten] build info: Linux orlo 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:37:58 UTC 2013 x86_64 BOOST_LIB_VERSION=1_54
Mon Dec  7 19:03:14.143 [initandlisten] allocator: tcmalloc
Mon Dec  7 19:03:14.143 [initandlisten] options: { dbpath: "data/db" }
Mon Dec  7 19:03:14.149 [initandlisten] journal dir=data/db/journal
Mon Dec  7 19:03:14.149 [initandlisten] recover : no journal files present, no recovery needed
Mon Dec  7 19:03:14.149 [initandlisten] 
Mon Dec  7 19:03:14.149 [initandlisten] ERROR: Insufficient free space for journal files
Mon Dec  7 19:03:14.149 [initandlisten] Please make at least 3379MB available in data/db/journal or use --smallfiles
Mon Dec  7 19:03:14.149 [initandlisten] 
Mon Dec  7 19:03:14.150 [initandlisten] exception in initAndListen: 15926 Insufficient free space for journals, terminating
Mon Dec  7 19:03:14.150 dbexit: 
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: going to close listening sockets...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: going to flush diaglog...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: going to close sockets...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: waiting for fs preallocator...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: lock for final commit...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: final commit...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: closing all files...
Mon Dec  7 19:03:14.150 [initandlisten] closeAllFiles() finished
Mon Dec  7 19:03:14.150 [initandlisten] journalCleanup...
Mon Dec  7 19:03:14.150 [initandlisten] removeJournalFiles
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: removing fs lock...
Mon Dec  7 19:03:14.150 dbexit: really exiting now

The cause is insufficient free disk space, orz. The virtual machine is just too small, so the only way out is to start it with the --smallfiles option instead.
Mind the spaces and other formatting details (I've hit so many whitespace landmines these past few days, exhausting; details matter).
Run the command:

 mongod --dbpath data/db --smallfiles


Success!

Mon Dec  7 19:03:59.144 [initandlisten] MongoDB starting : pid=129886 port=27017 dbpath=data/db 64-bit host=ubuntu
Mon Dec  7 19:03:59.145 [initandlisten] db version v2.4.9
Mon Dec  7 19:03:59.145 [initandlisten] git version: nogitversion
Mon Dec  7 19:03:59.145 [initandlisten] build info: Linux orlo 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:37:58 UTC 2013 x86_64 BOOST_LIB_VERSION=1_54
Mon Dec  7 19:03:59.145 [initandlisten] allocator: tcmalloc
Mon Dec  7 19:03:59.145 [initandlisten] options: { dbpath: "data/db", smallfiles: true }
Mon Dec  7 19:03:59.148 [initandlisten] journal dir=data/db/journal
Mon Dec  7 19:03:59.148 [initandlisten] recover : no journal files present, no recovery needed
Mon Dec  7 19:03:59.215 [FileAllocator] allocating new datafile data/db/local.ns, filling with zeroes...
Mon Dec  7 19:03:59.215 [FileAllocator] creating directory data/db/_tmp
Mon Dec  7 19:03:59.217 [FileAllocator] done allocating datafile data/db/local.ns, size: 16MB,  took 0 secs
Mon Dec  7 19:03:59.218 [FileAllocator] allocating new datafile data/db/local.0, filling with zeroes...
Mon Dec  7 19:03:59.219 [FileAllocator] done allocating datafile data/db/local.0, size: 16MB,  took 0 secs
Mon Dec  7 19:03:59.223 [initandlisten] waiting for connections on port 27017
Mon Dec  7 19:03:59.225 [websvr] admin web console waiting for connections on port 28017

Mon Dec  7 19:07:59.266 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 8ms
Mon Dec  7 19:48:17.641 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 5ms
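If you don't want to pass the flag every time, the same setting can also go into MongoDB 2.4's old key = value style config file; a minimal sketch, assuming the common Ubuntu path /etc/mongodb.conf and the same data directory:

# /etc/mongodb.conf  (2.4-era key = value format)
dbpath = data/db
smallfiles = true

and then start the server with mongod --config /etc/mongodb.conf.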

