Python ImportError: No module named spiders

This post covers a common problem when running a Python crawler project: an ImportError caused by a missing __init__.py file in the spiders folder. Adding that file fixed the module-resolution failure.


I recently took over a Python crawler project written by a former colleague. I had not worked with Python much before, and this was my first time touching a Python crawler.

When I ran the project, it failed with:

ImportError: No module named spiders
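Before digging into the crawler code, it helps to reproduce the failing import from a plain Python shell in the project root; that narrows the problem down to module resolution. A minimal sketch, assuming the project package is named tutorial as in the official Scrapy tutorial (substitute your own project's name):

    # Run from the project root (the folder containing scrapy.cfg).
    # "tutorial" is an assumed name here; use your project's package.
    # If this raises the same ImportError, the problem is the package
    # layout rather than the spider code itself.
    import tutorial.spiders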

It took me quite a while to track down the cause: the spiders folder in the project was missing an __init__.py file.
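For reference, a freshly generated Scrapy project (scrapy startproject tutorial) has roughly this layout; both __init__.py files are created by the template, so a missing one usually means it was lost when the code was copied around:

    tutorial/
        scrapy.cfg          # deploy configuration
        tutorial/           # the project package
            __init__.py     # must exist: marks the package
            items.py
            pipelines.py
            settings.py
            spiders/
                __init__.py # must exist: this was the missing file
                example.py  # spider module(s)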

Although this __init__.py is just an empty file, it is essential: it marks the directory as a Python package so that its modules can be imported (on Python 2, which the wording of this error message suggests, a directory without it is not importable at all). The __init__.py in the tutorial folder itself is equally required.
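Creating empty files is enough to fix this. A minimal sketch using the standard library (paths assume the tutorial layout above; pathlib needs Python 3, while on Python 2 an empty file created any other way works just as well):

    from pathlib import Path

    # Create both package markers; touch() is a no-op if the
    # file already exists, so this is safe to re-run.
    Path("tutorial/__init__.py").touch()
    Path("tutorial/spiders/__init__.py").touch()

After adding the files, re-run the crawler and the import error should be gone.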
