(base) PS D:\2025\internship\pachong\py\my12306> scrapy crawl train
2025-07-15 16:31:48 [scrapy.utils.log] INFO: Scrapy 2.11.1 started (bot: my12306)
2025-07-15 16:31:48 [scrapy.utils.log] INFO: Versions: lxml 5.2.1.0, libxml2 2.13.1, cssselect 1.2.0, parsel 1.8.1, w3lib 2.1.2, Twisted 23.10.0, Python 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:17:27) [MSC v.1929 64 bit (AMD64)], pyOpenSSL 24.2.1 (OpenSSL 3.0.15 3 Sep 2024), cryptography 43.0.0, Platform Windows-11-10.0.26100-SP0
2025-07-15 16:31:48 [scrapy.addons] INFO: Enabled addons:
[]
2025-07-15 16:31:48 [py.warnings] WARNING: D:\anaconda\Lib\site-packages\scrapy\utils\request.py:254: ScrapyDeprecationWarning: '2.6' is a deprecated value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting.
It is also the default value. In other words, it is normal to get this warning if you have not defined a value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting. This is so for backward compatibility reasons, but it will change in a future version of Scrapy.
See the documentation of the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting for information on how to handle this deprecation.
  return cls(crawler)
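
This ScrapyDeprecationWarning is informational: '2.6' is still the default fingerprinter in Scrapy 2.11, so the warning fires even if the setting was never touched. To silence it and opt into the current algorithm, pin the setting it names in the project's settings.py; '2.7' is the other documented value. A minimal sketch:

# my12306/settings.py
# Opt into the newer request-fingerprinting algorithm; this also
# silences the ScrapyDeprecationWarning about the implicit '2.6' default.
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
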
2025-07-15 16:31:48 [scrapy.extensions.telnet] INFO: Telnet Password: b500c51afa127fa5
2025-07-15 16:31:48 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.throttle.AutoThrottle']
2025-07-15 16:31:48 [scrapy.crawler] INFO: Overridden settings:
{'AUTOTHROTTLE_ENABLED': True,
'AUTOTHROTTLE_START_DELAY': 10,
'BOT_NAME': 'my12306',
'DOWNLOADER_CLIENT_TLS_METHOD': 'TLSv1.2',
'DOWNLOAD_DELAY': 5,
'DOWNLOAD_TIMEOUT': 15,
'LOG_LEVEL': 'INFO',
'NEWSPIDER_MODULE': 'my12306.spiders',
'RETRY_HTTP_CODES': [302, 403, 404, 500, 502, 503, 504],
'RETRY_TIMES': 5,
'SPIDER_MODULES': ['my12306.spiders']}
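
Every one of these overrides can be read directly off the log; a settings.py that reproduces them would look roughly like this (the comments are interpretation, not part of the project):

# my12306/settings.py -- the overrides reported in the log above
BOT_NAME = "my12306"
SPIDER_MODULES = ["my12306.spiders"]
NEWSPIDER_MODULE = "my12306.spiders"

LOG_LEVEL = "INFO"

# Be gentle with 12306: AutoThrottle plus a fixed 5 s delay per request.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 10
DOWNLOAD_DELAY = 5
DOWNLOAD_TIMEOUT = 15

# Retry aggressively, including the 302s that 12306 appears to use
# to bounce suspicious clients to its error page.
RETRY_TIMES = 5
RETRY_HTTP_CODES = [302, 403, 404, 500, 502, 503, 504]

# Pin TLS 1.2 for the 12306 endpoints.
DOWNLOADER_CLIENT_TLS_METHOD = "TLSv1.2"

One caveat on the 302 entry: RedirectMiddleware normally consumes a 302 with a Location header before RetryMiddleware ever sees it, so that code only triggers a retry when redirects are disabled for the request.
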
2025-07-15 16:31:49 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'my12306.middlewares.RandomUserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
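
Everything in this list is stock Scrapy except my12306.middlewares.RandomUserAgentMiddleware. Its source is not in the log; the sketch below shows what a middleware with that name typically does, with an illustrative USER_AGENTS pool:

# my12306/middlewares.py -- minimal sketch, not the project's actual code
import random

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/17.4 Safari/605.1.15",
]

class RandomUserAgentMiddleware:
    def process_request(self, request, spider):
        # Rotate the User-Agent on every outgoing request.
        request.headers["User-Agent"] = random.choice(USER_AGENTS)
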
2025-07-15 16:31:49 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2025-07-15 16:31:49 [scrapy.middleware] INFO: Enabled item pipelines:
['my12306.pipelines.JsonWriterPipeline']
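
The pipeline's source is not shown either; a JsonWriterPipeline is conventionally a JSON-lines writer along the lines of the Scrapy docs example (the trains.jl filename here is an assumption):

# my12306/pipelines.py -- sketch of a JSON-lines writer
import json

class JsonWriterPipeline:
    def open_spider(self, spider):
        self.file = open("trains.jl", "w", encoding="utf-8")

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # ensure_ascii=False keeps Chinese station names readable.
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item
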
2025-07-15 16:31:49 [scrapy.core.engine] INFO: Spider opened
2025-07-15 16:31:49 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2025-07-15 16:31:49 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2025-07-15 16:31:55 [scrapy.core.scraper] ERROR: Spider error processing <GET https://kyfw.12306.cn/otn/login/init> (referer: https://kyfw.12306.cn/otn/index/init)
Traceback (most recent call last):
  File "D:\2025\internship\pachong\py\my12306\my12306\spiders\train_spider.py", line 16, in safe_selector
    return Selector(response)
           ^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\scrapy\selector\unified.py", line 97, in __init__
    super().__init__(text=text, type=st, **kwargs)
  File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 496, in __init__
    root, type = _get_root_and_type_from_text(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 377, in _get_root_and_type_from_text
    root = _get_root_from_text(text, type=type, **lxml_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 329, in _get_root_from_text
    return create_root_node(text, _ctgroup[type]["_parser"], **lxml_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 110, in create_root_node
    parser = parser_cls(recover=True, encoding=encoding, huge_tree=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\lxml\html\__init__.py", line 1887, in __init__
    super().__init__(**kwargs)
  File "src\\lxml\\parser.pxi", line 1806, in lxml.etree.HTMLParser.__init__
  File "src\\lxml\\parser.pxi", line 858, in lxml.etree._BaseParser.__init__
LookupError: unknown encoding: 'b'utf8''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "D:\anaconda\Lib\site-packages\scrapy\utils\defer.py", line 279, in iter_errback
    yield next(it)
          ^^^^^^^^
  File "D:\anaconda\Lib\site-packages\scrapy\utils\python.py", line 350, in __next__
    return next(self.data)
           ^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\scrapy\utils\python.py", line 350, in __next__
    return next(self.data)
           ^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync
    for r in iterable:
  File "D:\anaconda\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync
    for r in iterable:
  File "D:\anaconda\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 352, in <genexpr>
    return (self._set_referer(r, response) for r in result or ())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync
    for r in iterable:
  File "D:\anaconda\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync
    for r in iterable:
  File "D:\anaconda\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr>
    return (r for r in result or () if self._filter(r, response, spider))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync
    for r in iterable:
  File "D:\2025\internship\pachong\py\my12306\my12306\spiders\train_spider.py", line 241, in login_page
    sel = safe_selector(response)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\2025\internship\pachong\py\my12306\my12306\spiders\train_spider.py", line 31, in safe_selector
    return ParselSelector(text=text, type='html', encoding=encoding)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 496, in __init__
    root, type = _get_root_and_type_from_text(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 377, in _get_root_and_type_from_text
    root = _get_root_from_text(text, type=type, **lxml_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 329, in _get_root_from_text
    return create_root_node(text, _ctgroup[type]["_parser"], **lxml_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 110, in create_root_node
    parser = parser_cls(recover=True, encoding=encoding, huge_tree=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\Lib\site-packages\lxml\html\__init__.py", line 1887, in __init__
    super().__init__(**kwargs)
  File "src\\lxml\\parser.pxi", line 1806, in lxml.etree.HTMLParser.__init__
  File "src\\lxml\\parser.pxi", line 858, in lxml.etree._BaseParser.__init__
LookupError: unknown encoding: 'b'utf8''
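
Both tracebacks bottom out at the same LookupError, and the quoted codec name 'b'utf8'' is the tell: lxml was handed the bytes object b'utf8' where a str codec name was expected, so its stringified repr became a codec that does not exist. Scrapy header values are bytes, so an encoding pulled from raw response headers is the usual source. The body of safe_selector is not visible beyond the two frames above, but a fix that normalizes the encoding to str before building the selector would look like this sketch:

# my12306/spiders/train_spider.py -- sketch of a fixed safe_selector
from parsel import Selector

def safe_selector(response):
    # The encoding can arrive as bytes (e.g. b'utf8') when taken from
    # raw headers; lxml accepts only str codec names.
    encoding = getattr(response, "encoding", None) or "utf-8"
    if isinstance(encoding, bytes):
        encoding = encoding.decode("ascii")
    return Selector(text=response.text, type="html", encoding=encoding)
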
2025-07-15 16:32:02 [train] INFO: Successfully loaded 3399 station entries
2025-07-15 16:32:02 [train] INFO: Sample stations: [('北京北', 'VAP'), ('北京东', 'BOP'), ('北京', 'BJP'), ('北京南', 'VNP'), ('北京大兴', 'IPP')]
Enter departure station: 北京
Enter arrival station: 北京北
Enter date (format: yyyymmdd): 20250720
2025-07-15 16:33:01 [scrapy.extensions.logstats] INFO: Crawled 3 pages (at 3 pages/min), scraped 0 items (at 0 items/min)
2025-07-15 16:33:38 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://www.12306.cn/mormhweb/logFiles/error.html> (failed 6 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2025-07-15 16:33:38 [scrapy.core.scraper] ERROR: Error downloading <GET https://www.12306.cn/mormhweb/logFiles/error.html>
Traceback (most recent call last):
  File "D:\anaconda\Lib\site-packages\scrapy\core\downloader\middleware.py", line 54, in process_request
    return (yield download_func(request=request, spider=spider))
twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
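
error.html appears to be the page 12306 redirects flagged clients to (the lone 302 in the stats below), and the server then drops the connection, which Twisted surfaces as ConnectionLost; after the initial attempt plus RETRY_TIMES = 5 retries the request is abandoned, matching "failed 6 times". To react to that outcome instead of only logging it, a Scrapy Request accepts an errback. A minimal sketch, with illustrative names since the real train_spider.py is not shown:

import scrapy

class TrainSpider(scrapy.Spider):
    name = "train"

    def start_requests(self):
        yield scrapy.Request(
            "https://kyfw.12306.cn/otn/login/init",
            callback=self.parse,
            errback=self.on_error,  # catches ResponseNeverReceived etc.
        )

    def parse(self, response):
        pass  # normal handling

    def on_error(self, failure):
        # failure wraps the underlying Twisted error (here: ConnectionLost).
        self.logger.error("Download failed for %s: %r",
                          failure.request.url, failure.value)
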
2025-07-15 16:33:38 [scrapy.core.engine] INFO: Closing spider (finished)
2025-07-15 16:33:38 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 6,
'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 6,
'downloader/request_bytes': 5916,
'downloader/request_count': 10,
'downloader/request_method_count/GET': 10,
'downloader/response_bytes': 86615,
'downloader/response_count': 4,
'downloader/response_status_count/200': 3,
'downloader/response_status_count/302': 1,
'elapsed_time_seconds': 109.671222,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2025, 7, 15, 8, 33, 38, 966742, tzinfo=datetime.timezone.utc),
'httpcompression/response_bytes': 230430,
'httpcompression/response_count': 3,
'log_count/ERROR': 3,
'log_count/INFO': 13,
'log_count/WARNING': 1,
'request_depth_max': 2,
'response_received_count': 3,
'retry/count': 5,
'retry/max_reached': 1,
'retry/reason_count/twisted.web._newclient.ResponseNeverReceived': 5,
'scheduler/dequeued': 10,
'scheduler/dequeued/memory': 10,
'scheduler/enqueued': 10,
'scheduler/enqueued/memory': 10,
'spider_exceptions/LookupError': 1,
'start_time': datetime.datetime(2025, 7, 15, 8, 31, 49, 295520, tzinfo=datetime.timezone.utc)}
2025-07-15 16:33:38 [scrapy.core.engine] INFO: Spider closed (finished)
(base) PS D:\2025\internship\pachong\py\my12306>