What does this error mean, and why does it happen?
PS D:\project\quality-inspection> celery -A app.celery_app:celery_app worker --loglevel=info --concurrency=4
2025-10-20 12:06:03.529 | INFO | app.clients.redis_client:__init__:53 - Redis 客户端初始化成功: 127.0.0.1:6379
-------------- celery@a2025 v5.5.3 (immunity)
--- ***** -----
-- ******* ---- Windows-10-10.0.26100-SP0 2025-10-20 12:06:03
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: audio_quality_check:0x202d2c78c70
- ** ---------- .> transport: redis://:**@127.0.0.1:6379/0
- ** ---------- .> results: redis://:**@127.0.0.1:6379/0
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. quality_check.perform_full_process
[2025-10-20 12:06:03,657: INFO/MainProcess] Connected to redis://:**@127.0.0.1:6379/0
[2025-10-20 12:06:03,666: INFO/MainProcess] mingle: searching for neighbors
[2025-10-20 12:06:04,133: INFO/SpawnPoolWorker-1] child process 28652 calling self.run()
[2025-10-20 12:06:04,135: INFO/SpawnPoolWorker-2] child process 30656 calling self.run()
[2025-10-20 12:06:04,149: INFO/SpawnPoolWorker-3] child process 14980 calling self.run()
[2025-10-20 12:06:04,150: INFO/SpawnPoolWorker-4] child process 19972 calling self.run()
[2025-10-20 12:06:04,719: INFO/MainProcess] mingle: all alone
[2025-10-20 12:06:04,787: INFO/MainProcess] celery@a2025 ready.
2025-10-20 12:06:06.093 | INFO | app.clients.redis_client:__init__:53 - Redis 客户端初始化成功: 127.0.0.1:6379
2025-10-20 12:06:06.096 | INFO | app.clients.redis_client:__init__:53 - Redis 客户端初始化成功: 127.0.0.1:6379
2025-10-20 12:06:06.116 | INFO | app.clients.redis_client:__init__:53 - Redis 客户端初始化成功: 127.0.0.1:6379
2025-10-20 12:06:06.165 | INFO | app.clients.redis_client:__init__:53 - Redis 客户端初始化成功: 127.0.0.1:6379
[2025-10-20 12:08:04,030: INFO/MainProcess] Task quality_check.perform_full_process[quality_check_76a6a5d0b1b342de9d362971897b72e9] received
[2025-10-20 12:08:04,031: WARNING/MainProcess] C:\Users\yhp1\AppData\Local\Programs\Python\Python310\lib\site-packages\billiard\pool.py:1503: UserWarning: Soft timeouts are not supported: on this platform: It does not have the SIGUSR1 signal.
warnings.warn(UserWarning(
[2025-10-20 12:08:04,060: ERROR/MainProcess] Task handler raised error: ValueError('not enough values to unpack (expected 3, got 0)')
billiard.einfo.RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\yhp1\AppData\Local\Programs\Python\Python310\lib\site-packages\billiard\pool.py", line 362, in workloop
result = (True, prepare_result(fun(*args, **kwargs)))
File "C:\Users\yhp1\AppData\Local\Programs\Python\Python310\lib\site-packages\celery\app\trace.py", line 640, in fast_trace_task
tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\yhp1\AppData\Local\Programs\Python\Python310\lib\site-packages\billiard\pool.py", line 362, in workloop
result = (True, prepare_result(fun(*args, **kwargs)))
File "C:\Users\yhp1\AppData\Local\Programs\Python\Python310\lib\site-packages\celery\app\trace.py", line 640, in fast_trace_task
tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
• This machine runs Windows, and Celery is using the prefork pool (billiard's multiprocessing). Windows has neither POSIX signals nor fork, so:
1. You get the warning Soft timeouts are not supported... (soft timeouts don't work on Windows; only hard timeouts are available);
2. More importantly: after receiving a task, Celery takes the optimized fast_trace_task code path internally, and in Windows' spawn-based pool it can intermittently fail to find the initialized three-tuple _loc, which raises:
ValueError: not enough values to unpack (expected 3, got 0). This is a known Windows + prefork compatibility problem, not an exception raised by your task code.
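The failure mode is easy to reproduce in isolation: fast_trace_task does `tasks, accept, hostname = _loc`, where `_loc` is a worker-local tuple that the child process is supposed to populate during pool initialization. Under Windows' spawn start method that initialization can be skipped, leaving the tuple empty:

```python
# Minimal standalone reproduction of the unpack failure Celery hits internally.
# _loc is normally filled with three values during child-process init; under
# Windows spawn it can end up empty.
_loc = ()  # what the spawned child effectively sees

try:
    tasks, accept, hostname = _loc
except ValueError as exc:
    message = str(exc)

print(message)  # not enough values to unpack (expected 3, got 0)
```

This is why the traceback points into celery/app/trace.py rather than into your own task function.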
How to fix it, from "quickest to get running" to "most robust for production":
1. Local development (fastest) — switch to the thread pool
celery -A app.celery_app:celery_app worker -P threads --concurrency=8 --loglevel=info
• The thread pool is stable on Windows and avoids prefork/spawn entirely, so this error cannot be triggered.
• If your tasks are I/O-bound (API calls, file/network I/O), the thread pool works well; if they are CPU-bound, threads won't gain much, but at least they run.
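If you'd rather not rely on everyone remembering the CLI flag, the same choice can be baked into the app. A minimal sketch, assuming your Celery instance is the `celery_app` object defined in `app/celery_app.py`:

```python
# at the bottom of app/celery_app.py — a sketch; assumes celery_app
# is already constructed above in this module
import sys

if sys.platform == "win32":
    # prefork is unreliable on Windows; default to the thread pool here
    celery_app.conf.worker_pool = "threads"
    celery_app.conf.worker_concurrency = 8
```

A `-P` flag given on the command line still takes precedence over this, so Linux deployments can keep running prefork unchanged.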
2. Local development (optional) — single-process pool
celery -A app.celery_app:celery_app worker -P solo --loglevel=info
• A single worker with no concurrency: the most stable option, good for verifying functionality first.
3. Production (recommended) — run prefork on Linux/WSL2/Docker
• Celery has long treated Windows as having only limited support; for production, run on Linux.
• Since Redis and FastAPI are already on this machine, you can move the worker into WSL2 or a Docker container (Ubuntu), keep the original prefork command, and it will run reliably there.
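Option 3 can be as small as a compose file. A hedged sketch — the service names, image tag, and the REDIS_URL variable are assumptions about your project layout, not something taken from your repo:

```yaml
# docker-compose.yml — sketch; adjust names/paths to your project
services:
  redis:
    image: redis:7
  worker:
    build: .
    command: celery -A app.celery_app:celery_app worker --loglevel=info --concurrency=4
    environment:
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - redis
```

Inside the container the worker runs on Linux, so the original prefork pool works as designed.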
4. If you must run high concurrency on Windows: consider the gevent/eventlet pools (for I/O-bound workloads)
pip install gevent
celery -A app.celery_app:celery_app worker -P gevent -c 100 --loglevel=info
or
pip install eventlet
celery -A app.celery_app:celery_app worker -P eventlet -c 100 --loglevel=info
• Your tasks must be I/O-bound; some third-party libraries need monkey-patching to cooperate, which adds complexity.
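The monkey-patching caveat mostly means: patch before any module that touches sockets/SSL gets imported. The celery CLI handles this itself when you pass -P gevent, but if you launch the worker from your own entry script, the ordering is on you. A sketch of such a hypothetical launcher (the filename and import path are assumptions):

```python
# run_worker.py — hypothetical launcher; the patch must come first
from gevent import monkey
monkey.patch_all()  # must run before any socket/ssl-using imports

from app.celery_app import celery_app  # safe to import only after patching

if __name__ == "__main__":
    celery_app.worker_main(
        ["worker", "-P", "gevent", "-c", "100", "--loglevel=info"]
    )
```

If a library still misbehaves under gevent (C extensions that block, for example), that workload belongs in a prefork worker on Linux instead.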
5. About the "soft timeout" warning
• Windows has no SIGUSR1, so soft timeouts (soft_time_limit) have no effect there. Use hard timeouts (time_limit) instead:
@celery_app.task(time_limit=1800)  # hard timeout only
def perform_full_process(...):
    ...
Or, in the global config, set only task_time_limit and leave task_soft_time_limit unset.
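In configuration form the same rule reads as follows — a sketch, where `celery_app` is assumed to be your app instance and 1800 is just the example value from above:

```python
celery_app.conf.update(
    # hard limit: the worker process handling the task is killed past this
    task_time_limit=1800,
    # task_soft_time_limit is intentionally left unset: soft limits are
    # delivered via SIGUSR1, which does not exist on Windows
)
```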
6. Extra sanity checks (not the root cause, but recommended)
• Make sure the task module is imported by the worker (you can already see [tasks] . quality_check.perform_full_process, so this is fine).
• If your tasks or initialization code spawn processes/threads, avoid doing so at module top level on Windows; put that logic inside a function and call it explicitly.
• Pinning older versions (e.g. celery==5.3.x + billiard==4.1.x) can serve as a fallback, but the root cause remains the Windows + prefork incompatibility, so that treats the symptom, not the disease.
⸻
TL;DR — one command to get running
In your current environment, just switch to the thread pool:
celery -A app.celery_app:celery_app worker -P threads --concurrency=8 --loglevel=info
Also remove all soft_time_limit-related configuration or replace it with time_limit. Longer term, run the worker on Linux (container/WSL2).