
This article describes a problem encountered with Python 3 multiprocessing on Windows and its solution: moving the process-starting code under `if __name__ == '__main__':` fixes the "start a new process before the current process has finished its bootstrapping phase" error.


    To know what you know, and to know what you do not know, that is true knowledge. -------- Thoreau


    While learning Python 3 multiprocessing and testing distributed (master/worker) code on Windows, I found that the code that starts child processes must live in the main module, under the `if __name__ == '__main__':` guard. Windows has no fork(): multiprocessing uses the spawn start method, which re-imports the main module in each child process, so starting a process from top-level module code raises the error shown below.

    The code is as follows:

import queue
import random
from multiprocessing.managers import BaseManager

task_queue = queue.Queue()
result_queue = queue.Queue()

def return_task_queue():
    global task_queue
    return task_queue

def return_result_queue():
    global result_queue
    return result_queue

class QueueManager(BaseManager):
    pass


QueueManager.register('get_task_queue', callable=return_task_queue)
QueueManager.register('get_result_queue', callable=return_result_queue)

manager = QueueManager(address=('127.0.0.1', 5000), authkey=b'abc')
manager.start()

task = manager.get_task_queue()
result = manager.get_result_queue()
for i in range(10):
    n = random.randint(0, 10000)
    print('Put task %d' % n)
    task.put(n)

print('Try get results..')
for i in range(10):
    r = result.get(timeout=10)
    print('Result:%s' % r)

manager.shutdown()
print('master exit.')
    The error message:

"E:\python\python project\myfirst\venv\Scripts\python.exe" "E:/python/python project/myfirst/vari/distibuted_master.py"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\program files\Python3.6\Lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "D:\program files\Python3.6\Lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "D:\program files\Python3.6\Lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "D:\program files\Python3.6\Lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "D:\program files\Python3.6\Lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "D:\program files\Python3.6\Lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "D:\program files\Python3.6\Lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "E:\python\python project\myfirst\vari\distibuted_master.py", line 24, in <module>
    manager.start()
  File "D:\program files\Python3.6\Lib\multiprocessing\managers.py", line 513, in start
    self._process.start()
  File "D:\program files\Python3.6\Lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "D:\program files\Python3.6\Lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\program files\Python3.6\Lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "D:\program files\Python3.6\Lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "D:\program files\Python3.6\Lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError: 
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

Process finished with exit code 1
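
    The key part of the message is the idiom it suggests: because Windows spawns child processes by re-importing the main module, anything that starts a process has to sit under the `if __name__ == '__main__':` guard. As a standalone illustration of that idiom (the `worker` function below is just an example, not part of the script above):

import multiprocessing

def worker():
    # Runs in the spawned child process.
    print('hello from the child process')

if __name__ == '__main__':
    # On Windows, spawn re-imports this module in the child, so the
    # Process may only be started inside this guard.
    multiprocessing.freeze_support()  # only matters when frozen into an .exe
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()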
    So the code needs to be adjusted so that the process-starting calls are moved into the main-module guard:

import queue
import random
from multiprocessing.managers import BaseManager

task_queue = queue.Queue()
result_queue = queue.Queue()

def return_task_queue():
    global task_queue
    return task_queue

def return_result_queue():
    global result_queue
    return result_queue

class QueueManager(BaseManager):
    pass

if __name__ == '__main__':
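    # Registering the queues and starting the manager now happens only when
    # this file is executed directly, not when the spawned child re-imports
    # it, which is what resolves the RuntimeError above.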
    QueueManager.register('get_task_queue', callable=return_task_queue)
    QueueManager.register('get_result_queue', callable=return_result_queue)

    manager = QueueManager(address=('127.0.0.1', 5000), authkey=b'abc')
    manager.start()

    task = manager.get_task_queue()
    result = manager.get_result_queue()
    for i in range(10):
        n = random.randint(0, 10000)
        print('Put task %d' % n)
        task.put(n)

    print('Try get results..')
    for i in range(10):
        r = result.get(timeout=10)
        print('Result:%s' % r)

    manager.shutdown()
    print('master exit.')
    The result is as follows:

"E:\python\python project\myfirst\venv\Scripts\python.exe" "E:/python/python project/myfirst/vari/distibuted_master.py"
Put task 5292
Put task 5986
Put task 3500
Put task 2385
Put task 8343
Put task 6482
Put task 4331
Put task 6104
Put task 9341
Put task 5156
Try get results..

Process finished with exit code 1
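
    Note that the master still finishes with exit code 1: the ten tasks are queued, but no worker has connected to consume them, so `result.get(timeout=10)` eventually raises `queue.Empty`. For completeness, here is a minimal worker sketch that would connect to this master (the file name `distributed_worker.py`, the one-second sleep and the squaring of each task are illustrative assumptions, not part of the original script):

# distributed_worker.py - run in a second console while the master is waiting.
import time
import queue
from multiprocessing.managers import BaseManager

class QueueManager(BaseManager):
    pass

if __name__ == '__main__':
    # Register only the names; the actual queues live in the master process.
    QueueManager.register('get_task_queue')
    QueueManager.register('get_result_queue')

    # Address and authkey must match the master.
    manager = QueueManager(address=('127.0.0.1', 5000), authkey=b'abc')
    manager.connect()

    task = manager.get_task_queue()
    result = manager.get_result_queue()

    for i in range(10):
        try:
            n = task.get(timeout=1)
            print('run task %d * %d...' % (n, n))
            time.sleep(1)
            result.put('%d * %d = %d' % (n, n, n * n))
        except queue.Empty:
            print('task queue is empty.')

    print('worker exit.')

    With the worker running, the master's `result.get(timeout=10)` calls should return the computed strings instead of timing out, and both scripts exit cleanly.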
