Fixing the "Non-zero exit code" error when importing an Anaconda environment into PyCharm

This post addresses a common error when importing an Anaconda environment into PyCharm: the Conda executable you select must be conda.exe under the Scripts folder, not python.exe in the Anaconda root directory. It walks through the correct configuration steps and notes that loading the environment can take a while.


When importing an Anaconda environment into PyCharm, you may see an error like the following:
[screenshot: PyCharm "Non-zero exit code" error dialog]
This happens because the wrong executable was selected. The Conda executable should be conda.exe under the Scripts folder, not python.exe in the Anaconda root directory.
[screenshot: interpreter settings with the correct Conda executable selected]
You don't need to touch the Location field; once the correct Conda executable is selected, it is filled in automatically.
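To make the distinction concrete, here is a small path sketch. The `D:\anaconda` install root below is hypothetical; substitute the root of your own Anaconda installation:

```python
import ntpath  # Windows path semantics, so this behaves the same on any OS

# Hypothetical Anaconda install root -- replace with your own
anaconda_root = r"D:\anaconda"

# Correct choice for PyCharm's "Conda executable" field:
conda_exe = ntpath.join(anaconda_root, "Scripts", "conda.exe")

# Wrong choice -- this is the Python interpreter, not the conda executable:
python_exe = ntpath.join(anaconda_root, "python.exe")

print(conda_exe)   # D:\anaconda\Scripts\conda.exe
print(python_exe)  # D:\anaconda\python.exe
```

Picking the second path is exactly what triggers the "Non-zero exit code" error, because PyCharm tries to run it as conda.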

Loading the Conda environment can take quite a long time, so please be patient.

After that, the environment is created and the error no longer appears.
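As a quick sanity check (just a sketch), you can run the following in PyCharm's Python console or a run configuration to confirm the interpreter now comes from the conda environment you selected:

```python
import sys

# Should point inside the selected conda env,
# e.g. D:\anaconda\envs\<env-name>\python.exe on Windows
print(sys.executable)

# Version reported by the environment's interpreter
print(sys.version)
```

If `sys.executable` still points at some other Python installation, PyCharm is not using the environment you just configured.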

A question appended below the post reports a different "non-zero exit" failure, this one from ninja while JIT-compiling a PyTorch CUDA extension, and asks what is going on:

```
Traceback (most recent call last):
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\utils\cpp_extension.py", line 2107, in _run_ninja_build
    subprocess.run(
  File "D:\anaconda\envs\pytorch2.3.1\lib\subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\TrainCNO.py", line 160, in <module>
    output_pred_batch = model(input_batch)
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\CNOModule.py", line 476, in forward
    x = self.lift(x)
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\CNOModule.py", line 151, in forward
    x = self.inter_CNOBlock(x)
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\CNOModule.py", line 104, in forward
    return self.activation(x)
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\training\filtered_networks.py", line 392, in forward
    x = filtered_lrelu.filtered_lrelu(x=x, fu=self.up_filter, fd=self.down_filter, b=self.bias.to(x.dtype),
  File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\torch_utils\ops\filtered_lrelu.py", line 114, in filtered_lrelu
    if impl == 'cuda' and x.device.type == 'cuda' and _init():
  File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\torch_utils\ops\filtered_lrelu.py", line 26, in _init
    _plugin = custom_ops.get_plugin(
  File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\torch_utils\custom_ops.py", line 136, in get_plugin
    torch.utils.cpp_extension.load(name=module_name, build_directory=cached_build_dir,
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\utils\cpp_extension.py", line 1309, in load
    return _jit_compile(
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\utils\cpp_extension.py", line 1719, in _jit_compile
    _write_ninja_file_and_build_library(
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\utils\cpp_extension.py", line 1832, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\utils\cpp_extension.py", line 2123, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'filtered_lrelu_plugin':
[1/4] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output filtered_lrelu_ns.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=filtered_lrelu_plugin -DTORCH_API_INCLUDE_EXTENSION_H -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\torch\csrc\api\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\TH -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -ID:\anaconda\envs\pytorch2.3.1\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89 -std=c++17 --use_fast_math --allow-unsupported-compiler -c C:\Users\Administrator\AppData\Local\torch_extensions\torch_extensions\Cache\py39_cu118\filtered_lrelu_plugin\2e9606d7cf844ec44b9f500eaacd35c0-nvidia-geforce-rtx-4060-ti\filtered_lrelu_ns.cu -o filtered_lrelu_ns.cuda.o
[2/4] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output filtered_lrelu_rd.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=filtered_lrelu_plugin -DTORCH_API_INCLUDE_EXTENSION_H -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\torch\csrc\api\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\TH -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -ID:\anaconda\envs\pytorch2.3.1\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89 -std=c++17 --use_fast_math --allow-unsupported-compiler -c C:\Users\Administrator\AppData\Local\torch_extensions\torch_extensions\Cache\py39_cu118\filtered_lrelu_plugin\2e9606d7cf844ec44b9f500eaacd35c0-nvidia-geforce-rtx-4060-ti\filtered_lrelu_rd.cu -o filtered_lrelu_rd.cuda.o
[3/4] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output filtered_lrelu_wr.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=filtered_lrelu_plugin -DTORCH_API_INCLUDE_EXTENSION_H -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\torch\csrc\api\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\TH -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -ID:\anaconda\envs\pytorch2.3.1\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89 -std=c++17 --use_fast_math --allow-unsupported-compiler -c C:\Users\Administrator\AppData\Local\torch_extensions\torch_extensions\Cache\py39_cu118\filtered_lrelu_plugin\2e9606d7cf844ec44b9f500eaacd35c0-nvidia-geforce-rtx-4060-ti\filtered_lrelu_wr.cu -o filtered_lrelu_wr.cuda.o
[4/4] "D:\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64/link.exe" filtered_lrelu.o filtered_lrelu_wr.cuda.o filtered_lrelu_rd.cuda.o filtered_lrelu_ns.cuda.o /nologo /DLL c10.lib c10_cuda.lib torch_cpu.lib torch_cuda.lib -INCLUDE:?warp_size@cuda@at@@YAHXZ torch.lib /LIBPATH:D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\lib torch_python.lib /LIBPATH:D:\anaconda\envs\pytorch2.3.1\libs "/LIBPATH:C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\lib\x64" cudart.lib /out:filtered_lrelu_plugin.pyd
FAILED: filtered_lrelu_plugin.pyd
Creating library filtered_lrelu_plugin.lib and object filtered_lrelu_plugin.exp
filtered_lrelu.o : error LNK2019: unresolved external symbol __std_find_trivial_1, referenced in function "char const * __cdecl std::_Find_vectorized<char const ,char>(char const * const,char const * const,char)" (??$_Find_vectorized@$$CBDD@std@@YAPEBDQEBD0D@Z)
ninja: build stopped: subcommand failed.
```

What is going on here?