Translation: Practical Python Programming, 03_03_Error_checking

This article covers error checking and exception handling in Python in detail, including how to raise exceptions, how to catch and handle them, and related best practices.

Contents | Previous (3.2 More on Functions) | Next (3.4 Modules)

3.3 Error Checking

Although exceptions were introduced earlier, this section fills in some additional details about error checking and exception handling.

How programs fail

Python performs no checking or validation of function argument types or values. A function will work on any data that is compatible with the statements inside it.

def add(x, y):
    return x + y

add(3, 4)               # 7
add('Hello', 'World')   # 'HelloWorld'
add('3', '4')           # '34'

If a function has errors, they appear at run time (as exceptions).

def add(x, y):
    return x + y

>>> add(3, '4')
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for +: 'int' and 'str'
>>>

To verify code, it is strongly recommended that you write tests (covered later).
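
As a small preview, a test can be as simple as a function that calls the code and asserts on the result. A minimal sketch (the test_add name is only illustrative; real testing tools are covered later):

def test_add():
    assert add(3, 4) == 7                         # Numbers add
    assert add('Hello', 'World') == 'HelloWorld'  # Strings concatenate

test_add()   # Raises AssertionError if any check fails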

Exceptions

Exceptions are used to signal errors.

To raise an exception yourself, use the raise statement:

if name not in authorized:
    raise RuntimeError(f'{name} not authorized')

To catch an exception, use the try-except statement:

try:
    authenticate(username)
except RuntimeError as e:
    print(e)

Exception Handling

Exceptions propagate to the first matching except.

def grok():
    ...
    raise RuntimeError('Whoa!')   # Exception raised here

def spam():
    grok()                        # Call that will raise exception

def bar():
    try:
       spam()
    except RuntimeError as e:     # Exception caught here
        ...

def foo():
    try:
         bar()
    except RuntimeError as e:     # Exception does NOT arrive here
        ...

foo()

To handle an exception, put statements in the except block. You can add any statements you want to the except block to handle the error.

def grok():
    ...
    raise RuntimeError('Whoa!')

def bar():
    try:
      grok()
    except RuntimeError as e:   # Exception caught here
        statements              # Use these statements
        statements
        ...

bar()

After handling, execution resumes with the first statement after the try-except.

def grok():
    ...
    raise RuntimeError('Whoa!')

def bar():
    try:
      grok()
    except RuntimeError as e:   # Exception caught here
        statements
        statements
        ...
    statements                  # Resumes execution here
    statements                  # And continues here
    ...

bar()

Built-in Exceptions

There are many built-in exceptions. Usually, the name of the exception is indicative of what went wrong (e.g., a ValueError is raised because you supplied a bad value). The following list is not exhaustive; consult the documentation for more information.

ArithmeticError
AssertionError
EnvironmentError
EOFError
ImportError
IndexError
KeyboardInterrupt
KeyError
MemoryError
NameError
ReferenceError
RuntimeError
SyntaxError
SystemError
TypeError
ValueError
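
For a quick illustration, here are a few operations that raise some of the exceptions above; run them one at a time in the interpreter (each line fails with the error named in its comment):

[1, 2, 3][10]          # IndexError: list index out of range
{'name': 'AA'}['x']    # KeyError: 'x'
int('N/A')             # ValueError: invalid literal for int() with base 10: 'N/A'
undefined_variable     # NameError: name 'undefined_variable' is not defined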

Exception Values

Exceptions have an associated value. It contains more specific information about what went wrong.

raise RuntimeError('Invalid user name')

This value is part of the exception instance. It is placed in the variable supplied to except.

try:
    ...
except RuntimeError as e:   # `e` holds the exception raised
    ...

e is an instance of the exception type. However, when printed, it often looks like a string.

except RuntimeError as e:
    print('Failed : Reason', e)

Catching Multiple Errors

You can catch different kinds of exceptions using multiple except blocks:

try:
  ...
except LookupError as e:
  ...
except RuntimeError as e:
  ...
except IOError as e:
  ...
except KeyboardInterrupt as e:
  ...

Alternatively, if the statements that handle them are the same, you can group them:

try:
  ...
except (IOError, LookupError, RuntimeError) as e:
  ...

Catching All Errors

To catch any exception, use Exception like this:

try:
    ...
except Exception:       # DANGER. See below
    print('An error occurred')

In general, writing code like that is a bad idea because it means you have no idea why the program failed.

The Wrong Way to Catch Errors

Here is the wrong way to use exceptions.

try:
    go_do_something()
except Exception:
    print('Computer says no')

This catches every possible error, and it can make debugging impossible when the code fails for some reason you didn't expect at all (e.g., an uninstalled Python module, etc.).

A Better Approach

If you're going to catch all errors, here is a more sensible approach.

try:
    go_do_something()
except Exception as e:
    print('Computer says no. Reason :', e)

It reports a specific reason for the failure. It is almost always a good idea to have some mechanism for viewing/reporting errors when you write code that catches all possible exceptions.

In general though, it's usually best to catch errors as narrowly as is reasonable. Only catch the errors you can actually handle. Let other errors pass by; maybe some other code can handle them.
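
As an illustration of catching narrowly, here is a small sketch (the parse_int helper is hypothetical, not part of the course code):

def parse_int(text):
    try:
        return int(text)
    except ValueError:      # The one failure we know how to recover from
        return None
    # Anything else (e.g., TypeError for a non-string) propagates to the caller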

Reraising an Exception

Use raise to propagate an error that you caught.

try:
    go_do_something()
except Exception as e:
    print('Computer says no. Reason :', e)
    raise

This allows you to take action (e.g., logging) and pass the error on to the caller.

Exception Best Practices

Don't catch exceptions. Fail fast and loud. If it matters, someone else will take care of the problem. Only catch an exception if you are that someone; that is, only catch errors where you can recover and sanely keep going.

The finally statement

The finally statement specifies code that must run regardless of whether or not an exception occurs.

lock = Lock()
...
lock.acquire()
try:
    ...
finally:
    lock.release()  # this will ALWAYS be executed. With and without exception.

finally is commonly used to safely manage resources (especially locks, files, etc.).

The with statement

In modern code, try-finally is often replaced with the with statement.

lock = Lock()
with lock:
    # lock acquired
    ...
# lock released

A more familiar example:

with open(filename) as f:
    # Use the file
    ...
# File closed

The with statement defines a usage context for a resource. When execution leaves that context, the resource is released. The with statement only works with objects that have been specifically programmed to support it.
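
For illustration, here is a minimal sketch of how an object can support the with statement by implementing the context-manager protocol (__enter__ and __exit__); the ManagedFile class and the filename are hypothetical examples, not part of the course code:

class ManagedFile:
    def __init__(self, filename):
        self.filename = filename

    def __enter__(self):
        # Entering the with block: acquire the resource
        self.file = open(self.filename)
        return self.file

    def __exit__(self, exc_type, exc_value, traceback):
        # Leaving the with block (even on an exception): release the resource
        self.file.close()

with ManagedFile('somefile.txt') as f:
    data = f.read()
# File closed here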

Exercises

Exercise 3.8: Raising exceptions

The parse_csv() function you wrote in the last section allows user-specified columns to be selected, but that only works if the input data file has column headers.

Modify the code so that an exception gets raised if both the select and has_headers=False arguments are passed. For example:

>>> parse_csv('Data/prices.csv', select=['name','price'], has_headers=False)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "fileparse.py", line 9, in parse_csv
    raise RuntimeError("select argument requires column headers")
RuntimeError: select argument requires column headers
>>>

Having added this check, you might ask whether you should perform other kinds of sanity checks in the function. For example, should you check that the filename is a string, a list, or something else?

As a general rule, it's usually best to skip such tests and to just let the program fail on bad inputs. The traceback message will point at the source of the problem and can assist in debugging.

The main reason for adding the above check is to avoid running the code in a nonsensical mode (e.g., using a feature that requires column headers while simultaneously specifying that there are no headers).

This indicates a programming error on the part of the calling code. Checking for cases that "aren't supposed to happen" is often a good idea.
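
One possible form of the check, sketched under the assumption that parse_csv() takes select and has_headers keyword arguments as in the earlier exercise (your signature may differ):

def parse_csv(filename, select=None, types=None, has_headers=True):
    # Selecting columns by name only makes sense when headers are present
    if select and not has_headers:
        raise RuntimeError("select argument requires column headers")
    ...   # rest of the function as before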

Exercise 3.9: Catching exceptions

The parse_csv() function you wrote is used to process the entire contents of a file. However, in the real world, input files may contain corrupted, missing, or dirty data. Try this experiment:

>>> portfolio = parse_csv('Data/missing.csv', types=[str, int, float])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "fileparse.py", line 36, in parse_csv
    row = [func(val) for func, val in zip(types, row)]
ValueError: invalid literal for int() with base 10: ''
>>>

Modify the parse_csv() function so that it catches all ValueError exceptions generated during record creation and prints a warning message for rows that can't be converted.

The error message should include the row number and information about the reason why it failed. To test your function, try reading the Data/missing.csv file from above, for example:

>>> portfolio = parse_csv('Data/missing.csv', types=[str, int, float])
Row 4: Couldn't convert ['MSFT', '', '51.23']
Row 4: Reason invalid literal for int() with base 10: ''
Row 7: Couldn't convert ['IBM', '', '70.44']
Row 7: Reason invalid literal for int() with base 10: ''
>>>
>>> portfolio
[{'price': 32.2, 'name': 'AA', 'shares': 100}, {'price': 91.1, 'name': 'IBM', 'shares': 50}, {'price': 83.44, 'name': 'CAT', 'shares': 150}, {'price': 40.37, 'name': 'GE', 'shares': 95}, {'price': 65.1, 'name': 'MSFT', 'shares': 50}]
>>>
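
A sketch of the kind of change involved, assuming the type conversion happens inside a loop over the rows with a row counter, as suggested by the traceback above (the exact numbering depends on how you count the header row):

def parse_csv(filename, select=None, types=None, has_headers=True):
    ...   # open the file, handle headers, etc. (as before)
    for rowno, row in enumerate(rows, start=1):
        if types:
            try:
                row = [func(val) for func, val in zip(types, row)]
            except ValueError as e:
                print(f"Row {rowno}: Couldn't convert {row}")
                print(f"Row {rowno}: Reason {e}")
                continue
        ...   # build the record as before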

Exercise 3.10: Silencing errors

Modify the parse_csv() function so that parsing error messages can be silenced if explicitly desired by the user. For example:

>>> portfolio = parse_csv('Data/missing.csv', types=[str,int,float], silence_errors=True)
>>> portfolio
[{'price': 32.2, 'name': 'AA', 'shares': 100}, {'price': 91.1, 'name': 'IBM', 'shares': 50}, {'price': 83.44, 'name': 'CAT', 'shares': 150}, {'price': 40.37, 'name': 'GE', 'shares': 95}, {'price': 65.1, 'name': 'MSFT', 'shares': 50}]
>>>
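
One approach, building on the previous exercise, is to add a silence_errors flag (default False) and only print the warnings when it is not set; a sketch under the same assumptions as above:

def parse_csv(filename, select=None, types=None, has_headers=True, silence_errors=False):
    ...   # open the file, handle headers, etc. (as before)
    for rowno, row in enumerate(rows, start=1):
        if types:
            try:
                row = [func(val) for func, val in zip(types, row)]
            except ValueError as e:
                if not silence_errors:
                    print(f"Row {rowno}: Couldn't convert {row}")
                    print(f"Row {rowno}: Reason {e}")
                continue
        ...   # build the record as before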

In most programs, handling errors properly is one of the trickiest things to get right. In general, you shouldn't silently ignore errors. Instead, it's better to report problems and to give the user the option to silence the error messages if they choose to do so.

Contents | Previous (3.2 More on Functions) | Next (3.4 Modules)

Note: For the complete translation, see https://github.com/codists/practical-python-zh

... logging to /home/youzi/.ros/log/69838d96-0e3f-11f0-8a03-1b9b5063719e/roslaunch-youzi-virtual-machine-2222.log Checking log directory for disk usage. This may take a while. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://youzi-virtual-machine:35111/ SUMMARY ======== PARAMETERS * /lslidar_driver_node/compensation: False * /lslidar_driver_node/frame_id: laser * /lslidar_driver_node/high_reflection: False * /lslidar_driver_node/interface_selection: serial * /lslidar_driver_node/lidar_name: N10 * /lslidar_driver_node/max_range: 100.0 * /lslidar_driver_node/min_range: 0.15 * /lslidar_driver_node/pubPointCloud2: False * /lslidar_driver_node/pubScan: True * /lslidar_driver_node/scan_topic: scan * /lslidar_driver_node/serial_port: /dev/ttyUSB0 * /lslidar_driver_node/use_gps_ts: False * /rosdistro: noetic * /rosversion: 1.17.0 NODES / lslidar_driver_node (lslidar_driver/lslidar_driver_node) ROS_MASTER_URI=http://localhost:11311 process[lslidar_driver_node-1]: started with pid [2236] [ INFO] [1743432770.757589499]: Lidar is N10 [ INFO] [1743432770.758497725]: Opening PCAP file port = /dev/ttyUSB0, baud_rate = 230400 open_port /dev/ttyUSB0 ERROR ! [lslidar_driver_node-1] process has died [pid 2236, exit code -11, cmd /home/youzi/桌面/catkin_ws/devel/lib/lslidar_driver/lslidar_driver_node __name:=lslidar_driver_node __log:=/home/youzi/.ros/log/69838d96-0e3f-11f0-8a03-1b9b5063719e/lslidar_driver_node-1.log]. log file: /home/youzi/.ros/log/69838d96-0e3f-11f0-8a03-1b9b5063719e/lslidar_driver_node-1*.log
04-01
PowerShell 7 环境已加载 (版本: 7.5.2) PowerShell 7 环境已加载 (版本: 7.5.2) PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> .\pytorch_env\Scripts\activate (pytorch_env) PS E:\PyTorch_Build\pytorch> # 退出虚拟环境 (pytorch_env) PS E:\PyTorch_Build\pytorch> deactivate PS E:\PyTorch_Build\pytorch> PS E:\PyTorch_Build\pytorch> # 删除旧环境 PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force .\pytorch_env PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force .\cuda_env PS E:\PyTorch_Build\pytorch> PS E:\PyTorch_Build\pytorch> # 创建新虚拟环境 PS E:\PyTorch_Build\pytorch> python -m venv rtx5070_env PS E:\PyTorch_Build\pytorch> .\rtx5070_env\Scripts\activate (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装基础编译工具 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install -U pip setuptools wheel ninja cmake Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pip in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (22.3.1) Collecting pip Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b7/3f/945ef7ab14dc4f9d7f40288d2df998d1837ee0888ec3659c813487572faa/pip-25.2-py3-none-any.whl (1.8 MB) Requirement already satisfied: setuptools in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (65.5.0) Collecting setuptools Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl (1.2 MB) Collecting wheel Using cached https://pypi.tuna.tsinghua.edu.cn/packages/0b/2c/87f3254fd8ffd29e4c02732eee68a83a1d3c346ae39bc6822dcbcb697f2b/wheel-0.45.1-py3-none-any.whl (72 kB) Collecting ninja Using cached https://pypi.tuna.tsinghua.edu.cn/packages/29/45/c0adfbfb0b5895aa18cec400c535b4f7ff3e52536e0403602fc1a23f7de9/ninja-1.13.0-py3-none-win_amd64.whl (309 kB) Collecting cmake Using cached https://pypi.tuna.tsinghua.edu.cn/packages/7c/d0/73cae88d8c25973f2465d5a4457264f95617c16ad321824ed4c243734511/cmake-4.1.0-py3-none-win_amd64.whl (37.6 MB) ERROR: To modify pip, please run the following command: E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe -m pip install -U pip setuptools wheel ninja cmake [notice] A new release of pip available: 22.3.1 -> 25.2 [notice] To update, run: python.exe -m pip install --upgrade pip (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证 CUDA 安装 (rtx5070_env) PS E:\PyTorch_Build\pytorch> nvcc --version # 应显示 CUDA 12.x nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2025 NVIDIA Corporation Built on Wed_Jul_16_20:06:48_Pacific_Daylight_Time_2025 Cuda compilation tools, release 13.0, V13.0.48 Build cuda_13.0.r13.0/compiler.36260728_0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 正确更新 pip 和工具链 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python -m pip install -U pip setuptools wheel ninja cmake Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pip in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (22.3.1) Collecting pip Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b7/3f/945ef7ab14dc4f9d7f40288d2df998d1837ee0888ec3659c813487572faa/pip-25.2-py3-none-any.whl (1.8 MB) Requirement already satisfied: setuptools in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (65.5.0) Collecting setuptools Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl (1.2 MB) Collecting wheel Using 
cached https://pypi.tuna.tsinghua.edu.cn/packages/0b/2c/87f3254fd8ffd29e4c02732eee68a83a1d3c346ae39bc6822dcbcb697f2b/wheel-0.45.1-py3-none-any.whl (72 kB) Collecting ninja Using cached https://pypi.tuna.tsinghua.edu.cn/packages/29/45/c0adfbfb0b5895aa18cec400c535b4f7ff3e52536e0403602fc1a23f7de9/ninja-1.13.0-py3-none-win_amd64.whl (309 kB) Collecting cmake Using cached https://pypi.tuna.tsinghua.edu.cn/packages/7c/d0/73cae88d8c25973f2465d5a4457264f95617c16ad321824ed4c243734511/cmake-4.1.0-py3-none-win_amd64.whl (37.6 MB) Installing collected packages: wheel, setuptools, pip, ninja, cmake Attempting uninstall: setuptools Found existing installation: setuptools 65.5.0 Uninstalling setuptools-65.5.0: Successfully uninstalled setuptools-65.5.0 Attempting uninstall: pip Found existing installation: pip 22.3.1 Uninstalling pip-22.3.1: Successfully uninstalled pip-22.3.1 Successfully installed cmake-4.1.0 ninja-1.13.0 pip-25.2 setuptools-80.9.0 wheel-0.45.1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证版本 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip --version # 应显示 25.2+ pip 25.2 from E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\pip (python 3.10) (rtx5070_env) PS E:\PyTorch_Build\pytorch> cmake --version # 应显示 4.1.0+ cmake version 4.1.0 CMake suite maintained and supported by Kitware (kitware.com/cmake). (rtx5070_env) PS E:\PyTorch_Build\pytorch> ninja --version # 应显示 1.13.0+ 1.13.0.git.kitware.jobserver-pipe-1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置 CUDA 12.1 环境变量 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDA_PATH = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:PATH = "$env:CUDA_PATH\bin;" + $env:PATH (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证 CUDA 版本 (rtx5070_env) PS E:\PyTorch_Build\pytorch> nvcc --version # 应显示 release 12.1 nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2025 NVIDIA Corporation Built on Wed_Jul_16_20:06:48_Pacific_Daylight_Time_2025 Cuda compilation tools, release 13.0, V13.0.48 Build cuda_13.0.r13.0/compiler.36260728_0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置 cuDNN 路径(根据实际安装位置) (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_INCLUDE_DIR = "$env:CUDA_PATH\include" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_LIBRARY = "$env:CUDA_PATH\lib\x64\cudnn.lib" (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装必要依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install pyyaml numpy typing_extensions Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting pyyaml Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b5/84/0fa4b06f6d6c958d207620fc60005e241ecedceee58931bb20138e1e5776/PyYAML-6.0.2-cp310-cp310-win_amd64.whl (161 kB) Collecting numpy Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a3/dd/4b822569d6b96c39d1215dbae0582fd99954dcbcf0c1a13c61783feaca3f/numpy-2.2.6-cp310-cp310-win_amd64.whl (12.9 MB) Collecting typing_extensions Using cached https://pypi.tuna.tsinghua.edu.cn/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl (44 kB) Installing collected packages: typing_extensions, pyyaml, numpy Successfully installed numpy-2.2.6 pyyaml-6.0.2 typing_extensions-4.15.0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装 GPU 相关依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install mkl mkl-include 
intel-openmp Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting mkl Using cached https://pypi.tuna.tsinghua.edu.cn/packages/91/ae/025174ee141432b974f97ecd2aea529a3bdb547392bde3dd55ce48fe7827/mkl-2025.2.0-py2.py3-none-win_amd64.whl (153.6 MB) Collecting mkl-include Using cached https://pypi.tuna.tsinghua.edu.cn/packages/06/87/3eee37bf95c6b820b6394ad98e50132798514ecda1b2584c71c2c96b973c/mkl_include-2025.2.0-py2.py3-none-win_amd64.whl (1.3 MB) Collecting intel-openmp Using cached https://pypi.tuna.tsinghua.edu.cn/packages/89/ed/13fed53fcc7ea17ff84095e89e63418df91d4eeefdc74454243d529bf5a3/intel_openmp-2025.2.1-py2.py3-none-win_amd64.whl (34.0 MB) Collecting tbb==2022.* (from mkl) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/4e/d2/01e2a93f9c644585088188840bf453f23ed1a2838ec51d5ba1ada1ebca71/tbb-2022.2.0-py3-none-win_amd64.whl (420 kB) Collecting intel-cmplr-lib-ur==2025.2.1 (from intel-openmp) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a8/70/938e81f58886fd4e114d5a5480d98c1396e73e40b7650f566ad0c4395311/intel_cmplr_lib_ur-2025.2.1-py2.py3-none-win_amd64.whl (1.2 MB) Collecting umf==0.11.* (from intel-cmplr-lib-ur==2025.2.1->intel-openmp) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/33/a0/c8d755f08f50ddd99cb4a29a7e950ced7a0903cb72253e57059063609103/umf-0.11.0-py2.py3-none-win_amd64.whl (231 kB) Collecting tcmlib==1.* (from tbb==2022.*->mkl) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/91/7b/e30c461a27b97e0090e4db822eeb1d37b310863241f8c3ee56f68df3e76e/tcmlib-1.4.0-py2.py3-none-win_amd64.whl (370 kB) Installing collected packages: tcmlib, mkl-include, umf, tbb, intel-cmplr-lib-ur, intel-openmp, mkl Successfully installed intel-cmplr-lib-ur-2025.2.1 intel-openmp-2025.2.1 mkl-2025.2.0 mkl-include-2025.2.0 tbb-2022.2.0 tcmlib-1.4.0 umf-0.11.0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装必要依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install pyyaml numpy typing_extensions Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pyyaml in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (6.0.2) Requirement already satisfied: numpy in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2.2.6) Requirement already satisfied: typing_extensions in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (4.15.0) (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装 GPU 相关依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install mkl mkl-include intel-openmp Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: mkl in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0) Requirement already satisfied: mkl-include in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0) Requirement already satisfied: intel-openmp in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.1) Requirement already satisfied: tbb==2022.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from mkl) (2022.2.0) Requirement already satisfied: intel-cmplr-lib-ur==2025.2.1 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-openmp) (2025.2.1) Requirement already satisfied: umf==0.11.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-cmplr-lib-ur==2025.2.1->intel-openmp) (0.11.0) Requirement already satisfied: tcmlib==1.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from tbb==2022.*->mkl) (1.4.0) (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置编译参数 (rtx5070_env) 
PS E:\PyTorch_Build\pytorch> $env:USE_CUDA=1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:USE_CUDNN=1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CMAKE_GENERATOR="Ninja" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:MAX_JOBS=8 # 根据 CPU 核心数设置 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 运行编译 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python setup.py install ` >> --cmake ` >> --cmake-only ` >> --cmake-generator="Ninja" ` >> --verbose ` >> -DCMAKE_CUDA_COMPILER="${env:CUDA_PATH}\bin\nvcc.exe" ` >> -DCUDNN_INCLUDE_DIR="${env:CUDNN_INCLUDE_DIR}" ` >> -DCUDNN_LIBRARY="${env:CUDNN_LIBRARY}" ` >> -DTORCH_CUDA_ARCH_LIST="8.9;9.0;12.0" Building wheel torch-2.9.0a0+git2d31c3d option --cmake-generator not recognized (rtx5070_env) PS E:\PyTorch_Build\pytorch> python rtx5070_test.py ============================================================ Traceback (most recent call last): File "E:\PyTorch_Build\pytorch\rtx5070_test.py", line 39, in <module> verify_gpu_support() File "E:\PyTorch_Build\pytorch\rtx5070_test.py", line 6, in verify_gpu_support if not torch.cuda.is_available(): AttributeError: module 'torch' has no attribute 'cuda' (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置编译架构参数 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:TORCH_CUDA_ARCH_LIST="8.9;9.0;12.0" (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 使用正确的编译命令 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python setup.py install ` >> --cmake ` >> --verbose ` >> -DCMAKE_CUDA_COMPILER="${env:CUDA_PATH}\bin\nvcc.exe" ` >> -DCUDNN_INCLUDE_DIR="${env:CUDNN_INCLUDE_DIR}" ` >> -DCUDNN_LIBRARY="${env:CUDNN_LIBRARY}" ` >> -DCMAKE_GENERATOR="Ninja" ` >> -DUSE_CUDA=ON ` >> -DUSE_CUDNN=ON Building wheel torch-2.9.0a0+git2d31c3d option -D not recognized (rtx5070_env) PS E:\PyTorch_Build\pytorch> python enhanced_test.py ============================================================ Python 版本: 3.10.10 Traceback (most recent call last): File "E:\PyTorch_Build\pytorch\enhanced_test.py", line 64, in <module> verify_installation() File "E:\PyTorch_Build\pytorch\enhanced_test.py", line 11, in verify_installation print(f"\nPyTorch 版本: {torch.__version__}") AttributeError: module 'torch' has no attribute '__version__' (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 清除之前的构建 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python setup.py clean --all Building wheel torch-2.9.0a0+git2d31c3d E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\setuptools\config\_apply_pyprojecttoml.py:82: SetuptoolsDeprecationWarning: `project.license` as a TOML table is deprecated !! ******************************************************************************** Please use a simple string containing a SPDX expression for `project.license`. You can also use `project.license-files`. (Both options available on setuptools>=77.0.0). By 2026-Feb-18, you need to update your project and remove deprecated calls or your builds will no longer be supported. See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#license for details. ******************************************************************************** !! corresp(dist, value, root_dir) usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] or: setup.py --help [cmd1 cmd2 ...] 
or: setup.py --help-commands or: setup.py cmd --help error: option --all not recognized (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置编译架构参数 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:TORCH_CUDA_ARCH_LIST="8.9;9.0;12.0" (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 使用正确的编译命令(Windows专用) (rtx5070_env) PS E:\PyTorch_Build\pytorch> python setup.py install ` >> --cmake ` >> --cmake-args="-DCMAKE_CUDA_COMPILER='$env:CUDA_PATH\bin\nvcc.exe' ` >> -DCUDNN_INCLUDE_DIR='$env:CUDNN_INCLUDE_DIR' ` >> -DCUDNN_LIBRARY='$env:CUDNN_LIBRARY' ` >> -DCMAKE_GENERATOR='Ninja' ` >> -DUSE_CUDA=ON ` >> -DUSE_CUDNN=ON" ` >> --verbose ` >> --jobs=$env:MAX_JOBS Building wheel torch-2.9.0a0+git2d31c3d option --cmake-args not recognized (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 使用 PyTorch 官方构建工具 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install -U setuptools wheel Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: setuptools in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (80.9.0) Requirement already satisfied: wheel in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (0.45.1) (rtx5070_env) PS E:\PyTorch_Build\pytorch> python setup.py bdist_wheel Building wheel torch-2.9.0a0+git2d31c3d -- Building version 2.9.0a0+git2d31c3d E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\setuptools\_distutils\_msvccompiler.py:12: UserWarning: _get_vc_env is private; find an alternative (pypa/distutils#340) warnings.warn( -- Checkout nccl release tag: v2.27.5-1 cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_GENERATOR=Ninja -DCMAKE_INSTALL_PREFIX=E:\PyTorch_Build\pytorch\torch -DCMAKE_PREFIX_PATH=E:\PyTorch_Build\pytorch\rtx5070_env\Lib\site-packages -DCUDNN_INCLUDE_DIR=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\include -DCUDNN_LIBRARY=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\lib\x64\cudnn.lib -DPython_EXECUTABLE=E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe -DPython_NumPy_INCLUDE_DIR=E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\numpy\_core\include -DTORCH_BUILD_VERSION=2.9.0a0+git2d31c3d -DTORCH_CUDA_ARCH_LIST=8.9;9.0;12.0 -DUSE_CUDA=1 -DUSE_CUDNN=1 -DUSE_NUMPY=True E:\PyTorch_Build\pytorch CMake Deprecation Warning at CMakeLists.txt:18 (cmake_policy): The OLD behavior for policy CMP0126 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. 
-- The CXX compiler identification is MSVC 19.44.35215.0 -- The C compiler identification is MSVC 19.44.35215.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found CMake Warning at CMakeLists.txt:425 (message): TensorPipe cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:427 (message): KleidiAI cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:439 (message): Libuv is not installed in current conda env. Set USE_DISTRIBUTED to OFF. Please run command 'conda install -c conda-forge libuv=1.39' to install libuv. -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Compiler does not support SVE extension. Will not build perfkernels. CMake Warning at CMakeLists.txt:845 (message): x64 operating system is required for FBGEMM. Not compiling with FBGEMM. Turn this warning off by USE_FBGEMM=OFF. 
-- Performing Test HAS/UTF_8 -- Performing Test HAS/UTF_8 - Success -- Found CUDA: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 (found version "13.0") -- The CUDA compiler identification is NVIDIA 13.0.48 with host compiler MSVC 19.44.35215.0 -- Detecting CUDA compiler ABI info -- Detecting CUDA compiler ABI info - done -- Check for working CUDA compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe - skipped -- Detecting CUDA compile features -- Detecting CUDA compile features - done -- Found CUDAToolkit: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include (found version "13.0.48") -- PyTorch: CUDA detected: 13.0 -- PyTorch: CUDA nvcc is: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- PyTorch: CUDA toolkit directory: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- PyTorch: Header version is: 13.0 -- Found Python: E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter CMake Warning at cmake/public/cuda.cmake:140 (message): Failed to compute shorthash for libnvrtc.so Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDNN (missing: CUDNN_LIBRARY_PATH CUDNN_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:201 (message): Cannot find cuDNN library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUSPARSELT (missing: CUSPARSELT_LIBRARY_PATH CUSPARSELT_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:226 (message): Cannot find cuSPARSELt library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDSS (missing: CUDSS_LIBRARY_PATH CUDSS_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:242 (message): Cannot find CUDSS library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- USE_CUFILE is set to 0. Compiling without cuFile support CMake Warning at cmake/public/cuda.cmake:317 (message): pytorch is not compatible with `CMAKE_CUDA_ARCHITECTURES` and will ignore its value. Please configure `TORCH_CUDA_ARCH_LIST` instead. Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Added CUDA NVCC flags for: -gencode;arch=compute_89,code=sm_89;-gencode;arch=compute_90,code=sm_90;-gencode;arch=compute_120,code=sm_120 CMake Warning at cmake/Dependencies.cmake:95 (message): Not compiling with XPU. Could NOT find SYCL. Suppress this warning with -DUSE_XPU=OFF. Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Building using own protobuf under third_party per request. -- Use custom protobuf build. CMake Warning at cmake/ProtoBuf.cmake:37 (message): Ancient protobuf forces CMake compatibility Call Stack (most recent call first): cmake/ProtoBuf.cmake:87 (custom_protobuf_find) cmake/Dependencies.cmake:107 (include) CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. 
-- -- 3.13.0.0 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - not found -- Found Threads: TRUE -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:E:/PyTorch_Build/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- Trying to find preferred BLAS backend of choice: MKL -- MKL_THREADING = OMP -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- MKL_THREADING = OMP CMake Warning at cmake/Dependencies.cmake:213 (message): MKL could not be found. Defaulting to Eigen Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Warning at cmake/Dependencies.cmake:279 (message): Preferred BLAS (MKL) cannot be found, now searching for a general BLAS library Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Looking for sgemm_ -- Looking for sgemm_ - not found -- Cannot 
find a library with BLAS API. Not using BLAS. -- Using pocketfft in directory: E:/PyTorch_Build/pytorch/third_party/pocketfft/ CMake Deprecation Warning at third_party/pthreadpool/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/FXdiv/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/cpuinfo/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- The ASM compiler identification is MSVC CMake Warning (dev) at rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineASMCompiler.cmake:234 (message): Policy CMP194 is not set: MSVC is not an assembler for language ASM. Run "cmake --help-policy CMP194" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Call Stack (most recent call first): third_party/XNNPACK/CMakeLists.txt:18 (PROJECT) This warning is for project developers. Use -Wno-dev to suppress it. -- Found assembler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Building for XNNPACK_TARGET_PROCESSOR: x86_64 -- Generating microkernels.cmake Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avx256vnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c (1th function) Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-scalar.c No microkernel found in src\reference\binary-elementwise.cc No microkernel found in src\reference\packing.cc No microkernel found in src\reference\unary-elementwise.cc -- Found Git: E:/Program Files/Git/cmd/git.exe (found version "2.51.0.windows.1") -- Google Benchmark version: v1.9.3, normalized to 1.9.3 -- Looking for shm_open in rt -- Looking for shm_open in rt - not found -- Performing Test HAVE_CXX_FLAG_WX -- Performing Test HAVE_CXX_FLAG_WX - Success -- Compiling and running to test HAVE_STD_REGEX -- Performing Test HAVE_STD_REGEX -- success -- Compiling and running to test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_STEADY_CLOCK -- Performing Test HAVE_STEADY_CLOCK -- success -- Compiling and running to test HAVE_PTHREAD_AFFINITY -- Performing Test HAVE_PTHREAD_AFFINITY -- failed to compile CMake Deprecation Warning at third_party/ittapi/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning at cmake/Dependencies.cmake:749 (message): FP16 is only cmake-2.8 compatible Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/FP16/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/psimd/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- Using third party subdirectory Eigen. -- Found Python: E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter Development.Module NumPy -- Using third_party/pybind11. -- pybind11 include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/pybind11/include -- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS) -- Using third_party/opentelemetry-cpp. -- opentelemetry api include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/opentelemetry-cpp/api/include -- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) -- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) -- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) CMake Warning at cmake/Dependencies.cmake:894 (message): Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- Found OpenMP_C: -openmp:experimental -- Found OpenMP_CXX: -openmp:experimental -- Found OpenMP: TRUE -- Adding OpenMP CXX_FLAGS: -openmp:experimental -- Will link against OpenMP libraries: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib -- Found nvtx3: E:/PyTorch_Build/pytorch/third_party/NVTX/c/include -- ROCM_PATH environment variable is not set and C:/opt/rocm does not exist. Building without ROCm support. 
-- Found Python3: E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter -- ONNX_PROTOC_EXECUTABLE: $<TARGET_FILE:protobuf::protoc> -- Protobuf_VERSION: Protobuf_VERSION_NOTFOUND Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto -- -- ******** Summary ******** -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/rtx5070_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler version : 19.44.35215.0 -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL /EHsc /wd26812 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1 -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\rtx5070_env\Lib\site-packages;E:/Program Files/NVIDIA/CUNND/v9.12;E:\Program Files\NVIDIA\CUNND\v9.12;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- CMAKE_MODULE_PATH : E:/PyTorch_Build/pytorch/cmake/Modules;E:/PyTorch_Build/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.18.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_USE_LITE_PROTO : OFF -- USE_PROTOBUF_SHARED_LIBS : OFF -- ONNX_DISABLE_EXCEPTIONS : OFF -- ONNX_DISABLE_STATIC_REGISTRATION : OFF -- ONNX_WERROR : OFF -- ONNX_BUILD_TESTS : OFF -- BUILD_SHARED_LIBS : OFF -- -- Protobuf compiler : $<TARGET_FILE:protobuf::protoc> -- Protobuf includes : -- Protobuf libraries : -- ONNX_BUILD_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Adding -DNDEBUG to compile flags -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - False -- MAGMA not found. Compiling without MAGMA support -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. 
-- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Cannot find a library with BLAS API. Not using BLAS. -- LAPACK requires BLAS -- Cannot find a library with LAPACK API. Not using LAPACK. disabling ROCM because NOT USE_ROCM is set -- MIOpen not found. Compiling without MIOpen support disabling MKLDNN because USE_MKLDNN is not set -- {fmt} version: 11.2.0 -- Build type: Release -- Using Kineto with CUPTI support -- Configuring Kineto dependency: -- KINETO_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto -- KINETO_BUILD_TESTS = OFF -- KINETO_LIBRARY_TYPE = static -- CUDA_SOURCE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CUDA_INCLUDE_DIRS = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- CUDA_cupti_LIBRARY = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/lib64/cupti.lib -- Found CUPTI CMake Deprecation Warning at third_party/kineto/libkineto/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning (dev) at third_party/kineto/libkineto/CMakeLists.txt:15 (find_package): Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules are removed. Run "cmake --help-policy CMP0148" for policy details. Use the cmake_policy command to set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it. -- Found PythonInterp: E:/PyTorch_Build/pytorch/rtx5070_env/Scripts/python.exe (found version "3.10.10") -- ROCM_SOURCE_DIR = -- Kineto: FMT_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt -- Kineto: FMT_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- ROCTRACER_INCLUDE_DIR = /include/roctracer -- DYNOLOG_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog/ -- IPCFABRIC_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/ -- Configured Kineto -- Performing Test HAS/WD4624 -- Performing Test HAS/WD4624 - Success -- Performing Test HAS/WD4068 -- Performing Test HAS/WD4068 - Success -- Performing Test HAS/WD4067 -- Performing Test HAS/WD4067 - Success -- Performing Test HAS/WD4267 -- Performing Test HAS/WD4267 - Success -- Performing Test HAS/WD4661 -- Performing Test HAS/WD4661 - Success -- Performing Test HAS/WD4717 -- Performing Test HAS/WD4717 - Success -- Performing Test HAS/WD4244 -- Performing Test HAS/WD4244 - Success -- Performing Test HAS/WD4804 -- Performing Test HAS/WD4804 - Success -- Performing Test HAS/WD4273 -- Performing Test HAS/WD4273 - Success -- Performing Test HAS_WNO_STRINGOP_OVERFLOW -- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed -- -- Architecture: x64 -- Use the C++ compiler to compile (MI_USE_CXX=ON) -- -- Library name : mimalloc -- Version : 2.2.4 -- Build type : release -- C++ Compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Compiler flags : /Zc:__cplusplus -- Compiler defines : MI_CMAKE_BUILD_TYPE=release;MI_BUILD_RELEASE -- Link libraries : psapi;shell32;user32;advapi32;bcrypt -- Build targets : static -- CMake Error at CMakeLists.txt:1264 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch/headeronly does not contain a CMakeLists.txt file. 
-- don't use NUMA -- Looking for backtrace -- Looking for backtrace - not found -- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR) -- headers outputs: torch\csrc\inductor\aoti_torch\generated\c_shim_cpu.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_cuda.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_aten.h not found -- sources outputs: -- declarations_yaml outputs: -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed -- Using ATen parallel backend: OMP -- Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY OPENSSL_INCLUDE_DIR) -- Check size of long double -- Check size of long double - done -- Performing Test COMPILER_SUPPORTS_FLOAT128 -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed -- Performing Test COMPILER_SUPPORTS_SSE2 -- Performing Test COMPILER_SUPPORTS_SSE2 - Success -- Performing Test COMPILER_SUPPORTS_SSE4 -- Performing Test COMPILER_SUPPORTS_SSE4 - Success -- Performing Test COMPILER_SUPPORTS_AVX -- Performing Test COMPILER_SUPPORTS_AVX - Success -- Performing Test COMPILER_SUPPORTS_FMA4 -- Performing Test COMPILER_SUPPORTS_FMA4 - Success -- Performing Test COMPILER_SUPPORTS_AVX2 -- Performing Test COMPILER_SUPPORTS_AVX2 - Success -- Performing Test COMPILER_SUPPORTS_AVX512F -- Performing Test COMPILER_SUPPORTS_AVX512F - Success -- Found OpenMP_C: -openmp:experimental (found version "2.0") -- Found OpenMP_CXX: -openmp:experimental (found version "2.0") -- Found OpenMP_CUDA: -openmp (found version "2.0") -- Found OpenMP: TRUE (found version "2.0") -- Performing Test COMPILER_SUPPORTS_OPENMP -- Performing Test COMPILER_SUPPORTS_OPENMP - Success -- Performing Test COMPILER_SUPPORTS_OMP_SIMD -- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed -- Configuring build for SLEEF-v3.8.0 Target system: Windows-10.0.26100 Target processor: AMD64 Host system: Windows-10.0.26100 Host processor: AMD64 Detected C compiler: MSVC @ C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe CMake: 4.1.0 Make program: E:/PyTorch_Build/pytorch/rtx5070_env/Scripts/ninja.exe -- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef -- Building shared libs : OFF -- Building static test bins: OFF -- MPFR : LIB_MPFR-NOTFOUND -- GMP : LIBGMP-NOTFOUND -- RT : -- FFTW3 : LIBFFTW3-NOTFOUND -- OPENSSL : -- SDE : SDE_COMMAND-NOTFOUND -- COMPILER_SUPPORTS_OPENMP : FALSE AT_INSTALL_INCLUDE_DIR include/ATen/core core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/aten_interned_strings.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/enum_tag.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/TensorBody.h -- NVSHMEM not found, not building with NVSHMEM support. CMake Error at torch/CMakeLists.txt:3 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch/csrc does not contain a CMakeLists.txt file. 
CMake Warning at CMakeLists.txt:1285 (message): Generated cmake files are only fully tested if one builds with system glog, gflags, and protobuf. Other settings may generate files that are not well tested. -- -- ******** Summary ******** -- General: -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/rtx5070_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler id : MSVC -- C++ compiler version : 19.44.35215.0 -- Using ccache if found : OFF -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 -- Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;EXPORT_AOTI_FUNCTIONS;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\rtx5070_env\Lib\site-packages;E:/Program Files/NVIDIA/CUNND/v9.12;E:\Program Files\NVIDIA\CUNND\v9.12;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- USE_GOLD_LINKER : OFF -- -- TORCH_VERSION : 2.9.0 -- BUILD_STATIC_RUNTIME_BENCHMARK: OFF -- BUILD_BINARY : OFF -- BUILD_CUSTOM_PROTOBUF : ON -- Link local protobuf : ON -- BUILD_PYTHON : True -- Python version : 3.10.10 -- Python executable : E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe -- Python library : E:/Python310/libs/python310.lib -- Python includes : E:/Python310/Include -- Python site-package : E:\PyTorch_Build\pytorch\rtx5070_env\Lib\site-packages -- BUILD_SHARED_LIBS : ON -- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF -- BUILD_TEST : True -- BUILD_JNI : OFF -- BUILD_MOBILE_AUTOGRAD : OFF -- BUILD_LITE_INTERPRETER: OFF -- INTERN_BUILD_MOBILE : -- TRACING_BASED : OFF -- USE_BLAS : 0 -- USE_LAPACK : 0 -- USE_ASAN : OFF -- USE_TSAN : OFF -- USE_CPP_CODE_COVERAGE : OFF -- USE_CUDA : 1 -- CUDA static link : OFF -- USE_CUDNN : OFF -- USE_CUSPARSELT : OFF -- USE_CUDSS : OFF -- USE_CUFILE : OFF -- CUDA version : 13.0 -- USE_FLASH_ATTENTION : OFF -- USE_MEM_EFF_ATTENTION : ON -- CUDA root directory : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CUDA library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cuda.lib -- cudart library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart.lib -- cublas library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cublas.lib -- cufft library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cufft.lib -- curand library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/curand.lib -- cusparse library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cusparse.lib -- nvrtc : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/nvrtc.lib -- CUDA include path : E:/Program 
Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include -- NVCC executable : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- CUDA compiler : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- CUDA flags : -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -Xcompiler /Zc:__cplusplus -Xcompiler /w -w -Xcompiler /FS -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_89,code=sm_89 -gencode arch=compute_90,code=sm_90 -gencode arch=compute_120,code=sm_120 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -- CUDA host compiler : -- CUDA --device-c : OFF -- USE_TENSORRT : -- USE_XPU : OFF -- USE_ROCM : OFF -- BUILD_NVFUSER : -- USE_EIGEN_FOR_BLAS : ON -- USE_EIGEN_FOR_SPARSE : OFF -- USE_FBGEMM : OFF -- USE_KINETO : ON -- USE_GFLAGS : OFF -- USE_GLOG : OFF -- USE_LITE_PROTO : OFF -- USE_PYTORCH_METAL : OFF -- USE_PYTORCH_METAL_EXPORT : OFF -- USE_MPS : OFF -- CAN_COMPILE_METAL : -- USE_MKL : OFF -- USE_MKLDNN : OFF -- USE_UCC : OFF -- USE_ITT : ON -- USE_XCCL : OFF -- USE_NCCL : OFF -- Found NVSHMEM : -- USE_NNPACK : OFF -- USE_NUMPY : ON -- USE_OBSERVERS : ON -- USE_OPENCL : OFF -- USE_OPENMP : ON -- USE_MIMALLOC : ON -- USE_MIMALLOC_ON_MKL : OFF -- USE_VULKAN : OFF -- USE_PROF : OFF -- USE_PYTORCH_QNNPACK : OFF -- USE_XNNPACK : ON -- USE_DISTRIBUTED : OFF -- Public Dependencies : -- Private Dependencies : Threads::Threads;pthreadpool;cpuinfo;XNNPACK;microkernels-prod;ittnotify;fp16;caffe2::openmp;fmt::fmt-header-only;kineto -- Public CUDA Deps. : -- Private CUDA Deps. : caffe2::curand;caffe2::cufft;caffe2::cublas;fmt::fmt-header-only;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart_static.lib;CUDA::cusparse;CUDA::cufft;CUDA::cusolver;ATEN_CUDA_FILES_GEN_LIB -- USE_COREML_DELEGATE : OFF -- BUILD_LAZY_TS_BACKEND : ON -- USE_ROCM_KERNEL_ASSERT : OFF -- Performing Test HAS_WMISSING_PROTOTYPES -- Performing Test HAS_WMISSING_PROTOTYPES - Failed -- Performing Test HAS_WERROR_MISSING_PROTOTYPES -- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed -- Configuring incomplete, errors occurred! (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装生成的包 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $wheelPath = Get-ChildItem dist\*.whl | Select-Object -First 1 Get-ChildItem: Cannot find path 'E:\PyTorch_Build\pytorch\dist' because it does not exist. 
(rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install $wheelPath --force-reinstall --no-deps
ERROR: You must give at least one requirement to install (see "pip help install")
(rtx5070_env) PS E:\PyTorch_Build\pytorch> python diagnostic_test.py
==================================================
CUDA Toolkit 验证:
✅ NVCC 版本: nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Jul_16_20:06:48_Pacific_Daylight_Time_2025
Cuda compilation tools, release 13.0, V13.0.48
Build cuda_13.0.r13.0/compiler.36260728_0
✅ NVIDIA-SMI 输出:
Mon Sep 1 20:54:10 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.97                 Driver Version: 580.97         CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 5070       WDDM |   00000000:01:00.0  On |                  N/A |
|  0%   35C    P3             16W /  250W |    1328MiB /  12227MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            1124    C+G   ...yb3d8bbwe\WindowsTerminal.exe      N/A      |
|    0   N/A  N/A            1288    C+G   ...les\Tencent\Weixin\Weixin.exe      N/A      |
|    0   N/A  N/A            1776    C+G   C:\Windows\System32\dwm.exe           N/A      |
|    0   N/A  N/A            2272    C+G   ...t\Edge\Application\msedge.exe      N/A      |
|    0   N/A  N/A            3268    C+G   ...em32\ApplicationFrameHost.exe      N/A      |
|    0   N/A  N/A            7860    C+G   C:\Windows\explorer.exe               N/A      |
|    0   N/A  N/A            8004    C+G   ...indows\System32\ShellHost.exe      N/A      |
|    0   N/A  N/A            8156    C+G   ...0.3405.125\msedgewebview2.exe      N/A      |
|    0   N/A  N/A            8852    C+G   ..._cw5n1h2txyewy\SearchHost.exe      N/A      |
|    0   N/A  N/A            8876    C+G   ...y\StartMenuExperienceHost.exe      N/A      |
|    0   N/A  N/A           10540    C+G   ...0.3405.125\msedgewebview2.exe      N/A      |
|    0   N/A  N/A           12380    C+G   ...5n1h2txyewy\TextInputHost.exe      N/A      |
|    0   N/A  N/A           15340    C+G   ...acted\runtime\WeChatAppEx.exe      N/A      |
|    0   N/A  N/A           18600    C+G   ...ntrolPanel\SystemSettings.exe      N/A      |
+-----------------------------------------------------------------------------------------+
==================================================
❌ 严重错误发生:
Traceback (most recent call last):
  File "E:\PyTorch_Build\pytorch\diagnostic_test.py", line 116, in <module>
    check_cuda_toolkit()
  File "E:\PyTorch_Build\pytorch\diagnostic_test.py", line 21, in check_cuda_toolkit
    cuda_path = os.environ.get('CUDA_PATH', '未设置')
NameError: name 'os' is not defined
按 Enter 键退出...
(rtx5070_env) PS E:\PyTorch_Build\pytorch> # Uninstall any existing versions
(rtx5070_env) PS E:\PyTorch_Build\pytorch> pip uninstall -y torch torchvision torchaudio
WARNING: Skipping torch as it is not installed.
WARNING: Skipping torchvision as it is not installed.
WARNING: Skipping torchaudio as it is not installed.
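The diagnostic run above does most of its work and then aborts: check_cuda_toolkit() calls os.environ.get() without the os module ever being imported, so the script dies with a NameError right after printing the nvidia-smi output (the second run later in the session fails in exactly the same way). The fix is simply to add import os at the top of diagnostic_test.py. A minimal sketch of the affected part of the script after the fix (the function body here is reconstructed from the traceback, not taken from the original file):

# diagnostic_test.py (excerpt) - hypothetical reconstruction based on the traceback above.
# The real script also shells out to nvcc and nvidia-smi before reaching this point.
import os   # the missing import; without it, line 21 raises NameError: name 'os' is not defined

def check_cuda_toolkit():
    # Corresponds to line 21 in the traceback: read CUDA_PATH with a "not set" fallback.
    cuda_path = os.environ.get('CUDA_PATH', '未设置')
    print('CUDA_PATH:', cuda_path)

if __name__ == '__main__':
    check_cuda_toolkit()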
(rtx5070_env) PS E:\PyTorch_Build\pytorch>
(rtx5070_env) PS E:\PyTorch_Build\pytorch> # Install a prebuilt version that supports the RTX 5070
(rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install --pre torch torchvision torchaudio `
>> --index-url https://download.pytorch.org/whl/nightly/cu121 `
>> --no-deps
Looking in indexes: https://download.pytorch.org/whl/nightly/cu121
Collecting torch
  Using cached https://download.pytorch.org/whl/nightly/cu121/torch-2.6.0.dev20241112%2Bcu121-cp310-cp310-win_amd64.whl (2456.2 MB)
Collecting torchvision
  Using cached https://download.pytorch.org/whl/nightly/cu121/torchvision-0.20.0.dev20241112%2Bcu121-cp310-cp310-win_amd64.whl (6.2 MB)
Collecting torchaudio
  Using cached https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.5.0.dev20241112%2Bcu121-cp310-cp310-win_amd64.whl (4.2 MB)
Installing collected packages: torchaudio, torchvision, torch
Successfully installed torch-2.6.0.dev20241112+cu121 torchaudio-2.5.0.dev20241112+cu121 torchvision-0.20.0.dev20241112+cu121
(rtx5070_env) PS E:\PyTorch_Build\pytorch>
(rtx5070_env) PS E:\PyTorch_Build\pytorch> # Install the required dependencies
(rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install pyyaml numpy typing_extensions mkl mkl-include intel-openmp
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: pyyaml in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (6.0.2)
Requirement already satisfied: numpy in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2.2.6)
Requirement already satisfied: typing_extensions in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (4.15.0)
Requirement already satisfied: mkl in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0)
Requirement already satisfied: mkl-include in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0)
Requirement already satisfied: intel-openmp in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.1)
Requirement already satisfied: tbb==2022.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from mkl) (2022.2.0)
Requirement already satisfied: intel-cmplr-lib-ur==2025.2.1 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-openmp) (2025.2.1)
Requirement already satisfied: umf==0.11.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-cmplr-lib-ur==2025.2.1->intel-openmp) (0.11.0)
Requirement already satisfied: tcmlib==1.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from tbb==2022.*->mkl) (1.4.0)
(rtx5070_env) PS E:\PyTorch_Build\pytorch>
(rtx5070_env) PS E:\PyTorch_Build\pytorch> # Run the diagnostic test
(rtx5070_env) PS E:\PyTorch_Build\pytorch> python diagnostic_test.py
==================================================
CUDA Toolkit 验证:
✅ NVCC 版本: nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Jul_16_20:06:48_Pacific_Daylight_Time_2025
Cuda compilation tools, release 13.0, V13.0.48
Build cuda_13.0.r13.0/compiler.36260728_0
✅ NVIDIA-SMI 输出:
Mon Sep 1 20:55:52 2025
[nvidia-smi output essentially identical to the first run above, apart from the timestamp and slightly different power/memory figures]
==================================================
❌ 严重错误发生:
Traceback (most recent call last):
  File "E:\PyTorch_Build\pytorch\diagnostic_test.py", line 116, in <module>
    check_cuda_toolkit()
  File "E:\PyTorch_Build\pytorch\diagnostic_test.py", line 21, in check_cuda_toolkit
    cuda_path = os.environ.get('CUDA_PATH', '未设置')
NameError: name 'os' is not defined
按 Enter 键退出...
(rtx5070_env) PS E:\PyTorch_Build\pytorch>
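With diagnostic_test.py still broken, the freshly installed nightly wheels are worth checking directly. Note that the wheels above are cu121 builds (CUDA 12.1), while the source build in this session targets sm_120 for the RTX 5070 (see the -gencode flags in the CMake summary), so whether these wheels can actually drive the card should be confirmed rather than assumed. A quick, self-contained check, not part of the original session:

# Sanity check for the installed PyTorch wheels.
import torch

print('torch version  :', torch.__version__)       # e.g. 2.6.0.dev20241112+cu121
print('built for CUDA :', torch.version.cuda)      # CUDA version the wheel was compiled against
print('CUDA available :', torch.cuda.is_available())

if torch.cuda.is_available():
    print('device         :', torch.cuda.get_device_name(0))
    print('capability     :', torch.cuda.get_device_capability(0))
    # A tiny GPU computation; this raises if the wheel lacks kernels for this architecture.
    x = torch.ones(1024, device='cuda')
    print('sum on GPU     :', x.sum().item())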