Versions ignore settings & "ld: library not found for -l": problem summary

This post shares a few practical tips for the SVN and Git version control systems, including how to recursively delete version-control metadata under a given directory, how to configure Versions for Mac to ignore specific files, and how to fix link-library errors that come up when building in Xcode.

Reposted from: http://my.oschina.net/gexun/blog/317325
1. Recursively delete .git / .svn directories under a given directory
find . -name .git | xargs rm -fr
find . -name .svn | xargs rm -rf

The first command is not often needed: when a project is managed with Git, only a single .git directory is created at the repository root, and deleting that one directory is enough.
So a plain rm -rf .git takes care of the Git metadata.

The second command is the genuinely useful one: when a project is managed with SVN, every directory under version control carries its own hidden .svn directory.
So if you need to strip out all the SVN metadata, the second command above is the tool of choice.
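
A slightly more defensive variant of the same idea (a sketch, not from the original post): match only directories, stop find from descending into what it is about to delete, and cope with paths that contain spaces.
find . -type d -name .svn -prune -print0 | xargs -0 rm -rf
# the same pattern works for .git:
find . -type d -name .git -prune -print0 | xargs -0 rm -rf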

2. Configuring ignored files in Versions for Mac
Step 1: Open the configuration file: mvim ~/.subversion/config
Step 2: Find the global-ignores line, uncomment it, and edit it along these lines:
global-ignores = build ~.nib .so .pbxuser .mode (append whatever file suffixes you want ignored here)
Step 3: Find the enable-auto-props = yes line and uncomment it as well
(perhaps this is what makes the change take effect immediately? I have not tested this step; at the time, after making only the change in step 2, nothing seemed to happen)
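
For reference, a rough sketch of what the relevant [miscellany] section of ~/.subversion/config can look like after the edit; the patterns below are illustrative space-separated globs, not the exact list from the original post:
[miscellany]
# space-separated glob patterns; extend the list as needed
global-ignores = build *.o *.lo *~ *.nib *.so *.pbxuser *.mode1v3
enable-auto-props = yes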

3. ld: library not found for -lcurl
Xcode sometimes throws an error like the following (I ran into it myself today):
clang: warning: argument unused during compilation: '-websockets'
ld: library not found for -lcurl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Solution:
First, click the Xcode project file and, under Build Phases, inspect "Link Binary With Libraries".
If things are as expected, you will find one or more entries displayed in red.
What does that mean? The physical files behind those project references are no longer where the project last recorded them.
What to do? Right-click a red entry and choose "Reveal in Project Navigator";
you can then usually locate the missing file's path in the left-hand sidebar. From there it is simple: find the missing file and move it back to where it belongs.

4. ld: library not found for -lcurl
Symptom: when building a project, you sometimes hit an error of the form "ld: library not found for -l…".
Cause: this usually means the linker cannot find a required library at build time.
Fix:
In general it can be resolved as follows: in the project's Targets list, select the target you are building,
then "Get Info" to open the Build settings page and add the folder containing the missing library to "Library Search Paths".
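
The same setting can also be overridden from the command line; a minimal sketch, assuming a project called MyApp.xcodeproj with the missing libcurl.a sitting in a libs/ folder next to it (both names are made up for illustration):
# point the linker at the folder that actually contains libcurl.a
xcodebuild -project MyApp.xcodeproj -target MyApp \
    LIBRARY_SEARCH_PATHS='$(inherited) $(PROJECT_DIR)/libs'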

5. After installing Versions for Mac, the default global-ignores list includes the *.a file type,
which can cause trouble. Take cocos2d-x as an example:
a template project created by cocos2d-x contains three .a files: libcurl.a, libwebp.a and libwebsockets.a.
If the project is managed with Versions, those .a files may simply be skipped.
When someone else then checks the project out from the SVN server and builds it, they may run into exactly the situation described in the previous section.
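
A quick way to check for this (a sketch; run it inside the working copy) is to list ignored items explicitly. Lines starting with I are files that exist on disk but are being ignored by svn:
svn status --no-ignore | grep '\.a$'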

6. SVN ignores certain files by default, for example "*.o". How do you override this default so such files can be committed normally?
Change into the directory in question and run "svn add * --force --no-ignore".
Here "--no-ignore" disables the ignore rules, and "* --force" adds every file in the current directory and all of its subdirectories.
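
Putting it together, a sketch of the whole workflow (the libs/ path and the commit message are illustrative):
cd path/to/project/libs              # the directory holding the ignored files
svn add * --force --no-ignore        # schedule everything, including normally ignored files
svn commit -m "add static libraries skipped by global-ignores"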

7. Be careful when pasting directories into a project that is under version control!
If a pasted directory itself contains a hidden .svn directory, it can break version control for the target project.
The telltale symptom is that files plainly exist in some directory yet do not show up in Versions, and everyday svn operations may be affected as well.

How do you fix it? In a terminal, change into the problematic directory and run "find . -name .svn | xargs rm -rf".
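
To avoid the problem in the first place, instead of copying a working copy into the project you can export a clean snapshot that carries no .svn metadata; a sketch (paths are illustrative):
# svn export produces a copy of the tree without any .svn directories
svn export /path/to/other/working-copy ./NewDirectory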

好像卡住了”(.venv) PS E:\PyTorch_Build\pytorch> # 1. 准备工作空间 (.venv) PS E:\PyTorch_Build\pytorch> $sourceDir = "E:\PyTorch_Build\pytorch" (.venv) PS E:\PyTorch_Build\pytorch> $buildDir = "$sourceDir\build" (.venv) PS E:\PyTorch_Build\pytorch> $installDir = "$sourceDir\install" (.venv) PS E:\PyTorch_Build\pytorch> (.venv) PS E:\PyTorch_Build\pytorch> # 清理工作区 (.venv) PS E:\PyTorch_Build\pytorch> Remove-Item -Path $buildDir -Recurse -Force -ErrorAction SilentlyContinue (.venv) PS E:\PyTorch_Build\pytorch> Remove-Item -Path $installDir -Recurse -Force -ErrorAction SilentlyContinue (.venv) PS E:\PyTorch_Build\pytorch> New-Item -Path $buildDir -ItemType Directory -Force | Out-Null (.venv) PS E:\PyTorch_Build\pytorch> New-Item -Path $installDir -ItemType Directory -Force | Out-Null (.venv) PS E:\PyTorch_Build\pytorch> (.venv) PS E:\PyTorch_Build\pytorch> # 2. 设置环境变量 (.venv) PS E:\PyTorch_Build\pytorch> $env:Path = "C:\Program Files\CMake\bin;E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin;E:\Python310;$env:Path" (.venv) PS E:\PyTorch_Build\pytorch> $env:CUDA_PATH = "E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0" (.venv) PS E:\PyTorch_Build\pytorch> (.venv) PS E:\PyTorch_Build\pytorch> # 3. 安装正确的 CMake 版本 (.venv) PS E:\PyTorch_Build\pytorch> Write-Host "安装 CMake 3.28..." 安装 CMake 3.28... (.venv) PS E:\PyTorch_Build\pytorch> Invoke-WebRequest -Uri "https://github.com/Kitware/CMake/releases/download/v3.28.3/cmake-3.28.3-windows-x86_64.msi" -OutFile "$env:TEMP\cmake.msi" Invoke-WebRequest: One or more errors occurred. (The response ended prematurely. (ResponseEnded)) (.venv) PS E:\PyTorch_Build\pytorch> Start-Process msiexec -ArgumentList "/i `"$env:TEMP\cmake.msi`" /quiet" -Wait (.venv) PS E:\PyTorch_Build\pytorch> $env:Path = "C:\Program Files\CMake\bin;$env:Path" (.venv) PS E:\PyTorch_Build\pytorch> (.venv) PS E:\PyTorch_Build\pytorch> # 4. 生成 CMake 配置 (.venv) PS E:\PyTorch_Build\pytorch> Set-Location $buildDir (.venv) PS E:\PyTorch_Build\pytorch\build> (.venv) PS E:\PyTorch_Build\pytorch\build> $cmakeCommand = @" >> cmake $sourceDir ` >> -G "Visual Studio 17 2022" -A x64 ` >> -DCMAKE_INSTALL_PREFIX="$installDir" ` >> -DCMAKE_TOOLCHAIN_FILE="$sourceDir/cmake/modules/win_toolchain.cmake" ` >> -DBUILD_PYTHON=ON ` >> -DPython_EXECUTABLE="E:\Python310\python.exe" ` >> -DUSE_CUDA=ON ` >> -DUSE_CUDNN=ON ` >> -DCUDNN_INCLUDE_DIR="E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\include" ` >> -DCUDNN_LIBRARY="E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\lib\x64\cudnn.lib" ` >> -DUSE_MKLDNN=OFF ` >> -DUSE_DNNL=OFF ` >> -DUSE_STATIC_OPENMP=ON ` >> -DBUILD_TEST=OFF ` >> -DBUILD_TORCH=ON ` >> -DCMAKE_BUILD_TYPE=Release >> "@ (.venv) PS E:\PyTorch_Build\pytorch\build> (.venv) PS E:\PyTorch_Build\pytorch\build> Invoke-Expression $cmakeCommand 2>&1 | Tee-Object -FilePath "$buildDir\cmake_configure.log" -- Building for: Visual Studio 17 2022 -- Selecting Windows SDK version 10.0.26100.0 to target Windows . 
-- The CXX compiler identification is MSVC 19.44.35215.0 -- The C compiler identification is MSVC 19.44.35215.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found -- Build type not set - defaulting to Release -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Compiler does not support SVE extension. Will not build perfkernels. -- Performing Test HAS/UTF_8 -- Performing Test HAS/UTF_8 - Success -- Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR) (found version "13.0") -- Building using own protobuf under third_party per request. -- Use custom protobuf build. 
-- -- 3.13.0.0 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - not found -- Found Threads: TRUE -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:E:/PyTorch_Build/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- Trying to find preferred BLAS backend of choice: MKL -- MKL_THREADING = OMP -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- MKL_THREADING = OMP -- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Looking for sgemm_ -- Looking for sgemm_ - not found -- Cannot find a library with BLAS API. Not using BLAS. -- Using pocketfft in directory: E:/PyTorch_Build/pytorch/third_party/pocketfft/ -- Using third party subdirectory Eigen. -- Found Python: E:/Python310/python.exe (found version "3.10.10") found components: Interpreter Development.Module NumPy -- Using third_party/pybind11. 
-- pybind11 include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/pybind11/include -- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS) -- Using third_party/opentelemetry-cpp. -- opentelemetry api include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/opentelemetry-cpp/api/include -- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) -- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) -- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- Found OpenMP_C: -openmp:experimental -- Found OpenMP_CXX: -openmp:experimental -- Found OpenMP: TRUE -- Adding OpenMP CXX_FLAGS: -openmp:experimental -- Will link against OpenMP libraries: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib -- Found nvtx3: E:/PyTorch_Build/pytorch/third_party/NVTX/c/include -- ROCM_PATH environment variable is not set and C:/opt/rocm does not exist. Building without ROCm support. -- Found Python3: E:/Python310/python.exe (found version "3.10.10") found components: Interpreter -- ONNX_PROTOC_EXECUTABLE: $<TARGET_FILE:protobuf::protoc> -- Protobuf_VERSION: Protobuf_VERSION_NOTFOUND -- -- ******** Summary ******** -- CMake version : 4.1.0 -- CMake command : E:/Python310/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler version : 19.44.35215.0 -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /MP /bigobj /FS /utf-8 /EHsc /wd26812 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1 -- CMAKE_PREFIX_PATH : -- CMAKE_INSTALL_PREFIX : C:/Program Files (x86)/Torch -- CMAKE_MODULE_PATH : E:/PyTorch_Build/pytorch/cmake/Modules;E:/PyTorch_Build/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.18.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_USE_LITE_PROTO : OFF -- USE_PROTOBUF_SHARED_LIBS : OFF -- ONNX_DISABLE_EXCEPTIONS : OFF -- ONNX_DISABLE_STATIC_REGISTRATION : OFF -- ONNX_WERROR : OFF -- ONNX_BUILD_TESTS : OFF -- BUILD_SHARED_LIBS : OFF -- -- Protobuf compiler : $<TARGET_FILE:protobuf::protoc> -- Protobuf includes : -- Protobuf libraries : -- ONNX_BUILD_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Adding -DNDEBUG to compile flags -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. 
-- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Cannot find a library with BLAS API. Not using BLAS. -- LAPACK requires BLAS -- Cannot find a library with LAPACK API. Not using LAPACK. -- MIOpen not found. 
Compiling without MIOpen support -- {fmt} version: 11.2.0 -- Build type: Release -- Using CPU-only version of Kineto -- Configuring Kineto dependency: -- KINETO_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto -- KINETO_BUILD_TESTS = OFF -- KINETO_LIBRARY_TYPE = static -- Found PythonInterp: E:/Python310/python.exe (found version "3.10.10") -- CUDA_SOURCE_DIR = -- ROCM_SOURCE_DIR = -- CUPTI unavailable or disabled - not building GPU profilers -- Kineto: FMT_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt -- Kineto: FMT_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt/include -- CUPTI_INCLUDE_DIR = /extras/CUPTI/include -- ROCTRACER_INCLUDE_DIR = /include/roctracer -- DYNOLOG_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog/ -- IPCFABRIC_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/ -- Configured Kineto (CPU) -- Performing Test HAS/WD4624 -- Performing Test HAS/WD4624 - Success -- Performing Test HAS/WD4068 -- Performing Test HAS/WD4068 - Success -- Performing Test HAS/WD4067 -- Performing Test HAS/WD4067 - Success -- Performing Test HAS/WD4267 -- Performing Test HAS/WD4267 - Success -- Performing Test HAS/WD4661 -- Performing Test HAS/WD4661 - Success -- Performing Test HAS/WD4717 -- Performing Test HAS/WD4717 - Success -- Performing Test HAS/WD4244 -- Performing Test HAS/WD4244 - Success -- Performing Test HAS/WD4804 -- Performing Test HAS/WD4804 - Success -- Performing Test HAS/WD4273 -- Performing Test HAS/WD4273 - Success -- Performing Test HAS_WNO_STRINGOP_OVERFLOW -- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed -- Found Git: E:/Program Files/Git/cmd/git.exe (found version "2.51.0.windows.1") -- -- Note: when building with Visual Studio the build type is specified when building. -- For example: 'cmake --build . 
--config=Release -- Architecture: -- Use the C++ compiler to compile (MI_USE_CXX=ON) -- -- Library name : mimalloc -- Version : 2.2.4 -- Build type : release -- C++ Compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Compiler flags : /Zc:__cplusplus -- Compiler defines : MI_CMAKE_BUILD_TYPE=release;MI_BUILD_RELEASE -- Link libraries : psapi;shell32;user32;advapi32;bcrypt -- Build targets : static -- -- don't use NUMA -- Looking for backtrace -- Looking for backtrace - not found -- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR) -- headers outputs: torch\csrc\inductor\aoti_torch\generated\c_shim_cuda.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_cpu.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_aten.h not found -- sources outputs: -- declarations_yaml outputs: -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed -- Using ATen parallel backend: OMP -- Check size of long double -- Check size of long double - done -- Performing Test COMPILER_SUPPORTS_FLOAT128 -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed -- Found OpenMP_C: -openmp:experimental (found version "2.0") -- Found OpenMP_CXX: -openmp:experimental (found version "2.0") -- Found OpenMP: TRUE (found version "2.0") -- Performing Test COMPILER_SUPPORTS_OPENMP -- Performing Test COMPILER_SUPPORTS_OPENMP - Success -- Performing Test COMPILER_SUPPORTS_OMP_SIMD -- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed -- Configuring build for SLEEF-v3.8.0 -- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef -- Building shared libs : OFF -- Building static test bins: OFF -- MPFR : LIB_MPFR-NOTFOUND -- GMP : LIBGMP-NOTFOUND -- RT : -- FFTW3 : LIBFFTW3-NOTFOUND -- OPENSSL : -- SDE : SDE_COMMAND-NOTFOUND -- COMPILER_SUPPORTS_OPENMP : FALSE -- -- ******** Summary ******** -- General: -- CMake version : 4.1.0 -- CMake command : E:/Python310/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler id : MSVC -- C++ compiler version : 19.44.35215.0 -- Using ccache if found : OFF -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /MP /bigobj /FS /utf-8 -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 -- Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;EXPORT_AOTI_FUNCTIONS;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC -- CMAKE_PREFIX_PATH : -- CMAKE_INSTALL_PREFIX : C:/Program Files (x86)/Torch -- USE_GOLD_LINKER : OFF -- -- TORCH_VERSION : 
2.9.0 -- BUILD_STATIC_RUNTIME_BENCHMARK: OFF -- BUILD_BINARY : OFF -- BUILD_CUSTOM_PROTOBUF : ON -- Link local protobuf : ON -- BUILD_PYTHON : ON -- Python version : 3.10.10 -- Python executable : E:/Python310/python.exe -- Python library : E:/Python310/libs/python310.lib -- Python includes : E:/Python310/include -- Python site-package : E:\Python310\Lib\site-packages -- BUILD_SHARED_LIBS : ON -- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF -- BUILD_TEST : OFF -- BUILD_JNI : OFF -- BUILD_MOBILE_AUTOGRAD : OFF -- BUILD_LITE_INTERPRETER: OFF -- INTERN_BUILD_MOBILE : -- TRACING_BASED : OFF -- USE_BLAS : 0 -- USE_LAPACK : 0 -- USE_ASAN : OFF -- USE_TSAN : OFF -- USE_CPP_CODE_COVERAGE : OFF -- USE_CUDA : OFF -- USE_XPU : OFF -- USE_ROCM : OFF -- BUILD_NVFUSER : -- USE_EIGEN_FOR_BLAS : ON -- USE_EIGEN_FOR_SPARSE : OFF -- USE_FBGEMM : OFF -- USE_KINETO : ON -- USE_GFLAGS : OFF -- USE_GLOG : OFF -- USE_LITE_PROTO : OFF -- USE_PYTORCH_METAL : OFF -- USE_PYTORCH_METAL_EXPORT : OFF -- USE_MPS : OFF -- CAN_COMPILE_METAL : -- USE_MKL : OFF -- USE_MKLDNN : OFF -- USE_UCC : OFF -- USE_ITT : OFF -- USE_XCCL : OFF -- USE_NCCL : OFF -- Found NVSHMEM : -- USE_NNPACK : OFF -- USE_NUMPY : ON -- USE_OBSERVERS : ON -- USE_OPENCL : OFF -- USE_OPENMP : ON -- USE_MIMALLOC : ON -- USE_MIMALLOC_ON_MKL : OFF -- USE_VULKAN : OFF -- USE_PROF : OFF -- USE_PYTORCH_QNNPACK : OFF -- USE_XNNPACK : OFF -- USE_DISTRIBUTED : OFF -- Public Dependencies : -- Private Dependencies : Threads::Threads;cpuinfo;fp16;caffe2::openmp;fmt::fmt-header-only;kineto -- Public CUDA Deps. : -- Private CUDA Deps. : -- USE_COREML_DELEGATE : OFF -- BUILD_LAZY_TS_BACKEND : ON -- USE_ROCM_KERNEL_ASSERT : OFF -- Performing Test HAS_WMISSING_PROTOTYPES -- Performing Test HAS_WMISSING_PROTOTYPES - Failed -- Performing Test HAS_WERROR_MISSING_PROTOTYPES -- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed -- Configuring incomplete, errors occurred! (.venv) PS E:\PyTorch_Build\pytorch\build> (.venv) PS E:\PyTorch_Build\pytorch\build> # 5. 编译和安装 (.venv) PS E:\PyTorch_Build\pytorch\build> cmake --build . --config Release --target install -j 8 2>&1 | Tee-Object -FilePath "$buildDir\cmake_build.log" 閫傜敤浜?.NET Framework MSBuild 鐗堟湰 17.14.19+164abd434 MSBUILD : error MSB1009: 椤圭洰鏂囦欢涓嶅瓨鍦ㄣ€? 寮€鍏?install.vcxproj (.venv) PS E:\PyTorch_Build\pytorch\build> (.venv) PS E:\PyTorch_Build\pytorch\build> # 6. 安装 Python 绑定 (.venv) PS E:\PyTorch_Build\pytorch\build> Set-Location $sourceDir (.venv) PS E:\PyTorch_Build\pytorch> & "E:\Python310\python.exe" setup.py develop --cmake --no-deps Building wheel torch-2.9.0a0+git2d31c3d -- Building version 2.9.0a0+git2d31c3d -- Checkout nccl release tag: v2.27.5-1 cmake -GVisual Studio 17 2022 -Ax64 -Thost=x64 -DBUILD_ONLY_CORE=1 -DBUILD_PYTHON=True -DBUILD_SHARED_LIBS=OFF -DBUILD_TEST=True -DBUILD_TORCH=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_EXE_LINKER_FLAGS=-static -DCMAKE_GENERATOR=Visual Studio 17 2022 -DCMAKE_INSTALL_PREFIX=E:\PyTorch_Build\pytorch\torch -DCMAKE_PREFIX_PATH=E:\Python310\Lib\site-packages -DCMAKE_TOOLCHAIN_FILE=E:\PyTorch_Build\pytorch\cmake\modules\win_toolchain.cmake -DPython_EXECUTABLE=E:\Python310\python.exe -DPython_NumPy_INCLUDE_DIR=E:\Python310\lib\site-packages\numpy\core\include -DTORCH_BUILD_VERSION=2.9.0a0+git2d31c3d -DUSE_NINJA=0 -DUSE_NUMPY=True -DUSE_STATIC_CUDNN=ON -DUSE_STATIC_NCCL=ON -DUSE_STATIC_OPENMP=ON E:\PyTorch_Build\pytorch CMake Deprecation Warning at CMakeLists.txt:18 (cmake_policy): The OLD behavior for policy CMP0126 will be removed from a future version of CMake. 
The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. -- Selecting Windows SDK version 10.0.26100.0 to target Windows . -- The CXX compiler identification is MSVC 19.44.35215.0 -- The C compiler identification is MSVC 19.44.35215.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found CMake Warning at CMakeLists.txt:425 (message): TensorPipe cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:427 (message): KleidiAI cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:439 (message): Libuv is not installed in current conda env. Set USE_DISTRIBUTED to OFF. Please run command 'conda install -c conda-forge libuv=1.39' to install libuv. -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Compiler does not support SVE extension. Will not build perfkernels. CMake Warning at CMakeLists.txt:845 (message): x64 operating system is required for FBGEMM. Not compiling with FBGEMM. Turn this warning off by USE_FBGEMM=OFF. -- Performing Test HAS/UTF_8 -- Performing Test HAS/UTF_8 - Success -- Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR) (found version "13.0") CMake Warning at cmake/public/cuda.cmake:31 (message): PyTorch: CUDA cannot be found. Depending on whether you are building PyTorch or a PyTorch dependent library, the next warning / error will give you more info. Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) CMake Warning at cmake/Dependencies.cmake:76 (message): Not compiling with CUDA. Suppress this warning with -DUSE_CUDA=OFF. Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Warning at cmake/Dependencies.cmake:95 (message): Not compiling with XPU. Could NOT find SYCL. Suppress this warning with -DUSE_XPU=OFF. 
Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Building using own protobuf under third_party per request. -- Use custom protobuf build. CMake Warning at cmake/ProtoBuf.cmake:37 (message): Ancient protobuf forces CMake compatibility Call Stack (most recent call first): cmake/ProtoBuf.cmake:87 (custom_protobuf_find) cmake/Dependencies.cmake:107 (include) CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- -- 3.13.0.0 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - not found -- Found Threads: TRUE -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:E:/PyTorch_Build/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- Trying to find preferred BLAS backend of choice: MKL -- MKL_THREADING = OMP -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- MKL_THREADING = OMP CMake Warning at cmake/Dependencies.cmake:213 (message): MKL could not be found. Defaulting to Eigen Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Warning at cmake/Dependencies.cmake:279 (message): Preferred BLAS (MKL) cannot be found, now searching for a general BLAS library Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- 
Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Looking for sgemm_ -- Looking for sgemm_ - not found -- Cannot find a library with BLAS API. Not using BLAS. -- Using pocketfft in directory: E:/PyTorch_Build/pytorch/third_party/pocketfft/ CMake Warning at cmake/Dependencies.cmake:351 (message): Target architecture "" is not supported in {Q/X}NNPACK. Supported architectures are x86, x86-64, ARM, and ARM64. Turn this warning off by USE_{Q/X}NNPACK=OFF. Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/cpuinfo/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning at third_party/cpuinfo/CMakeLists.txt:93 (MESSAGE): Target processor architecture is not specified. cpuinfo will compile, but cpuinfo_initialize() will always fail. -- Found Git: E:/Program Files/Git/cmd/git.exe (found version "2.51.0.windows.1") -- Google Benchmark version: v1.9.3, normalized to 1.9.3 -- Looking for shm_open in rt -- Looking for shm_open in rt - not found -- Performing Test HAVE_CXX_FLAG_WX -- Performing Test HAVE_CXX_FLAG_WX - Success -- Cross-compiling to test HAVE_STD_REGEX CMake Warning at third_party/benchmark/cmake/CXXFeatureCheck.cmake:49 (message): If you see build failures due to cross compilation, try setting HAVE_STD_REGEX to 0 Call Stack (most recent call first): third_party/benchmark/CMakeLists.txt:311 (cxx_feature_check) -- Performing Test HAVE_STD_REGEX -- success -- Cross-compiling to test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile -- Cross-compiling to test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- failed to compile -- Cross-compiling to test HAVE_STEADY_CLOCK CMake Warning at third_party/benchmark/cmake/CXXFeatureCheck.cmake:49 (message): If you see build failures due to cross compilation, try setting HAVE_STEADY_CLOCK to 0 Call Stack (most recent call first): third_party/benchmark/CMakeLists.txt:322 (cxx_feature_check) -- Performing Test HAVE_STEADY_CLOCK -- success -- Cross-compiling to test HAVE_PTHREAD_AFFINITY -- Performing Test HAVE_PTHREAD_AFFINITY -- failed to compile CMake Warning at cmake/Dependencies.cmake:749 (message): FP16 is only cmake-2.8 compatible Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/FP16/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/psimd/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- Using third party subdirectory Eigen. -- Found Python: E:\Python310\python.exe (found version "3.10.10") found components: Interpreter Development.Module NumPy -- Using third_party/pybind11. -- pybind11 include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/pybind11/include -- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS) -- Using third_party/opentelemetry-cpp. -- opentelemetry api include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/opentelemetry-cpp/api/include -- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) -- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) -- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) CMake Warning at cmake/Dependencies.cmake:894 (message): Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- Found OpenMP_C: -openmp:experimental -- Found OpenMP_CXX: -openmp:experimental -- Found OpenMP: TRUE -- Adding OpenMP CXX_FLAGS: -openmp:experimental -- Will link against OpenMP libraries: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib -- Found nvtx3: E:/PyTorch_Build/pytorch/third_party/NVTX/c/include -- ROCM_PATH environment variable is not set and C:/opt/rocm does not exist. Building without ROCm support. 
-- Found Python3: E:\Python310\python.exe (found version "3.10.10") found components: Interpreter -- ONNX_PROTOC_EXECUTABLE: $<TARGET_FILE:protobuf::protoc> -- Protobuf_VERSION: Protobuf_VERSION_NOTFOUND Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto -- -- ******** Summary ******** -- CMake version : 4.1.0 -- CMake command : E:/Python310/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler version : 19.44.35215.0 -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /MP /bigobj /FS /utf-8 /EHsc /wd26812 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1 -- CMAKE_PREFIX_PATH : E:\Python310\Lib\site-packages -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- CMAKE_MODULE_PATH : E:/PyTorch_Build/pytorch/cmake/Modules;E:/PyTorch_Build/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.18.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_USE_LITE_PROTO : OFF -- USE_PROTOBUF_SHARED_LIBS : OFF -- ONNX_DISABLE_EXCEPTIONS : OFF -- ONNX_DISABLE_STATIC_REGISTRATION : OFF -- ONNX_WERROR : OFF -- ONNX_BUILD_TESTS : OFF -- BUILD_SHARED_LIBS : OFF -- -- Protobuf compiler : $<TARGET_FILE:protobuf::protoc> -- Protobuf includes : -- Protobuf libraries : -- ONNX_BUILD_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Adding -DNDEBUG to compile flags CMake Warning at cmake/Dependencies.cmake:1418 (message): Not compiling with MAGMA. Suppress this warning with -DUSE_MAGMA=OFF. Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. 
-- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Cannot find a library with BLAS API. Not using BLAS. -- LAPACK requires BLAS -- Cannot find a library with LAPACK API. Not using LAPACK. disabling CUDA because NOT USE_CUDA is set disabling ROCM because NOT USE_ROCM is set -- MIOpen not found. Compiling without MIOpen support disabling MKLDNN because USE_MKLDNN is not set -- {fmt} version: 11.2.0 -- Build type: Release -- Using CPU-only version of Kineto -- Configuring Kineto dependency: -- KINETO_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto -- KINETO_BUILD_TESTS = OFF -- KINETO_LIBRARY_TYPE = static CMake Deprecation Warning at third_party/kineto/libkineto/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning (dev) at third_party/kineto/libkineto/CMakeLists.txt:15 (find_package): Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules are removed. Run "cmake --help-policy CMP0148" for policy details. 
Use the cmake_policy command to set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it. -- Found PythonInterp: E:/Python310/python.exe (found version "3.10.10") -- CUDA_SOURCE_DIR = -- ROCM_SOURCE_DIR = -- CUPTI unavailable or disabled - not building GPU profilers -- Kineto: FMT_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt -- Kineto: FMT_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt/include -- CUPTI_INCLUDE_DIR = /extras/CUPTI/include -- ROCTRACER_INCLUDE_DIR = /include/roctracer -- DYNOLOG_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog/ -- IPCFABRIC_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/ -- Configured Kineto (CPU) -- Performing Test HAS/WD4624 -- Performing Test HAS/WD4624 - Success -- Performing Test HAS/WD4068 -- Performing Test HAS/WD4068 - Success -- Performing Test HAS/WD4067 -- Performing Test HAS/WD4067 - Success -- Performing Test HAS/WD4267 -- Performing Test HAS/WD4267 - Success -- Performing Test HAS/WD4661 -- Performing Test HAS/WD4661 - Success -- Performing Test HAS/WD4717 -- Performing Test HAS/WD4717 - Success -- Performing Test HAS/WD4244 -- Performing Test HAS/WD4244 - Success -- Performing Test HAS/WD4804 -- Performing Test HAS/WD4804 - Success -- Performing Test HAS/WD4273 -- Performing Test HAS/WD4273 - Success -- Performing Test HAS_WNO_STRINGOP_OVERFLOW -- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed -- -- Note: when building with Visual Studio the build type is specified when building. -- For example: 'cmake --build . --config=Release -- Architecture: x64 -- Use the C++ compiler to compile (MI_USE_CXX=ON) -- -- Library name : mimalloc -- Version : 2.2.4 -- Build type : release -- C++ Compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Compiler flags : /Zc:__cplusplus -- Compiler defines : MI_CMAKE_BUILD_TYPE=release;MI_BUILD_RELEASE -- Link libraries : psapi;shell32;user32;advapi32;bcrypt -- Build targets : static -- CMake Error at CMakeLists.txt:1264 (add_subdirectory): add_subdirectory given source "torch/headeronly" which is not an existing directory. 
-- don't use NUMA -- Looking for backtrace -- Looking for backtrace - not found -- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR) -- headers outputs: torch\csrc\inductor\aoti_torch\generated\c_shim_cuda.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_cpu.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_aten.h not found -- sources outputs: -- declarations_yaml outputs: -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed -- Using ATen parallel backend: OMP disabling CUDA because USE_CUDA is set false -- Check size of long double -- Check size of long double - done -- Performing Test COMPILER_SUPPORTS_FLOAT128 -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed -- Found OpenMP_C: -openmp:experimental (found version "2.0") -- Found OpenMP_CXX: -openmp:experimental (found version "2.0") -- Found OpenMP: TRUE (found version "2.0") -- Performing Test COMPILER_SUPPORTS_OPENMP -- Performing Test COMPILER_SUPPORTS_OPENMP - Success -- Performing Test COMPILER_SUPPORTS_OMP_SIMD -- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed -- Configuring build for SLEEF-v3.8.0 Target system: Windows Target processor: Host system: Windows-10.0.26100 Host processor: AMD64 Detected C compiler: MSVC @ C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe CMake: 4.1.0 Make program: C:/Program Files/Microsoft Visual Studio/2022/Community/MSBuild/Current/Bin/amd64/MSBuild.exe Crosscompiling SLEEF. Native build dir: -- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef -- Building shared libs : OFF -- Building static test bins: OFF -- MPFR : LIB_MPFR-NOTFOUND -- GMP : LIBGMP-NOTFOUND -- RT : -- FFTW3 : LIBFFTW3-NOTFOUND -- OPENSSL : -- SDE : SDE_COMMAND-NOTFOUND -- COMPILER_SUPPORTS_OPENMP : FALSE AT_INSTALL_INCLUDE_DIR include/ATen/core core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/aten_interned_strings.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/enum_tag.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/TensorBody.h CMake Error: File E:/PyTorch_Build/pytorch/torch/_utils_internal.py does not exist. CMake Error at caffe2/CMakeLists.txt:241 (configure_file): configure_file Problem configuring file CMake Error: File E:/PyTorch_Build/pytorch/torch/csrc/api/include/torch/version.h.in does not exist. CMake Error at caffe2/CMakeLists.txt:246 (configure_file): configure_file Problem configuring file CMake Error at caffe2/CMakeLists.txt:1398 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch does not contain a CMakeLists.txt file. CMake Warning at CMakeLists.txt:1285 (message): Generated cmake files are only fully tested if one builds with system glog, gflags, and protobuf. Other settings may generate files that are not well tested. CMake Warning at CMakeLists.txt:1349 (message): Generated cmake files are only available when building shared libs. 
-- -- ******** Summary ******** -- General: -- CMake version : 4.1.0 -- CMake command : E:/Python310/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler id : MSVC -- C++ compiler version : 19.44.35215.0 -- Using ccache if found : OFF -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /MP /bigobj /FS /utf-8 -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 -- Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;EXPORT_AOTI_FUNCTIONS;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC -- CMAKE_PREFIX_PATH : E:\Python310\Lib\site-packages -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- USE_GOLD_LINKER : OFF -- -- TORCH_VERSION : 2.9.0 -- BUILD_STATIC_RUNTIME_BENCHMARK: OFF -- BUILD_BINARY : OFF -- BUILD_CUSTOM_PROTOBUF : ON -- Protobuf compiler : -- Protobuf includes : -- Protobuf libraries : -- BUILD_PYTHON : True -- Python version : 3.10.10 -- Python executable : E:\Python310\python.exe -- Python library : E:/Python310/libs/python310.lib -- Python includes : E:/Python310/include -- Python site-package : E:\Python310\Lib\site-packages -- BUILD_SHARED_LIBS : OFF -- CAFFE2_USE_MSVC_STATIC_RUNTIME : ON -- BUILD_TEST : True -- BUILD_JNI : OFF -- BUILD_MOBILE_AUTOGRAD : OFF -- BUILD_LITE_INTERPRETER: OFF -- INTERN_BUILD_MOBILE : -- TRACING_BASED : OFF -- USE_BLAS : 0 -- USE_LAPACK : 0 -- USE_ASAN : OFF -- USE_TSAN : OFF -- USE_CPP_CODE_COVERAGE : OFF -- USE_CUDA : OFF -- USE_XPU : OFF -- USE_ROCM : OFF -- BUILD_NVFUSER : -- USE_EIGEN_FOR_BLAS : ON -- USE_EIGEN_FOR_SPARSE : OFF -- USE_FBGEMM : OFF -- USE_KINETO : ON -- USE_GFLAGS : OFF -- USE_GLOG : OFF -- USE_LITE_PROTO : OFF -- USE_PYTORCH_METAL : OFF -- USE_PYTORCH_METAL_EXPORT : OFF -- USE_MPS : OFF -- CAN_COMPILE_METAL : -- USE_MKL : OFF -- USE_MKLDNN : OFF -- USE_UCC : OFF -- USE_ITT : OFF -- USE_XCCL : OFF -- USE_NCCL : OFF -- Found NVSHMEM : -- USE_NNPACK : OFF -- USE_NUMPY : ON -- USE_OBSERVERS : ON -- USE_OPENCL : OFF -- USE_OPENMP : ON -- USE_MIMALLOC : ON -- USE_MIMALLOC_ON_MKL : OFF -- USE_VULKAN : OFF -- USE_PROF : OFF -- USE_PYTORCH_QNNPACK : OFF -- USE_XNNPACK : OFF -- USE_DISTRIBUTED : OFF -- Public Dependencies : -- Private Dependencies : Threads::Threads;cpuinfo;fp16;caffe2::openmp;fmt::fmt-header-only;kineto -- Public CUDA Deps. : -- Private CUDA Deps. : -- USE_COREML_DELEGATE : OFF -- BUILD_LAZY_TS_BACKEND : ON -- USE_ROCM_KERNEL_ASSERT : OFF -- Performing Test HAS_WMISSING_PROTOTYPES -- Performing Test HAS_WMISSING_PROTOTYPES - Failed -- Performing Test HAS_WERROR_MISSING_PROTOTYPES -- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed -- Configuring incomplete, errors occurred! (.venv) PS E:\PyTorch_Build\pytorch> (.venv) PS E:\PyTorch_Build\pytorch> # 7. 
验证安装 (.venv) PS E:\PyTorch_Build\pytorch> & "E:\Python310\python.exe" -c @" >> import torch >> print(f'PyTorch版本: {torch.__version__}') >> print(f'CUDA可用: {torch.cuda.is_available()}') >> print(f'CUDA版本: {torch.version.cuda}') >> print(f'cuDNN版本: {torch.backends.cudnn.version()}') >> "@ Traceback (most recent call last): File "<string>", line 1, in <module> File "E:\Python310\lib\site-packages\torch-2.9.0a0+git2d31c3d-py3.10-win-amd64.egg\torch\__init__.py", line 281, in <module> _load_dll_libraries() File "E:\Python310\lib\site-packages\torch-2.9.0a0+git2d31c3d-py3.10-win-amd64.egg\torch\__init__.py", line 277, in _load_dll_libraries raise err OSError: [WinError 126] 找不到指定的模块。 Error loading "E:\Python310\lib\site-packages\torch-2.9.0a0+git2d31c3d-py3.10-win-amd64.egg\torch\lib\aoti_custom_ops.dll" or one of its dependencies. (.venv) PS E:\PyTorch_Build\pytorch> Set-Content -Path manual_build.ps1 -Value (Get-Content manual_build.ps1) Get-Content: Cannot find path 'E:\PyTorch_Build\pytorch\manual_build.ps1' because it does not exist. (.venv) PS E:\PyTorch_Build\pytorch> .\manual_build.ps1 (.venv) PS E:\PyTorch_Build\pytorch> # 安装 Docker Desktop (.venv) PS E:\PyTorch_Build\pytorch> Invoke-WebRequest -Uri "https://desktop.docker.com/win/main/amd64/Docker%20Desktop%20Installer.exe" -OutFile "$env:TEMP\docker_install.exe" (.venv) PS E:\PyTorch_Build\pytorch> Start-Process -FilePath "$env:TEMP\docker_install.exe" -ArgumentList "install --quiet" -Wait “
09-02
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 查找实际安装位置 (pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPath = Get-ChildItem -Path C:\ -Recurse -Filter conda.bat -ErrorAction SilentlyContinue | >> Select-Object -First 1 | >> ForEach-Object { $_.DirectoryName } (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> if ($condaPath) { >> $env:PATH = "$condaPath;$env:PATH" >> [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine") >> Write-Host "Conda found at: $condaPath" -ForegroundColor Green >> } else { >> # 如果找不到,使用新安装的Miniconda >> $env:PATH = "C:\Miniconda3\Scripts;$env:PATH" >> } Conda found at: C:\Miniconda3\condabin (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 验证 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda --version conda 25.7.0 (pytorch_env) PS E:\PyTorch_Build\pytorch> # 使用conda安装必要组件 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -c conda-forge -y ` >> libuv=1.46 ` >> openssl=3.1 ` >> numpy ` >> mkl=2024.1 ` >> mkl-include=2024.1 CondaToSNonInteractiveError: Terms of Service have not been accepted for the following channels. Please accept or remove them before proceeding: - https://repo.anaconda.com/pkgs/main - https://repo.anaconda.com/pkgs/r - https://repo.anaconda.com/pkgs/msys2 To accept these channels' Terms of Service, run the following commands: conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/msys2 For information on safely removing channels from your conda configuration, please see the official documentation: https://www.anaconda.com/docs/tools/working-with-conda/channels (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 验证MKL安装 (pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import mkl; print(f'MKL version: {mkl.__version__}')" Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'mkl' (pytorch_env) PS E:\PyTorch_Build\pytorch> # 使用conda安装必要组件 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -c conda-forge -y ` >> libuv=1.46 ` >> openssl=3.1 ` >> numpy ` >> mkl=2024.1 ` >> mkl-include=2024.1 CondaToSNonInteractiveError: Terms of Service have not been accepted for the following channels. 
Please accept or remove them before proceeding: - https://repo.anaconda.com/pkgs/main - https://repo.anaconda.com/pkgs/r - https://repo.anaconda.com/pkgs/msys2 To accept these channels' Terms of Service, run the following commands: conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/msys2 For information on safely removing channels from your conda configuration, please see the official documentation: https://www.anaconda.com/docs/tools/working-with-conda/channels (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 验证MKL安装 (pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import mkl; print(f'MKL version: {mkl.__version__}')" Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'mkl' (pytorch_env) PS E:\PyTorch_Build\pytorch> # 清理构建缓存 (pytorch_env) PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force build, dist Remove-Item: Cannot find path 'E:\PyTorch_Build\pytorch\dist' because it does not exist. (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 设置构建参数 (pytorch_env) PS E:\PyTorch_Build\pytorch> $env:USE_CUDNN = "1" (pytorch_env) PS E:\PyTorch_Build\pytorch> $env:MAX_JOBS = [Environment]::ProcessorCount (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 开始构建(添加详细日志) (pytorch_env) PS E:\PyTorch_Build\pytorch> python setup.py install --cmake 2>&1 | Tee-Object -FilePath build_log.txt Building wheel torch-2.9.0a0+git2d31c3d -- Building version 2.9.0a0+git2d31c3d E:\PyTorch_Build\pytorch\pytorch_env\lib\site-packages\setuptools\_distutils\_msvccompiler.py:12: UserWarning: _get_vc_env is private; find an alternative (pypa/distutils#340) warnings.warn( cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=E:\PyTorch_Build\pytorch\torch -DCMAKE_PREFIX_PATH=E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages -DCUDNN_INCLUDE_DIR=E:\Program Files\NVIDIA\CUNND\v9.12\include -DCUDNN_LIBRARY=E:\Program Files\NVIDIA\CUNND\v9.12\lib\x64\cudnn.lib -DPython_EXECUTABLE=E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe -DTORCH_BUILD_VERSION=2.9.0a0+git2d31c3d -DUSE_CUDNN=1 -DUSE_NUMPY=True E:\PyTorch_Build\pytorch CMake Deprecation Warning at CMakeLists.txt:18 (cmake_policy): The OLD behavior for policy CMP0126 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. 
-- The CXX compiler identification is MSVC 19.44.35215.0 -- The C compiler identification is MSVC 19.44.35215.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found CMake Warning at CMakeLists.txt:425 (message): TensorPipe cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:427 (message): KleidiAI cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:439 (message): Libuv is not installed in current conda env. Set USE_DISTRIBUTED to OFF. Please run command 'conda install -c conda-forge libuv=1.39' to install libuv. -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Compiler does not support SVE extension. Will not build perfkernels. CMake Warning at CMakeLists.txt:845 (message): x64 operating system is required for FBGEMM. Not compiling with FBGEMM. Turn this warning off by USE_FBGEMM=OFF. 
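Note: the "Libuv is not installed in current conda env" warning above is what later forces USE_DISTRIBUTED to OFF in the build summary; the check only sees the environment the build is running in, so libuv has to be installed there before configuring. A sketch, assuming conda is working by this point (the warning itself suggests libuv=1.39; a newer 1.x such as the 1.46 tried earlier should also satisfy the check):

conda install -c conda-forge -y libuv
$env:USE_DISTRIBUTED = "1"   # ask the build to keep distributed support enabled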
-- Performing Test HAS/UTF_8 -- Performing Test HAS/UTF_8 - Success -- Found CUDA: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 (found version "13.0") -- The CUDA compiler identification is NVIDIA 13.0.48 with host compiler MSVC 19.44.35215.0 -- Detecting CUDA compiler ABI info -- Detecting CUDA compiler ABI info - done -- Check for working CUDA compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe - skipped -- Detecting CUDA compile features -- Detecting CUDA compile features - done -- Found CUDAToolkit: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include (found version "13.0.48") -- PyTorch: CUDA detected: 13.0 -- PyTorch: CUDA nvcc is: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- PyTorch: CUDA toolkit directory: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- PyTorch: Header version is: 13.0 -- Found Python: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter CMake Warning at cmake/public/cuda.cmake:140 (message): Failed to compute shorthash for libnvrtc.so Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDNN (missing: CUDNN_LIBRARY_PATH CUDNN_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:201 (message): Cannot find cuDNN library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUSPARSELT (missing: CUSPARSELT_LIBRARY_PATH CUSPARSELT_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:226 (message): Cannot find cuSPARSELt library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDSS (missing: CUDSS_LIBRARY_PATH CUDSS_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:242 (message): Cannot find CUDSS library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- USE_CUFILE is set to 0. Compiling without cuFile support -- Autodetected CUDA architecture(s): 12.0 CMake Warning at cmake/public/cuda.cmake:317 (message): pytorch is not compatible with `CMAKE_CUDA_ARCHITECTURES` and will ignore its value. Please configure `TORCH_CUDA_ARCH_LIST` instead. Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Added CUDA NVCC flags for: -gencode;arch=compute_120,code=sm_120 CMake Warning at cmake/Dependencies.cmake:95 (message): Not compiling with XPU. Could NOT find SYCL. Suppress this warning with -DUSE_XPU=OFF. Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Building using own protobuf under third_party per request. -- Use custom protobuf build. CMake Warning at cmake/ProtoBuf.cmake:37 (message): Ancient protobuf forces CMake compatibility Call Stack (most recent call first): cmake/ProtoBuf.cmake:87 (custom_protobuf_find) cmake/Dependencies.cmake:107 (include) CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. 
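Note: this is where cuDNN support is silently dropped. The paths passed in the cmake invocation earlier in this log (E:\Program Files\NVIDIA\CUNND\v9.12\...) do not contain cudnn.h / cudnn.lib, so FindCUDNN reports CUDNN_LIBRARY_PATH and CUDNN_INCLUDE_PATH as missing and turns USE_CUDNN off. The warning about CMAKE_CUDA_ARCHITECTURES also asks for TORCH_CUDA_ARCH_LIST instead. A hedged sketch for the next attempt, assuming these variables are picked up from the environment the way the first cmake command line suggests; E:\cudnn is a hypothetical location of an unpacked cuDNN 9.x for CUDA 13 and must be replaced with the real one:

$env:CUDNN_INCLUDE_DIR = "E:\cudnn\include"             # must actually contain cudnn.h
$env:CUDNN_LIBRARY     = "E:\cudnn\lib\x64\cudnn.lib"   # must actually exist on disk
$env:USE_CUDNN         = "1"
$env:TORCH_CUDA_ARCH_LIST = "12.0"                      # matches the autodetected architecture above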
-- -- 3.13.0.0 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - not found -- Found Threads: TRUE -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:E:/PyTorch_Build/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- Trying to find preferred BLAS backend of choice: MKL -- MKL_THREADING = OMP -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- MKL_THREADING = OMP CMake Warning at cmake/Dependencies.cmake:213 (message): MKL could not be found. Defaulting to Eigen Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Warning at cmake/Dependencies.cmake:279 (message): Preferred BLAS (MKL) cannot be found, now searching for a general BLAS library Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Looking for sgemm_ -- Looking for sgemm_ - not found -- Cannot 
find a library with BLAS API. Not using BLAS. -- Using pocketfft in directory: E:/PyTorch_Build/pytorch/third_party/pocketfft/ CMake Deprecation Warning at third_party/pthreadpool/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/FXdiv/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/cpuinfo/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- The ASM compiler identification is MSVC CMake Warning (dev) at pytorch_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineASMCompiler.cmake:234 (message): Policy CMP194 is not set: MSVC is not an assembler for language ASM. Run "cmake --help-policy CMP194" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Call Stack (most recent call first): third_party/XNNPACK/CMakeLists.txt:18 (PROJECT) This warning is for project developers. Use -Wno-dev to suppress it. -- Found assembler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Building for XNNPACK_TARGET_PROCESSOR: x86_64 -- Generating microkernels.cmake Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avx256vnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c (1th function) Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-scalar.c No microkernel found in src\reference\binary-elementwise.cc No microkernel found in src\reference\packing.cc No microkernel found in src\reference\unary-elementwise.cc -- Found Git: E:/Program Files/Git/cmd/git.exe (found version "2.51.0.windows.1") -- Google Benchmark version: v1.9.3, normalized to 1.9.3 -- Looking for shm_open in rt -- Looking for shm_open in rt - not found -- Performing Test HAVE_CXX_FLAG_WX -- Performing Test HAVE_CXX_FLAG_WX - Success -- Compiling and running to test HAVE_STD_REGEX -- Performing Test HAVE_STD_REGEX -- success -- Compiling and running to test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_STEADY_CLOCK -- Performing Test HAVE_STEADY_CLOCK -- success -- Compiling and running to test HAVE_PTHREAD_AFFINITY -- Performing Test HAVE_PTHREAD_AFFINITY -- failed to compile CMake Deprecation Warning at third_party/ittapi/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning at cmake/Dependencies.cmake:749 (message): FP16 is only cmake-2.8 compatible Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/FP16/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/psimd/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- Using third party subdirectory Eigen. -- Found Python: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter Development.Module missing components: NumPy CMake Warning at cmake/Dependencies.cmake:826 (message): NumPy could not be found. Not building with NumPy. Suppress this warning with -DUSE_NUMPY=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Using third_party/pybind11. -- pybind11 include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/pybind11/include -- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS) -- Using third_party/opentelemetry-cpp. -- opentelemetry api include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/opentelemetry-cpp/api/include -- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) -- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) -- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) CMake Warning at cmake/Dependencies.cmake:894 (message): Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- Found OpenMP_C: -openmp:experimental -- Found OpenMP_CXX: -openmp:experimental -- Found OpenMP: TRUE -- Adding OpenMP CXX_FLAGS: -openmp:experimental -- Will link against OpenMP libraries: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib -- Found nvtx3: E:/PyTorch_Build/pytorch/third_party/NVTX/c/include -- ROCM_PATH environment variable is not set and C:/opt/rocm does not exist. Building without ROCm support. 
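Note: two warnings in this stretch explain most of the feature loss. "MKL could not be found. Defaulting to Eigen" means the pip-installed mkl-static libraries are not on any path CMake searches, so BLAS/LAPACK end up disabled; and "NumPy could not be found", despite -DUSE_NUMPY=True, simply means numpy is not importable from the build venv. A hedged sketch; the Library\include / Library\lib locations are an assumption about where the pip MKL packages place their files (verify with `pip show -f mkl-include`):

pip install numpy
$env:CMAKE_INCLUDE_PATH = "E:\PyTorch_Build\pytorch\pytorch_env\Library\include"
$env:LIB = "E:\PyTorch_Build\pytorch\pytorch_env\Library\lib;$env:LIB"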
-- Found Python3: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter -- ONNX_PROTOC_EXECUTABLE: $<TARGET_FILE:protobuf::protoc> -- Protobuf_VERSION: Protobuf_VERSION_NOTFOUND Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto -- -- ******** Summary ******** -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/pytorch_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler version : 19.44.35215.0 -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL /EHsc /wd26812 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1 -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages;E:/Program Files/NVIDIA/CUNND/v9.12;E:\Program Files\NVIDIA\CUNND\v9.12;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- CMAKE_MODULE_PATH : E:/PyTorch_Build/pytorch/cmake/Modules;E:/PyTorch_Build/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.18.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_USE_LITE_PROTO : OFF -- USE_PROTOBUF_SHARED_LIBS : OFF -- ONNX_DISABLE_EXCEPTIONS : OFF -- ONNX_DISABLE_STATIC_REGISTRATION : OFF -- ONNX_WERROR : OFF -- ONNX_BUILD_TESTS : OFF -- BUILD_SHARED_LIBS : OFF -- -- Protobuf compiler : $<TARGET_FILE:protobuf::protoc> -- Protobuf includes : -- Protobuf libraries : -- ONNX_BUILD_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Adding -DNDEBUG to compile flags -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - False -- MAGMA not found. Compiling without MAGMA support -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. 
-- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Cannot find a library with BLAS API. Not using BLAS. -- LAPACK requires BLAS -- Cannot find a library with LAPACK API. Not using LAPACK. -- MIOpen not found. Compiling without MIOpen support disabling ROCM because NOT USE_ROCM is set disabling MKLDNN because USE_MKLDNN is not set -- {fmt} version: 11.2.0 -- Build type: Release -- Using Kineto with CUPTI support -- Configuring Kineto dependency: -- KINETO_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto -- KINETO_BUILD_TESTS = OFF -- KINETO_LIBRARY_TYPE = static -- CUDA_SOURCE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CUDA_INCLUDE_DIRS = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- CUDA_cupti_LIBRARY = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/lib64/cupti.lib -- Found CUPTI CMake Deprecation Warning at third_party/kineto/libkineto/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning (dev) at third_party/kineto/libkineto/CMakeLists.txt:15 (find_package): Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules are removed. Run "cmake --help-policy CMP0148" for policy details. Use the cmake_policy command to set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it. -- Found PythonInterp: E:/PyTorch_Build/pytorch/pytorch_env/Scripts/python.exe (found version "3.10.10") -- ROCM_SOURCE_DIR = -- Kineto: FMT_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt -- Kineto: FMT_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- ROCTRACER_INCLUDE_DIR = /include/roctracer -- DYNOLOG_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog/ -- IPCFABRIC_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/ -- Configured Kineto -- Performing Test HAS/WD4624 -- Performing Test HAS/WD4624 - Success -- Performing Test HAS/WD4068 -- Performing Test HAS/WD4068 - Success -- Performing Test HAS/WD4067 -- Performing Test HAS/WD4067 - Success -- Performing Test HAS/WD4267 -- Performing Test HAS/WD4267 - Success -- Performing Test HAS/WD4661 -- Performing Test HAS/WD4661 - Success -- Performing Test HAS/WD4717 -- Performing Test HAS/WD4717 - Success -- Performing Test HAS/WD4244 -- Performing Test HAS/WD4244 - Success -- Performing Test HAS/WD4804 -- Performing Test HAS/WD4804 - Success -- Performing Test HAS/WD4273 -- Performing Test HAS/WD4273 - Success -- Performing Test HAS_WNO_STRINGOP_OVERFLOW -- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed -- -- Architecture: x64 -- Use the C++ compiler to compile (MI_USE_CXX=ON) -- -- Library name : mimalloc -- Version : 2.2.4 -- Build type : release -- C++ Compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Compiler flags : /Zc:__cplusplus -- Compiler defines : MI_CMAKE_BUILD_TYPE=release;MI_BUILD_RELEASE -- Link libraries : psapi;shell32;user32;advapi32;bcrypt -- Build targets : static -- CMake Error at CMakeLists.txt:1264 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch/headeronly does not contain a CMakeLists.txt file. 
-- don't use NUMA -- Looking for backtrace -- Looking for backtrace - not found -- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR) -- Autodetected CUDA architecture(s): 12.0 -- Autodetected CUDA architecture(s): 12.0 -- Autodetected CUDA architecture(s): 12.0 -- headers outputs: torch\csrc\inductor\aoti_torch\generated\c_shim_cpu.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_aten.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_cuda.h not found -- sources outputs: -- declarations_yaml outputs: -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed -- Using ATen parallel backend: OMP -- Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY OPENSSL_INCLUDE_DIR) -- Check size of long double -- Check size of long double - done -- Performing Test COMPILER_SUPPORTS_FLOAT128 -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed -- Performing Test COMPILER_SUPPORTS_SSE2 -- Performing Test COMPILER_SUPPORTS_SSE2 - Success -- Performing Test COMPILER_SUPPORTS_SSE4 -- Performing Test COMPILER_SUPPORTS_SSE4 - Success -- Performing Test COMPILER_SUPPORTS_AVX -- Performing Test COMPILER_SUPPORTS_AVX - Success -- Performing Test COMPILER_SUPPORTS_FMA4 -- Performing Test COMPILER_SUPPORTS_FMA4 - Success -- Performing Test COMPILER_SUPPORTS_AVX2 -- Performing Test COMPILER_SUPPORTS_AVX2 - Success -- Performing Test COMPILER_SUPPORTS_AVX512F -- Performing Test COMPILER_SUPPORTS_AVX512F - Success -- Found OpenMP_C: -openmp:experimental (found version "2.0") -- Found OpenMP_CXX: -openmp:experimental (found version "2.0") -- Found OpenMP_CUDA: -openmp (found version "2.0") -- Found OpenMP: TRUE (found version "2.0") -- Performing Test COMPILER_SUPPORTS_OPENMP -- Performing Test COMPILER_SUPPORTS_OPENMP - Success -- Performing Test COMPILER_SUPPORTS_OMP_SIMD -- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed -- Configuring build for SLEEF-v3.8.0 Target system: Windows-10.0.26100 Target processor: AMD64 Host system: Windows-10.0.26100 Host processor: AMD64 Detected C compiler: MSVC @ C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe CMake: 4.1.0 Make program: E:/PyTorch_Build/pytorch/pytorch_env/Scripts/ninja.exe -- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef -- Building shared libs : OFF -- Building static test bins: OFF -- MPFR : LIB_MPFR-NOTFOUND -- GMP : LIBGMP-NOTFOUND -- RT : -- FFTW3 : LIBFFTW3-NOTFOUND -- OPENSSL : -- SDE : SDE_COMMAND-NOTFOUND -- COMPILER_SUPPORTS_OPENMP : FALSE AT_INSTALL_INCLUDE_DIR include/ATen/core core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/aten_interned_strings.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/enum_tag.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/TensorBody.h CMake Error: File E:/PyTorch_Build/pytorch/torch/_utils_internal.py does not exist. 
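Note: the "Could NOT find OpenSSL" line above already names the fix: point OPENSSL_ROOT_DIR at the root of a real OpenSSL installation once one exists (for example after the conda-forge install of openssl succeeds). The path below is an assumption for a Miniconda base environment and must be adjusted to wherever OpenSSL actually lands:

$env:OPENSSL_ROOT_DIR = "C:\Miniconda3\Library"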
CMake Error at caffe2/CMakeLists.txt:241 (configure_file): configure_file Problem configuring file CMake Error: File E:/PyTorch_Build/pytorch/torch/csrc/api/include/torch/version.h.in does not exist. CMake Error at caffe2/CMakeLists.txt:246 (configure_file): configure_file Problem configuring file -- NVSHMEM not found, not building with NVSHMEM support. CMake Error at caffe2/CMakeLists.txt:1398 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch does not contain a CMakeLists.txt file. CMake Warning at CMakeLists.txt:1285 (message): Generated cmake files are only fully tested if one builds with system glog, gflags, and protobuf. Other settings may generate files that are not well tested. -- -- ******** Summary ******** -- General: -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/pytorch_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler id : MSVC -- C++ compiler version : 19.44.35215.0 -- Using ccache if found : OFF -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 -- Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;EXPORT_AOTI_FUNCTIONS;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages;E:/Program Files/NVIDIA/CUNND/v9.12;E:\Program Files\NVIDIA\CUNND\v9.12;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- USE_GOLD_LINKER : OFF -- -- TORCH_VERSION : 2.9.0 -- BUILD_STATIC_RUNTIME_BENCHMARK: OFF -- BUILD_BINARY : OFF -- BUILD_CUSTOM_PROTOBUF : ON -- Link local protobuf : ON -- BUILD_PYTHON : True -- Python version : 3.10.10 -- Python executable : E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe -- Python library : E:/Python310/libs/python310.lib -- Python includes : E:/Python310/Include -- Python site-package : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages -- BUILD_SHARED_LIBS : ON -- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF -- BUILD_TEST : True -- BUILD_JNI : OFF -- BUILD_MOBILE_AUTOGRAD : OFF -- BUILD_LITE_INTERPRETER: OFF -- INTERN_BUILD_MOBILE : -- TRACING_BASED : OFF -- USE_BLAS : 0 -- USE_LAPACK : 0 -- USE_ASAN : OFF -- USE_TSAN : OFF -- USE_CPP_CODE_COVERAGE : OFF -- USE_CUDA : ON -- CUDA static link : OFF -- USE_CUDNN : OFF -- USE_CUSPARSELT : OFF -- USE_CUDSS : OFF -- USE_CUFILE : OFF -- CUDA version : 13.0 -- USE_FLASH_ATTENTION : OFF -- USE_MEM_EFF_ATTENTION : ON -- CUDA root directory : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CUDA library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cuda.lib -- cudart library : E:/Program Files/NVIDIA GPU Computing 
Toolkit/CUDA/v13.0/lib/x64/cudart.lib -- cublas library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cublas.lib -- cufft library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cufft.lib -- curand library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/curand.lib -- cusparse library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cusparse.lib -- nvrtc : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/nvrtc.lib -- CUDA include path : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include -- NVCC executable : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- CUDA compiler : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- CUDA flags : -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -Xcompiler /Zc:__cplusplus -Xcompiler /w -w -Xcompiler /FS -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_120,code=sm_120 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -- CUDA host compiler : -- CUDA --device-c : OFF -- USE_TENSORRT : -- USE_XPU : OFF -- USE_ROCM : OFF -- BUILD_NVFUSER : -- USE_EIGEN_FOR_BLAS : ON -- USE_EIGEN_FOR_SPARSE : OFF -- USE_FBGEMM : OFF -- USE_KINETO : ON -- USE_GFLAGS : OFF -- USE_GLOG : OFF -- USE_LITE_PROTO : OFF -- USE_PYTORCH_METAL : OFF -- USE_PYTORCH_METAL_EXPORT : OFF -- USE_MPS : OFF -- CAN_COMPILE_METAL : -- USE_MKL : OFF -- USE_MKLDNN : OFF -- USE_UCC : OFF -- USE_ITT : ON -- USE_XCCL : OFF -- USE_NCCL : OFF -- Found NVSHMEM : -- USE_NNPACK : OFF -- USE_NUMPY : OFF -- USE_OBSERVERS : ON -- USE_OPENCL : OFF -- USE_OPENMP : ON -- USE_MIMALLOC : ON -- USE_MIMALLOC_ON_MKL : OFF -- USE_VULKAN : OFF -- USE_PROF : OFF -- USE_PYTORCH_QNNPACK : OFF -- USE_XNNPACK : ON -- USE_DISTRIBUTED : OFF -- Public Dependencies : -- Private Dependencies : Threads::Threads;pthreadpool;cpuinfo;XNNPACK;microkernels-prod;ittnotify;fp16;caffe2::openmp;fmt::fmt-header-only;kineto -- Public CUDA Deps. : -- Private CUDA Deps. : caffe2::curand;caffe2::cufft;caffe2::cublas;fmt::fmt-header-only;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart_static.lib;CUDA::cusparse;CUDA::cufft;CUDA::cusolver;ATEN_CUDA_FILES_GEN_LIB -- USE_COREML_DELEGATE : OFF -- BUILD_LAZY_TS_BACKEND : ON -- USE_ROCM_KERNEL_ASSERT : OFF -- Performing Test HAS_WMISSING_PROTOTYPES -- Performing Test HAS_WMISSING_PROTOTYPES - Failed -- Performing Test HAS_WERROR_MISSING_PROTOTYPES -- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed -- Configuring incomplete, errors occurred! 
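Note: the configure step fails outright here, and the decisive errors are not about missing dependencies: torch/headeronly, torch/_utils_internal.py, torch/csrc/api/include/torch/version.h.in and torch/CMakeLists.txt are all reported missing, which points at an incomplete or damaged checkout of the pytorch source tree rather than at the toolchain. A hedged sketch of how to check and repair that before reconfiguring:

git status                               # look for files reported as deleted under torch/
git checkout -- torch                    # restore anything that was removed locally
git submodule update --init --recursive  # make sure third_party/ is fully populated
python setup.py clean                    # start the next configure from a clean state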
-- Checkout nccl release tag: v2.27.5-1
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Verify the build
(pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import torch; print(f'cuDNN version: {torch.backends.cudnn.version()}')"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AttributeError: module 'torch' has no attribute 'backends'
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Check the core components
(pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import torch;
>> print(f'PyTorch: {torch.__version__}');
>> print(f'CUDA available: {torch.cuda.is_available()}');
>> print(f'cuDNN: {torch.backends.cudnn.version()}');
>> print(f'MKL: {torch.__config__.mkl_is_available()}');
>> print(f'Libuv: {torch.distributed.is_available()}')"
Traceback (most recent call last):
  File "<string>", line 2, in <module>
AttributeError: module 'torch' has no attribute '__version__'
(pytorch_env) PS E:\PyTorch_Build\pytorch>
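Note: the verification one-liners above fail for two compounding reasons: the build never produced a compiled torch._C extension (configuration ended with errors), and they are run from inside E:\PyTorch_Build\pytorch, so `import torch` resolves to the incomplete ./torch source directory rather than an installed package, which is why even torch.__version__ is missing. A small sketch for re-checking after a successful build:

Set-Location $env:TEMP    # leave the source tree so the local ./torch package is not picked up
python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.backends.cudnn.version())"
Set-Location E:\PyTorch_Build\pytorch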
PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch
PS E:\PyTorch_Build\pytorch> # 1. Activate the virtual environment
PS E:\PyTorch_Build\pytorch> .\pytorch_env\Scripts\activate
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 2. Fix the conda path (only needs to be done once)
(pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPath = "${env:USERPROFILE}\miniconda3\Scripts"
(pytorch_env) PS E:\PyTorch_Build\pytorch> $env:PATH += ";$condaPath"
(pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine")
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 3. Verify the fix
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda --version # should print the conda version
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 1. Install the correct MKL version
(pytorch_env) PS E:\PyTorch_Build\pytorch> pip uninstall -y mkl-static mkl-include
Found existing installation: mkl-static 2024.1.0
Uninstalling mkl-static-2024.1.0:
  Successfully uninstalled mkl-static-2024.1.0
Found existing installation: mkl-include 2024.1.0
Uninstalling mkl-include-2024.1.0:
  Successfully uninstalled mkl-include-2024.1.0
(pytorch_env) PS E:\PyTorch_Build\pytorch> pip install mkl-static==2024.1 mkl-include==2024.1
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting mkl-static==2024.1
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/d8/f0/3b9976df82906d8f3244213b6d8beb67cda19ab5b0645eb199da3c826127/mkl_static-2024.1.0-py2.py3-none-win_amd64.whl (220.8 MB)
Collecting mkl-include==2024.1
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/06/1b/f05201146f7f12bf871fa2c62096904317447846b5d23f3560a89b4bbaae/mkl_include-2024.1.0-py2.py3-none-win_amd64.whl (1.3 MB)
Requirement already satisfied: intel-openmp==2024.* in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from mkl-static==2024.1) (2024.2.1)
Requirement already satisfied: tbb-devel==2021.* in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from mkl-static==2024.1) (2021.13.1)
Requirement already satisfied: intel-cmplr-lib-ur==2024.2.1 in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from intel-openmp==2024.*->mkl-static==2024.1) (2024.2.1)
Requirement already satisfied: tbb==2021.13.1 in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from tbb-devel==2021.*->mkl-static==2024.1) (2021.13.1)
Installing collected packages: mkl-include, mkl-static
Successfully installed mkl-include-2024.1.0 mkl-static-2024.1.0
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 2. Install libuv
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -c conda-forge libuv=1.46
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 3. Install OpenSSL
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -c conda-forge openssl=3.1
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 4. Verify the installation
(pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import mkl; print('MKL version:', mkl.__version__)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'mkl'
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda list | Select-String "libuv|openssl"
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Verify all the key components
(pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import mkl; print('✓ MKL installed')"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'mkl'
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda list | Select-String "libuv|openssl"
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch> dir "E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin\cudnn*"
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Verify environment variables
(pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import os; print('Environment variable check:');
>> print('CUDNN_PATH:', os.getenv('CUDA_PATH'));
>> print('CONDA_PREFIX:', os.getenv('CONDA_PREFIX'))"
Environment variable check:
CUDNN_PATH: E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0
CONDA_PREFIX: None
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Clean and rebuild
(pytorch_env) PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force build
(pytorch_env) PS E:\PyTorch_Build\pytorch> python setup.py install
Building wheel torch-2.9.0a0+git2d31c3d
-- Building version 2.9.0a0+git2d31c3d
E:\PyTorch_Build\pytorch\pytorch_env\lib\site-packages\setuptools\_distutils\_msvccompiler.py:12: UserWarning: _get_vc_env is private; find an alternative (pypa/distutils#340)
  warnings.warn(
-- Checkout nccl release tag: v2.27.5-1
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=E:\PyTorch_Build\pytorch\torch -DCMAKE_PREFIX_PATH=E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages -DPython_EXECUTABLE=E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe -DTORCH_BUILD_VERSION=2.9.0a0+git2d31c3d -DUSE_NUMPY=True E:\PyTorch_Build\pytorch
CMake Deprecation Warning at CMakeLists.txt:18 (cmake_policy):
  The OLD behavior for policy CMP0126 will be removed from a future version of CMake.
  The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD.
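Note: conda stops being found in this new session because the PATH fix guesses ${env:USERPROFILE}\miniconda3\Scripts, while the search earlier in this log had already located conda at C:\Miniconda3\condabin; writing the wrong guess into the machine PATH does not make it exist. CONDA_PREFIX: None also confirms that only the pip venv is active, which is why CMake keeps reporting libuv as "not installed in current conda env", and the empty dir listing shows no cuDNN files under the CUDA v13.0\bin directory. Note as well that this rebuild no longer passes USE_CUDNN or the CUDNN_* paths: those were set in the previous PowerShell session and are gone here. A short sketch using the location already discovered above:

$env:PATH = "C:\Miniconda3\condabin;$env:PATH"
conda --version              # should now print conda 25.7.0, as it did earlier
$env:USE_CUDNN = "1"         # per-session build switches must be re-set before running setup.py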
-- The CXX compiler identification is MSVC 19.44.35215.0 -- The C compiler identification is MSVC 19.44.35215.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found CMake Warning at CMakeLists.txt:425 (message): TensorPipe cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:427 (message): KleidiAI cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:439 (message): Libuv is not installed in current conda env. Set USE_DISTRIBUTED to OFF. Please run command 'conda install -c conda-forge libuv=1.39' to install libuv. -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Compiler does not support SVE extension. Will not build perfkernels. CMake Warning at CMakeLists.txt:845 (message): x64 operating system is required for FBGEMM. Not compiling with FBGEMM. Turn this warning off by USE_FBGEMM=OFF. 
-- Performing Test HAS/UTF_8 -- Performing Test HAS/UTF_8 - Success -- Found CUDA: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 (found version "13.0") -- The CUDA compiler identification is NVIDIA 13.0.48 with host compiler MSVC 19.44.35215.0 -- Detecting CUDA compiler ABI info -- Detecting CUDA compiler ABI info - done -- Check for working CUDA compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe - skipped -- Detecting CUDA compile features -- Detecting CUDA compile features - done -- Found CUDAToolkit: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include (found version "13.0.48") -- PyTorch: CUDA detected: 13.0 -- PyTorch: CUDA nvcc is: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- PyTorch: CUDA toolkit directory: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- PyTorch: Header version is: 13.0 -- Found Python: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter CMake Warning at cmake/public/cuda.cmake:140 (message): Failed to compute shorthash for libnvrtc.so Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDNN (missing: CUDNN_LIBRARY_PATH CUDNN_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:201 (message): Cannot find cuDNN library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUSPARSELT (missing: CUSPARSELT_LIBRARY_PATH CUSPARSELT_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:226 (message): Cannot find cuSPARSELt library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDSS (missing: CUDSS_LIBRARY_PATH CUDSS_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:242 (message): Cannot find CUDSS library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- USE_CUFILE is set to 0. Compiling without cuFile support -- Autodetected CUDA architecture(s): 12.0 CMake Warning at cmake/public/cuda.cmake:317 (message): pytorch is not compatible with `CMAKE_CUDA_ARCHITECTURES` and will ignore its value. Please configure `TORCH_CUDA_ARCH_LIST` instead. Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Added CUDA NVCC flags for: -gencode;arch=compute_120,code=sm_120 CMake Warning at cmake/Dependencies.cmake:95 (message): Not compiling with XPU. Could NOT find SYCL. Suppress this warning with -DUSE_XPU=OFF. Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Building using own protobuf under third_party per request. -- Use custom protobuf build. CMake Warning at cmake/ProtoBuf.cmake:37 (message): Ancient protobuf forces CMake compatibility Call Stack (most recent call first): cmake/ProtoBuf.cmake:87 (custom_protobuf_find) cmake/Dependencies.cmake:107 (include) CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. 
-- -- 3.13.0.0 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - not found -- Found Threads: TRUE -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:E:/PyTorch_Build/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- Trying to find preferred BLAS backend of choice: MKL -- MKL_THREADING = OMP -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- MKL_THREADING = OMP CMake Warning at cmake/Dependencies.cmake:213 (message): MKL could not be found. Defaulting to Eigen Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Warning at cmake/Dependencies.cmake:279 (message): Preferred BLAS (MKL) cannot be found, now searching for a general BLAS library Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Looking for sgemm_ -- Looking for sgemm_ - not found -- Cannot 
find a library with BLAS API. Not using BLAS. -- Using pocketfft in directory: E:/PyTorch_Build/pytorch/third_party/pocketfft/ CMake Deprecation Warning at third_party/pthreadpool/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/FXdiv/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/cpuinfo/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- The ASM compiler identification is MSVC CMake Warning (dev) at pytorch_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineASMCompiler.cmake:234 (message): Policy CMP194 is not set: MSVC is not an assembler for language ASM. Run "cmake --help-policy CMP194" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Call Stack (most recent call first): third_party/XNNPACK/CMakeLists.txt:18 (PROJECT) This warning is for project developers. Use -Wno-dev to suppress it. -- Found assembler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Building for XNNPACK_TARGET_PROCESSOR: x86_64 -- Generating microkernels.cmake Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avx256vnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c (1th function) Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-scalar.c No microkernel found in src\reference\binary-elementwise.cc No microkernel found in src\reference\packing.cc No microkernel found in src\reference\unary-elementwise.cc -- Found Git: E:/Program Files/Git/cmd/git.exe (found version "2.51.0.windows.1") -- Google Benchmark version: v1.9.3, normalized to 1.9.3 -- Looking for shm_open in rt -- Looking for shm_open in rt - not found -- Performing Test HAVE_CXX_FLAG_WX -- Performing Test HAVE_CXX_FLAG_WX - Success -- Compiling and running to test HAVE_STD_REGEX -- Performing Test HAVE_STD_REGEX -- success -- Compiling and running to test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_STEADY_CLOCK -- Performing Test HAVE_STEADY_CLOCK -- success -- Compiling and running to test HAVE_PTHREAD_AFFINITY -- Performing Test HAVE_PTHREAD_AFFINITY -- failed to compile CMake Deprecation Warning at third_party/ittapi/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning at cmake/Dependencies.cmake:749 (message): FP16 is only cmake-2.8 compatible Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/FP16/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/psimd/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- Using third party subdirectory Eigen. -- Found Python: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter Development.Module missing components: NumPy CMake Warning at cmake/Dependencies.cmake:826 (message): NumPy could not be found. Not building with NumPy. Suppress this warning with -DUSE_NUMPY=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Using third_party/pybind11. -- pybind11 include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/pybind11/include -- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS) -- Using third_party/opentelemetry-cpp. -- opentelemetry api include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/opentelemetry-cpp/api/include -- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) -- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) -- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) CMake Warning at cmake/Dependencies.cmake:894 (message): Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- Found OpenMP_C: -openmp:experimental -- Found OpenMP_CXX: -openmp:experimental -- Found OpenMP: TRUE -- Adding OpenMP CXX_FLAGS: -openmp:experimental -- Will link against OpenMP libraries: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib -- Found nvtx3: E:/PyTorch_Build/pytorch/third_party/NVTX/c/include -- ROCM_PATH environment variable is not set and C:/opt/rocm does not exist. Building without ROCm support. 
-- Found Python3: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter -- ONNX_PROTOC_EXECUTABLE: $<TARGET_FILE:protobuf::protoc> -- Protobuf_VERSION: Protobuf_VERSION_NOTFOUND Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto -- -- ******** Summary ******** -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/pytorch_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler version : 19.44.35215.0 -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL /EHsc /wd26812 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1 -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- CMAKE_MODULE_PATH : E:/PyTorch_Build/pytorch/cmake/Modules;E:/PyTorch_Build/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.18.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_USE_LITE_PROTO : OFF -- USE_PROTOBUF_SHARED_LIBS : OFF -- ONNX_DISABLE_EXCEPTIONS : OFF -- ONNX_DISABLE_STATIC_REGISTRATION : OFF -- ONNX_WERROR : OFF -- ONNX_BUILD_TESTS : OFF -- BUILD_SHARED_LIBS : OFF -- -- Protobuf compiler : $<TARGET_FILE:protobuf::protoc> -- Protobuf includes : -- Protobuf libraries : -- ONNX_BUILD_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Adding -DNDEBUG to compile flags -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - False -- MAGMA not found. Compiling without MAGMA support -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. 
-- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Cannot find a library with BLAS API. Not using BLAS. -- LAPACK requires BLAS -- Cannot find a library with LAPACK API. Not using LAPACK. disabling ROCM because NOT USE_ROCM is set -- MIOpen not found. Compiling without MIOpen support disabling MKLDNN because USE_MKLDNN is not set -- {fmt} version: 11.2.0 -- Build type: Release -- Using Kineto with CUPTI support -- Configuring Kineto dependency: -- KINETO_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto -- KINETO_BUILD_TESTS = OFF -- KINETO_LIBRARY_TYPE = static -- CUDA_SOURCE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CUDA_INCLUDE_DIRS = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- CUDA_cupti_LIBRARY = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/lib64/cupti.lib -- Found CUPTI CMake Deprecation Warning at third_party/kineto/libkineto/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning (dev) at third_party/kineto/libkineto/CMakeLists.txt:15 (find_package): Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules are removed. Run "cmake --help-policy CMP0148" for policy details. Use the cmake_policy command to set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it. -- Found PythonInterp: E:/PyTorch_Build/pytorch/pytorch_env/Scripts/python.exe (found version "3.10.10") -- ROCM_SOURCE_DIR = -- Kineto: FMT_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt -- Kineto: FMT_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- ROCTRACER_INCLUDE_DIR = /include/roctracer -- DYNOLOG_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog/ -- IPCFABRIC_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/ -- Configured Kineto -- Performing Test HAS/WD4624 -- Performing Test HAS/WD4624 - Success -- Performing Test HAS/WD4068 -- Performing Test HAS/WD4068 - Success -- Performing Test HAS/WD4067 -- Performing Test HAS/WD4067 - Success -- Performing Test HAS/WD4267 -- Performing Test HAS/WD4267 - Success -- Performing Test HAS/WD4661 -- Performing Test HAS/WD4661 - Success -- Performing Test HAS/WD4717 -- Performing Test HAS/WD4717 - Success -- Performing Test HAS/WD4244 -- Performing Test HAS/WD4244 - Success -- Performing Test HAS/WD4804 -- Performing Test HAS/WD4804 - Success -- Performing Test HAS/WD4273 -- Performing Test HAS/WD4273 - Success -- Performing Test HAS_WNO_STRINGOP_OVERFLOW -- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed -- -- Architecture: x64 -- Use the C++ compiler to compile (MI_USE_CXX=ON) -- -- Library name : mimalloc -- Version : 2.2.4 -- Build type : release -- C++ Compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Compiler flags : /Zc:__cplusplus -- Compiler defines : MI_CMAKE_BUILD_TYPE=release;MI_BUILD_RELEASE -- Link libraries : psapi;shell32;user32;advapi32;bcrypt -- Build targets : static -- CMake Error at CMakeLists.txt:1264 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch/headeronly does not contain a CMakeLists.txt file. 
-- don't use NUMA -- Looking for backtrace -- Looking for backtrace - not found -- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR) -- Autodetected CUDA architecture(s): 12.0 -- Autodetected CUDA architecture(s): 12.0 -- Autodetected CUDA architecture(s): 12.0 -- headers outputs: torch\csrc\inductor\aoti_torch\generated\c_shim_cpu.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_aten.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_cuda.h not found -- sources outputs: -- declarations_yaml outputs: -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed -- Using ATen parallel backend: OMP -- Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY OPENSSL_INCLUDE_DIR) -- Check size of long double -- Check size of long double - done -- Performing Test COMPILER_SUPPORTS_FLOAT128 -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed -- Performing Test COMPILER_SUPPORTS_SSE2 -- Performing Test COMPILER_SUPPORTS_SSE2 - Success -- Performing Test COMPILER_SUPPORTS_SSE4 -- Performing Test COMPILER_SUPPORTS_SSE4 - Success -- Performing Test COMPILER_SUPPORTS_AVX -- Performing Test COMPILER_SUPPORTS_AVX - Success -- Performing Test COMPILER_SUPPORTS_FMA4 -- Performing Test COMPILER_SUPPORTS_FMA4 - Success -- Performing Test COMPILER_SUPPORTS_AVX2 -- Performing Test COMPILER_SUPPORTS_AVX2 - Success -- Performing Test COMPILER_SUPPORTS_AVX512F -- Performing Test COMPILER_SUPPORTS_AVX512F - Success -- Found OpenMP_C: -openmp:experimental (found version "2.0") -- Found OpenMP_CXX: -openmp:experimental (found version "2.0") -- Found OpenMP_CUDA: -openmp (found version "2.0") -- Found OpenMP: TRUE (found version "2.0") -- Performing Test COMPILER_SUPPORTS_OPENMP -- Performing Test COMPILER_SUPPORTS_OPENMP - Success -- Performing Test COMPILER_SUPPORTS_OMP_SIMD -- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed -- Configuring build for SLEEF-v3.8.0 Target system: Windows-10.0.26100 Target processor: AMD64 Host system: Windows-10.0.26100 Host processor: AMD64 Detected C compiler: MSVC @ C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe CMake: 4.1.0 Make program: E:/PyTorch_Build/pytorch/pytorch_env/Scripts/ninja.exe -- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef -- Building shared libs : OFF -- Building static test bins: OFF -- MPFR : LIB_MPFR-NOTFOUND -- GMP : LIBGMP-NOTFOUND -- RT : -- FFTW3 : LIBFFTW3-NOTFOUND -- OPENSSL : -- SDE : SDE_COMMAND-NOTFOUND -- COMPILER_SUPPORTS_OPENMP : FALSE AT_INSTALL_INCLUDE_DIR include/ATen/core core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/aten_interned_strings.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/enum_tag.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/TensorBody.h CMake Error: File E:/PyTorch_Build/pytorch/torch/_utils_internal.py does not exist. 
CMake Error at caffe2/CMakeLists.txt:241 (configure_file): configure_file Problem configuring file CMake Error: File E:/PyTorch_Build/pytorch/torch/csrc/api/include/torch/version.h.in does not exist. CMake Error at caffe2/CMakeLists.txt:246 (configure_file): configure_file Problem configuring file -- NVSHMEM not found, not building with NVSHMEM support. CMake Error at caffe2/CMakeLists.txt:1398 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch does not contain a CMakeLists.txt file. CMake Warning at CMakeLists.txt:1285 (message): Generated cmake files are only fully tested if one builds with system glog, gflags, and protobuf. Other settings may generate files that are not well tested. -- -- ******** Summary ******** -- General: -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/pytorch_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler id : MSVC -- C++ compiler version : 19.44.35215.0 -- Using ccache if found : OFF -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 -- Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;EXPORT_AOTI_FUNCTIONS;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- USE_GOLD_LINKER : OFF -- -- TORCH_VERSION : 2.9.0 -- BUILD_STATIC_RUNTIME_BENCHMARK: OFF -- BUILD_BINARY : OFF -- BUILD_CUSTOM_PROTOBUF : ON -- Link local protobuf : ON -- BUILD_PYTHON : True -- Python version : 3.10.10 -- Python executable : E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe -- Python library : E:/Python310/libs/python310.lib -- Python includes : E:/Python310/Include -- Python site-package : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages -- BUILD_SHARED_LIBS : ON -- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF -- BUILD_TEST : True -- BUILD_JNI : OFF -- BUILD_MOBILE_AUTOGRAD : OFF -- BUILD_LITE_INTERPRETER: OFF -- INTERN_BUILD_MOBILE : -- TRACING_BASED : OFF -- USE_BLAS : 0 -- USE_LAPACK : 0 -- USE_ASAN : OFF -- USE_TSAN : OFF -- USE_CPP_CODE_COVERAGE : OFF -- USE_CUDA : ON -- CUDA static link : OFF -- USE_CUDNN : OFF -- USE_CUSPARSELT : OFF -- USE_CUDSS : OFF -- USE_CUFILE : OFF -- CUDA version : 13.0 -- USE_FLASH_ATTENTION : OFF -- USE_MEM_EFF_ATTENTION : ON -- CUDA root directory : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CUDA library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cuda.lib -- cudart library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart.lib -- cublas library : E:/Program Files/NVIDIA GPU Computing 
Toolkit/CUDA/v13.0/lib/x64/cublas.lib -- cufft library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cufft.lib -- curand library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/curand.lib -- cusparse library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cusparse.lib -- nvrtc : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/nvrtc.lib -- CUDA include path : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include -- NVCC executable : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- CUDA compiler : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- CUDA flags : -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -Xcompiler /Zc:__cplusplus -Xcompiler /w -w -Xcompiler /FS -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_120,code=sm_120 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -- CUDA host compiler : -- CUDA --device-c : OFF -- USE_TENSORRT : -- USE_XPU : OFF -- USE_ROCM : OFF -- BUILD_NVFUSER : -- USE_EIGEN_FOR_BLAS : ON -- USE_EIGEN_FOR_SPARSE : OFF -- USE_FBGEMM : OFF -- USE_KINETO : ON -- USE_GFLAGS : OFF -- USE_GLOG : OFF -- USE_LITE_PROTO : OFF -- USE_PYTORCH_METAL : OFF -- USE_PYTORCH_METAL_EXPORT : OFF -- USE_MPS : OFF -- CAN_COMPILE_METAL : -- USE_MKL : OFF -- USE_MKLDNN : OFF -- USE_UCC : OFF -- USE_ITT : ON -- USE_XCCL : OFF -- USE_NCCL : OFF -- Found NVSHMEM : -- USE_NNPACK : OFF -- USE_NUMPY : OFF -- USE_OBSERVERS : ON -- USE_OPENCL : OFF -- USE_OPENMP : ON -- USE_MIMALLOC : ON -- USE_MIMALLOC_ON_MKL : OFF -- USE_VULKAN : OFF -- USE_PROF : OFF -- USE_PYTORCH_QNNPACK : OFF -- USE_XNNPACK : ON -- USE_DISTRIBUTED : OFF -- Public Dependencies : -- Private Dependencies : Threads::Threads;pthreadpool;cpuinfo;XNNPACK;microkernels-prod;ittnotify;fp16;caffe2::openmp;fmt::fmt-header-only;kineto -- Public CUDA Deps. : -- Private CUDA Deps. : caffe2::curand;caffe2::cufft;caffe2::cublas;fmt::fmt-header-only;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart_static.lib;CUDA::cusparse;CUDA::cufft;CUDA::cusolver;ATEN_CUDA_FILES_GEN_LIB -- USE_COREML_DELEGATE : OFF -- BUILD_LAZY_TS_BACKEND : ON -- USE_ROCM_KERNEL_ASSERT : OFF -- Performing Test HAS_WMISSING_PROTOTYPES -- Performing Test HAS_WMISSING_PROTOTYPES - Failed -- Performing Test HAS_WERROR_MISSING_PROTOTYPES -- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed -- Configuring incomplete, errors occurred! 
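This configure run, the one that ends with "Configuring incomplete, errors occurred!", does not fail because of CUDA or cuDNN. The fatal messages are that E:/PyTorch_Build/pytorch/torch/headeronly and E:/PyTorch_Build/pytorch/torch contain no CMakeLists.txt, and that torch/_utils_internal.py and torch/csrc/api/include/torch/version.h.in do not exist. All of them point at an incomplete or partially deleted source tree (files removed by an earlier clean-up, or submodules never initialized), so no amount of cuDNN or MKL tweaking will get past this point. Below is a minimal PowerShell sketch for checking and repairing the tree before reconfiguring; the paths are taken from the errors above, and git checkout -- torch assumes the checkout is a normal git clone with those files tracked.

# Sketch: verify the PyTorch source tree is complete before configuring again.
# Paths follow the errors in the log above; adjust if the checkout lives elsewhere.
Set-Location E:\PyTorch_Build\pytorch

# Files CMake complained about; any that is missing means the tree is incomplete.
$expected = @(
    "torch\CMakeLists.txt",
    "torch\headeronly\CMakeLists.txt",
    "torch\_utils_internal.py",
    "torch\csrc\api\include\torch\version.h.in"
)
foreach ($f in $expected) {
    if (-not (Test-Path $f)) { Write-Host "MISSING: $f" -ForegroundColor Red }
}

# Restore tracked files under torch\ and fetch all third_party submodules.
git checkout -- torch
git submodule sync
git submodule update --init --recursive

If git status shows large parts of torch\ missing or untracked, the safer route is a fresh git clone --recursive https://github.com/pytorch/pytorch into a clean directory and repeating the configuration there.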
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 永久修复conda命令不可用问题 (pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPaths = @( >> "$env:USERPROFILE\miniconda3\Scripts", >> "$env:USERPROFILE\anaconda3\Scripts", >> "C:\ProgramData\miniconda3\Scripts" >> ) (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> foreach ($path in $condaPaths) { >> if (Test-Path $path) { >> $env:PATH = "$path;$env:PATH" >> [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine") >> break >> } >> } (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 验证修复 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda --version conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> # 设置 cuDNN v9.12 路径 (pytorch_env) PS E:\PyTorch_Build\pytorch> $cudnnPath = "E:\Program Files\NVIDIA\CUNND\v9.12" (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 添加到环境变量 (pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_ROOT_DIR = $cudnnPath (pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_INCLUDE_DIR = "$cudnnPath\include" (pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_LIBRARY = "$cudnnPath\lib\x64\cudnn.lib" (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 永久生效 (pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_ROOT_DIR", $cudnnPath, "Machine") (pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_INCLUDE_DIR", "$cudnnPath\include", "Machine") (pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_LIBRARY", "$cudnnPath\lib\x64\cudnn.lib", "Machine") (pytorch_env) PS E:\PyTorch_Build\pytorch> # 原始代码大约在 190 行左右 (pytorch_env) PS E:\PyTorch_Build\pytorch> # 替换为以下内容强制使用 v9.12: (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_VERSION "9.12.0") # 手动指定版本 CUDNN_VERSION: The term 'CUDNN_VERSION' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_FOUND TRUE) CUDNN_FOUND: The term 'CUDNN_FOUND' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_INCLUDE_DIR $ENV{CUDNN_INCLUDE_DIR}) InvalidOperation: The variable '$ENV' cannot be retrieved because it has not been set. (pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_LIBRARY $ENV{CUDNN_LIBRARY}) InvalidOperation: The variable '$ENV' cannot be retrieved because it has not been set. (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS "Using manually configured cuDNN v${CUDNN_VERSION}") InvalidOperation: The variable '$CUDNN_VERSION' cannot be retrieved because it has not been set. (pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS " Include path: ${CUDNN_INCLUDE_DIR}") InvalidOperation: The variable '$CUDNN_INCLUDE_DIR' cannot be retrieved because it has not been set. 
(pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS " Library path: ${CUDNN_LIBRARY}") InvalidOperation: The variable '$CUDNN_LIBRARY' cannot be retrieved because it has not been set. (pytorch_env) PS E:\PyTorch_Build\pytorch> # 精确查找 conda.bat (pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPath = Get-ChildItem -Path C:\ -Recurse -Filter conda.bat -ErrorAction SilentlyContinue | >> Select-Object -First 1 | >> ForEach-Object { $_.DirectoryName } (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> if ($condaPath) { >> $env:PATH = "$condaPath;$env:PATH" >> [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine") >> Write-Host "Conda found at: $condaPath" -ForegroundColor Green >> } else { >> Write-Host "Conda not found! Installing miniconda..." -ForegroundColor Yellow >> # 自动安装 miniconda >> Invoke-WebRequest -Uri "https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe" -OutFile "$env:TEMP\miniconda.exe" >> Start-Process -FilePath "$env:TEMP\miniconda.exe" -ArgumentList "/S", "/AddToPath=1", "/InstallationType=AllUsers", "/D=C:\Miniconda3" -Wait >> $env:PATH = "C:\Miniconda3\Scripts;$env:PATH" >> } Conda not found! Installing miniconda... /AddToPath=1 is disabled and ignored in 'All Users' installations Welcome to Miniconda3 py313_25.7.0-2 By continuing this installation you are accepting this license agreement: C:\Miniconda3\EULA.txt Please run the installer in GUI mode to read the details. Miniconda3 will now be installed into this location: C:\Miniconda3 Unpacking payload... Setting up the package cache... Setting up the base environment... Installing packages for base, creating shortcuts if necessary... Initializing conda directories... Setting installation directory permissions... Done! (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch>
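Two separate problems show up in this block. The conda --version failure is expected: none of the three candidate Scripts directories existed, so the foreach never prepended anything to PATH, and a Machine-scope [Environment]::SetEnvironmentVariable call needs an elevated shell and only affects newly started sessions anyway. The set(CUDNN_VERSION ...) and message(STATUS ...) lines are CMake language, not PowerShell; PowerShell evaluates the parenthesized text as a command of its own, which is exactly the "term ... is not recognized" error shown. If the goal is to force a specific cuDNN 9.12 install, a sketch of the PowerShell-side equivalent follows; the path is copied verbatim from the session above and is an assumption about the local layout (note that "CUNND" in it may itself be a typo for "CUDNN").

# Sketch: hand the cuDNN hints to CMake instead of typing CMake commands
# into PowerShell. The install path is taken from the session above and is
# an assumption about the local layout.
$cudnnPath = "E:\Program Files\NVIDIA\CUNND\v9.12"
$env:CUDNN_INCLUDE_DIR = "$cudnnPath\include"
$env:CUDNN_LIBRARY     = "$cudnnPath\lib\x64\cudnn.lib"

# PyTorch's setup.py forwards these environment variables to CMake as -D
# entries (the later bdist_wheel run shows them in the echoed cmake command).
# For a direct cmake configure they can be passed explicitly, e.g.:
#   cmake -B build -S . -DUSE_CUDNN=ON `
#       -DCUDNN_INCLUDE_DIR="$env:CUDNN_INCLUDE_DIR" `
#       -DCUDNN_LIBRARY="$env:CUDNN_LIBRARY"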
PowerShell 7 环境已加载 (版本: 7.5.2) PowerShell 7 环境已加载 (版本: 7.5.2) PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> .\pytorch_env\Scripts\activate (pytorch_env) PS E:\PyTorch_Build\pytorch> # 退出虚拟环境 (pytorch_env) PS E:\PyTorch_Build\pytorch> deactivate PS E:\PyTorch_Build\pytorch> PS E:\PyTorch_Build\pytorch> # 删除旧环境 PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force .\pytorch_env PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force .\cuda_env PS E:\PyTorch_Build\pytorch> PS E:\PyTorch_Build\pytorch> # 创建新虚拟环境 PS E:\PyTorch_Build\pytorch> python -m venv rtx5070_env PS E:\PyTorch_Build\pytorch> .\rtx5070_env\Scripts\activate (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装基础编译工具 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install -U pip setuptools wheel ninja cmake Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pip in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (22.3.1) Collecting pip Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b7/3f/945ef7ab14dc4f9d7f40288d2df998d1837ee0888ec3659c813487572faa/pip-25.2-py3-none-any.whl (1.8 MB) Requirement already satisfied: setuptools in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (65.5.0) Collecting setuptools Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl (1.2 MB) Collecting wheel Using cached https://pypi.tuna.tsinghua.edu.cn/packages/0b/2c/87f3254fd8ffd29e4c02732eee68a83a1d3c346ae39bc6822dcbcb697f2b/wheel-0.45.1-py3-none-any.whl (72 kB) Collecting ninja Using cached https://pypi.tuna.tsinghua.edu.cn/packages/29/45/c0adfbfb0b5895aa18cec400c535b4f7ff3e52536e0403602fc1a23f7de9/ninja-1.13.0-py3-none-win_amd64.whl (309 kB) Collecting cmake Using cached https://pypi.tuna.tsinghua.edu.cn/packages/7c/d0/73cae88d8c25973f2465d5a4457264f95617c16ad321824ed4c243734511/cmake-4.1.0-py3-none-win_amd64.whl (37.6 MB) ERROR: To modify pip, please run the following command: E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe -m pip install -U pip setuptools wheel ninja cmake [notice] A new release of pip available: 22.3.1 -> 25.2 [notice] To update, run: python.exe -m pip install --upgrade pip (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证 CUDA 安装 (rtx5070_env) PS E:\PyTorch_Build\pytorch> nvcc --version # 应显示 CUDA 12.x nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2025 NVIDIA Corporation Built on Wed_Jul_16_20:06:48_Pacific_Daylight_Time_2025 Cuda compilation tools, release 13.0, V13.0.48 Build cuda_13.0.r13.0/compiler.36260728_0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 正确更新 pip 和工具链 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python -m pip install -U pip setuptools wheel ninja cmake Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pip in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (22.3.1) Collecting pip Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b7/3f/945ef7ab14dc4f9d7f40288d2df998d1837ee0888ec3659c813487572faa/pip-25.2-py3-none-any.whl (1.8 MB) Requirement already satisfied: setuptools in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (65.5.0) Collecting setuptools Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl (1.2 MB) Collecting wheel Using 
cached https://pypi.tuna.tsinghua.edu.cn/packages/0b/2c/87f3254fd8ffd29e4c02732eee68a83a1d3c346ae39bc6822dcbcb697f2b/wheel-0.45.1-py3-none-any.whl (72 kB) Collecting ninja Using cached https://pypi.tuna.tsinghua.edu.cn/packages/29/45/c0adfbfb0b5895aa18cec400c535b4f7ff3e52536e0403602fc1a23f7de9/ninja-1.13.0-py3-none-win_amd64.whl (309 kB) Collecting cmake Using cached https://pypi.tuna.tsinghua.edu.cn/packages/7c/d0/73cae88d8c25973f2465d5a4457264f95617c16ad321824ed4c243734511/cmake-4.1.0-py3-none-win_amd64.whl (37.6 MB) Installing collected packages: wheel, setuptools, pip, ninja, cmake Attempting uninstall: setuptools Found existing installation: setuptools 65.5.0 Uninstalling setuptools-65.5.0: Successfully uninstalled setuptools-65.5.0 Attempting uninstall: pip Found existing installation: pip 22.3.1 Uninstalling pip-22.3.1: Successfully uninstalled pip-22.3.1 Successfully installed cmake-4.1.0 ninja-1.13.0 pip-25.2 setuptools-80.9.0 wheel-0.45.1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证版本 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip --version # 应显示 25.2+ pip 25.2 from E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\pip (python 3.10) (rtx5070_env) PS E:\PyTorch_Build\pytorch> cmake --version # 应显示 4.1.0+ cmake version 4.1.0 CMake suite maintained and supported by Kitware (kitware.com/cmake). (rtx5070_env) PS E:\PyTorch_Build\pytorch> ninja --version # 应显示 1.13.0+ 1.13.0.git.kitware.jobserver-pipe-1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置 CUDA 12.1 环境变量 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDA_PATH = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:PATH = "$env:CUDA_PATH\bin;" + $env:PATH (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证 CUDA 版本 (rtx5070_env) PS E:\PyTorch_Build\pytorch> nvcc --version # 应显示 release 12.1 nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2025 NVIDIA Corporation Built on Wed_Jul_16_20:06:48_Pacific_Daylight_Time_2025 Cuda compilation tools, release 13.0, V13.0.48 Build cuda_13.0.r13.0/compiler.36260728_0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置 cuDNN 路径(根据实际安装位置) (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_INCLUDE_DIR = "$env:CUDA_PATH\include" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_LIBRARY = "$env:CUDA_PATH\lib\x64\cudnn.lib" (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装必要依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install pyyaml numpy typing_extensions Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting pyyaml Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b5/84/0fa4b06f6d6c958d207620fc60005e241ecedceee58931bb20138e1e5776/PyYAML-6.0.2-cp310-cp310-win_amd64.whl (161 kB) Collecting numpy Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a3/dd/4b822569d6b96c39d1215dbae0582fd99954dcbcf0c1a13c61783feaca3f/numpy-2.2.6-cp310-cp310-win_amd64.whl (12.9 MB) Collecting typing_extensions Using cached https://pypi.tuna.tsinghua.edu.cn/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl (44 kB) Installing collected packages: typing_extensions, pyyaml, numpy Successfully installed numpy-2.2.6 pyyaml-6.0.2 typing_extensions-4.15.0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装 GPU 相关依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install mkl mkl-include 
intel-openmp Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting mkl Using cached https://pypi.tuna.tsinghua.edu.cn/packages/91/ae/025174ee141432b974f97ecd2aea529a3bdb547392bde3dd55ce48fe7827/mkl-2025.2.0-py2.py3-none-win_amd64.whl (153.6 MB) Collecting mkl-include Using cached https://pypi.tuna.tsinghua.edu.cn/packages/06/87/3eee37bf95c6b820b6394ad98e50132798514ecda1b2584c71c2c96b973c/mkl_include-2025.2.0-py2.py3-none-win_amd64.whl (1.3 MB) Collecting intel-openmp Using cached https://pypi.tuna.tsinghua.edu.cn/packages/89/ed/13fed53fcc7ea17ff84095e89e63418df91d4eeefdc74454243d529bf5a3/intel_openmp-2025.2.1-py2.py3-none-win_amd64.whl (34.0 MB) Collecting tbb==2022.* (from mkl) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/4e/d2/01e2a93f9c644585088188840bf453f23ed1a2838ec51d5ba1ada1ebca71/tbb-2022.2.0-py3-none-win_amd64.whl (420 kB) Collecting intel-cmplr-lib-ur==2025.2.1 (from intel-openmp) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a8/70/938e81f58886fd4e114d5a5480d98c1396e73e40b7650f566ad0c4395311/intel_cmplr_lib_ur-2025.2.1-py2.py3-none-win_amd64.whl (1.2 MB) Collecting umf==0.11.* (from intel-cmplr-lib-ur==2025.2.1->intel-openmp) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/33/a0/c8d755f08f50ddd99cb4a29a7e950ced7a0903cb72253e57059063609103/umf-0.11.0-py2.py3-none-win_amd64.whl (231 kB) Collecting tcmlib==1.* (from tbb==2022.*->mkl) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/91/7b/e30c461a27b97e0090e4db822eeb1d37b310863241f8c3ee56f68df3e76e/tcmlib-1.4.0-py2.py3-none-win_amd64.whl (370 kB) Installing collected packages: tcmlib, mkl-include, umf, tbb, intel-cmplr-lib-ur, intel-openmp, mkl Successfully installed intel-cmplr-lib-ur-2025.2.1 intel-openmp-2025.2.1 mkl-2025.2.0 mkl-include-2025.2.0 tbb-2022.2.0 tcmlib-1.4.0 umf-0.11.0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装必要依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install pyyaml numpy typing_extensions Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pyyaml in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (6.0.2) Requirement already satisfied: numpy in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2.2.6) Requirement already satisfied: typing_extensions in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (4.15.0) (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装 GPU 相关依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install mkl mkl-include intel-openmp Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: mkl in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0) Requirement already satisfied: mkl-include in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0) Requirement already satisfied: intel-openmp in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.1) Requirement already satisfied: tbb==2022.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from mkl) (2022.2.0) Requirement already satisfied: intel-cmplr-lib-ur==2025.2.1 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-openmp) (2025.2.1) Requirement already satisfied: umf==0.11.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-cmplr-lib-ur==2025.2.1->intel-openmp) (0.11.0) Requirement already satisfied: tcmlib==1.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from tbb==2022.*->mkl) (1.4.0) (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置编译参数 (rtx5070_env) 
PS E:\PyTorch_Build\pytorch> $env:USE_CUDA=1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:USE_CUDNN=1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CMAKE_GENERATOR="Ninja" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:MAX_JOBS=8 # 根据 CPU 核心数设置 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 运行编译 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python setup.py install ` >> --cmake ` >> --cmake-only ` >> --cmake-generator="Ninja" ` >> --verbose ` >> -DCMAKE_CUDA_COMPILER="${env:CUDA_PATH}\bin\nvcc.exe" ` >> -DCUDNN_INCLUDE_DIR="${env:CUDNN_INCLUDE_DIR}" ` >> -DCUDNN_LIBRARY="${env:CUDNN_LIBRARY}" ` >> -DTORCH_CUDA_ARCH_LIST="8.9;9.0;12.0" Building wheel torch-2.9.0a0+git2d31c3d option --cmake-generator not recognized (rtx5070_env) PS E:\PyTorch_Build\pytorch> python rtx5070_test.py ============================================================ Traceback (most recent call last): File "E:\PyTorch_Build\pytorch\rtx5070_test.py", line 39, in <module> verify_gpu_support() File "E:\PyTorch_Build\pytorch\rtx5070_test.py", line 6, in verify_gpu_support if not torch.cuda.is_available(): AttributeError: module 'torch' has no attribute 'cuda' (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置编译架构参数 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:TORCH_CUDA_ARCH_LIST="8.9;9.0;12.0" (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 使用正确的编译命令 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python setup.py install ` >> --cmake ` >> --verbose ` >> -DCMAKE_CUDA_COMPILER="${env:CUDA_PATH}\bin\nvcc.exe" ` >> -DCUDNN_INCLUDE_DIR="${env:CUDNN_INCLUDE_DIR}" ` >> -DCUDNN_LIBRARY="${env:CUDNN_LIBRARY}" ` >> -DCMAKE_GENERATOR="Ninja" ` >> -DUSE_CUDA=ON ` >> -DUSE_CUDNN=ON Building wheel torch-2.9.0a0+git2d31c3d option -D not recognized (rtx5070_env) PS E:\PyTorch_Build\pytorch> python enhanced_test.py ============================================================ Python 版本: 3.10.10 Traceback (most recent call last): File "E:\PyTorch_Build\pytorch\enhanced_test.py", line 64, in <module> verify_installation() File "E:\PyTorch_Build\pytorch\enhanced_test.py", line 11, in verify_installation print(f"\nPyTorch 版本: {torch.__version__}") AttributeError: module 'torch' has no attribute '__version__' (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 清除之前的构建 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python setup.py clean --all Building wheel torch-2.9.0a0+git2d31c3d E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\setuptools\config\_apply_pyprojecttoml.py:82: SetuptoolsDeprecationWarning: `project.license` as a TOML table is deprecated !! ******************************************************************************** Please use a simple string containing a SPDX expression for `project.license`. You can also use `project.license-files`. (Both options available on setuptools>=77.0.0). By 2026-Feb-18, you need to update your project and remove deprecated calls or your builds will no longer be supported. See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#license for details. ******************************************************************************** !! corresp(dist, value, root_dir) usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] or: setup.py --help [cmd1 cmd2 ...] 
or: setup.py --help-commands or: setup.py cmd --help error: option --all not recognized (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置编译架构参数 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:TORCH_CUDA_ARCH_LIST="8.9;9.0;12.0" (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 使用正确的编译命令(Windows专用) (rtx5070_env) PS E:\PyTorch_Build\pytorch> python setup.py install ` >> --cmake ` >> --cmake-args="-DCMAKE_CUDA_COMPILER='$env:CUDA_PATH\bin\nvcc.exe' ` >> -DCUDNN_INCLUDE_DIR='$env:CUDNN_INCLUDE_DIR' ` >> -DCUDNN_LIBRARY='$env:CUDNN_LIBRARY' ` >> -DCMAKE_GENERATOR='Ninja' ` >> -DUSE_CUDA=ON ` >> -DUSE_CUDNN=ON" ` >> --verbose ` >> --jobs=$env:MAX_JOBS Building wheel torch-2.9.0a0+git2d31c3d option --cmake-args not recognized (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 使用 PyTorch 官方构建工具 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install -U setuptools wheel Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: setuptools in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (80.9.0) Requirement already satisfied: wheel in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (0.45.1) (rtx5070_env) PS E:\PyTorch_Build\pytorch> python setup.py bdist_wheel Building wheel torch-2.9.0a0+git2d31c3d -- Building version 2.9.0a0+git2d31c3d E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\setuptools\_distutils\_msvccompiler.py:12: UserWarning: _get_vc_env is private; find an alternative (pypa/distutils#340) warnings.warn( -- Checkout nccl release tag: v2.27.5-1 cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_GENERATOR=Ninja -DCMAKE_INSTALL_PREFIX=E:\PyTorch_Build\pytorch\torch -DCMAKE_PREFIX_PATH=E:\PyTorch_Build\pytorch\rtx5070_env\Lib\site-packages -DCUDNN_INCLUDE_DIR=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\include -DCUDNN_LIBRARY=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\lib\x64\cudnn.lib -DPython_EXECUTABLE=E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe -DPython_NumPy_INCLUDE_DIR=E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\numpy\_core\include -DTORCH_BUILD_VERSION=2.9.0a0+git2d31c3d -DTORCH_CUDA_ARCH_LIST=8.9;9.0;12.0 -DUSE_CUDA=1 -DUSE_CUDNN=1 -DUSE_NUMPY=True E:\PyTorch_Build\pytorch CMake Deprecation Warning at CMakeLists.txt:18 (cmake_policy): The OLD behavior for policy CMP0126 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. 
-- The CXX compiler identification is MSVC 19.44.35215.0 -- The C compiler identification is MSVC 19.44.35215.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found CMake Warning at CMakeLists.txt:425 (message): TensorPipe cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:427 (message): KleidiAI cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:439 (message): Libuv is not installed in current conda env. Set USE_DISTRIBUTED to OFF. Please run command 'conda install -c conda-forge libuv=1.39' to install libuv. -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Compiler does not support SVE extension. Will not build perfkernels. CMake Warning at CMakeLists.txt:845 (message): x64 operating system is required for FBGEMM. Not compiling with FBGEMM. Turn this warning off by USE_FBGEMM=OFF. 
-- Performing Test HAS/UTF_8 -- Performing Test HAS/UTF_8 - Success -- Found CUDA: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 (found version "13.0") -- The CUDA compiler identification is NVIDIA 13.0.48 with host compiler MSVC 19.44.35215.0 -- Detecting CUDA compiler ABI info -- Detecting CUDA compiler ABI info - done -- Check for working CUDA compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe - skipped -- Detecting CUDA compile features -- Detecting CUDA compile features - done -- Found CUDAToolkit: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include (found version "13.0.48") -- PyTorch: CUDA detected: 13.0 -- PyTorch: CUDA nvcc is: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- PyTorch: CUDA toolkit directory: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- PyTorch: Header version is: 13.0 -- Found Python: E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter CMake Warning at cmake/public/cuda.cmake:140 (message): Failed to compute shorthash for libnvrtc.so Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDNN (missing: CUDNN_LIBRARY_PATH CUDNN_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:201 (message): Cannot find cuDNN library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUSPARSELT (missing: CUSPARSELT_LIBRARY_PATH CUSPARSELT_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:226 (message): Cannot find cuSPARSELt library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDSS (missing: CUDSS_LIBRARY_PATH CUDSS_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:242 (message): Cannot find CUDSS library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- USE_CUFILE is set to 0. Compiling without cuFile support CMake Warning at cmake/public/cuda.cmake:317 (message): pytorch is not compatible with `CMAKE_CUDA_ARCHITECTURES` and will ignore its value. Please configure `TORCH_CUDA_ARCH_LIST` instead. Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Added CUDA NVCC flags for: -gencode;arch=compute_89,code=sm_89;-gencode;arch=compute_90,code=sm_90;-gencode;arch=compute_120,code=sm_120 CMake Warning at cmake/Dependencies.cmake:95 (message): Not compiling with XPU. Could NOT find SYCL. Suppress this warning with -DUSE_XPU=OFF. Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Building using own protobuf under third_party per request. -- Use custom protobuf build. CMake Warning at cmake/ProtoBuf.cmake:37 (message): Ancient protobuf forces CMake compatibility Call Stack (most recent call first): cmake/ProtoBuf.cmake:87 (custom_protobuf_find) cmake/Dependencies.cmake:107 (include) CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. 
-- -- 3.13.0.0 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - not found -- Found Threads: TRUE -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:E:/PyTorch_Build/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- Trying to find preferred BLAS backend of choice: MKL -- MKL_THREADING = OMP -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- MKL_THREADING = OMP CMake Warning at cmake/Dependencies.cmake:213 (message): MKL could not be found. Defaulting to Eigen Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Warning at cmake/Dependencies.cmake:279 (message): Preferred BLAS (MKL) cannot be found, now searching for a general BLAS library Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Looking for sgemm_ -- Looking for sgemm_ - not found -- Cannot 
find a library with BLAS API. Not using BLAS. -- Using pocketfft in directory: E:/PyTorch_Build/pytorch/third_party/pocketfft/ CMake Deprecation Warning at third_party/pthreadpool/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/FXdiv/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/cpuinfo/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- The ASM compiler identification is MSVC CMake Warning (dev) at rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineASMCompiler.cmake:234 (message): Policy CMP194 is not set: MSVC is not an assembler for language ASM. Run "cmake --help-policy CMP194" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Call Stack (most recent call first): third_party/XNNPACK/CMakeLists.txt:18 (PROJECT) This warning is for project developers. Use -Wno-dev to suppress it. -- Found assembler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Building for XNNPACK_TARGET_PROCESSOR: x86_64 -- Generating microkernels.cmake Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avx256vnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c (1th function) Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-scalar.c No microkernel found in src\reference\binary-elementwise.cc No microkernel found in src\reference\packing.cc No microkernel found in src\reference\unary-elementwise.cc -- Found Git: E:/Program Files/Git/cmd/git.exe (found version "2.51.0.windows.1") -- Google Benchmark version: v1.9.3, normalized to 1.9.3 -- Looking for shm_open in rt -- Looking for shm_open in rt - not found -- Performing Test HAVE_CXX_FLAG_WX -- Performing Test HAVE_CXX_FLAG_WX - Success -- Compiling and running to test HAVE_STD_REGEX -- Performing Test HAVE_STD_REGEX -- success -- Compiling and running to test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_STEADY_CLOCK -- Performing Test HAVE_STEADY_CLOCK -- success -- Compiling and running to test HAVE_PTHREAD_AFFINITY -- Performing Test HAVE_PTHREAD_AFFINITY -- failed to compile CMake Deprecation Warning at third_party/ittapi/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning at cmake/Dependencies.cmake:749 (message): FP16 is only cmake-2.8 compatible Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/FP16/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/psimd/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- Using third party subdirectory Eigen. -- Found Python: E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter Development.Module NumPy -- Using third_party/pybind11. -- pybind11 include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/pybind11/include -- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS) -- Using third_party/opentelemetry-cpp. -- opentelemetry api include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/opentelemetry-cpp/api/include -- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) -- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) -- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) CMake Warning at cmake/Dependencies.cmake:894 (message): Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- Found OpenMP_C: -openmp:experimental -- Found OpenMP_CXX: -openmp:experimental -- Found OpenMP: TRUE -- Adding OpenMP CXX_FLAGS: -openmp:experimental -- Will link against OpenMP libraries: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib -- Found nvtx3: E:/PyTorch_Build/pytorch/third_party/NVTX/c/include -- ROCM_PATH environment variable is not set and C:/opt/rocm does not exist. Building without ROCm support. 
-- Found Python3: E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter -- ONNX_PROTOC_EXECUTABLE: $<TARGET_FILE:protobuf::protoc> -- Protobuf_VERSION: Protobuf_VERSION_NOTFOUND Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto -- -- ******** Summary ******** -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/rtx5070_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler version : 19.44.35215.0 -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL /EHsc /wd26812 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1 -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\rtx5070_env\Lib\site-packages;E:/Program Files/NVIDIA/CUNND/v9.12;E:\Program Files\NVIDIA\CUNND\v9.12;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- CMAKE_MODULE_PATH : E:/PyTorch_Build/pytorch/cmake/Modules;E:/PyTorch_Build/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.18.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_USE_LITE_PROTO : OFF -- USE_PROTOBUF_SHARED_LIBS : OFF -- ONNX_DISABLE_EXCEPTIONS : OFF -- ONNX_DISABLE_STATIC_REGISTRATION : OFF -- ONNX_WERROR : OFF -- ONNX_BUILD_TESTS : OFF -- BUILD_SHARED_LIBS : OFF -- -- Protobuf compiler : $<TARGET_FILE:protobuf::protoc> -- Protobuf includes : -- Protobuf libraries : -- ONNX_BUILD_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Adding -DNDEBUG to compile flags -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - False -- MAGMA not found. Compiling without MAGMA support -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. 
-- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Cannot find a library with BLAS API. Not using BLAS. -- LAPACK requires BLAS -- Cannot find a library with LAPACK API. Not using LAPACK. disabling ROCM because NOT USE_ROCM is set -- MIOpen not found. Compiling without MIOpen support disabling MKLDNN because USE_MKLDNN is not set -- {fmt} version: 11.2.0 -- Build type: Release -- Using Kineto with CUPTI support -- Configuring Kineto dependency: -- KINETO_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto -- KINETO_BUILD_TESTS = OFF -- KINETO_LIBRARY_TYPE = static -- CUDA_SOURCE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CUDA_INCLUDE_DIRS = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- CUDA_cupti_LIBRARY = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/lib64/cupti.lib -- Found CUPTI CMake Deprecation Warning at third_party/kineto/libkineto/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning (dev) at third_party/kineto/libkineto/CMakeLists.txt:15 (find_package): Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules are removed. Run "cmake --help-policy CMP0148" for policy details. Use the cmake_policy command to set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it. -- Found PythonInterp: E:/PyTorch_Build/pytorch/rtx5070_env/Scripts/python.exe (found version "3.10.10") -- ROCM_SOURCE_DIR = -- Kineto: FMT_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt -- Kineto: FMT_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- ROCTRACER_INCLUDE_DIR = /include/roctracer -- DYNOLOG_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog/ -- IPCFABRIC_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/ -- Configured Kineto -- Performing Test HAS/WD4624 -- Performing Test HAS/WD4624 - Success -- Performing Test HAS/WD4068 -- Performing Test HAS/WD4068 - Success -- Performing Test HAS/WD4067 -- Performing Test HAS/WD4067 - Success -- Performing Test HAS/WD4267 -- Performing Test HAS/WD4267 - Success -- Performing Test HAS/WD4661 -- Performing Test HAS/WD4661 - Success -- Performing Test HAS/WD4717 -- Performing Test HAS/WD4717 - Success -- Performing Test HAS/WD4244 -- Performing Test HAS/WD4244 - Success -- Performing Test HAS/WD4804 -- Performing Test HAS/WD4804 - Success -- Performing Test HAS/WD4273 -- Performing Test HAS/WD4273 - Success -- Performing Test HAS_WNO_STRINGOP_OVERFLOW -- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed -- -- Architecture: x64 -- Use the C++ compiler to compile (MI_USE_CXX=ON) -- -- Library name : mimalloc -- Version : 2.2.4 -- Build type : release -- C++ Compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Compiler flags : /Zc:__cplusplus -- Compiler defines : MI_CMAKE_BUILD_TYPE=release;MI_BUILD_RELEASE -- Link libraries : psapi;shell32;user32;advapi32;bcrypt -- Build targets : static -- CMake Error at CMakeLists.txt:1264 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch/headeronly does not contain a CMakeLists.txt file. 
-- don't use NUMA -- Looking for backtrace -- Looking for backtrace - not found -- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR) -- headers outputs: torch\csrc\inductor\aoti_torch\generated\c_shim_cpu.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_cuda.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_aten.h not found -- sources outputs: -- declarations_yaml outputs: -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed -- Using ATen parallel backend: OMP -- Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY OPENSSL_INCLUDE_DIR) -- Check size of long double -- Check size of long double - done -- Performing Test COMPILER_SUPPORTS_FLOAT128 -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed -- Performing Test COMPILER_SUPPORTS_SSE2 -- Performing Test COMPILER_SUPPORTS_SSE2 - Success -- Performing Test COMPILER_SUPPORTS_SSE4 -- Performing Test COMPILER_SUPPORTS_SSE4 - Success -- Performing Test COMPILER_SUPPORTS_AVX -- Performing Test COMPILER_SUPPORTS_AVX - Success -- Performing Test COMPILER_SUPPORTS_FMA4 -- Performing Test COMPILER_SUPPORTS_FMA4 - Success -- Performing Test COMPILER_SUPPORTS_AVX2 -- Performing Test COMPILER_SUPPORTS_AVX2 - Success -- Performing Test COMPILER_SUPPORTS_AVX512F -- Performing Test COMPILER_SUPPORTS_AVX512F - Success -- Found OpenMP_C: -openmp:experimental (found version "2.0") -- Found OpenMP_CXX: -openmp:experimental (found version "2.0") -- Found OpenMP_CUDA: -openmp (found version "2.0") -- Found OpenMP: TRUE (found version "2.0") -- Performing Test COMPILER_SUPPORTS_OPENMP -- Performing Test COMPILER_SUPPORTS_OPENMP - Success -- Performing Test COMPILER_SUPPORTS_OMP_SIMD -- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed -- Configuring build for SLEEF-v3.8.0 Target system: Windows-10.0.26100 Target processor: AMD64 Host system: Windows-10.0.26100 Host processor: AMD64 Detected C compiler: MSVC @ C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe CMake: 4.1.0 Make program: E:/PyTorch_Build/pytorch/rtx5070_env/Scripts/ninja.exe -- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef -- Building shared libs : OFF -- Building static test bins: OFF -- MPFR : LIB_MPFR-NOTFOUND -- GMP : LIBGMP-NOTFOUND -- RT : -- FFTW3 : LIBFFTW3-NOTFOUND -- OPENSSL : -- SDE : SDE_COMMAND-NOTFOUND -- COMPILER_SUPPORTS_OPENMP : FALSE AT_INSTALL_INCLUDE_DIR include/ATen/core core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/aten_interned_strings.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/enum_tag.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/TensorBody.h -- NVSHMEM not found, not building with NVSHMEM support. CMake Error at torch/CMakeLists.txt:3 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch/csrc does not contain a CMakeLists.txt file. 
CMake Warning at CMakeLists.txt:1285 (message): Generated cmake files are only fully tested if one builds with system glog, gflags, and protobuf. Other settings may generate files that are not well tested. -- -- ******** Summary ******** -- General: -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/rtx5070_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler id : MSVC -- C++ compiler version : 19.44.35215.0 -- Using ccache if found : OFF -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 -- Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;EXPORT_AOTI_FUNCTIONS;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\rtx5070_env\Lib\site-packages;E:/Program Files/NVIDIA/CUNND/v9.12;E:\Program Files\NVIDIA\CUNND\v9.12;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- USE_GOLD_LINKER : OFF -- -- TORCH_VERSION : 2.9.0 -- BUILD_STATIC_RUNTIME_BENCHMARK: OFF -- BUILD_BINARY : OFF -- BUILD_CUSTOM_PROTOBUF : ON -- Link local protobuf : ON -- BUILD_PYTHON : True -- Python version : 3.10.10 -- Python executable : E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe -- Python library : E:/Python310/libs/python310.lib -- Python includes : E:/Python310/Include -- Python site-package : E:\PyTorch_Build\pytorch\rtx5070_env\Lib\site-packages -- BUILD_SHARED_LIBS : ON -- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF -- BUILD_TEST : True -- BUILD_JNI : OFF -- BUILD_MOBILE_AUTOGRAD : OFF -- BUILD_LITE_INTERPRETER: OFF -- INTERN_BUILD_MOBILE : -- TRACING_BASED : OFF -- USE_BLAS : 0 -- USE_LAPACK : 0 -- USE_ASAN : OFF -- USE_TSAN : OFF -- USE_CPP_CODE_COVERAGE : OFF -- USE_CUDA : 1 -- CUDA static link : OFF -- USE_CUDNN : OFF -- USE_CUSPARSELT : OFF -- USE_CUDSS : OFF -- USE_CUFILE : OFF -- CUDA version : 13.0 -- USE_FLASH_ATTENTION : OFF -- USE_MEM_EFF_ATTENTION : ON -- CUDA root directory : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CUDA library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cuda.lib -- cudart library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart.lib -- cublas library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cublas.lib -- cufft library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cufft.lib -- curand library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/curand.lib -- cusparse library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cusparse.lib -- nvrtc : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/nvrtc.lib -- CUDA include path : E:/Program 
Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include -- NVCC executable : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- CUDA compiler : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- CUDA flags : -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -Xcompiler /Zc:__cplusplus -Xcompiler /w -w -Xcompiler /FS -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_89,code=sm_89 -gencode arch=compute_90,code=sm_90 -gencode arch=compute_120,code=sm_120 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -- CUDA host compiler : -- CUDA --device-c : OFF -- USE_TENSORRT : -- USE_XPU : OFF -- USE_ROCM : OFF -- BUILD_NVFUSER : -- USE_EIGEN_FOR_BLAS : ON -- USE_EIGEN_FOR_SPARSE : OFF -- USE_FBGEMM : OFF -- USE_KINETO : ON -- USE_GFLAGS : OFF -- USE_GLOG : OFF -- USE_LITE_PROTO : OFF -- USE_PYTORCH_METAL : OFF -- USE_PYTORCH_METAL_EXPORT : OFF -- USE_MPS : OFF -- CAN_COMPILE_METAL : -- USE_MKL : OFF -- USE_MKLDNN : OFF -- USE_UCC : OFF -- USE_ITT : ON -- USE_XCCL : OFF -- USE_NCCL : OFF -- Found NVSHMEM : -- USE_NNPACK : OFF -- USE_NUMPY : ON -- USE_OBSERVERS : ON -- USE_OPENCL : OFF -- USE_OPENMP : ON -- USE_MIMALLOC : ON -- USE_MIMALLOC_ON_MKL : OFF -- USE_VULKAN : OFF -- USE_PROF : OFF -- USE_PYTORCH_QNNPACK : OFF -- USE_XNNPACK : ON -- USE_DISTRIBUTED : OFF -- Public Dependencies : -- Private Dependencies : Threads::Threads;pthreadpool;cpuinfo;XNNPACK;microkernels-prod;ittnotify;fp16;caffe2::openmp;fmt::fmt-header-only;kineto -- Public CUDA Deps. : -- Private CUDA Deps. : caffe2::curand;caffe2::cufft;caffe2::cublas;fmt::fmt-header-only;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart_static.lib;CUDA::cusparse;CUDA::cufft;CUDA::cusolver;ATEN_CUDA_FILES_GEN_LIB -- USE_COREML_DELEGATE : OFF -- BUILD_LAZY_TS_BACKEND : ON -- USE_ROCM_KERNEL_ASSERT : OFF -- Performing Test HAS_WMISSING_PROTOTYPES -- Performing Test HAS_WMISSING_PROTOTYPES - Failed -- Performing Test HAS_WERROR_MISSING_PROTOTYPES -- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed -- Configuring incomplete, errors occurred! (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装生成的包 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $wheelPath = Get-ChildItem dist\*.whl | Select-Object -First 1 Get-ChildItem: Cannot find path 'E:\PyTorch_Build\pytorch\dist' because it does not exist. 
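The configure stage aborts with two errors: add_subdirectory cannot find E:/PyTorch_Build/pytorch/torch/headeronly/CMakeLists.txt or E:/PyTorch_Build/pytorch/torch/csrc/CMakeLists.txt. Both files ship with a complete PyTorch checkout, so their absence usually means the source tree is incomplete or was partially deleted; it also explains the follow-up failure, since dist\*.whl only appears after a successful build. Below is a minimal Python sketch to verify and restore the tree before re-running CMake (an assumption-laden sketch: it presumes the tree is a git clone, git is on PATH, and it is run from E:\PyTorch_Build\pytorch; the file list is only illustrative):

# check_source_tree.py - hedged sketch: verify the PyTorch checkout before configuring again.
# Assumptions: run from the repo root, git is on PATH, the tree is a git clone (not an exported zip).
import os
import subprocess

expected = [
    r"torch\CMakeLists.txt",
    r"torch\csrc\CMakeLists.txt",        # reported missing in the configure log
    r"torch\headeronly\CMakeLists.txt",  # reported missing in the configure log
]

missing = [p for p in expected if not os.path.isfile(p)]
print("missing files:", missing or "none")

if missing:
    # Restore locally deleted tracked files, then sync the third_party submodules.
    subprocess.run(["git", "checkout", "--", "torch"], check=True)
    subprocess.run(["git", "submodule", "update", "--init", "--recursive"], check=True)

Only after these files are back does re-running the CMake step make sense; the dist directory is normally produced by the wheel build step (for example python setup.py bdist_wheel), which never ran here.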
(rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install $wheelPath --force-reinstall --no-deps ERROR: You must give at least one requirement to install (see "pip help install") (rtx5070_env) PS E:\PyTorch_Build\pytorch> python diagnostic_test.py ================================================== CUDA Toolkit 验证: ✅ NVCC 版本: nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2025 NVIDIA Corporation Built on Wed_Jul_16_20:06:48_Pacific_Daylight_Time_2025 Cuda compilation tools, release 13.0, V13.0.48 Build cuda_13.0.r13.0/compiler.36260728_0 ✅ NVIDIA-SMI 输出: Mon Sep 1 20:54:10 2025 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 580.97 Driver Version: 580.97 CUDA Version: 13.0 | +-----------------------------------------+------------------------+----------------------+ | GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 5070 WDDM | 00000000:01:00.0 On | N/A | | 0% 35C P3 16W / 250W | 1328MiB / 12227MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | 0 N/A N/A 1124 C+G ...yb3d8bbwe\WindowsTerminal.exe N/A | | 0 N/A N/A 1288 C+G ...les\Tencent\Weixin\Weixin.exe N/A | | 0 N/A N/A 1776 C+G C:\Windows\System32\dwm.exe N/A | | 0 N/A N/A 2272 C+G ...t\Edge\Application\msedge.exe N/A | | 0 N/A N/A 3268 C+G ...em32\ApplicationFrameHost.exe N/A | | 0 N/A N/A 7860 C+G C:\Windows\explorer.exe N/A | | 0 N/A N/A 8004 C+G ...indows\System32\ShellHost.exe N/A | | 0 N/A N/A 8156 C+G ...0.3405.125\msedgewebview2.exe N/A | | 0 N/A N/A 8852 C+G ..._cw5n1h2txyewy\SearchHost.exe N/A | | 0 N/A N/A 8876 C+G ...y\StartMenuExperienceHost.exe N/A | | 0 N/A N/A 10540 C+G ...0.3405.125\msedgewebview2.exe N/A | | 0 N/A N/A 12380 C+G ...5n1h2txyewy\TextInputHost.exe N/A | | 0 N/A N/A 15340 C+G ...acted\runtime\WeChatAppEx.exe N/A | | 0 N/A N/A 18600 C+G ...ntrolPanel\SystemSettings.exe N/A | +-----------------------------------------------------------------------------------------+ ================================================== ❌ 严重错误发生: Traceback (most recent call last): File "E:\PyTorch_Build\pytorch\diagnostic_test.py", line 116, in <module> check_cuda_toolkit() File "E:\PyTorch_Build\pytorch\diagnostic_test.py", line 21, in check_cuda_toolkit cuda_path = os.environ.get('CUDA_PATH', '未设置') NameError: name 'os' is not defined 按 Enter 键退出... (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 卸载现有版本 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip uninstall -y torch torchvision torchaudio WARNING: Skipping torch as it is not installed. WARNING: Skipping torchvision as it is not installed. WARNING: Skipping torchaudio as it is not installed. 
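The diagnostic script never reaches the PyTorch checks: it dies with NameError: name 'os' is not defined because check_cuda_toolkit() calls os.environ.get('CUDA_PATH', ...) without importing os. A minimal, hedged reconstruction of the fix (only the traceback is known, so the surrounding function body here is assumed):

# diagnostic_test.py (top of file) - add the missing import so os.environ is usable.
import os

def check_cuda_toolkit():
    # This is the call that raised NameError at line 21 of the original script.
    cuda_path = os.environ.get('CUDA_PATH', '未设置')
    print('CUDA_PATH =', cuda_path)
    # The nvcc / nvidia-smi checks from the original script would follow here.

The same missing import also explains why the second run of the script further below fails in exactly the same way.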
(rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装支持 RTX 5070 的预编译版本 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install --pre torch torchvision torchaudio ` >> --index-url https://download.pytorch.org/whl/nightly/cu121 ` >> --no-deps Looking in indexes: https://download.pytorch.org/whl/nightly/cu121 Collecting torch Using cached https://download.pytorch.org/whl/nightly/cu121/torch-2.6.0.dev20241112%2Bcu121-cp310-cp310-win_amd64.whl (2456.2 MB) Collecting torchvision Using cached https://download.pytorch.org/whl/nightly/cu121/torchvision-0.20.0.dev20241112%2Bcu121-cp310-cp310-win_amd64.whl (6.2 MB) Collecting torchaudio Using cached https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.5.0.dev20241112%2Bcu121-cp310-cp310-win_amd64.whl (4.2 MB) Installing collected packages: torchaudio, torchvision, torch Successfully installed torch-2.6.0.dev20241112+cu121 torchaudio-2.5.0.dev20241112+cu121 torchvision-0.20.0.dev20241112+cu121 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装必要依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install pyyaml numpy typing_extensions mkl mkl-include intel-openmp Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pyyaml in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (6.0.2) Requirement already satisfied: numpy in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2.2.6) Requirement already satisfied: typing_extensions in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (4.15.0) Requirement already satisfied: mkl in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0) Requirement already satisfied: mkl-include in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0) Requirement already satisfied: intel-openmp in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.1) Requirement already satisfied: tbb==2022.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from mkl) (2022.2.0) Requirement already satisfied: intel-cmplr-lib-ur==2025.2.1 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-openmp) (2025.2.1) Requirement already satisfied: umf==0.11.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-cmplr-lib-ur==2025.2.1->intel-openmp) (0.11.0) Requirement already satisfied: tcmlib==1.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from tbb==2022.*->mkl) (1.4.0) (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 执行诊断测试 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python diagnostic_test.py ================================================== CUDA Toolkit 验证: ✅ NVCC 版本: nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2025 NVIDIA Corporation Built on Wed_Jul_16_20:06:48_Pacific_Daylight_Time_2025 Cuda compilation tools, release 13.0, V13.0.48 Build cuda_13.0.r13.0/compiler.36260728_0 ✅ NVIDIA-SMI 输出: Mon Sep 1 20:55:52 2025 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 580.97 Driver Version: 580.97 CUDA Version: 13.0 | +-----------------------------------------+------------------------+----------------------+ | GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 5070 WDDM | 00000000:01:00.0 On | N/A | | 0% 35C P3 19W / 250W | 1346MiB / 12227MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | 0 N/A N/A 1124 C+G ...yb3d8bbwe\WindowsTerminal.exe N/A | | 0 N/A N/A 1288 C+G ...les\Tencent\Weixin\Weixin.exe N/A | | 0 N/A N/A 1776 C+G C:\Windows\System32\dwm.exe N/A | | 0 N/A N/A 2272 C+G ...t\Edge\Application\msedge.exe N/A | | 0 N/A N/A 3268 C+G ...em32\ApplicationFrameHost.exe N/A | | 0 N/A N/A 7860 C+G C:\Windows\explorer.exe N/A | | 0 N/A N/A 8004 C+G ...indows\System32\ShellHost.exe N/A | | 0 N/A N/A 8156 C+G ...0.3405.125\msedgewebview2.exe N/A | | 0 N/A N/A 8852 C+G ..._cw5n1h2txyewy\SearchHost.exe N/A | | 0 N/A N/A 8876 C+G ...y\StartMenuExperienceHost.exe N/A | | 0 N/A N/A 10540 C+G ...0.3405.125\msedgewebview2.exe N/A | | 0 N/A N/A 12380 C+G ...5n1h2txyewy\TextInputHost.exe N/A | | 0 N/A N/A 15340 C+G ...acted\runtime\WeChatAppEx.exe N/A | | 0 N/A N/A 18600 C+G ...ntrolPanel\SystemSettings.exe N/A | +-----------------------------------------------------------------------------------------+ ================================================== ❌ 严重错误发生: Traceback (most recent call last): File "E:\PyTorch_Build\pytorch\diagnostic_test.py", line 116, in <module> check_cuda_toolkit() File "E:\PyTorch_Build\pytorch\diagnostic_test.py", line 21, in check_cuda_toolkit cuda_path = os.environ.get('CUDA_PATH', '未设置') NameError: name 'os' is not defined 按 Enter 键退出... (rtx5070_env) PS E:\PyTorch_Build\pytorch>
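Beyond the missing import, it is worth verifying that the freshly installed cu121 nightly wheels can actually run on the RTX 5070. This card reports compute capability 12.x (Blackwell), and my assumption (to be confirmed against the official wheel matrix) is that cu121 builds do not ship sm_120 kernels, so a newer CUDA index such as the cu128 nightlies may be needed. A quick check with the installed torch:

# verify_torch_gpu.py - hedged sketch: confirm the installed wheel supports this GPU.
import torch

print("torch:", torch.__version__, "built for CUDA", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
print("compiled arch list:", torch.cuda.get_arch_list())  # 'sm_120' should appear for an RTX 5070

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0),
          "capability:", torch.cuda.get_device_capability(0))

If sm_120 is absent from the arch list, the wheel was not compiled for this GPU and CUDA kernels will typically fail at run time with a "no kernel image is available" error, regardless of the driver version.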