Using for_each, any_of, all_of, and none_of in C++

This article uses a short C++ program to show how to traverse an integer array with the STL for_each algorithm and print each element. A range-based for loop is shown alongside it for comparison, so readers can see both traversal styles side by side.
#include <algorithm>
#include <iostream>
#include <iterator>   // std::begin / std::end for raw arrays

int main()
{
    int intArr[] = { 1, 2, 3, 4, 5 };

    // Traverse with std::for_each and a lambda
    std::cout << "std::for_each ......" << std::endl;
    std::for_each(std::begin(intArr), std::end(intArr),
        [](int i) { std::cout << i << std::endl; });

    // Equivalent traversal with a range-based for loop
    std::cout << "for(const auto& i : intArr) ......" << std::endl;
    for (const auto& i : intArr)
        std::cout << i << std::endl;

    return 0;
}
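
The title also names std::any_of, std::all_of, and std::none_of, which the original example does not demonstrate. A minimal sketch (not from the original post) applying those three algorithms to the same array with an "is even" predicate could look like this:

#include <algorithm>
#include <iostream>
#include <iterator>

int main()
{
    int intArr[] = { 1, 2, 3, 4, 5 };
    auto isEven = [](int i) { return i % 2 == 0; };

    std::cout << std::boolalpha;

    // any_of: true if at least one element satisfies the predicate
    std::cout << "any_of even:  "
              << std::any_of(std::begin(intArr), std::end(intArr), isEven) << std::endl;

    // all_of: true only if every element satisfies the predicate
    std::cout << "all_of even:  "
              << std::all_of(std::begin(intArr), std::end(intArr), isEven) << std::endl;

    // none_of: true only if no element satisfies the predicate
    std::cout << "none_of even: "
              << std::none_of(std::begin(intArr), std::end(intArr), isEven) << std::endl;

    return 0;
}

For { 1, 2, 3, 4, 5 } this should print true, false, and false respectively: the array contains some even values, but not all values are even, and it is not the case that none are.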

PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> # 1. 激活虚拟环境 PS E:\PyTorch_Build\pytorch> .\pytorch_env\Scripts\activate (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 2. 修复conda路径(执行一次即可) (pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPath = "${env:USERPROFILE}\miniconda3\Scripts" (pytorch_env) PS E:\PyTorch_Build\pytorch> $env:PATH += ";$condaPath" (pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine") (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 3. 验证修复 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda --version # 应显示conda版本 conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> # 1. 安装正确版本的MKL (pytorch_env) PS E:\PyTorch_Build\pytorch> pip uninstall -y mkl-static mkl-include Found existing installation: mkl-static 2024.1.0 Uninstalling mkl-static-2024.1.0: Successfully uninstalled mkl-static-2024.1.0 Found existing installation: mkl-include 2024.1.0 Uninstalling mkl-include-2024.1.0: Successfully uninstalled mkl-include-2024.1.0 (pytorch_env) PS E:\PyTorch_Build\pytorch> pip install mkl-static==2024.1 mkl-include==2024.1 Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting mkl-static==2024.1 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/d8/f0/3b9976df82906d8f3244213b6d8beb67cda19ab5b0645eb199da3c826127/mkl_static-2024.1.0-py2.py3-none-win_amd64.whl (220.8 MB) Collecting mkl-include==2024.1 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/06/1b/f05201146f7f12bf871fa2c62096904317447846b5d23f3560a89b4bbaae/mkl_include-2024.1.0-py2.py3-none-win_amd64.whl (1.3 MB) Requirement already satisfied: intel-openmp==2024.* in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from mkl-static==2024.1) (2024.2.1) Requirement already satisfied: tbb-devel==2021.* in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from mkl-static==2024.1) (2021.13.1) Requirement already satisfied: intel-cmplr-lib-ur==2024.2.1 in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from intel-openmp==2024.*->mkl-static==2024.1) (2024.2.1) Requirement already satisfied: tbb==2021.13.1 in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from tbb-devel==2021.*->mkl-static==2024.1) (2021.13.1) Installing collected packages: mkl-include, mkl-static Successfully installed mkl-include-2024.1.0 mkl-static-2024.1.0 (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 2. 安装libuv (pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -c conda-forge libuv=1.46 conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 3. 安装OpenSSL (pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -c conda-forge openssl=3.1 conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 4. 
验证安装 (pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import mkl; print('MKL版本:', mkl.__version__)" Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'mkl' (pytorch_env) PS E:\PyTorch_Build\pytorch> conda list | Select-String "libuv|openssl" conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> # 验证所有关键组件 (pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import mkl; print('✓ MKL已安装')" Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'mkl' (pytorch_env) PS E:\PyTorch_Build\pytorch> conda list | Select-String "libuv|openssl" conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> dir "E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin\cudnn*" (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 验证环境变量 (pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import os; print('环境变量检查:'); >> print('CUDNN_PATH:', os.getenv('CUDA_PATH')); >> print('CONDA_PREFIX:', os.getenv('CONDA_PREFIX'))" 环境变量检查: CUDNN_PATH: E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0 CONDA_PREFIX: None (pytorch_env) PS E:\PyTorch_Build\pytorch> # 清理并重建 (pytorch_env) PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force build (pytorch_env) PS E:\PyTorch_Build\pytorch> python setup.py install Building wheel torch-2.9.0a0+git2d31c3d -- Building version 2.9.0a0+git2d31c3d E:\PyTorch_Build\pytorch\pytorch_env\lib\site-packages\setuptools\_distutils\_msvccompiler.py:12: UserWarning: _get_vc_env is private; find an alternative (pypa/distutils#340) warnings.warn( -- Checkout nccl release tag: v2.27.5-1 cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=E:\PyTorch_Build\pytorch\torch -DCMAKE_PREFIX_PATH=E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages -DPython_EXECUTABLE=E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe -DTORCH_BUILD_VERSION=2.9.0a0+git2d31c3d -DUSE_NUMPY=True E:\PyTorch_Build\pytorch CMake Deprecation Warning at CMakeLists.txt:18 (cmake_policy): The OLD behavior for policy CMP0126 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. 
-- The CXX compiler identification is MSVC 19.44.35215.0 -- The C compiler identification is MSVC 19.44.35215.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found CMake Warning at CMakeLists.txt:425 (message): TensorPipe cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:427 (message): KleidiAI cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:439 (message): Libuv is not installed in current conda env. Set USE_DISTRIBUTED to OFF. Please run command 'conda install -c conda-forge libuv=1.39' to install libuv. -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Compiler does not support SVE extension. Will not build perfkernels. CMake Warning at CMakeLists.txt:845 (message): x64 operating system is required for FBGEMM. Not compiling with FBGEMM. Turn this warning off by USE_FBGEMM=OFF. 
-- Performing Test HAS/UTF_8 -- Performing Test HAS/UTF_8 - Success -- Found CUDA: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 (found version "13.0") -- The CUDA compiler identification is NVIDIA 13.0.48 with host compiler MSVC 19.44.35215.0 -- Detecting CUDA compiler ABI info -- Detecting CUDA compiler ABI info - done -- Check for working CUDA compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe - skipped -- Detecting CUDA compile features -- Detecting CUDA compile features - done -- Found CUDAToolkit: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include (found version "13.0.48") -- PyTorch: CUDA detected: 13.0 -- PyTorch: CUDA nvcc is: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- PyTorch: CUDA toolkit directory: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- PyTorch: Header version is: 13.0 -- Found Python: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter CMake Warning at cmake/public/cuda.cmake:140 (message): Failed to compute shorthash for libnvrtc.so Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDNN (missing: CUDNN_LIBRARY_PATH CUDNN_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:201 (message): Cannot find cuDNN library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUSPARSELT (missing: CUSPARSELT_LIBRARY_PATH CUSPARSELT_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:226 (message): Cannot find cuSPARSELt library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDSS (missing: CUDSS_LIBRARY_PATH CUDSS_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:242 (message): Cannot find CUDSS library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- USE_CUFILE is set to 0. Compiling without cuFile support -- Autodetected CUDA architecture(s): 12.0 CMake Warning at cmake/public/cuda.cmake:317 (message): pytorch is not compatible with `CMAKE_CUDA_ARCHITECTURES` and will ignore its value. Please configure `TORCH_CUDA_ARCH_LIST` instead. Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Added CUDA NVCC flags for: -gencode;arch=compute_120,code=sm_120 CMake Warning at cmake/Dependencies.cmake:95 (message): Not compiling with XPU. Could NOT find SYCL. Suppress this warning with -DUSE_XPU=OFF. Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Building using own protobuf under third_party per request. -- Use custom protobuf build. CMake Warning at cmake/ProtoBuf.cmake:37 (message): Ancient protobuf forces CMake compatibility Call Stack (most recent call first): cmake/ProtoBuf.cmake:87 (custom_protobuf_find) cmake/Dependencies.cmake:107 (include) CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. 
-- -- 3.13.0.0 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - not found -- Found Threads: TRUE -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:E:/PyTorch_Build/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- Trying to find preferred BLAS backend of choice: MKL -- MKL_THREADING = OMP -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- MKL_THREADING = OMP CMake Warning at cmake/Dependencies.cmake:213 (message): MKL could not be found. Defaulting to Eigen Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Warning at cmake/Dependencies.cmake:279 (message): Preferred BLAS (MKL) cannot be found, now searching for a general BLAS library Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Looking for sgemm_ -- Looking for sgemm_ - not found -- Cannot 
find a library with BLAS API. Not using BLAS. -- Using pocketfft in directory: E:/PyTorch_Build/pytorch/third_party/pocketfft/ CMake Deprecation Warning at third_party/pthreadpool/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/FXdiv/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/cpuinfo/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- The ASM compiler identification is MSVC CMake Warning (dev) at pytorch_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineASMCompiler.cmake:234 (message): Policy CMP194 is not set: MSVC is not an assembler for language ASM. Run "cmake --help-policy CMP194" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Call Stack (most recent call first): third_party/XNNPACK/CMakeLists.txt:18 (PROJECT) This warning is for project developers. Use -Wno-dev to suppress it. -- Found assembler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Building for XNNPACK_TARGET_PROCESSOR: x86_64 -- Generating microkernels.cmake Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avx256vnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c (1th function) Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-scalar.c No microkernel found in src\reference\binary-elementwise.cc No microkernel found in src\reference\packing.cc No microkernel found in src\reference\unary-elementwise.cc -- Found Git: E:/Program Files/Git/cmd/git.exe (found version "2.51.0.windows.1") -- Google Benchmark version: v1.9.3, normalized to 1.9.3 -- Looking for shm_open in rt -- Looking for shm_open in rt - not found -- Performing Test HAVE_CXX_FLAG_WX -- Performing Test HAVE_CXX_FLAG_WX - Success -- Compiling and running to test HAVE_STD_REGEX -- Performing Test HAVE_STD_REGEX -- success -- Compiling and running to test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_STEADY_CLOCK -- Performing Test HAVE_STEADY_CLOCK -- success -- Compiling and running to test HAVE_PTHREAD_AFFINITY -- Performing Test HAVE_PTHREAD_AFFINITY -- failed to compile CMake Deprecation Warning at third_party/ittapi/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning at cmake/Dependencies.cmake:749 (message): FP16 is only cmake-2.8 compatible Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/FP16/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/psimd/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- Using third party subdirectory Eigen. -- Found Python: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter Development.Module missing components: NumPy CMake Warning at cmake/Dependencies.cmake:826 (message): NumPy could not be found. Not building with NumPy. Suppress this warning with -DUSE_NUMPY=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Using third_party/pybind11. -- pybind11 include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/pybind11/include -- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS) -- Using third_party/opentelemetry-cpp. -- opentelemetry api include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/opentelemetry-cpp/api/include -- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) -- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) -- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) CMake Warning at cmake/Dependencies.cmake:894 (message): Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- Found OpenMP_C: -openmp:experimental -- Found OpenMP_CXX: -openmp:experimental -- Found OpenMP: TRUE -- Adding OpenMP CXX_FLAGS: -openmp:experimental -- Will link against OpenMP libraries: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib -- Found nvtx3: E:/PyTorch_Build/pytorch/third_party/NVTX/c/include -- ROCM_PATH environment variable is not set and C:/opt/rocm does not exist. Building without ROCm support. 
-- Found Python3: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter -- ONNX_PROTOC_EXECUTABLE: $<TARGET_FILE:protobuf::protoc> -- Protobuf_VERSION: Protobuf_VERSION_NOTFOUND Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto -- -- ******** Summary ******** -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/pytorch_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler version : 19.44.35215.0 -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL /EHsc /wd26812 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1 -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- CMAKE_MODULE_PATH : E:/PyTorch_Build/pytorch/cmake/Modules;E:/PyTorch_Build/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.18.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_USE_LITE_PROTO : OFF -- USE_PROTOBUF_SHARED_LIBS : OFF -- ONNX_DISABLE_EXCEPTIONS : OFF -- ONNX_DISABLE_STATIC_REGISTRATION : OFF -- ONNX_WERROR : OFF -- ONNX_BUILD_TESTS : OFF -- BUILD_SHARED_LIBS : OFF -- -- Protobuf compiler : $<TARGET_FILE:protobuf::protoc> -- Protobuf includes : -- Protobuf libraries : -- ONNX_BUILD_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Adding -DNDEBUG to compile flags -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - False -- MAGMA not found. Compiling without MAGMA support -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. 
-- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Cannot find a library with BLAS API. Not using BLAS. -- LAPACK requires BLAS -- Cannot find a library with LAPACK API. Not using LAPACK. disabling ROCM because NOT USE_ROCM is set -- MIOpen not found. Compiling without MIOpen support disabling MKLDNN because USE_MKLDNN is not set -- {fmt} version: 11.2.0 -- Build type: Release -- Using Kineto with CUPTI support -- Configuring Kineto dependency: -- KINETO_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto -- KINETO_BUILD_TESTS = OFF -- KINETO_LIBRARY_TYPE = static -- CUDA_SOURCE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CUDA_INCLUDE_DIRS = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- CUDA_cupti_LIBRARY = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/lib64/cupti.lib -- Found CUPTI CMake Deprecation Warning at third_party/kineto/libkineto/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning (dev) at third_party/kineto/libkineto/CMakeLists.txt:15 (find_package): Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules are removed. Run "cmake --help-policy CMP0148" for policy details. Use the cmake_policy command to set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it. -- Found PythonInterp: E:/PyTorch_Build/pytorch/pytorch_env/Scripts/python.exe (found version "3.10.10") -- ROCM_SOURCE_DIR = -- Kineto: FMT_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt -- Kineto: FMT_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt/include -- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include -- ROCTRACER_INCLUDE_DIR = /include/roctracer -- DYNOLOG_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog/ -- IPCFABRIC_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/ -- Configured Kineto -- Performing Test HAS/WD4624 -- Performing Test HAS/WD4624 - Success -- Performing Test HAS/WD4068 -- Performing Test HAS/WD4068 - Success -- Performing Test HAS/WD4067 -- Performing Test HAS/WD4067 - Success -- Performing Test HAS/WD4267 -- Performing Test HAS/WD4267 - Success -- Performing Test HAS/WD4661 -- Performing Test HAS/WD4661 - Success -- Performing Test HAS/WD4717 -- Performing Test HAS/WD4717 - Success -- Performing Test HAS/WD4244 -- Performing Test HAS/WD4244 - Success -- Performing Test HAS/WD4804 -- Performing Test HAS/WD4804 - Success -- Performing Test HAS/WD4273 -- Performing Test HAS/WD4273 - Success -- Performing Test HAS_WNO_STRINGOP_OVERFLOW -- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed -- -- Architecture: x64 -- Use the C++ compiler to compile (MI_USE_CXX=ON) -- -- Library name : mimalloc -- Version : 2.2.4 -- Build type : release -- C++ Compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Compiler flags : /Zc:__cplusplus -- Compiler defines : MI_CMAKE_BUILD_TYPE=release;MI_BUILD_RELEASE -- Link libraries : psapi;shell32;user32;advapi32;bcrypt -- Build targets : static -- CMake Error at CMakeLists.txt:1264 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch/headeronly does not contain a CMakeLists.txt file. 
-- don't use NUMA -- Looking for backtrace -- Looking for backtrace - not found -- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR) -- Autodetected CUDA architecture(s): 12.0 -- Autodetected CUDA architecture(s): 12.0 -- Autodetected CUDA architecture(s): 12.0 -- headers outputs: torch\csrc\inductor\aoti_torch\generated\c_shim_cpu.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_aten.h not found torch\csrc\inductor\aoti_torch\generated\c_shim_cuda.h not found -- sources outputs: -- declarations_yaml outputs: -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed -- Using ATen parallel backend: OMP -- Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY OPENSSL_INCLUDE_DIR) -- Check size of long double -- Check size of long double - done -- Performing Test COMPILER_SUPPORTS_FLOAT128 -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed -- Performing Test COMPILER_SUPPORTS_SSE2 -- Performing Test COMPILER_SUPPORTS_SSE2 - Success -- Performing Test COMPILER_SUPPORTS_SSE4 -- Performing Test COMPILER_SUPPORTS_SSE4 - Success -- Performing Test COMPILER_SUPPORTS_AVX -- Performing Test COMPILER_SUPPORTS_AVX - Success -- Performing Test COMPILER_SUPPORTS_FMA4 -- Performing Test COMPILER_SUPPORTS_FMA4 - Success -- Performing Test COMPILER_SUPPORTS_AVX2 -- Performing Test COMPILER_SUPPORTS_AVX2 - Success -- Performing Test COMPILER_SUPPORTS_AVX512F -- Performing Test COMPILER_SUPPORTS_AVX512F - Success -- Found OpenMP_C: -openmp:experimental (found version "2.0") -- Found OpenMP_CXX: -openmp:experimental (found version "2.0") -- Found OpenMP_CUDA: -openmp (found version "2.0") -- Found OpenMP: TRUE (found version "2.0") -- Performing Test COMPILER_SUPPORTS_OPENMP -- Performing Test COMPILER_SUPPORTS_OPENMP - Success -- Performing Test COMPILER_SUPPORTS_OMP_SIMD -- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed -- Configuring build for SLEEF-v3.8.0 Target system: Windows-10.0.26100 Target processor: AMD64 Host system: Windows-10.0.26100 Host processor: AMD64 Detected C compiler: MSVC @ C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe CMake: 4.1.0 Make program: E:/PyTorch_Build/pytorch/pytorch_env/Scripts/ninja.exe -- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef -- Building shared libs : OFF -- Building static test bins: OFF -- MPFR : LIB_MPFR-NOTFOUND -- GMP : LIBGMP-NOTFOUND -- RT : -- FFTW3 : LIBFFTW3-NOTFOUND -- OPENSSL : -- SDE : SDE_COMMAND-NOTFOUND -- COMPILER_SUPPORTS_OPENMP : FALSE AT_INSTALL_INCLUDE_DIR include/ATen/core core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/aten_interned_strings.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/enum_tag.h core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/TensorBody.h CMake Error: File E:/PyTorch_Build/pytorch/torch/_utils_internal.py does not exist. 
CMake Error at caffe2/CMakeLists.txt:241 (configure_file): configure_file Problem configuring file CMake Error: File E:/PyTorch_Build/pytorch/torch/csrc/api/include/torch/version.h.in does not exist. CMake Error at caffe2/CMakeLists.txt:246 (configure_file): configure_file Problem configuring file -- NVSHMEM not found, not building with NVSHMEM support. CMake Error at caffe2/CMakeLists.txt:1398 (add_subdirectory): The source directory E:/PyTorch_Build/pytorch/torch does not contain a CMakeLists.txt file. CMake Warning at CMakeLists.txt:1285 (message): Generated cmake files are only fully tested if one builds with system glog, gflags, and protobuf. Other settings may generate files that are not well tested. -- -- ******** Summary ******** -- General: -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/pytorch_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler id : MSVC -- C++ compiler version : 19.44.35215.0 -- Using ccache if found : OFF -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 -- Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;EXPORT_AOTI_FUNCTIONS;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- USE_GOLD_LINKER : OFF -- -- TORCH_VERSION : 2.9.0 -- BUILD_STATIC_RUNTIME_BENCHMARK: OFF -- BUILD_BINARY : OFF -- BUILD_CUSTOM_PROTOBUF : ON -- Link local protobuf : ON -- BUILD_PYTHON : True -- Python version : 3.10.10 -- Python executable : E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe -- Python library : E:/Python310/libs/python310.lib -- Python includes : E:/Python310/Include -- Python site-package : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages -- BUILD_SHARED_LIBS : ON -- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF -- BUILD_TEST : True -- BUILD_JNI : OFF -- BUILD_MOBILE_AUTOGRAD : OFF -- BUILD_LITE_INTERPRETER: OFF -- INTERN_BUILD_MOBILE : -- TRACING_BASED : OFF -- USE_BLAS : 0 -- USE_LAPACK : 0 -- USE_ASAN : OFF -- USE_TSAN : OFF -- USE_CPP_CODE_COVERAGE : OFF -- USE_CUDA : ON -- CUDA static link : OFF -- USE_CUDNN : OFF -- USE_CUSPARSELT : OFF -- USE_CUDSS : OFF -- USE_CUFILE : OFF -- CUDA version : 13.0 -- USE_FLASH_ATTENTION : OFF -- USE_MEM_EFF_ATTENTION : ON -- CUDA root directory : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CUDA library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cuda.lib -- cudart library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart.lib -- cublas library : E:/Program Files/NVIDIA GPU Computing 
Toolkit/CUDA/v13.0/lib/x64/cublas.lib -- cufft library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cufft.lib -- curand library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/curand.lib -- cusparse library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cusparse.lib -- nvrtc : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/nvrtc.lib -- CUDA include path : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include -- NVCC executable : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- CUDA compiler : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- CUDA flags : -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -Xcompiler /Zc:__cplusplus -Xcompiler /w -w -Xcompiler /FS -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_120,code=sm_120 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -- CUDA host compiler : -- CUDA --device-c : OFF -- USE_TENSORRT : -- USE_XPU : OFF -- USE_ROCM : OFF -- BUILD_NVFUSER : -- USE_EIGEN_FOR_BLAS : ON -- USE_EIGEN_FOR_SPARSE : OFF -- USE_FBGEMM : OFF -- USE_KINETO : ON -- USE_GFLAGS : OFF -- USE_GLOG : OFF -- USE_LITE_PROTO : OFF -- USE_PYTORCH_METAL : OFF -- USE_PYTORCH_METAL_EXPORT : OFF -- USE_MPS : OFF -- CAN_COMPILE_METAL : -- USE_MKL : OFF -- USE_MKLDNN : OFF -- USE_UCC : OFF -- USE_ITT : ON -- USE_XCCL : OFF -- USE_NCCL : OFF -- Found NVSHMEM : -- USE_NNPACK : OFF -- USE_NUMPY : OFF -- USE_OBSERVERS : ON -- USE_OPENCL : OFF -- USE_OPENMP : ON -- USE_MIMALLOC : ON -- USE_MIMALLOC_ON_MKL : OFF -- USE_VULKAN : OFF -- USE_PROF : OFF -- USE_PYTORCH_QNNPACK : OFF -- USE_XNNPACK : ON -- USE_DISTRIBUTED : OFF -- Public Dependencies : -- Private Dependencies : Threads::Threads;pthreadpool;cpuinfo;XNNPACK;microkernels-prod;ittnotify;fp16;caffe2::openmp;fmt::fmt-header-only;kineto -- Public CUDA Deps. : -- Private CUDA Deps. : caffe2::curand;caffe2::cufft;caffe2::cublas;fmt::fmt-header-only;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart_static.lib;CUDA::cusparse;CUDA::cufft;CUDA::cusolver;ATEN_CUDA_FILES_GEN_LIB -- USE_COREML_DELEGATE : OFF -- BUILD_LAZY_TS_BACKEND : ON -- USE_ROCM_KERNEL_ASSERT : OFF -- Performing Test HAS_WMISSING_PROTOTYPES -- Performing Test HAS_WMISSING_PROTOTYPES - Failed -- Performing Test HAS_WERROR_MISSING_PROTOTYPES -- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed -- Configuring incomplete, errors occurred! 
(pytorch_env) PS E:\PyTorch_Build\pytorch> # 永久修复conda命令不可用问题 (pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPaths = @( >> "$env:USERPROFILE\miniconda3\Scripts", >> "$env:USERPROFILE\anaconda3\Scripts", >> "C:\ProgramData\miniconda3\Scripts" >> ) (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> foreach ($path in $condaPaths) { >> if (Test-Path $path) { >> $env:PATH = "$path;$env:PATH" >> [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine") >> break >> } >> } (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 验证修复 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda --version conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> # 设置 cuDNN v9.12 路径 (pytorch_env) PS E:\PyTorch_Build\pytorch> $cudnnPath = "E:\Program Files\NVIDIA\CUNND\v9.12" (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 添加到环境变量 (pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_ROOT_DIR = $cudnnPath (pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_INCLUDE_DIR = "$cudnnPath\include" (pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_LIBRARY = "$cudnnPath\lib\x64\cudnn.lib" (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 永久生效 (pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_ROOT_DIR", $cudnnPath, "Machine") (pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_INCLUDE_DIR", "$cudnnPath\include", "Machine") (pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_LIBRARY", "$cudnnPath\lib\x64\cudnn.lib", "Machine") (pytorch_env) PS E:\PyTorch_Build\pytorch> # 原始代码大约在 190 行左右 (pytorch_env) PS E:\PyTorch_Build\pytorch> # 替换为以下内容强制使用 v9.12: (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_VERSION "9.12.0") # 手动指定版本 CUDNN_VERSION: The term 'CUDNN_VERSION' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_FOUND TRUE) CUDNN_FOUND: The term 'CUDNN_FOUND' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_INCLUDE_DIR $ENV{CUDNN_INCLUDE_DIR}) InvalidOperation: The variable '$ENV' cannot be retrieved because it has not been set. (pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_LIBRARY $ENV{CUDNN_LIBRARY}) InvalidOperation: The variable '$ENV' cannot be retrieved because it has not been set. (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS "Using manually configured cuDNN v${CUDNN_VERSION}") InvalidOperation: The variable '$CUDNN_VERSION' cannot be retrieved because it has not been set. (pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS " Include path: ${CUDNN_INCLUDE_DIR}") InvalidOperation: The variable '$CUDNN_INCLUDE_DIR' cannot be retrieved because it has not been set. 
(pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS " Library path: ${CUDNN_LIBRARY}") InvalidOperation: The variable '$CUDNN_LIBRARY' cannot be retrieved because it has not been set. (pytorch_env) PS E:\PyTorch_Build\pytorch> # 精确查找 conda.bat (pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPath = Get-ChildItem -Path C:\ -Recurse -Filter conda.bat -ErrorAction SilentlyContinue | >> Select-Object -First 1 | >> ForEach-Object { $_.DirectoryName } (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> if ($condaPath) { >> $env:PATH = "$condaPath;$env:PATH" >> [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine") >> Write-Host "Conda found at: $condaPath" -ForegroundColor Green >> } else { >> Write-Host "Conda not found! Installing miniconda..." -ForegroundColor Yellow >> # 自动安装 miniconda >> Invoke-WebRequest -Uri "https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe" -OutFile "$env:TEMP\miniconda.exe" >> Start-Process -FilePath "$env:TEMP\miniconda.exe" -ArgumentList "/S", "/AddToPath=1", "/InstallationType=AllUsers", "/D=C:\Miniconda3" -Wait >> $env:PATH = "C:\Miniconda3\Scripts;$env:PATH" >> } Conda not found! Installing miniconda... /AddToPath=1 is disabled and ignored in 'All Users' installations Welcome to Miniconda3 py313_25.7.0-2 By continuing this installation you are accepting this license agreement: C:\Miniconda3\EULA.txt Please run the installer in GUI mode to read the details. Miniconda3 will now be installed into this location: C:\Miniconda3 Unpacking payload... Setting up the package cache... Setting up the base environment... Installing packages for base, creating shortcuts if necessary... Initializing conda directories... Setting installation directory permissions... Done! (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch>
PowerShell 7 环境已加载 (版本: 7.5.2) PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> python -m venv rtx5070_env PS E:\PyTorch_Build\pytorch> .\rtx5070_env\Scripts\activate (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 查找最新 Windows SDK 库路径 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $sdkLibPath = @( >> "${env:ProgramFiles(x86)}\Windows Kits\10\Lib", >> "${env:ProgramFiles}\Windows Kits\10\Lib" >> ) | Where-Object { Test-Path $_ } | Select-Object -First 1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 获取最新版本路径 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $latestSdkVersion = Get-ChildItem -Path $sdkLibPath | >> Where-Object { $_.Name -match '^\d+\.\d+' } | >> Sort-Object { [version]$_.Name } -Descending | >> Select-Object -First 1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> if ($latestSdkVersion) { >> $umPath = Join-Path $latestSdkVersion.FullName "um\x64" >> $ucrtPath = Join-Path $latestSdkVersion.FullName "ucrt\x64" >> >> # 添加到环境变量 >> $env:LIB = "$umPath;$ucrtPath;$env:LIB" >> $env:INCLUDE = "$(Join-Path $latestSdkVersion.FullName 'um');$(Join-Path $latestSdkVersion.FullName 'ucrt');$env:INCLUDE" >> >> Write-Host "已添加 LIB 路径: $umPath, $ucrtPath" >> Write-Host "已添加 INCLUDE 路径: $(Join-Path $latestSdkVersion.FullName 'um')" >> } else { >> Write-Host "错误: 未找到 Windows SDK 库路径" -ForegroundColor Red >> } 已添加 LIB 路径: C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\x64, C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\ucrt\x64 已添加 INCLUDE 路径: C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证关键工具可用性 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $tools = @("cl.exe", "link.exe", "rc.exe", "mt.exe", "nvcc.exe") (rtx5070_env) PS E:\PyTorch_Build\pytorch> $missingTools = @() (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> foreach ($tool in $tools) { >> if (-not (Get-Command $tool -ErrorAction SilentlyContinue)) { >> $missingTools += $tool >> } >> } (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> if ($missingTools.Count -gt 0) { >> Write-Host "缺失工具: $($missingTools -join ', ')" -ForegroundColor Red >> Write-Host "当前 PATH: $env:PATH" >> } else { >> Write-Host "所有必要工具已配置" -ForegroundColor Green >> } 缺失工具: cl.exe, link.exe, rc.exe, mt.exe 当前 PATH: E:\PyTorch_Build\pytorch\rtx5070_env\Scripts;C:\Program Files\PowerShell\7;C:\Miniconda3\condabin;C:\Miniconda3\Scripts;E:\PyTorch_Build\pytorch\pytorch_env\Scripts;C:\Program Files\PowerShell\7;E:\PyTorch_Build\pytorch\pytorch_env\Scripts;C:\Program Files\PowerShell\7;E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin\x64;E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin;E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin\x64;;E:\PyTorch_Build\pytorch\build\lib.win-amd64-cpython-310\torch\lib;E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin;E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin;C:\Users\Administrator\AppData\Local\Microsoft\dotnet;C:\Users\Administrator\AppData\Local\Microsoft\dotnet;C:\Users\Administrator\AppData\Local\Microsoft\dotnet\;C:\Program Files\dotnet;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;E:\Python310;C:\Program 
Files\dotnet\;C:\Users\Administrator\AppData\Local\Microsoft\WindowsApps;C:\Users\Administrator\.dotnet\tools;C:\Users\Administrator\AppData\Local\Microsoft\WindowsApps;C:\Users\Administrator\.dotnet\tools;E:\Python310\Scripts;E:\Python310\Scripts;C:\Program Files\PowerShell\7\;E:\Program Files\Microsoft VS Code\bin;E:\Program Files\Git\cmd;C:\Program Files\NVIDIA Corporation\Nsight Compute 2025.3.0\;E:\Program Files\CMake\bin;C:\Program Files\Microsoft SQL Server\150\Tools\Binn\;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\;C:\Program Files (x86)\Incredibuild;E:\PyTorch_Build\pytorch\build\lib.win-amd64-cpython-310\torch\lib;E:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\;C:\ProgramData\chocolatey\bin;E:\Program Files\Rust\.cargo\bin;C:\Users\Administrator\AppData\Local\Microsoft\WindowsApps;C:\Users\Administrator\.dotnet\tools;C:\Users\Administrator\miniconda3\Scripts;E:\Program Files\Rust\.cargo\bin;C:\Users\Administrator\AppData\Local\Microsoft\WindowsApps;C:\Users\Administrator\.dotnet\tools;C:\Users\Administrator\miniconda3\Scripts;E:\Program Files\Rust\.cargo\bin;C:\Users\Administrator\AppData\Local\Microsoft\WindowsApps;C:\Users\Administrator\.dotnet\tools;C:\ProgramData\mingw64\mingw64\bin (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 获取 SDK 版本 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $sdkVersion = Get-ChildItem -Path "${env:ProgramFiles(x86)}\Windows Kits\10\Include" | >> Where-Object { $_.PSIsContainer -and $_.Name -match '^\d+\.\d+\.\d+' } | >> Sort-Object { [version]$_.Name } -Descending | >> Select-Object -First 1 -ExpandProperty Name (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> $cmakeArgs = @( >> "-G Ninja", >> "-DCMAKE_BUILD_TYPE=Release", >> "-DCMAKE_C_COMPILER=cl.exe", >> "-DCMAKE_CXX_COMPILER=cl.exe", >> "-DCMAKE_CUDA_COMPILER=E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe", >> "-DCMAKE_CUDA_HOST_COMPILER=cl.exe", >> "-DCMAKE_SYSTEM_VERSION=$sdkVersion", # 关键: 指定 SDK 版本 >> "-DCUDA_NVCC_FLAGS=-Xcompiler /wd4819 -gencode arch=compute_89,code=sm_89", >> "-DTORCH_CUDA_ARCH_LIST=8.9", >> "-DUSE_CUDA=ON", >> "-DUSE_NCCL=OFF", >> "-DUSE_DISTRIBUTED=OFF", >> "-DBUILD_TESTING=OFF", >> "-DBLAS=OpenBLAS", >> "-DCUDA_TOOLKIT_ROOT_DIR=E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0", >> "-DCUDNN_ROOT_DIR=E:/Program Files/NVIDIA/CUDNN/v9.12", >> "-DPYTHON_EXECUTABLE=$((Get-Command python).Source)", >> # 显式指定库路径 >> "-DCMAKE_LIBRARY_PATH=C:/Program Files (x86)/Windows Kits/10/Lib/$sdkVersion/um/x64", >> "-DCMAKE_INCLUDE_PATH=C:/Program Files (x86)/Windows Kits/10/Include/$sdkVersion/um" >> ) (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> cmake .. @cmakeArgs CMake Warning: Ignoring extra path from command line: ".." CMake Error: The source directory "E:/PyTorch_Build" does not appear to contain CMakeLists.txt. Specify --help for usage, or press the help button on the CMake GUI. (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 加载完整 VC 环境 (rtx5070_env) PS E:\PyTorch_Build\pytorch> & "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars64.bat" -vcvars_ver=14.4 -winsdk=10.0.22621.0 [ERROR:vcvarsall.bat] Invalid argument found : .4 [ERROR:vcvarsall.bat] Invalid argument found : -winsdk=10 [ERROR:vcvarsall.bat] Invalid argument found : .0.22621.0 [ERROR:vcvarsall.bat] Error in script usage. 
The correct usage is: Syntax: vcvarsall.bat [arch] [platform_type] [winsdk_version] [-vcvars_ver=vc_version] [-vcvars_spectre_libs=spectre_mode] where : [arch]: x86 | amd64 | x86_amd64 | x86_arm | x86_arm64 | amd64_x86 | amd64_arm | amd64_arm64 [platform_type]: {empty} | store | uwp [winsdk_version] : full Windows 10 SDK number (e.g. 10.0.10240.0) or "8.1" to use the Windows 8.1 SDK. [vc_version] : {none} for latest installed VC++ compiler toolset | "14.0" for VC++ 2015 Compiler Toolset | "14.xx" for the latest 14.xx.yyyyy toolset installed (e.g. "14.11") | "14.xx.yyyyy" for a specific full version number (e.g. "14.11.25503") [spectre_mode] : {none} for libraries without spectre mitigations | "spectre" for libraries with spectre mitigations The store parameter sets environment variables to support Universal Windows Platform application development and is an alias for 'uwp'. For example: vcvarsall.bat x86_amd64 vcvarsall.bat x86_amd64 10.0.10240.0 vcvarsall.bat x86_arm uwp 10.0.10240.0 vcvarsall.bat x86_arm onecore 10.0.10240.0 -vcvars_ver=14.0 vcvarsall.bat x64 8.1 vcvarsall.bat x64 store 8.1 Please make sure either Visual Studio or C++ Build SKU is installed. (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证环境 (rtx5070_env) PS E:\PyTorch_Build\pytorch> cl.exe cl.exe: The term 'cl.exe' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch> link.exe link.exe: The term 'link.exe' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置 CUDA 路径 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDA_PATH = "E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:PATH = "$env:CUDA_PATH/bin;$env:PATH" (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 直接运行 CMake (rtx5070_env) PS E:\PyTorch_Build\pytorch> cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DUSE_CUDA=ON ... CMake Warning: Ignoring extra path from command line: "E:/PyTorch_Build" CMake Warning: Ignoring extra path from command line: ".." -- The CXX compiler identification is GNU 13.2.0 -- The C compiler identification is GNU 13.2.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/ProgramData/mingw64/mingw64/bin/c++.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/ProgramData/mingw64/mingw64/bin/gcc.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found CMake Warning at CMakeLists.txt:418 (message): TensorPipe cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:430 (message): Libuv is not installed in current conda env. Set USE_DISTRIBUTED to OFF. Please run command 'conda install -c conda-forge libuv=1.39' to install libuv. 
-- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Failed -- Performing Test C_HAS_AVX_2 -- Performing Test C_HAS_AVX_2 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Failed -- Performing Test C_HAS_AVX2_2 -- Performing Test C_HAS_AVX2_2 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Failed -- Performing Test C_HAS_AVX512_2 -- Performing Test C_HAS_AVX512_2 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Failed -- Performing Test CXX_HAS_AVX_2 -- Performing Test CXX_HAS_AVX_2 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Failed -- Performing Test CXX_HAS_AVX2_2 -- Performing Test CXX_HAS_AVX2_2 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Failed -- Performing Test CXX_HAS_AVX512_2 -- Performing Test CXX_HAS_AVX512_2 - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success -- Current compiler supports avx512f extension. Will build fbgemm. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success -- Performing Test COMPILER_SUPPORTS_RDYNAMIC -- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Failed -- Found CUDA: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 (found version "13.0") CMake Error at E:/PyTorch_Build/pytorch/rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineCompilerId.cmake:928 (message): Compiling the CUDA compiler identification source file "CMakeCUDACompilerId.cu" failed. Compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe Build flags: Id flags: --keep;--keep-dir;tmp -v The output was: 1 nvcc fatal : Cannot find compiler 'cl.exe' in PATH Compiling the CUDA compiler identification source file "CMakeCUDACompilerId.cu" failed. Compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe Build flags: -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS;-Xfatbin;-compress-all Id flags: --keep;--keep-dir;tmp -v The output was: 1 nvcc fatal : Cannot find compiler 'cl.exe' in PATH Call Stack (most recent call first): E:/PyTorch_Build/pytorch/rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD) E:/PyTorch_Build/pytorch/rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineCompilerId.cmake:53 (__determine_compiler_id_test) E:/PyTorch_Build/pytorch/rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineCUDACompiler.cmake:162 (CMAKE_DETERMINE_COMPILER_ID) cmake/public/cuda.cmake:47 (enable_language) cmake/Dependencies.cmake:43 (include) CMakeLists.txt:853 (include) -- Configuring incomplete, errors occurred! (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 创建测试文件 (rtx5070_env) PS E:\PyTorch_Build\pytorch> @' >> #include <iostream> >> int main() { >> std::cout << "Hello from C++ & CUDA toolchain!" 
<< std::endl; >> return 0; >> } >> '@ | Set-Content test.cpp (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 编译测试 (rtx5070_env) PS E:\PyTorch_Build\pytorch> cl.exe test.cpp /EHsc /link kernel32.lib user32.lib gdi32.lib cl.exe: The term 'cl.exe' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch> if (Test-Path test.exe) { >> .\test.exe >> } else { >> Write-Host "编译失败,错误信息:" >> cl.exe test.cpp /EHsc /link kernel32.lib /v >> } 编译失败,错误信息: cl.exe: Line | 5 | cl.exe test.cpp /EHsc /link kernel32.lib /v | ~~~~~~ | The term 'cl.exe' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 1. 设置基本环境变量 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDA_PATH = "E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_PATH = "E:/Program Files/NVIDIA/CUDNN/v9.12" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:OpenBLAS_HOME = "E:/Libs/OpenBLAS_Prebuilt" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:PATH = "$env:CUDA_PATH/bin;$env:OpenBLAS_HOME/bin;$env:PATH" (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 2. 加载 Visual Studio 环境 (rtx5070_env) PS E:\PyTorch_Build\pytorch> & "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars64.bat" ********************************************************************** ** Visual Studio 2022 Developer Command Prompt v17.14.13 ** Copyright (c) 2025 Microsoft Corporation ********************************************************************** [vcvarsall.bat] Environment initialized for: 'x64' (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 3. 添加编译器路径 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $clPath = "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.44.35207\bin\Hostx64\x64" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:PATH = "$clPath;$env:PATH" (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 4. 修复 Windows SDK 库路径 (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\fix_kernel32_error.ps1 .\fix_kernel32_error.ps1: The term '.\fix_kernel32_error.ps1' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 5. 验证工具链 (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\verify_toolchain.ps1 .\verify_toolchain.ps1: The term '.\verify_toolchain.ps1' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 6. 
准备构建目录 (rtx5070_env) PS E:\PyTorch_Build\pytorch> Set-Location E:\PyTorch_Build\pytorch (rtx5070_env) PS E:\PyTorch_Build\pytorch> Remove-Item build -Recurse -Force -ErrorAction SilentlyContinue (rtx5070_env) PS E:\PyTorch_Build\pytorch> New-Item -Path build -ItemType Directory | Out-Null (rtx5070_env) PS E:\PyTorch_Build\pytorch> Set-Location build (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> # 7. 配置 CMake (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> .\cmake_configure_final.ps1 .\cmake_configure_final.ps1: The term '.\cmake_configure_final.ps1' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> # 8. 编译和安装 (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> if ($LASTEXITCODE -eq 0) { >> cmake --build . --config Release --parallel 8 >> if ($LASTEXITCODE -eq 0) { >> cmake --install . >> } >> } Error: not a CMake build directory (missing CMakeCache.txt) (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> # 9. 验证安装 (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> python -c "import torch; print(f'PyTorch版本: {torch.__version__}'); print(f'CUDA可用: {torch.cuda.is_available()}')" PyTorch版本: 2.8.0+cpu CUDA可用: False (rtx5070_env) PS E:\PyTorch_Build\pytorch\build>
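Two problems recur throughout the session above. First, vcvarsall.bat rejected the arguments: there is no -winsdk= switch, the Windows SDK version is positional, as the usage text it printed confirms. Second, invoking the .bat with & runs it in a child cmd.exe, so every variable it sets disappears when that process exits, which is why cl.exe and link.exe stay unresolved in PowerShell and why nvcc cannot find its host compiler. A minimal sketch of the usual workaround; the install path, toolset (14.44) and SDK (10.0.22621.0) are taken from this log as assumptions and should be adjusted to what is actually installed:

# Run vcvars64.bat inside cmd.exe, dump the resulting environment with `set`,
# and copy each variable back into the current PowerShell session.
$vcvars = "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars64.bat"
cmd /c "`"$vcvars`" 10.0.22621.0 -vcvars_ver=14.44 && set" | ForEach-Object {
    if ($_ -match '^([^=]+)=(.*)$') {
        Set-Item -Path "Env:$($Matches[1])" -Value $Matches[2]
    }
}
Get-Command cl.exe, link.exe   # both should now resolve under ...\VC\Tools\MSVC\14.44.35207\bin\Hostx64\x64

Once the MSVC environment is visible in the same session that later runs CMake, nvcc can locate cl.exe and the CUDA compiler-identification step has a chance to succeed.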
09-03
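One more problem from the first attempt is worth clearing up before the retries below: cmake .. was executed from E:\PyTorch_Build\pytorch itself, so ".." pointed at E:\PyTorch_Build and CMake rightly complained that it contains no CMakeLists.txt. Passing the source and build trees explicitly avoids that, and naming the MSVC compilers keeps CMake from auto-selecting the MinGW gcc it keeps finding on PATH. A trimmed sketch, reusing only options that already appear in the $cmakeArgs above:

# Out-of-source configure: -S is the PyTorch checkout, -B a separate build directory.
# Explicit cl.exe settings stop CMake from picking up the MinGW toolchain on PATH.
cmake -S E:\PyTorch_Build\pytorch -B E:\PyTorch_Build\pytorch\build -G Ninja `
    -DCMAKE_BUILD_TYPE=Release `
    -DCMAKE_C_COMPILER=cl `
    -DCMAKE_CXX_COMPILER=cl `
    -DCMAKE_CUDA_HOST_COMPILER=cl `
    -DUSE_CUDA=ON `
    -DTORCH_CUDA_ARCH_LIST=8.9 `
    -DPYTHON_EXECUTABLE="$((Get-Command python).Source)"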
PowerShell 7 环境已加载 (版本: 7.5.2) PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> python -m venv rtx5070_env PS E:\PyTorch_Build\pytorch> .\rtx5070_env\Scripts\activate (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 创建终极修复脚本: build_absolute_fix.ps1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> @' >> # ------ 初始化 ------ >> $ErrorActionPreference = "Stop" >> Write-Host "=== PyTorch 终极解决方案 ===" -ForegroundColor Cyan >> Write-Host "开始时间: $(Get-Date -Format 'yyyy-MM-dd HH:mm:ss')" >> $logFile = "$PWD\build_log_$(Get-Date -Format 'yyyyMMdd_HHmmss').txt" >> Start-Transcript -Path $logFile -Append >> >> # ------ 强制设置开发环境 ------ >> $env:DeveloperMode = 1 >> $env:CMAKE_GENERATOR = "Ninja" >> >> # ------ 路径配置 ------ >> $pythonEnv = "E:\PyTorch_Build\pytorch\rtx5070_env" >> $cudaPath = "E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0" >> $cudnnBasePath = "E:\Program Files\NVIDIA\CUNND\v9.12" >> $cudaVersion = "13.0" >> $vsPath = "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools" # 确保正确路径 >> >> # ------ 确保Visual Studio工具在PATH中 ------ >> $env:Path = ` >> "$vsPath\VC\Tools\MSVC\14.44.35207\bin\Hostx64\x64;" + ` >> "$vsPath\Common7\IDE;" + ` >> "$cudaPath\bin;" + ` >> $env:Path >> >> # ------ 修复cuDNN检测 ------ >> function Get-ProperCudnnPath { >> $headerPath = "E:\Program Files\NVIDIA\CUNND\v9.12\include\13.0" >> $libraryPath = "E:\Program Files\NVIDIA\CUNND\v9.12\lib\13.0\x64\cudnn64_9.lib" >> $dllPath = "E:\Program Files\NVIDIA\CUNND\v9.12\bin\13.0\cudnn64_9.dll" >> >> if (-not (Test-Path $headerPath)) { >> throw "cuDNN头文件路径不存在: $headerPath" >> } >> if (-not (Test-Path $libraryPath)) { >> throw "cuDNN库文件路径不存在: $libraryPath" >> } >> if (-not (Test-Path $dllPath)) { >> throw "cuDNN DLL文件路径不存在: $dllPath" >> } >> >> return @{ >> IncludeDir = $headerPath >> Library = $libraryPath >> BinDir = [System.IO.Path]::GetDirectoryName($dllPath) >> } >> } >> >> Write-Host "`n=== 硬编码cuDNN路径 ===" -ForegroundColor Yellow >> $cudnn = Get-ProperCudnnPath >> Write-Host "cuDNN头文件: $($cudnn.IncludeDir)" >> Write-Host "cuDNN库文件: $($cudnn.Library)" >> Write-Host "cuDNN DLL目录: $($cudnn.BinDir)" >> >> # ------ 环境配置 ------ >> Write-Host "`n=== 配置环境变量 ===" -ForegroundColor Green >> $env:CUDA_PATH = $cudaPath >> $env:Path = "$cudaPath\bin;$($cudnn.BinDir);$env:Path" >> $env:CUDNN_INCLUDE_DIR = $cudnn.IncludeDir >> $env:CUDNN_LIBRARY = $cudnn.Library >> $env:TORCH_CUDA_ARCH_LIST = "8.9" >> >> # 调试信息 >> Write-Host "环境变量检查:" >> Write-Host "CUDA_PATH: $env:CUDA_PATH" >> Write-Host "CUDNN_INCLUDE_DIR: $env:CUDNN_INCLUDE_DIR" >> Write-Host "CUDNN_LIBRARY: $env:CUDNN_LIBRARY" >> Write-Host "PATH: $($env:Path -split ';' | Select-Object -First 10)..." >> >> # ------ 激活Python环境 ------ >> Write-Host "`n=== 激活Python环境 ===" -ForegroundColor Magenta >> . 
"$pythonEnv\Scripts\Activate.ps1" >> Write-Host "✅ Python虚拟环境已激活" -ForegroundColor Green >> Write-Host "Python版本: $(python --version)" >> Write-Host "Pip版本: $(pip --version)" >> >> # ------ 强制清理 ------ >> Write-Host "`n=== 深度清理构建缓存 ===" -ForegroundColor Red >> python setup.py clean >> Remove-Item -Recurse -Force build, dist, torch\lib, cmake_build -ErrorAction SilentlyContinue >> Remove-Item -Force *.ncb, *.sdf, *.suo -ErrorAction SilentlyContinue >> >> # ------ 安装依赖 ------ >> Write-Host "`n=== 安装核心依赖 ===" -ForegroundColor Yellow >> pip install -U pip setuptools wheel ninja cmake==3.28.7 # 使用稳定版本 >> pip install -r requirements.txt >> >> # ------ 方法4: 直接CMake+Ninja构建 ------ >> Write-Host "`n=== 方法4: 纯CMake+Ninja构建 ===" -ForegroundColor Green >> $buildDir = "$PWD\cmake_release" >> New-Item -ItemType Directory -Path $buildDir -Force | Out-Null >> Set-Location $buildDir >> >> cmake .. -GNinja ` >> -DCMAKE_BUILD_TYPE=Release ` >> -DUSE_CUDA=ON ` >> -DUSE_CUDNN=ON ` >> -DCUDNN_INCLUDE_DIR="$($cudnn.IncludeDir)" ` >> -DCUDNN_LIBRARY="$($cudnn.Library)" ` >> -DPYTHON_EXECUTABLE="$pythonEnv\Scripts\python.exe" ` >> -DCMAKE_INSTALL_PREFIX="$PWD/install" ` >> -DTORCH_CUDA_ARCH_LIST="8.9" ` >> -DCMAKE_MSVC_RESPONSE_FILE=OFF # 解决Windows路径问题 >> >> if ($LASTEXITCODE -ne 0) { >> Write-Host "❌ CMake配置失败" -ForegroundColor Red >> exit 1 >> } >> >> ninja install >> $buildExitCode = $LASTEXITCODE >> Set-Location .. >> >> if ($buildExitCode -eq 0) { >> # ------ 修复运行时依赖 ------ >> $torchLibDir = "$PWD\torch\lib" >> New-Item -ItemType Directory -Path $torchLibDir -Force | Out-Null >> >> # 1. 复制所有CUDA DLL >> Get-ChildItem -Path "$cudaPath\bin\*.dll" | ForEach-Object { >> Copy-Item $_.FullName $torchLibDir -Force >> Write-Host "✅ 复制CUDA DLL: $($_.Name)" >> } >> >> # 2. 复制cuDNN DLL >> Copy-Item "$($cudnn.BinDir)\*.dll" $torchLibDir -Force >> >> # 3. 复制构建生成的DLL >> $buildDlls = Get-ChildItem -Path "$buildDir\lib", "$buildDir\bin" -Recurse -Include *.dll >> foreach ($dll in $buildDlls) { >> Copy-Item $dll.FullName $torchLibDir -Force >> Write-Host "✅ 复制构建DLL: $($dll.Name)" >> } >> >> # 4. 
设置Python软链接 >> $installDir = "$buildDir\install" >> $env:PYTHONPATH = "$installDir;$env:PYTHONPATH" >> } >> >> # ------ 验证结果 ------ >> if ($buildExitCode -eq 0) { >> Write-Host "`n=== 验证构建结果 ===" -ForegroundColor Cyan >> $env:Path = "$torchLibDir;$env:Path" >> >> python -c @" >> import os, torch >> print(f'PyTorch版本: {torch.__version__}') >> print(f'CUDA可用: {torch.cuda.is_available()}') >> >> if torch.cuda.is_available(): >> print(f'CUDA版本: {torch.version.cuda}') >> print(f'cuDNN版本: {torch.backends.cudnn.version()}') >> print(f'检测到的GPU: {torch.cuda.get_device_name(0)}') >> >> # 测试基础计算 >> a = torch.randn(1000, 1000, device='cuda') >> b = torch.randn(1000, 1000, device='cuda') >> c = a @ b >> print(f'GPU计算验证: 矩阵乘法成功 (结果形状: {c.shape})') >> >> # 检查关键DLL >> from ctypes import WinDLL >> try: >> WinDLL(os.path.join(torch.__file__, 'lib', 'aoti_custom_ops.dll')) >> print('✅ aoti_custom_ops.dll 加载成功') >> except Exception as e: >> print(f'❌ aoti_custom_ops.dll 加载失败: {str(e)}') >> "@ >> $buildExitCode = $LASTEXITCODE >> } >> >> # ------ 完成报告 ------ >> Write-Host "`n=== 构建报告 ===" -ForegroundColor Cyan >> Write-Host "构建状态: $(if ($buildExitCode -eq 0) {'成功!'} else {'失败'})" -ForegroundColor $(if ($buildExitCode -eq 0) {'Green'} else {'Red'}) >> Write-Host "完成时间: $(Get-Date -Format 'yyyy-MM-dd HH:mm:ss')" >> >> Stop-Transcript >> exit $buildExitCode >> '@ | Set-Content -Path "build_absolute_fix.ps1" -Encoding UTF8 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 运行脚本 (rtx5070_env) PS E:\PyTorch_Build\pytorch> Set-ExecutionPolicy Bypass -Scope Process -Force (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\build_absolute_fix.ps1 === PyTorch 终极解决方案 === 开始时间: 2025-09-03 14:24:16 Transcript started, output file is E:\PyTorch_Build\pytorch\build_log_20250903_142416.txt === 硬编码cuDNN路径 === cuDNN头文件: E:\Program Files\NVIDIA\CUNND\v9.12\include\13.0 cuDNN库文件: E:\Program Files\NVIDIA\CUNND\v9.12\lib\13.0\x64\cudnn64_9.lib cuDNN DLL目录: E:\Program Files\NVIDIA\CUNND\v9.12\bin\13.0 === 配置环境变量 === 环境变量检查: CUDA_PATH: E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0 CUDNN_INCLUDE_DIR: E:\Program Files\NVIDIA\CUNND\v9.12\include\13.0 CUDNN_LIBRARY: E:\Program Files\NVIDIA\CUNND\v9.12\lib\13.0\x64\cudnn64_9.lib PATH: E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin E:\Program Files\NVIDIA\CUNND\v9.12\bin\13.0 C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.44.35207\bin\Hostx64\x64 C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\Common7\IDE E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin E:\PyTorch_Build\pytorch\rtx5070_env\Scripts C:\Program Files\PowerShell\7 C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\libnvvp C:\Miniconda3\condabin... === 激活Python环境 === ✅ Python虚拟环境已激活 Python版本: Python 3.10.10 Pip版本: pip 25.2 from E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\pip (python 3.10) === 深度清理构建缓存 === Building wheel torch-2.9.0a0+git2d31c3d E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\setuptools\config\_apply_pyprojecttoml.py:82: SetuptoolsDeprecationWarning: `project.license` as a TOML table is deprecated !! ******************************************************************************** Please use a simple string containing a SPDX expression for `project.license`. You can also use `project.license-files`. (Both options available on setuptools>=77.0.0). 
By 2026-Feb-18, you need to update your project and remove deprecated calls or your builds will no longer be supported. See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#license for details. ******************************************************************************** !! corresp(dist, value, root_dir) running clean === 安装核心依赖 === Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pip in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (25.2) Requirement already satisfied: setuptools in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (79.0.1) Collecting setuptools Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl (1.2 MB) Requirement already satisfied: wheel in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (0.45.1) Requirement already satisfied: ninja in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (1.13.0) ERROR: Ignored the following yanked versions: 3.27.4, 3.28.0, 3.29.0, 3.29.1, 3.31.0 ERROR: Could not find a version that satisfies the requirement cmake==3.28.7 (from versions: 0.1.0, 0.2.0, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.7.1, 0.8.0, 0.9.0, 3.6.3, 3.6.3.post1, 3.7.2, 3.8.2, 3.9.6, 3.10.3, 3.11.0, 3.11.4, 3.11.4.post1, 3.12.0, 3.13.0, 3.13.1, 3.13.2, 3.13.2.post1, 3.14.3, 3.14.3.post1, 3.14.4, 3.14.4.post1, 3.15.3, 3.15.3.post1, 3.16.3, 3.16.3.post1, 3.16.5, 3.16.6, 3.16.7, 3.16.8, 3.17.0, 3.17.1, 3.17.2, 3.17.3, 3.18.0, 3.18.2, 3.18.2.post1, 3.18.4, 3.18.4.post1, 3.20.2, 3.20.3, 3.20.4, 3.20.5, 3.21.0, 3.21.1, 3.21.1.post1, 3.21.2, 3.21.3, 3.21.4, 3.22.0, 3.22.1, 3.22.2, 3.22.3, 3.22.4, 3.22.5, 3.22.6, 3.23.3, 3.24.0, 3.24.1, 3.24.1.1, 3.24.2, 3.24.3, 3.25.0, 3.25.2, 3.26.0, 3.26.1, 3.26.3, 3.26.4, 3.27.0, 3.27.1, 3.27.2, 3.27.4.1, 3.27.5, 3.27.6, 3.27.7, 3.27.9, 3.28.1, 3.28.3, 3.28.4, 3.29.0.1, 3.29.2, 3.29.3, 3.29.5, 3.29.5.1, 3.29.6, 3.30.0, 3.30.1, 3.30.2, 3.30.3, 3.30.4, 3.30.5, 3.30.9, 3.31.0.1, 3.31.1, 3.31.2, 3.31.4, 3.31.6, 4.0.0, 4.0.2, 4.0.3, 4.1.0) ERROR: No matching distribution found for cmake==3.28.7 Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: setuptools<80.0,>=70.1.0 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r E:\PyTorch_Build\pytorch\requirements-build.txt (line 2)) (79.0.1) Requirement already satisfied: cmake>=3.27 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r E:\PyTorch_Build\pytorch\requirements-build.txt (line 3)) (4.1.0) Requirement already satisfied: ninja in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r E:\PyTorch_Build\pytorch\requirements-build.txt (line 4)) (1.13.0) Requirement already satisfied: numpy in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r E:\PyTorch_Build\pytorch\requirements-build.txt (line 5)) (2.2.6) Requirement already satisfied: packaging in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r E:\PyTorch_Build\pytorch\requirements-build.txt (line 6)) (25.0) Requirement already satisfied: pyyaml in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r E:\PyTorch_Build\pytorch\requirements-build.txt (line 7)) (6.0.2) Requirement already satisfied: requests in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r E:\PyTorch_Build\pytorch\requirements-build.txt (line 8)) (2.32.5) Requirement already satisfied: six in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r 
E:\PyTorch_Build\pytorch\requirements-build.txt (line 9)) (1.17.0) Requirement already satisfied: typing-extensions>=4.10.0 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r E:\PyTorch_Build\pytorch\requirements-build.txt (line 10)) (4.15.0) Requirement already satisfied: expecttest>=0.3.0 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 8)) (0.3.0) Requirement already satisfied: filelock in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 9)) (3.19.1) Requirement already satisfied: fsspec>=0.8.5 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 10)) (2025.7.0) Requirement already satisfied: hypothesis in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 11)) (6.138.13) Requirement already satisfied: jinja2 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 12)) (3.1.6) Requirement already satisfied: lintrunner in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 13)) (0.12.7) Requirement already satisfied: networkx>=2.5.1 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 14)) (3.4.2) Requirement already satisfied: optree>=0.13.0 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 15)) (0.17.0) Requirement already satisfied: psutil in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 16)) (7.0.0) Requirement already satisfied: sympy>=1.13.3 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 17)) (1.14.0) Requirement already satisfied: wheel in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 19)) (0.45.1) Requirement already satisfied: build[uv] in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from -r requirements.txt (line 7)) (1.3.0) Requirement already satisfied: charset_normalizer<4,>=2 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from requests->-r E:\PyTorch_Build\pytorch\requirements-build.txt (line 8)) (3.4.3) Requirement already satisfied: idna<4,>=2.5 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from requests->-r E:\PyTorch_Build\pytorch\requirements-build.txt (line 8)) (3.10) Requirement already satisfied: urllib3<3,>=1.21.1 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from requests->-r E:\PyTorch_Build\pytorch\requirements-build.txt (line 8)) (2.5.0) Requirement already satisfied: certifi>=2017.4.17 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from requests->-r E:\PyTorch_Build\pytorch\requirements-build.txt (line 8)) (2025.8.3) Requirement already satisfied: pyproject_hooks in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from build[uv]->-r requirements.txt (line 7)) (1.2.0) Requirement already satisfied: colorama in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from build[uv]->-r requirements.txt (line 7)) (0.4.6) Requirement already satisfied: tomli>=1.1.0 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from build[uv]->-r requirements.txt (line 7)) (2.2.1) Requirement already satisfied: uv>=0.1.18 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from build[uv]->-r requirements.txt (line 7)) (0.8.14) Requirement already satisfied: attrs>=22.2.0 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from hypothesis->-r requirements.txt 
(line 11)) (25.3.0) Requirement already satisfied: exceptiongroup>=1.0.0 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from hypothesis->-r requirements.txt (line 11)) (1.3.0) Requirement already satisfied: sortedcontainers<3.0.0,>=2.1.0 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from hypothesis->-r requirements.txt (line 11)) (2.4.0) Requirement already satisfied: MarkupSafe>=2.0 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from jinja2->-r requirements.txt (line 12)) (3.0.2) Requirement already satisfied: mpmath<1.4,>=1.1.0 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from sympy>=1.13.3->-r requirements.txt (line 17)) (1.3.0) === 方法4: 纯CMake+Ninja构建 === CMake Deprecation Warning at CMakeLists.txt:9 (cmake_policy): The OLD behavior for policy CMP0126 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. -- The CXX compiler identification is GNU 13.2.0 -- The C compiler identification is GNU 13.2.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/ProgramData/mingw64/mingw64/bin/c++.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/ProgramData/mingw64/mingw64/bin/gcc.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found CMake Warning at CMakeLists.txt:421 (message): TensorPipe cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:423 (message): KleidiAI cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:435 (message): Libuv is not installed in current conda env. Set USE_DISTRIBUTED to OFF. Please run command 'conda install -c conda-forge libuv=1.39' to install libuv. -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Failed -- Performing Test C_HAS_AVX_2 -- Performing Test C_HAS_AVX_2 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Failed -- Performing Test C_HAS_AVX2_2 -- Performing Test C_HAS_AVX2_2 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Failed -- Performing Test C_HAS_AVX512_2 -- Performing Test C_HAS_AVX512_2 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Failed -- Performing Test CXX_HAS_AVX_2 -- Performing Test CXX_HAS_AVX_2 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Failed -- Performing Test CXX_HAS_AVX2_2 -- Performing Test CXX_HAS_AVX2_2 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Failed -- Performing Test CXX_HAS_AVX512_2 -- Performing Test CXX_HAS_AVX512_2 - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success -- Performing Test COMPILER_SUPPORTS_RDYNAMIC -- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Failed -- Could not find hardware support for NEON on this machine. 
-- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Compiler does not support SVE extension. Will not build perfkernels. CMake Warning at CMakeLists.txt:841 (message): x64 operating system is required for FBGEMM. Not compiling with FBGEMM. Turn this warning off by USE_FBGEMM=OFF. -- Found CUDA: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 (found version "13.0") CMake Error at rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineCompilerId.cmake:928 (message): Compiling the CUDA compiler identification source file "CMakeCUDACompilerId.cu" failed. Compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe Build flags: Id flags: --keep;--keep-dir;tmp -v The output was: 1 nvcc fatal : Cannot find compiler 'cl.exe' in PATH Compiling the CUDA compiler identification source file "CMakeCUDACompilerId.cu" failed. Compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe Build flags: -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS;-Xfatbin;-compress-all Id flags: --keep;--keep-dir;tmp -v The output was: 1 nvcc fatal : Cannot find compiler 'cl.exe' in PATH Call Stack (most recent call first): rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD) rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineCompilerId.cmake:53 (__determine_compiler_id_test) rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineCUDACompiler.cmake:162 (CMAKE_DETERMINE_COMPILER_ID) cmake/public/cuda.cmake:47 (enable_language) cmake/Dependencies.cmake:44 (include) CMakeLists.txt:869 (include) -- Configuring incomplete, errors occurred! ❌ CMake配置失败
Why does it keep failing? What exactly are we still missing? Can you explain it to me?
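Both configure runs stop at the same place, and the error text itself says what is missing: nvcc fatal : Cannot find compiler 'cl.exe' in PATH. CUDA 13.0 is found, but the session running CMake has no MSVC host compiler on PATH, and CMake is meanwhile identifying the MinGW gcc from C:\ProgramData\mingw64 as the C/C++ compiler. A third, smaller issue: the script hard-codes cuDNN under a directory spelled CUNND, which the later checks against a cuDNN directory cannot see. A short probe sketch (paths assumed from this log) to run after importing the VS environment and before the next configure:

# 1. Keep MinGW from shadowing MSVC for this session.
$env:Path = ($env:Path -split ';' | Where-Object { $_ -notmatch 'mingw64' }) -join ';'

# 2. Confirm the host compiler nvcc complains about is actually reachable.
where.exe cl.exe
nvcc --version

# 3. Optionally pin the host compiler explicitly instead of relying on PATH order;
#    CMake reads CC/CXX/CUDAHOSTCXX when choosing its default compilers.
$env:CC  = "cl.exe"
$env:CXX = "cl.exe"
$env:CUDAHOSTCXX = (Get-Command cl.exe).Source

If where.exe prints nothing, the Visual Studio environment still has to be imported into this session first; if it prints the old 14.16 (VS 2017) toolset seen in the next day's log, nvcc will reject it as well (its own message says only VS 2019 through 2022 are supported), so the 2022 toolset directory must come first on PATH.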
09-04
(rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置下载URL和目标路径 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $downloadUrl = "https://www.7-zip.org/a/7z2406-x64.msi" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $installerPath = "$env:TEMP\7z-installer.msi" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $installDir = "E:\Program Files\7-Zip" (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 下载安装程序 (rtx5070_env) PS E:\PyTorch_Build\pytorch> Invoke-WebRequest -Uri $downloadUrl -OutFile $installerPath (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 创建目标目录 (rtx5070_env) PS E:\PyTorch_Build\pytorch> New-Item -ItemType Directory -Path $installDir -Force Directory: E:\Program Files Mode LastWriteTime Length Name ---- ------------- ------ ---- d---- 2025/9/3 19:41 7-Zip (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 使用msiexec手动安装 (rtx5070_env) PS E:\PyTorch_Build\pytorch> Start-Process -FilePath "msiexec.exe" -ArgumentList "/i `"$installerPath`" INSTALLDIR=`"$installDir`" /quiet" -Wait (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证安装 (rtx5070_env) PS E:\PyTorch_Build\pytorch> if (Test-Path "$installDir\7z.exe") { >> Write-Host "7-Zip 已成功安装到 $installDir" -ForegroundColor Green >> >> # 添加到系统PATH >> $env:Path += ";$installDir" >> [Environment]::SetEnvironmentVariable("Path", $env:Path, "Machine") >> } else { >> # 手动解压作为备用方案 >> $zipUrl = "https://www.7-zip.org/a/7z2406-x64.zip" >> $zipPath = "$env:TEMP\7z-x64.zip" >> Invoke-WebRequest -Uri $zipUrl -OutFile $zipPath >> >> # 使用.NET解压 >> Add-Type -AssemblyName System.IO.Compression.FileSystem >> [System.IO.Compression.ZipFile]::ExtractToDirectory($zipPath, $installDir) >> } 7-Zip 已成功安装到 E:\Program Files\7-Zip (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 清理安装包 (rtx5070_env) PS E:\PyTorch_Build\pytorch> Remove-Item $installerPath -ErrorAction SilentlyContinue (rtx5070_env) PS E:\PyTorch_Build\pytorch> Remove-Item $zipPath -ErrorAction SilentlyContinue InvalidOperation: The variable '$zipPath' cannot be retrieved because it has not been set. (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\manual_install_7zip.ps1 .\manual_install_7zip.ps1: The term '.\manual_install_7zip.ps1' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\fixed_setup_env.ps1 环境变量配置完成: CUDA_PATH = E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0 CUDNN_INCLUDE_DIR = E:\Program Files\NVIDIA\CUNND\v9.12\include\13.0 CUDNN_LIBRARY = E:\Program Files\NVIDIA\CUNND\v9.12\lib\13.0\x64\cudnn64_9.lib (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\verify_cudnn.ps1 E:\Program Files\NVIDIA\CUNND\v9.12\include\13.0\cudnn.h : 存在 E:\Program Files\NVIDIA\CUNND\v9.12\lib\13.0\x64\cudnn64_9.lib : 存在 E:\Program Files\NVIDIA\CUNND\v9.12\lib\13.0\x64\cudnn64_9.dll : 缺失 测试CUDA和cuDNN集成: nvcc fatal : nvcc cannot find a supported version of Microsoft Visual Studio. Only the versions between 2019 and 2022 (inclusive) are supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk. 
.\test_cudnn.exe: E:\PyTorch_Build\pytorch\verify_cudnn.ps1:40 Line | 40 | .\test_cudnn.exe | ~~~~~~~~~~~~~~~~ | The term '.\test_cudnn.exe' is not recognized as a name of a cmdlet, function, script file, or executable | program. Check the spelling of the name, or if a path was included, verify that the path is correct and try | again. (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install numpy ninja pybind11 typing_extensions pyyaml future Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: numpy in e:\python310\lib\site-packages (1.26.3) Requirement already satisfied: ninja in e:\python310\lib\site-packages (1.13.0) Collecting pybind11 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/cd/8a/37362fc2b949d5f733a8b0f2ff51ba423914cabefe69f1d1b6aab710f5fe/pybind11-3.0.1-py3-none-any.whl (293 kB) Requirement already satisfied: typing_extensions in e:\python310\lib\site-packages (4.14.1) Requirement already satisfied: pyyaml in e:\python310\lib\site-packages (6.0.2) Collecting future Using cached https://pypi.tuna.tsinghua.edu.cn/packages/da/71/ae30dadffc90b9006d77af76b393cb9dfbfc9629f339fc1574a1c52e6806/future-1.0.0-py3-none-any.whl (491 kB) WARNING: Error parsing dependencies of torch: [Errno 2] No such file or directory: 'e:\\python310\\lib\\site-packages\\torch-2.6.0.dev20241112+cu121.dist-info\\METADATA' Installing collected packages: pybind11, future Successfully installed future-1.0.0 pybind11-3.0.1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证cuDNN文件 (rtx5070_env) PS E:\PyTorch_Build\pytorch> Test-Path "E:\Program Files\NVIDIA\cuDNN\v9.12\include\cudnn.h" False (rtx5070_env) PS E:\PyTorch_Build\pytorch> Test-Path "E:\Program Files\NVIDIA\cuDNN\v9.12\bin\cudnn64_9.dll" False (rtx5070_env) PS E:\PyTorch_Build\pytorch> Test-Path "E:\Program Files\NVIDIA\cuDNN\v9.12\lib\cudnn64_9.lib" False (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证7-Zip (rtx5070_env) PS E:\PyTorch_Build\pytorch> Test-Path "E:\Program Files\7-Zip\7z.exe" True (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证Visual Studio编译器 (rtx5070_env) PS E:\PyTorch_Build\pytorch> Get-Command cl.exe -ErrorAction SilentlyContinue CommandType Name Version Source ----------- ---- ------- ------ Application cl.exe 14.16.270… C:\Program Files\Microsoft Visual Studio… (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装7-Zip (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\install_7zip.ps1 .\install_7zip.ps1: The term '.\install_7zip.ps1' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 修复cuDNN (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\fix_cudnn.ps1 .\fix_cudnn.ps1: The term '.\fix_cudnn.ps1' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 修复VS路径 (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\find_cl.exe.ps1 .\find_cl.exe.ps1: The term '.\find_cl.exe.ps1' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. 
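The cuDNN checks above disagree because two different directory spellings are in play: the build script hard-codes E:\Program Files\NVIDIA\CUNND\v9.12 (where cudnn.h and cudnn64_9.lib were in fact found), while the later Test-Path calls look under ...\NVIDIA\cuDNN\v9.12 and report everything missing. Rather than guessing, it may be safer to search for the files and derive the variables from whatever is actually on disk. A sketch that assumes only that a cuDNN 9.x package was unpacked somewhere under E:\Program Files\NVIDIA:

# Search both spellings (and any other subfolder) for the cuDNN 9 header, import library and DLL.
$cudnnRoot = "E:\Program Files\NVIDIA"
$header = Get-ChildItem -Path $cudnnRoot -Recurse -Filter cudnn.h      -ErrorAction SilentlyContinue | Select-Object -First 1
$lib    = Get-ChildItem -Path $cudnnRoot -Recurse -Filter cudnn64_9.lib -ErrorAction SilentlyContinue | Select-Object -First 1
$dll    = Get-ChildItem -Path $cudnnRoot -Recurse -Filter cudnn64_9.dll -ErrorAction SilentlyContinue | Select-Object -First 1

if ($header -and $lib) {
    $env:CUDNN_INCLUDE_DIR = $header.DirectoryName
    $env:CUDNN_LIBRARY     = $lib.FullName
    if ($dll) { $env:Path = "$($dll.DirectoryName);$env:Path" }
    Write-Host "cudnn.h       -> $($header.FullName)"
    Write-Host "cudnn64_9.lib -> $($lib.FullName)"
    Write-Host "cudnn64_9.dll -> $(if ($dll) { $dll.FullName } else { 'not found' })"
} else {
    Write-Host "cuDNN headers/libraries not found under $cudnnRoot - re-check the extracted archive."
}

A missing cudnn64_9.dll, as reported by verify_cudnn.ps1 above, would still break loading at runtime even if the build links, so that file needs to be located or re-extracted as well.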
(rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 更新环境变量 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:Path = @( >> "E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin", >> "E:\Program Files\NVIDIA\cuDNN\v9.12\bin", >> "E:\Program Files\NVIDIA\cuDNN\v9.12\lib", >> "E:\Program Files\CMake\bin", >> "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\bin\Hostx64\x64", >> "E:\Python310\Scripts", >> $env:Path >> ) -join ';' (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置cuDNN路径变量 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_INCLUDE_DIR = "E:\Program Files\NVIDIA\cuDNN\v9.12\include" (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_LIBRARY = "E:\Program Files\NVIDIA\cuDNN\v9.12\lib\cudnn64_9.lib" (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 验证最终配置 (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\verify_paths_fixed.ps1 .\verify_paths_fixed.ps1: The term '.\verify_paths_fixed.ps1' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch>
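After the PATH and cuDNN variables are set, a short end-to-end check can stand in for the verify_*.ps1 scripts that were never created. A minimal sketch; note that the 2.8.0+cpu result earlier means Python is still importing a prebuilt CPU-only wheel, so the last line will only change once a CUDA-enabled build is actually installed into this environment:

# Toolchain visibility: every entry should resolve to a real path.
Get-Command cl.exe, nvcc, cmake, ninja -ErrorAction SilentlyContinue | Format-Table Name, Source

# nvcc accepts only VS 2019-2022 host compilers, per its own error message above.
nvcc --version

# Runtime check of whatever torch is currently importable in this environment.
python -c "import torch; print(torch.__version__); print('CUDA available:', torch.cuda.is_available())"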