Projects_System Administrator

This post shares the author's experience working as a Linux system administrator on the NYU HPC team, focusing on the deployment details of the account management mechanism, including user group assignment, directory creation, and permission setup. It also covers Hadoop ecosystem components such as Pig, Hive, and Oozie, MongoDB and HBase database operations, and Hadoop cluster performance benchmarking.

I have been working on the NYU HPC team as a Linux system administrator since last September. It has been a great experience to work there as a graduate student. My first project after I got there was to deploy the account management mechanism. Since the person who had been in charge of account management had just left, my colleagues wanted to know how the details work.

Generally, for account creation, we assign the user to its group ID and sync the user's NetID password to the HPC account using LDAP. We create the /scratch and /home directories and set up permissions, the disk space quota (on HPC, the block soft limit is 5 TB and the hard limit is 6 TB), and the inode quota (soft limit 1,000,000; hard limit 1,001,000). On the login nodes, we use rocks commands to create a subgroup /users/netID in the cpuset, cpu, cpuacct, and memory subsystems, and then set on which CPUs the user's processes can be scheduled and on which memory nodes they can obtain memory.
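For the quota part, a minimal sketch with the standard Linux quota tools (the actual filesystem may use its own tooling, e.g. lfs setquota on Lustre; the username and mount point below are placeholders, and block limits are in 1 KB units):

## block quota: 5 TB soft / 6 TB hard; inode quota: 1,000,000 soft / 1,001,000 hard
setquota -u ak5879 5368709120 6442450944 1000000 1001000 /scratch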

cgroups (aka control groups) is a Linux kernel feature to limit, police, and account for the resource usage of certain processes (actually, process groups).
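Outside of the rocks wrappers, the same setup can be sketched with the generic libcgroup tools (the CPU and memory-node ranges here are only examples, not our production values):

## create the per-user subgroup in the cpuset, cpu, cpuacct and memory subsystems
cgcreate -g cpuset,cpu,cpuacct,memory:/users/netID
## pin the user's processes to a set of CPUs and memory nodes (example ranges)
cgset -r cpuset.cpus=0-7 /users/netID
cgset -r cpuset.mems=0 /users/netID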

Add-user command:

/usr/sbin/useradd -g 100 -s /usr/local/bin/rbash -u 2670296 -c "Anastasiya Kolchynska" -m ak5879

This command assigns the user to the given group (GID 100) and creates the home directory if it does not exist.

 

Pig

Pig is a component of the Hadoop ecosystem. It is used to analyze large data sets by representing them as data flows, and it is generally used with Hadoop. Programmers write Pig scripts in the Pig Latin language; internally, Pig converts these scripts into a series of map/reduce jobs.

Pig has map, bag, and tuple types.

WordCount.pig example

File = LOAD 'hdfs://babar.es.its.nyu.edu:8020/user/bl1810/tmp/inputfile';
HackathonMatch = FOREACH File GENERATE 'hackathon' AS key, (($0 matches '.*hackathon.*') ? 1 : 0) AS value;
DecMatch = FOREACH File GENERATE 'Dec' AS key, (($0 matches '.*Dec.*') ? 1 : 0) AS value;
ChicagoMatch = FOREACH File GENERATE 'Chicago' AS key, (($0 matches '.*Chicago.*') ? 1 : 0) AS value;
JavaMatch = FOREACH File GENERATE 'Java' AS key, (($0 matches '.*Java.*') ? 1 : 0) AS value;
hackathonWords = FOREACH (GROUP HackathonMatch BY key) GENERATE group AS key, SUM(HackathonMatch.value) AS value;
DecWords = FOREACH (GROUP DecMatch BY key) GENERATE group AS key, SUM(DecMatch.value) AS value;
ChicagoWords = FOREACH (GROUP ChicagoMatch BY key) GENERATE group AS key, SUM(ChicagoMatch.value) AS value;
JavaWords = FOREACH (GROUP JavaMatch BY key) GENERATE group AS key, SUM(JavaMatch.value) AS value;
TotalMatchWords = UNION hackathonWords, DecWords, ChicagoWords, JavaWords;
GroupTotal = GROUP TotalMatchWords BY 1;
FinalWordslist = FOREACH GroupTotal GENERATE FLATTEN(TotalMatchWords);
STORE FinalWordslist INTO 'hdfs://babar.es.its.nyu.edu:8020/user/bl1810/tmp/output3';

 

Hive 

Hive uses schema-on-read, and it does not support updates or deletes.

select * from w1 where year > 1949; (Runs map/reduce jobs underneath)
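Schema-on-read means a table definition is just a view over files that already sit in HDFS. A minimal sketch (the column names and HDFS path are placeholders, not the real definition of w1):

## impose a schema on existing tab-delimited files; nothing is copied or validated at this point
hive -e "CREATE EXTERNAL TABLE w1 (station STRING, year INT, temperature INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/bl1810/tmp/weather';"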

 

Oozie

Oozie is a Java web application for scheduling Hadoop jobs. It combines Hadoop jobs sequentially into one logical unit of work, and it detects completion of tasks through callbacks and polling.
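A workflow is packaged as a workflow.xml plus a job.properties file and driven from the Oozie command-line client; a rough sketch (the server URL is a placeholder):

## submit and start a workflow
oozie job -oozie http://oozie-server:11000/oozie -config job.properties -run
## poll the status of the workflow by its job ID
oozie job -oozie http://oozie-server:11000/oozie -info <job-id>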

 

MongoDB

MongoDB is a document-oriented database. It stores data as "whole documents", like JSON documents.

Joins are not needed, which saves time compared to SQL.

## Run mongod on the default port.

>mongod 

## From the binary folder, start the mongo shell

>mongo

## Import the JSON data into a collection named bank_data

>mongoimport --jsonArray --db test --collection bank_data < Your_json_file_path

## Count how many records are in bank_data

>db.bank_data.count()

## Picks up the first record

>db.bank_data.findOne()

## Retrieves all the data and returns a collection.

> db.bank_data.find()

> db.bank_data.find()[6]

> db.bank_data.find({ last_name : "SMITH" }).count();

> db.bank_data.find({ last_name : "SMITH" })[50]

## Get the projection and return only the fields we want.

> db.bank_data.find({ last_name : "SMITH" }, { first_name : 1, last_name : 1})

 

HBase

HBase is a distributed, column-oriented database built on top of Hadoop. Internally, HBase uses hash tables and provides random access, and it stores the data in indexed HDFS files for faster lookups.

HBase stores denormalized data, while an RDBMS stores normalized data.
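A small sketch in the HBase shell (the table, row, and column names are made up for illustration; the denormalized comment count is explained in the discussion below):

## create a table with one column family, then store a post together with its comment count
hbase shell
> create 'posts', 'info'
> put 'posts', 'post1', 'info:title', 'My first post'
> put 'posts', 'post1', 'info:comment_count', '3'
> get 'posts', 'post1'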

Denormalization vs. Normalization

Denormalization is generally used to either:

  • Avoid a certain number of queries
  • Remove some joins

The basic idea of denormalization is that you add redundant data, or group some together, so that you can get at that data more easily and at a smaller cost, which is better for performance.

A quick example?

  • Consider a "Posts" and a "Comments" table, for a blog
    • For each Post, you'll have several lines in the "Comment" table
    • This means that, to display a list of posts with the associated number of comments, you'll have to :
      • Do one query to list the posts
      • Do one query per post to count how many comments it has (Yes, those can be merged into only one, to get the number for all posts at once)
      • Which means several queries.
  • Now, if you add a "number of comments" field into the Posts table :
    • You only need one query to list the posts
    • And no need to query the Comments table : the number of comments are already de-normalized to the Posts table.
    • And only one query that returns one more field is better than more queries.

Now, there are some costs, yes:

    • First, this costs some space on both disk and in memory, as you have some redundant information:
      • The number of comments is stored in the Posts table
      • And you can also find that number by counting rows in the Comments table
    • Second, each time someone adds or removes a comment, you have to:
      • Save or delete the comment, of course
      • But also update the corresponding number in the Posts table.
      • However, if your blog has far more people reading than writing comments, this is probably not so bad.

 

After I was done with this small project, I helped one of the HPC consultants with Hadoop benchmarking: measuring how fast our cluster is in terms of distributed I/O by running TestDFSIO, which stresses the cluster to see whether it can handle high I/O, and measuring how fast map/reduce is by running TeraSort. The benchmarking also included some use cases, such as PageRank.

Globus is a connected set of services for data management. It can be used to move data between your local machine and the cluster. It is based on GridFTP.
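For example, with the GridFTP command-line client (the endpoint host and paths below are placeholders):

## copy a local file to the cluster over GridFTP
globus-url-copy file:///home/me/data.tar gsiftp://gridftp.example.edu/scratch/netID/data.tar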

Hadoop testing

TestDFSIO (TestDFSIO.java is available online): tests how fast your cluster is in terms of I/O. It is a read and write test. Since the read test does not generate its own files, we need to perform the write operation first. The most important outputs are throughput (mb/sec) and average I/O rate (mb/sec).
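A sketch of how it is invoked (the hadoop-test jar name is an assumption based on the same CDH layout as the TeraSort commands below; 1000 files of 1000 MB matches the sample output further down):

## write test first, since the read test reuses its files
sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-test.jar TestDFSIO -write -nrFiles 1000 -fileSize 1000
## read test over the files written above
sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-test.jar TestDFSIO -read -nrFiles 1000 -fileSize 1000
## clean up the benchmark files afterwards
sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-test.jar TestDFSIO -clean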

## teragen: generate the TeraSort input (each row is 100 bytes); a small trial run first, then 100 million rows (about 10 GB). The output directory must be removed between runs.
sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar teragen 1000 /user/cloudera/terasort-input
sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar teragen 100000000 /user/cloudera/terasort-input
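The sort and validation stages (described in more detail below) run from the same jar; the output directories here are my own choices:

## terasort: sort the generated input
sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar terasort /user/cloudera/terasort-input /user/cloudera/terasort-output
## teravalidate: check that the output is globally sorted
sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar teravalidate /user/cloudera/terasort-output /user/cloudera/terasort-validate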

TestDFSIO sample output:

----- TestDFSIO ----- : write
           Date & time: Fri Apr 08 2011
       Number of files: 1000
Total MBytes processed: 1000000
     Throughput mb/sec: 4.989
Average IO rate mb/sec: 5.185
 IO rate std deviation: 0.960
    Test exec time sec: 1113.53

----- TestDFSIO ----- : read
           Date & time: Fri Apr 08 2011
       Number of files: 1000
Total MBytes processed: 1000000
     Throughput mb/sec: 11.349
Average IO rate mb/sec: 22.341
 IO rate std deviation: 119.231
    Test exec time sec: 544.842

Throughput is calculated as the total amount of data handled by all map tasks divided by the total time they spent; that is, throughput (mb/sec) = total MBytes processed / sum of each map task's I/O time in seconds.

The average I/O rate is the sum of the individual rate of each map task divided by the number of map tasks.

TeraSort: basically, the goal of TeraSort is to sort 1 TB of data (or any other amount of data you want) as fast as possible. It is a benchmark that combines testing of the HDFS and MapReduce layers of a Hadoop cluster. It has three stages: TeraGen, TeraSort, and TeraValidate.

TeraGen generates output data that is byte for byte equivalent to the C version including the newlines and specific keys. It divides the desired number of rows by the desired number of tasks and assigns ranges of rows to each map. The map jumps the random number generator to the correct value for the first row and generates the following rows.

TeraSort is a standard map/reduce sort, except for a custom partitioner that uses a sorted list of N-1 sampled keys to define the key range for each reduce. In particular, all keys such that sample[i-1] <= key < sample[i] are sent to reduce i. This guarantees that the outputs of reduce i are all less than the outputs of reduce i+1. To speed up the partitioning, the partitioner builds a two-level trie that quickly indexes into the list of sample keys based on the first two bytes of the key. TeraSort generates the sample keys by sampling the input before the job is submitted and writing the list of keys into HDFS. The input and output formats, which are used by all three applications, read and write the text files in the right format. The output of the reduce has replication set to 1, instead of the default 3, because the contest does not require the output data to be replicated onto multiple nodes.

TeraValidate ensures that the output is globally sorted. It creates one map per file in the output directory, and each map ensures that each key is greater than or equal to the previous one. The map also generates records with the first and last keys of the file, and the reduce ensures that the first key of file i is greater than the last key of file i-1. Any problems are reported as output of the reduce, with the keys that are out of order.

MRBench (see src/test/org/apache/hadoop/mapred/MRBench.java) loops a small job a number of times. As such, it is a complementary benchmark to the "large-scale" TeraSort benchmark suite, because MRBench checks whether small job runs are responsive and run efficiently on your cluster. It focuses on the MapReduce layer, as its impact on the HDFS layer is very limited.
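A typical invocation (a sketch, reusing the hadoop-test jar assumed above):

## run a small 1-map/1-reduce job 50 times and report the average runtime
sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-test.jar mrbench -numRuns 50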

Reposted from: https://www.cnblogs.com/touchdown/p/5174292.html
