ARouter 1.4.0+: still getting "Compiler >>> No module name" after adding the configuration

This post walks through a common build error when using ARouter for componentized development: `ARouter::Compiler >>> No module name`. By comparing the official documentation against the advice repeated in popular blog posts, it shows the correct configuration (the annotation processor argument key must be AROUTER_MODULE_NAME, not moduleName) and stresses that the annotationProcessor dependency must be added in every module that performs navigation.

While looking into componentization recently, I used ARouter and hit this build error: `ARouter::Compiler >>> No module name, for more information, look at gradle log`. Naturally I searched Baidu and read no fewer than 20 blog posts, all of which uniformly said to add a configuration like this to build.gradle:

(screenshot: the configuration commonly suggested in blog posts, using `moduleName` as the argument key)
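The original screenshot is no longer available. Based on the text that follows, the configuration those blog posts suggested was roughly the following sketch (the surrounding block structure is assumed; only the argument key is confirmed by the post):

```groovy
// build.gradle (module) -- the configuration commonly copied from blog posts.
// This does NOT work with newer ARouter versions: the argument key is wrong.
android {
    defaultConfig {
        javaCompileOptions {
            annotationProcessorOptions {
                arguments = [moduleName: project.getName()]  // wrong key
            }
        }
    }
}
```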
That did nothing at all; the same error kept appearing. I then went to GitHub and read ARouter's official README, where the configuration looks like this:

(screenshot: the official configuration, using `AROUTER_MODULE_NAME` as the argument key)
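That screenshot is also missing. Per the official README at the time, the configuration was along these lines (a sketch; the compiler version matches the one mentioned later in this post, and the exact block nesting is assumed):

```groovy
// build.gradle (module) -- configuration from the official ARouter README
android {
    defaultConfig {
        javaCompileOptions {
            annotationProcessorOptions {
                // The key must be AROUTER_MODULE_NAME, not moduleName
                arguments = [AROUTER_MODULE_NAME: project.getName()]
            }
        }
    }
}

dependencies {
    annotationProcessor 'com.alibaba:arouter-compiler:1.2.1'
}
```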
Can you spot the difference? Right: `arguments = [AROUTER_MODULE_NAME: project.getName()]`. The key is AROUTER_MODULE_NAME, not moduleName. After changing it, the project compiled and ran successfully. One more thing to note: every module that needs to navigate must add `annotationProcessor 'com.alibaba:arouter-compiler:1.2.1'` itself; adding it only to the shared common module is not enough. I'm writing this down in the hope that anyone hitting the same problem can skip the detour.
