Could not create and/or set value back on to object.

A Problem with the Struts2 ModelDriven Mechanism
This article examines a problem with the ModelDriven mechanism of the Struts2 framework, specifically how the framework breaks when the model class does not provide a no-argument constructor. Using an example, it analyzes the cause of the errors and discusses a possible fix.


SEVERE: Error building bean
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'cn.it.shop.model.Category': Unsatisfied dependency expressed through constructor argument with index 0 of type [java.lang.String]: : No qualifying bean of type [java.lang.String] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [java.lang.String] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}

Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [java.lang.String] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}

SEVERE: Could not create and/or set value back on to object
java.lang.InstantiationException: cn.it.shop.model.Category
at java.lang.Class.newInstance(Class.java:359)
at com.opensymphony.xwork2.ObjectFactory.buildBean(ObjectFactory.java:158)
at com.opensymphony.xwork2.spring.SpringObjectFactory.buildBean(SpringObjectFactory.java:204)
at com.opensymphony.xwork2.conversion.impl.InstantiatingNullHandler.createObject(InstantiatingNullHandler.java:163)

SEVERE: Exception occurred during processing request: attempt to create saveOrUpdate event with null entity
java.lang.IllegalArgumentException: attempt to create saveOrUpdate event with null entity
at org.hibernate.event.spi.SaveOrUpdateEvent.<init>(SaveOrUpdateEvent.java:62)
at org.hibernate.event.spi.SaveOrUpdateEvent.<init>(SaveOrUpdateEvent.java:45)
at org.hibernate.internal.SessionImpl.update(SessionImpl.java:731)
at org.hibernate.internal.SessionImpl.update(SessionImpl.java:726)
at cn.it.shop.service.impl.CategoryServiceImpl.update(CategoryServiceImpl.java:46)


  • Cause:

Struts2's ModelDriven mechanism creates the model object through reflection.
Reflection here means Class.newInstance(), which XWork's ObjectFactory uses, and it requires the model class to have a no-argument constructor.
When a class declares no constructor at all, the Java compiler supplies a default no-argument constructor.
Adding a constructor of your own suppresses that default, so a class whose only constructor takes parameters no longer has a no-argument constructor and can no longer be instantiated this way.
This explains the whole cascade above: SpringObjectFactory first asks Spring to build cn.it.shop.model.Category, Spring tries to autowire the constructor's String argument and finds no qualifying String bean (the UnsatisfiedDependencyException), the fallback to Class.newInstance() then throws InstantiationException, and because the model is never created it stays null, so the later session.update() in CategoryServiceImpl fails with "attempt to create saveOrUpdate event with null entity". A sketch of the fix follows below.
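
The fix is simply to declare the no-argument constructor explicitly alongside the parameterized one. Below is a minimal, hypothetical sketch of cn.it.shop.model.Category; the field names are assumptions, since the logs only reveal a single String constructor argument:

public class Category {

    private Integer id;   // assumed field
    private String name;  // assumed field backing the String constructor argument

    // This constructor suppressed the compiler-generated default constructor
    // and is what triggered the InstantiationException above.
    public Category(String name) {
        this.name = name;
    }

    // Fix: declare the no-argument constructor explicitly so that
    // Struts2's ObjectFactory can instantiate the model via Class.newInstance().
    public Category() {
    }

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

For context, a skeleton of the ModelDriven action is sketched below as well (CategoryAction and the categoryService field are assumptions; only Category and CategoryServiceImpl appear in the logs). When request parameters target a model that is still null, XWork's InstantiatingNullHandler tries to create it reflectively, and that is exactly the call that fails:

import com.opensymphony.xwork2.ActionSupport;
import com.opensymphony.xwork2.ModelDriven;

public class CategoryAction extends ActionSupport implements ModelDriven<Category> {

    private CategoryService categoryService; // assumed to be injected by Spring
    private Category category;               // left null; Struts2 instantiates it while populating parameters

    @Override
    public Category getModel() {
        return category;
    }

    public String update() {
        // If the model could not be created, category is still null here,
        // which produces Hibernate's "saveOrUpdate event with null entity" error.
        categoryService.update(category);
        return SUCCESS;
    }
}

Once Category regains its no-argument constructor, Struts2 can build the model, the request parameters are set on it, and the update proceeds normally.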

PowerShell 7 环境已加载 (版本: 7.5.2) PowerShell 7 环境已加载 (版本: 7.5.2) PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> python -m venv rtx5070_env PS E:\PyTorch_Build\pytorch> .\rtx5070_env\Scripts\activate (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 修复之前的脚本错误 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $fixedActivation = @" >> try { >> & "$activatePath" >> Write-Host "✅ 虚拟环境激活成功" -ForegroundColor Green >> python -VV >> } >> catch [System.Exception] { >> Write-Host "❌ 激活失败: $($_.Exception.Message)" -ForegroundColor Red >> } >> "@ InvalidOperation: Line | 3 | & "$activatePath" | ~~~~~~~~~~~~~ | The variable '$activatePath' cannot be retrieved because it has not been set. (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 切换到PyTorch源码目录 (rtx5070_env) PS E:\PyTorch_Build\pytorch> cd E:\PyTorch_Build\pytorch (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 更新pip到最新版 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python -m pip install --upgrade pip Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pip in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (22.3.1) Collecting pip Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b7/3f/945ef7ab14dc4f9d7f40288d2df998d1837ee0888ec3659c813487572faa/pip-25.2-py3-none-any.whl (1.8 MB) Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 22.3.1 Uninstalling pip-22.3.1: Successfully uninstalled pip-22.3.1 Successfully installed pip-25.2 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 安装编译依赖 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install -r requirements-build.txt --verbose Using pip 25.2 from E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\pip (python 3.10) Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting setuptools<80.0,>=70.1.0 (from -r requirements-build.txt (line 2)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/0d/6d/b4752b044bf94cb802d88a888dc7d288baaf77d7910b7dedda74b5ceea0c/setuptools-79.0.1-py3-none-any.whl (1.3 MB) Collecting cmake>=3.27 (from -r requirements-build.txt (line 3)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/7c/d0/73cae88d8c25973f2465d5a4457264f95617c16ad321824ed4c243734511/cmake-4.1.0-py3-none-win_amd64.whl (37.6 MB) Collecting ninja (from -r requirements-build.txt (line 4)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/29/45/c0adfbfb0b5895aa18cec400c535b4f7ff3e52536e0403602fc1a23f7de9/ninja-1.13.0-py3-none-win_amd64.whl (309 kB) Link requires a different Python (3.10.10 not in: '>=3.11'): https://pypi.tuna.tsinghua.edu.cn/packages/f3/db/8e12381333aea300890829a0a36bfa738cac95475d88982d538725143fd9/numpy-2.3.0.tar.gz (from https://pypi.tuna.tsinghua.edu.cn/simple/numpy/) (requires-python:>=3.11) Link requires a different Python (3.10.10 not in: '>=3.11'): https://pypi.tuna.tsinghua.edu.cn/packages/2e/19/d7c972dfe90a353dbd3efbbe1d14a5951de80c99c9dc1b93cd998d51dc0f/numpy-2.3.1.tar.gz (from https://pypi.tuna.tsinghua.edu.cn/simple/numpy/) (requires-python:>=3.11) Link requires a different Python (3.10.10 not in: '>=3.11'): https://pypi.tuna.tsinghua.edu.cn/packages/37/7d/3fec4199c5ffb892bed55cff901e4f39a58c81df9c44c280499e92cad264/numpy-2.3.2.tar.gz (from https://pypi.tuna.tsinghua.edu.cn/simple/numpy/) (requires-python:>=3.11) Collecting numpy (from -r requirements-build.txt (line 5)) Using cached 
https://pypi.tuna.tsinghua.edu.cn/packages/a3/dd/4b822569d6b96c39d1215dbae0582fd99954dcbcf0c1a13c61783feaca3f/numpy-2.2.6-cp310-cp310-win_amd64.whl (12.9 MB) Collecting packaging (from -r requirements-build.txt (line 6)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl (66 kB) Collecting pyyaml (from -r requirements-build.txt (line 7)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b5/84/0fa4b06f6d6c958d207620fc60005e241ecedceee58931bb20138e1e5776/PyYAML-6.0.2-cp310-cp310-win_amd64.whl (161 kB) Collecting requests (from -r requirements-build.txt (line 8)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl (64 kB) Collecting six (from -r requirements-build.txt (line 9)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl (11 kB) Collecting typing-extensions>=4.10.0 (from -r requirements-build.txt (line 10)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl (44 kB) Collecting charset_normalizer<4,>=2 (from requests->-r requirements-build.txt (line 8)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/e2/c6/f05db471f81af1fa01839d44ae2a8bfeec8d2a8b4590f16c4e7393afd323/charset_normalizer-3.4.3-cp310-cp310-win_amd64.whl (107 kB) Collecting idna<4,>=2.5 (from requests->-r requirements-build.txt (line 8)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl (70 kB) Collecting urllib3<3,>=1.21.1 (from requests->-r requirements-build.txt (line 8)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a7/c2/fe1e52489ae3122415c51f387e221dd0773709bad6c6cdaa599e8a2c5185/urllib3-2.5.0-py3-none-any.whl (129 kB) Collecting certifi>=2017.4.17 (from requests->-r requirements-build.txt (line 8)) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/e5/48/1549795ba7742c948d2ad169c1c8cdbae65bc450d6cd753d124b17c8cd32/certifi-2025.8.3-py3-none-any.whl (161 kB) Installing collected packages: urllib3, typing-extensions, six, setuptools, pyyaml, packaging, numpy, ninja, idna, cmake, charset_normalizer, certifi, requests Attempting uninstall: setuptools Found existing installation: setuptools 65.5.0 Uninstalling setuptools-65.5.0: Removing file or directory e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages\_distutils_hack\ Removing file or directory e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages\distutils-precedence.pth Removing file or directory e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages\pkg_resources\ Removing file or directory e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages\setuptools-65.5.0.dist-info\ Removing file or directory e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages\setuptools\ Successfully uninstalled setuptools-65.5.0 Successfully installed certifi-2025.8.3 charset_normalizer-3.4.3 cmake-4.1.0 idna-3.10 ninja-1.13.0 numpy-2.2.6 packaging-25.0 pyyaml-6.0.2 requests-2.32.5 setuptools-79.0.1 six-1.17.0 typing-extensions-4.15.0 urllib3-2.5.0 (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip install cmake ninja --upgrade Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: cmake in 
e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (4.1.0) Requirement already satisfied: ninja in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (1.13.0) (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 清理旧编译产物 (rtx5070_env) PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force build, dist -ErrorAction SilentlyContinue (rtx5070_env) PS E:\PyTorch_Build\pytorch> Write-Host "`n==== 编译环境验证 ====" -ForegroundColor Cyan ==== 编译环境验证 ==== (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 1. 目录验证 (rtx5070_env) PS E:\PyTorch_Build\pytorch> Write-Host "当前目录: $pwd" 当前目录: E:\PyTorch_Build\pytorch (rtx5070_env) PS E:\PyTorch_Build\pytorch> if ($pwd -ne "E:\PyTorch_Build\pytorch") { >> Write-Host "⚠️ 错误: 需要切换到E:\PyTorch_Build\pytorch" -ForegroundColor Yellow >> cd E:\PyTorch_Build\pytorch >> } ⚠️ 错误: 需要切换到E:\PyTorch_Build\pytorch (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 2. CUDA工具链验证 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $cudaStatus = @( >> "nvcc --version", >> "nvidia-smi", >> "where cudnn64_8.dll" >> ) (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> foreach ($cmd in $cudaStatus) { >> Write-Host "`n执行: $cmd" -ForegroundColor Magenta >> try { >> Invoke-Expression $cmd >> } >> catch { >> Write-Host "❌ 命令失败: $_" -ForegroundColor Red >> } >> } 执行: nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2025 NVIDIA Corporation Built on Wed_Jul_16_20:06:48_Pacific_Daylight_Time_2025 Cuda compilation tools, release 13.0, V13.0.48 Build cuda_13.0.r13.0/compiler.36260728_0 执行: nvidia-smi Wed Sep 3 22:04:47 2025 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 580.97 Driver Version: 580.97 CUDA Version: 13.0 | +-----------------------------------------+------------------------+----------------------+ | GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 5070 WDDM | 00000000:01:00.0 On | N/A | | 0% 38C P3 22W / 250W | 1601MiB / 12227MiB | 1% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | 0 N/A N/A 1540 C+G ...yb3d8bbwe\WindowsTerminal.exe N/A | | 0 N/A N/A 1916 C+G C:\Windows\System32\dwm.exe N/A | | 0 N/A N/A 4972 C+G ...em32\ApplicationFrameHost.exe N/A | | 0 N/A N/A 5036 C+G ...ef.win7x64\steamwebhelper.exe N/A | | 0 N/A N/A 5996 C+G ...8bbwe\PhoneExperienceHost.exe N/A | | 0 N/A N/A 6420 C+G ...ntrolPanel\SystemSettings.exe N/A | | 0 N/A N/A 8280 C+G C:\Windows\explorer.exe N/A | | 0 N/A N/A 8428 C+G ...indows\System32\ShellHost.exe N/A | | 0 N/A N/A 8616 C+G ..._cw5n1h2txyewy\SearchHost.exe N/A | | 0 N/A N/A 9212 C+G ...y\StartMenuExperienceHost.exe N/A | | 0 N/A N/A 10092 C+G ...0.3405.125\msedgewebview2.exe N/A | | 0 N/A N/A 12816 C+G ...5n1h2txyewy\TextInputHost.exe N/A | | 0 N/A N/A 13400 C+G ...crosoft\OneDrive\OneDrive.exe N/A | | 0 N/A N/A 14212 C+G ...t\Edge\Application\msedge.exe N/A | | 0 N/A N/A 14440 C+G ...acted\runtime\WeChatAppEx.exe N/A | | 0 N/A N/A 15156 C+G ...les\Tencent\Weixin\Weixin.exe N/A | | 0 N/A N/A 18312 C+G ...es\Microsoft VS Code\Code.exe N/A | +-----------------------------------------------------------------------------------------+ 执行: where cudnn64_8.dll (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 3. 
Python环境验证 (rtx5070_env) PS E:\PyTorch_Build\pytorch> Write-Host "`nPython环境状态:" -ForegroundColor Magenta Python环境状态: (rtx5070_env) PS E:\PyTorch_Build\pytorch> pip show torch | Select-String "Location" WARNING: Package(s) not found: torch (rtx5070_env) PS E:\PyTorch_Build\pytorch> python -c "import torch; print(f'PyTorch版本: {torch.__version__}')" Traceback (most recent call last): File "<string>", line 1, in <module> File "E:\PyTorch_Build\pytorch\torch\__init__.py", line 61, in <module> from torch.torch_version import __version__ as __version__ File "E:\PyTorch_Build\pytorch\torch\torch_version.py", line 5, in <module> from torch.version import __version__ as internal_version ModuleNotFoundError: No module named 'torch.version' (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置RTX 5070专属编译参数 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $cmakeArgs = @( >> "-B build", >> "-G Ninja", >> "-DUSE_CUDA=ON", >> "-DUSE_CUDNN=ON", >> "-DCUDA_TOOLKIT_ROOT_DIR=`"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0`"", >> "-DCUDNN_ROOT_DIR=`"E:\Program Files\NVIDIA\CUNND\v9.12`"", >> "-DCUDA_ARCH_LIST=`"8.9`"", # RTX 5070架构 >> "-DTORCH_CUDA_ARCH_LIST=`"8.9`"", >> "-DCMAKE_BUILD_TYPE=Release", >> "-DUSE_NCCL=OFF", >> "-DUSE_MKLDNN=ON", >> "-DTORCH_CUDA_VERSION=11.8" # 兼容旧驱动 >> ) (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 启动配置过程 (rtx5070_env) PS E:\PyTorch_Build\pytorch> cmake ($cmakeArgs -join " ") CMake Error: Unable to (re)create the private pkgRedirects directory: E:/PyTorch_Build/pytorch/build -G Ninja -DUSE_CUDA=ON -DUSE_CUDNN=ON -DCUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0" -DCUDNN_ROOT_DIR="E:/Program Files/NVIDIA/CUNND/v9.12" -DCUDA_ARCH_LIST="8.9" -DTORCH_CUDA_ARCH_LIST="8.9" -DCMAKE_BUILD_TYPE=Release -DUSE_NCCL=OFF -DUSE_MKLDNN=ON -DTORCH_CUDA_VERSION=11.8/CMakeFiles/pkgRedirects This may be caused by not having read/write access to the build directory. Try specifying a location with read/write access like: cmake -B build If using a CMake presets file, ensure that preset parameter 'binaryDir' expands to a writable directory. (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 设置并行编译(根据CPU核心数调整) (rtx5070_env) PS E:\PyTorch_Build\pytorch> $env:MAX_JOBS = 8 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 启动编译并记录日志 (rtx5070_env) PS E:\PyTorch_Build\pytorch> $logFile = "build_$(Get-Date -Format 'yyyyMMdd_HHmmss').log" (rtx5070_env) PS E:\PyTorch_Build\pytorch> Start-Transcript -Path $logFile Transcript started, output file is build_20250903_220514.log (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> try { >> cmake --build build --config Release --parallel $env:MAX_JOBS >> pip install -v --no-build-isolation . >> } >> catch { >> Write-Host "🔥 编译失败!错误详情: $_" -ForegroundColor Red >> } Error: E:/PyTorch_Build/pytorch/build is not a directory Using pip 25.2 from E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\pip (python 3.10) Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Processing e:\pytorch_build\pytorch Running command Preparing metadata (pyproject.toml) Building wheel torch-2.9.0a0+git2d31c3d E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\setuptools\config\_apply_pyprojecttoml.py:82: SetuptoolsDeprecationWarning: `project.license` as a TOML table is deprecated !! 
******************************************************************************** Please use a simple string containing a SPDX expression for `project.license`. You can also use `project.license-files`. (Both options available on setuptools>=77.0.0). By 2026-Feb-18, you need to update your project and remove deprecated calls or your builds will no longer be supported. See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#license for details. ******************************************************************************** !! corresp(dist, value, root_dir) running dist_info creating C:\Users\Administrator\AppData\Local\Temp\pip-modern-metadata-inyji1j9\torch.egg-info writing C:\Users\Administrator\AppData\Local\Temp\pip-modern-metadata-inyji1j9\torch.egg-info\PKG-INFO writing dependency_links to C:\Users\Administrator\AppData\Local\Temp\pip-modern-metadata-inyji1j9\torch.egg-info\dependency_links.txt writing entry points to C:\Users\Administrator\AppData\Local\Temp\pip-modern-metadata-inyji1j9\torch.egg-info\entry_points.txt writing requirements to C:\Users\Administrator\AppData\Local\Temp\pip-modern-metadata-inyji1j9\torch.egg-info\requires.txt writing top-level names to C:\Users\Administrator\AppData\Local\Temp\pip-modern-metadata-inyji1j9\torch.egg-info\top_level.txt writing manifest file 'C:\Users\Administrator\AppData\Local\Temp\pip-modern-metadata-inyji1j9\torch.egg-info\SOURCES.txt' reading manifest file 'C:\Users\Administrator\AppData\Local\Temp\pip-modern-metadata-inyji1j9\torch.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching 'BUILD' warning: no files found matching '*.BUILD' warning: no files found matching 'BUCK' warning: no files found matching '[Mm]akefile.*' warning: no files found matching '*.[Dd]ockerfile' warning: no files found matching '[Dd]ockerfile.*' warning: no previously-included files matching '*.o' found anywhere in distribution warning: no previously-included files matching '*.obj' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution warning: no previously-included files matching '*.a' found anywhere in distribution warning: no previously-included files matching '*.dylib' found anywhere in distribution no previously-included directories found matching '*\.git' warning: no previously-included files matching '*~' found anywhere in distribution warning: no previously-included files matching '*.swp' found anywhere in distribution adding license file 'LICENSE' adding license file 'NOTICE' writing manifest file 'C:\Users\Administrator\AppData\Local\Temp\pip-modern-metadata-inyji1j9\torch.egg-info\SOURCES.txt' creating 'C:\Users\Administrator\AppData\Local\Temp\pip-modern-metadata-inyji1j9\torch-2.9.0a0+git2d31c3d.dist-info' Preparing metadata (pyproject.toml) ... 
done Collecting filelock (from torch==2.9.0a0+git2d31c3d) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/42/14/42b2651a2f46b022ccd948bca9f2d5af0fd8929c4eec235b8d6d844fbe67/filelock-3.19.1-py3-none-any.whl (15 kB) Requirement already satisfied: typing-extensions>=4.10.0 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from torch==2.9.0a0+git2d31c3d) (4.15.0) Collecting sympy>=1.13.3 (from torch==2.9.0a0+git2d31c3d) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a2/09/77d55d46fd61b4a135c444fc97158ef34a095e5681d0a6c10b75bf356191/sympy-1.14.0-py3-none-any.whl (6.3 MB) Link requires a different Python (3.10.10 not in: '>=3.11'): https://pypi.tuna.tsinghua.edu.cn/packages/eb/8d/776adee7bbf76365fdd7f2552710282c79a4ead5d2a46408c9043a2b70ba/networkx-3.5-py3-none-any.whl (from https://pypi.tuna.tsinghua.edu.cn/simple/networkx/) (requires-python:>=3.11) Link requires a different Python (3.10.10 not in: '>=3.11'): https://pypi.tuna.tsinghua.edu.cn/packages/6c/4f/ccdb8ad3a38e583f214547fd2f7ff1fc160c43a75af88e6aec213404b96a/networkx-3.5.tar.gz (from https://pypi.tuna.tsinghua.edu.cn/simple/networkx/) (requires-python:>=3.11) Link requires a different Python (3.10.10 not in: '>=3.11'): https://pypi.tuna.tsinghua.edu.cn/packages/3f/a1/46c1b6e202e3109d2a035b21a7e5534c5bb233ee30752d7f16a0bd4c3989/networkx-3.5rc0-py3-none-any.whl (from https://pypi.tuna.tsinghua.edu.cn/simple/networkx/) (requires-python:>=3.11) Link requires a different Python (3.10.10 not in: '>=3.11'): https://pypi.tuna.tsinghua.edu.cn/packages/90/7e/0319606a20ced20730806b9f7fe91d8a92f7da63d76a5c388f87d3f7d294/networkx-3.5rc0.tar.gz (from https://pypi.tuna.tsinghua.edu.cn/simple/networkx/) (requires-python:>=3.11) Collecting networkx>=2.5.1 (from torch==2.9.0a0+git2d31c3d) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b9/54/dd730b32ea14ea797530a4479b2ed46a6fb250f682a9cfb997e968bf0261/networkx-3.4.2-py3-none-any.whl (1.7 MB) Collecting jinja2 (from torch==2.9.0a0+git2d31c3d) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/62/a1/3d680cbfd5f4b8f15abc1d571870c5fc3e594bb582bc3b64ea099db13e56/jinja2-3.1.6-py3-none-any.whl (134 kB) Collecting fsspec>=0.8.5 (from torch==2.9.0a0+git2d31c3d) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/47/71/70db47e4f6ce3e5c37a607355f80da8860a33226be640226ac52cb05ef2e/fsspec-2025.9.0-py3-none-any.whl (199 kB) Collecting mpmath<1.4,>=1.1.0 (from sympy>=1.13.3->torch==2.9.0a0+git2d31c3d) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/43/e3/7d92a15f894aa0c9c4b49b8ee9ac9850d6e63b03c9c32c0367a13ae62209/mpmath-1.3.0-py3-none-any.whl (536 kB) Collecting MarkupSafe>=2.0 (from jinja2->torch==2.9.0a0+git2d31c3d) Using cached https://pypi.tuna.tsinghua.edu.cn/packages/44/06/e7175d06dd6e9172d4a69a72592cb3f7a996a9c396eee29082826449bbc3/MarkupSafe-3.0.2-cp310-cp310-win_amd64.whl (15 kB) Building wheels for collected packages: torch Running command Building wheel for torch (pyproject.toml) Building wheel torch-2.9.0a0+git2d31c3d -- Building version 2.9.0a0+git2d31c3d E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\setuptools\_distutils\_msvccompiler.py:12: UserWarning: _get_vc_env is private; find an alternative (pypa/distutils#340) warnings.warn( Cloning into 'nccl'... Note: switching to '3ea7eedf3b9b94f1d9f99f4e55536dfcbd23c1ca'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by switching back to a branch. 
If you want to create a new branch to retain commits you create, you may do so (now or later) by using -c with the switch command. Example: git switch -c <new-branch-name> Or undo this operation with: git switch - Turn off this advice by setting config variable advice.detachedHead to false cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=E:\PyTorch_Build\pytorch\torch -DCMAKE_PREFIX_PATH=E:\PyTorch_Build\pytorch\rtx5070_env\Lib\site-packages -DCUDNN_INCLUDE_DIR=E:\Program Files\NVIDIA\CUNND\v9.12\include\12.9 -DCUDNN_LIBRARY=E:\Program Files\NVIDIA\CUNND\v9.12\lib\12.9\x64 -DCUDNN_ROOT=E:\Program Files\NVIDIA\CUNND\v9.12 -DPython_EXECUTABLE=E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe -DPython_NumPy_INCLUDE_DIR=E:\PyTorch_Build\pytorch\rtx5070_env\lib\site-packages\numpy\_core\include -DTORCH_BUILD_VERSION=2.9.0a0+git2d31c3d -DTORCH_CUDA_ARCH_LIST=8.9 -DUSE_NUMPY=True -DUSE_OPENBLAS=1 E:\PyTorch_Build\pytorch CMake Deprecation Warning at CMakeLists.txt:9 (cmake_policy): The OLD behavior for policy CMP0126 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. -- The CXX compiler identification is MSVC 19.44.35215.0 -- The C compiler identification is MSVC 19.44.35215.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found CMake Warning at CMakeLists.txt:421 (message): TensorPipe cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:423 (message): KleidiAI cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:435 (message): Libuv is not installed in current conda env. Set USE_DISTRIBUTED to OFF. Please run command 'conda install -c conda-forge libuv=1.39' to install libuv. -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Compiler does not support SVE extension. Will not build perfkernels. 
CMake Warning at CMakeLists.txt:841 (message): x64 operating system is required for FBGEMM. Not compiling with FBGEMM. Turn this warning off by USE_FBGEMM=OFF. -- Performing Test HAS/UTF_8 -- Performing Test HAS/UTF_8 - Success -- Found CUDA: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 (found version "13.0") -- The CUDA compiler identification is NVIDIA 13.0.48 with host compiler MSVC 19.44.35215.0 -- Detecting CUDA compiler ABI info -- Detecting CUDA compiler ABI info - done -- Check for working CUDA compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe - skipped -- Detecting CUDA compile features -- Detecting CUDA compile features - done -- Found CUDAToolkit: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include (found version "13.0.48") -- PyTorch: CUDA detected: 13.0 -- PyTorch: CUDA nvcc is: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- PyTorch: CUDA toolkit directory: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- PyTorch: Header version is: 13.0 -- Found Python: E:\PyTorch_Build\pytorch\rtx5070_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter CMake Warning at cmake/public/cuda.cmake:140 (message): Failed to compute shorthash for libnvrtc.so Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:869 (include) -- Found CUDNN: E:/Program Files/NVIDIA/CUNND/v9.12/lib/13.0/x64/cudnn.lib -- Could NOT find CUSPARSELT (missing: CUSPARSELT_LIBRARY_PATH CUSPARSELT_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:226 (message): Cannot find cuSPARSELt library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:869 (include) -- Could NOT find CUDSS (missing: CUDSS_LIBRARY_PATH CUDSS_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:242 (message): Cannot find CUDSS library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:869 (include) -- USE_CUFILE is set to 0. Compiling without cuFile support CMake Warning at cmake/public/cuda.cmake:317 (message): pytorch is not compatible with `CMAKE_CUDA_ARCHITECTURES` and will ignore its value. Please configure `TORCH_CUDA_ARCH_LIST` instead. Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:869 (include) -- Added CUDA NVCC flags for: -gencode;arch=compute_89,code=sm_89 CMake Warning at cmake/Dependencies.cmake:95 (message): Not compiling with XPU. Could NOT find SYCL. Suppress this warning with -DUSE_XPU=OFF. Call Stack (most recent call first): CMakeLists.txt:869 (include) -- Building using own protobuf under third_party per request. -- Use custom protobuf build. CMake Warning at cmake/ProtoBuf.cmake:37 (message): Ancient protobuf forces CMake compatibility Call Stack (most recent call first): cmake/ProtoBuf.cmake:87 (custom_protobuf_find) cmake/Dependencies.cmake:107 (include) CMakeLists.txt:869 (include) CMake Deprecation Warning at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. 
-- -- 3.13.0.0 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - not found -- Found Threads: TRUE -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:E:/PyTorch_Build/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- Trying to find preferred BLAS backend of choice: MKL -- MKL_THREADING = OMP -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- MKL_THREADING = OMP CMake Warning at cmake/Dependencies.cmake:213 (message): MKL could not be found. Defaulting to Eigen Call Stack (most recent call first): CMakeLists.txt:869 (include) CMake Warning at cmake/Dependencies.cmake:279 (message): Preferred BLAS (MKL) cannot be found, now searching for a general BLAS library Call Stack (most recent call first): CMakeLists.txt:869 (include) -- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Looking for sgemm_ -- Looking for sgemm_ - not found -- Cannot 
find a library with BLAS API. Not using BLAS. -- Using pocketfft in directory: E:/PyTorch_Build/pytorch/third_party/pocketfft/ CMake Deprecation Warning at third_party/pthreadpool/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/FXdiv/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/cpuinfo/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- The ASM compiler identification is MSVC CMake Warning (dev) at rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineASMCompiler.cmake:234 (message): Policy CMP194 is not set: MSVC is not an assembler for language ASM. Run "cmake --help-policy CMP194" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Call Stack (most recent call first): third_party/XNNPACK/CMakeLists.txt:18 (PROJECT) This warning is for project developers. Use -Wno-dev to suppress it. -- Found assembler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Building for XNNPACK_TARGET_PROCESSOR: x86_64 -- Generating microkernels.cmake
09-04
这个是库的 文档说明 react-native-vision Library for accessing VisionKit and visual applications of CoreML from React Native. iOS Only Incredibly super-alpha, and endeavors to provide a relatively thin wrapper between the underlying vision functionality and RN. Higher-level abstractions are @TODO and will be in a separate library. Installation yarn add react-native-vision react-native-swift react-native link Note react-native-swift is a peer dependency of react-native-vision. If you are running on a stock RN deployment (e.g. from react-native init) you will need to make sure your app is targeting IOS 11 or higher: yarn add react-native-fix-ios-version react-native link Since this module uses the camera, it will work much better on a device, and setting up permissions and codesigning in advance will help: yarn add -D react-native-camera-ios-enable yarn add -D react-native-setdevteam react-native link react-native setdevteam Then you are ready to run! react-native run-ios --device Command line - adding a Machine Learning Model with add-mlmodel react-native-vision makes it easier to bundle a pre-built machine learning model into your app. After installing, you will find the following command available: react-native add-mlmodel /path/to/mymodel.mlmodel You may also refere to the model from a URL, which is handy when getting something off the interwebs. For example, to apply the pre-built mobileNet model from apple, you can: react-native add-mlmodel https://docs-assets.developer.apple.com/coreml/models/MobileNet.mlmodel Note that the name of your model in the code will be the same as the filename minus the "mlmodel". In the above case, the model in code can be referenced as "MobileNet" Easy Start 1 : Full Frame Object Detection One of the most common easy use cases is just detecting what is in front of you. For this we use the VisionCamera component that lets you apply a model and get the classification via render props. Setup react-native init imagedetector; cd imagedetector yarn add react-native-swift react-native-vision yarn add react-native-fix-ios-version react-native-camera-ios-enable react-native-setdevteam react-native link react-native setdevteam Load your model with MobileNet A free download from Apple! react-native add-mlmodel https://docs-assets.developer.apple.com/coreml/models/MobileNet.mlmodel Add Some App Code import React from "react"; import { Text } from "react-native"; import { VisionCamera } from "react-native-vision"; export default () => ( <VisionCamera style={{ flex: 1 }} classifier="MobileNet"> {({ label, confidence }) => ( <Text style={{ width: "75%", fontSize: 50, position: "absolute", right: 50, bottom: 100 }} > {label + " :" + (confidence * 100).toFixed(0) + "%"} </Text> )} </VisionCamera> ); Easy Start 2: GeneratorView - for Style Transfer Most machine learning application are classifiers. But generators can be useful and a lot of fun. The GeneratorView lets you look at style transfer models that show how you can use deep learning techniques for creating whole new experiences. Setup react-native init styletest; cd styletest yarn add react-native-swift react-native-vision yarn add react-native-fix-ios-version react-native-camera-ios-enable react-native-setdevteam react-native link react-native setdevteam Load your model with add-mlmodel Apple has not published a style transfer model, but there are a few locations on the web where you can download them. 
Here is one: https://github.com/mdramos/fast-style-transfer-coreml So go to his github, navigate to his google drive, and then download the la_muse model to your personal Downloads directory. react-native add-mlmodel ~/Downloads/la_muse.mlmodel App Code This is the insanely short part. Note that the camera view is not necessary for viewing the style-transferred view: its just for reference. import React from "react"; import { GeneratorView, RNVCameraView } from "react-native-vision"; export default () => ( <GeneratorView generator="FNS-The-Scream" style={{ flex: 1 }}> <RNVCameraView style={{ position: "absolute", height: 200, width: 100, top: 0, right: 0 }} resizeMode="center" /> </GeneratorView> ); Easy Start 3: Face Camera Detect what faces are where in your camera view! Taking a page (and the model!) from (https://github.com/gantman/nicornot)[Gant Laborde's NicOrNot app], here is the entirety of an app that discerns whether the target is nicolas cage. Setup react-native init nictest; cd nictest yarn add react-native-swift react-native-vision yarn add react-native-fix-ios-version react-native-camera-ios-enable react-native-setdevteam react-native link react-native setdevteam Load your model with add-mlmodel react-native add-mlmodel https://s3.amazonaws.com/despiteallmyrage/MegaNic50_linear_5.mlmodel App Code import React from "react"; import { Text, View } from "react-native"; import { FaceCamera } from "react-native-vision"; import { Identifier } from "react-native-identifier"; export default () => ( <FaceCamera style={{ flex: 1 }} classifier="MegaNic50_linear_5"> {({ face, faceConfidence, style }) => face && (face == "nic" ? ( <Identifier style={{ ...style }} accuracy={faceConfidence} /> ) : ( <View style={{ ...style, justifyContent: "center", alignItems: "center" }} > <Text style={{ fontSize: 50, color: "red", opacity: faceConfidence }}> X </Text> </View> )) } </FaceCamera> ); Face Detection Component Reference FacesProvider Context Provider that extends <RNVisionProvider /> to detect, track, and identify faces. Props Inherits from <RNVisionProvider />, plus: interval: How frequently (in ms) to run the face detection re-check. (Basically lower values here keeps the face tracking more accurate) Default: 500 classifier: File URL to compiled MLModel (e.g. mlmodelc) that will be applied to detected faces updateInterval: How frequently (in ms) to update the detected faces - position, classified face, etc. Smaller values will mean smoother animation, but at the price of processor intensity. Default: 100 Example <FacesProvider isStarted={true} isCameraFront={true} classifier={this.state.classifier} > {/* my code for handling detected faces */} </FacesProvider> FacesConsumer Consumer of <FacesProvider /> context. As such, takes no props and returns a render prop function. Render Prop Members faces: Keyed object of information about the detected face. Elements of each object include: region: The key associated with this object (e.g. faces[k].region === k) x, y, height, width: Position and size of the bounding box for the detected face. faces: Array of top-5 results from face classifier, with keys label and confidence face: Label of top-scoring result from classifier (e.g. the face this is most likely to be) faceConfidence: Confidence score of top-scoring result above. Note that when there is no classifier specified, faces, face and faceConfidence are undefined Face Render prop generator to provision information about a single detected face. 
Can be instantiated by spread-propping the output of a single face value from <FacesConsumer> or by appling a faceID that maps to the key of a face. Returns null if no match. Props faceID: ID of the face (corresponding to the key of the faces object in FacesConsumer) Render Prop Members region: The key associated with this object (e.g. faces[k].region === k) x, y, height, width: Position and size of the bounding box for the detected face. Note These are adjusted for the visible camera view when you are rendering from that context. faces: Array of top-5 results from face classifier, with keys label and confidence face: Label of top-scoring result from classifier (e.g. the face this is most likely to be) faceConfidence: Confidence score of top-scoring result above. Note These arguments are the sam Faces A render-prop generator to provision information about all detected faces. Will map all detected faces into <Face> components and apply the children prop to each, so you have one function to generate all your faces. Designed to be similar to FlatMap implentation. Required Provider Context This component must be a descendant of a <FacesProvider> Props None Render Prop Members Same as <Face> above, but output will be mapped across all detected faces. Example of use is in the primary Face Recognizer demo code above. Props faceID: ID of the face applied. isCameraView: Whether the region frame information to generate should be camera-aware (e.g. is it adjusted for a preview window or not) Render Props This largely passes throught the members of the element that you could get from the faces collection from FaceConsumer, with the additional consideration that when isCameraView is set, style: A spreadable set of styling members to position the rectangle, in the same style as a RNVCameraRegion If faceID is provided but does not map to a member of the faces collection, the function will return null. Core Component References The package exports a number of components to facilitate the vision process. Note that the <RNVisionProvider /> needs to be ancestors to any others in the tree. So a simple single-classifier using dominant image would look something like: <RNVisionProvider isStarted={true}> <RNVDefaultRegion classifiers={[{url: this.state.FileUrlOfClassifier, max: 5}]}> {({classifications})=>{ return ( <Text> {classifications[this.state.FileUrlOfClassifier][0].label} </Text> }} </RNVDefaultRegion> </RNVisionProvider> RNVisionProvider Context provider for information captured from the camera. Allows the use of regional detection methods to initialize identification of objects in the frame. Props isStarted: Whether the camera should be activated for vision capture. Boolean isCameraFront: Facing of the camera. False for the back camera, true to use the front. Note only one camera facing can be used at a time. As of now, this is a hardware limitation. regions: Specified regions on the camera capture frame articulated as {x,y,width,height} that should always be returned by the consumer trackedObjects: Specified regions that should be tracked as objects, so that the regions returned match these object IDs and show current position. onRegionsChanged: Fires when the list of regions has been altered onDetectedFaces: Fires when the number of detected faces has changed Class imperative member detectFaces: Triggers one call to detect faces based on current active frame. Directly returns locations. RNVisionConsumer Consumer partner of RNVisionProvider. Must be its descendant in the node tree. 
Render Prop Members imageDimensions: Object representing size of the camera frame in {width, height} isCameraFront: Relaying whether camera is currently in selfie mode. This is important if you plan on displaying camera output, because in selfie mode a preview will be mirrored. regions: The list of detected rectangles in the most recently captured frame, where detection is driven by the RNVisionProvider props RNVRegion Props region: ID of the region (Note the default region, which is the whole frame, has an id of "" - blank.) classifiers: CoreML classifiers passed as file URLs to the classifier mlmodelc itself. Array generators: CoreML image generators passed as file URLs to the classifier mlmodelc itself. Array generators: CoreML models that generate a collection of output values passed as file URLs to the classifier mlmodelc itself. bottlenecks: A collection of CoreML models that take other CoreML model outputs as their inputs. Keys are the file URLs of the original models (that take an image as their input) and values are arrays of mdoels that generate the output passed via render props. onFrameCaptured: Callback to fire when a new image of the current frame in this region has been captured. Making non-null activates frame capture, setting to null turns it off. The callback passes a URL of the saved frame image file. Render Prop members key: ID of the region x, y, width, height: the elements of the frame containing the region. All values expressed as percentages of the overall frame size, so a 50x100 frame at origin 5,10 in a 500x500 frame would come across as {x: 0.01, y: 0.02, width: .1, height: .2}. Changes in these values are often what drives the re-render of the component (and therefore re-run of the render prop) confidence: If set, the confidence that the object identified as key is actually at this location. Used by tracked objects API of iOS Vision. Sometimes null. classifications: Collection, keyed by the file URL of the classifier passed in props, of collections of labels and probabilities. (e.g. {"file:///path/to/myclassifier.mlmodelc": {"label1": 0.84, "label2": 0.84}}) genericResults: Collection of generic results returned from generic models passed in via props to the region RNVDefaultRegion Convenience region that references the full frame. Same props as RNVRegion, except region is always set to "" - the full frame. Useful for simple style transfers or "dominant image" classifiers. Props Same as RNVRegion, with the exception that region is forced to "" Render Prop Members Same as RNVRegion, with the note that key will always be "" RNVCameraView Preview of the camera captured by the RNVisionProvider. Note that the preview is flipped in selfie mode (e.g. when isCameraFront is true) Props The properties of a View plus: gravity: how to scale the captured camera frame in the view. String. Valid values: fill: Fills the rectangle much like the "cover" in an Image resize: Leaves transparent (or style:{backgroundColor}) the parts of the rectangle that are left over from a resized version of the image. RNVCameraConsumer Render prop consumer for delivering additional context that regions will find helpful, mostly for rendering rectangles that map to the regions identified. Render Prop Members viewPortDimensions: A collection of {width, height} of the view rectangle. viewPortGravity: A pass-through of the gravity prop to help decide how to manage the math converting coordinates. 
RNVCameraRegion A compound consumer that blends the render prop members of RNVRegion and RNVCameraConsumer and adds a style prop that can position the region on a specified camera preview Props Same as RNVRegion Render Prop Members Includes members from RNVRegion and RNVCameraConsumer and adds: style: A pre-built colleciton of style prop members {position, width, height, left, top} that are designed to act in the context of the RNVCameraView rectangle. Spread-prop with your other style preferences (border? backgroundColor?) for easy on-screen representation. RNVImageView View for displaying output of image generators. Link it to , and the resulting image will display in this view. Useful for style transfer models. More performant because there is no round trip to JavaScript notifying of each frame update. Props id: the ID of an image generator model attached to a region. Usually is the file:/// URL of the .mlmodelc. Otherwise conforms to Image and View API. 请叫我如何做
最新发布
11-06
Code 分享 Notebook 保存成功 Python 3 (ipykernel) import os import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.preprocessing import MinMaxScaler from tensorflow.keras import layers,losses,optimizers, Sequential from tensorflow.keras.layers import InputLayer, Dense, LSTM, Dropout from tensorflow.keras.models import load_model from tensorflow.keras import Model 0秒 + Code + Markdown stock_data = pd.read_csv('IBM_stock_data.csv') 0秒 + Code + Markdown stock_data.head() 0秒 date Open High Low Close Volume Price Change % 0 1999/11/1 98.50 98.81 96.37 96.75 9551800 0.000000 1 1999/11/2 96.75 96.81 93.69 94.81 11105400 -2.005168 2 1999/11/3 95.87 95.94 93.50 94.37 10369100 -0.464086 3 1999/11/4 94.44 94.44 90.00 91.56 16697600 -2.977641 4 1999/11/5 92.75 92.94 90.19 90.25 13737600 -1.430756 + Code + Markdown scaler = MinMaxScaler(feature_range=(0, 1)) scaled_data = scaler.fit_transform(stock_data['Close'].values.reshape(-1, 1)) 0秒 + Code + Markdown scaled_data[0] 0秒 array([0.23131139]) + Code + Markdown len(scaled_data) 0秒 6293 + Code + Markdown training_data_len = int(np.ceil(len(scaled_data) * 0.8)) #向上取整 0秒 + Code + Markdown train_data = scaled_data[0:training_data_len] X_train, y_train = [], [] time_step = 10 # 时间窗口,模型基于前10个时间步长进行预测。可以尝试不同长度(如20、30)并观察效果变化。 0秒 + Code + Markdown for i in range(len(train_data) - time_step - 1): X_train.append(train_data[i:(i + time_step), 0]) y_train.append(train_data[i + time_step, 0]) 0秒 + Code + Markdown X_train, y_train = np.array(X_train), np.array(y_train) X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1) 0秒 + Code + Markdown X_train.shape 0秒 (5024, 10, 1) + Code + Markdown X_test, y_test = [], [] test_data = scaled_data[training_data_len - time_step:] ​ for i in range(len(test_data) - time_step): X_test.append(test_data[i:(i + time_step), 0]) y_test.append(test_data[i + time_step, 0]) ​ X_test, y_test = np.array(X_test), np.array(y_test) X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1) 0秒 + Code + Markdown model = Sequential() model.add(InputLayer(input_shape=(X_train.shape[1], 1))) model.add(LSTM(units=64, return_sequences=True)) # 调整Dropout: 当前设置为0.3,可以尝试在不同的层上使用不同的Dropout值,例如0.2~0.5之间。Dropout的作用是防止过拟合。 model.add(Dropout(0.3)) model.add(LSTM(units=64, return_sequences=True)) model.add(Dropout(0.3)) model.add(LSTM(units=32)) model.add(Dropout(0.2)) # 增加回归层: 如果希望更高的拟合精度,可以添加多个Dense层,例如在输出前再增加一层Dense。 model.add(Dense(units=1)) 3秒 2025-06-21 16:21:14.205566: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory 2025-06-21 16:21:14.205665: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303) 2025-06-21 16:21:14.205686: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (dsw-1161427-779497c87d-j6j54): /proc/driver/nvidia/version does not exist 2025-06-21 16:21:14.206190: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 
+ Code + Markdown def MyRNN(): model = Sequential([ layers.InputLayer(input_shape=(X_train.shape[1],1)), layers.SimpleRNN(units=64, dropout=0.5, return_sequences=True, unroll=True), layers.SimpleRNN(units=64, dropout=0.5, unroll=True), layers.Dense(1)] ) return model 0秒 + Code + Markdown model = MyRNN() model.compile(optimizer='adam', loss='mean_squared_error') 0秒 + Code + Markdown history = model.fit( X_train, y_train, epochs=2, # 批大小(batch_size): 尝试不同的batch_size,例如16、32、64,以找到训练稳定性和准确性之间的平衡 batch_size=32, #callbacks=[early_stopping, lr_scheduler] ) 12秒 Epoch 1/2 WARNING:tensorflow:AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x7f02c8692040> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Constant' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING: AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x7f02c8692040> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Constant' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert 157/157 [==============================] - 9s 20ms/step - loss: 0.0396 Epoch 2/2 157/157 [==============================] - 3s 21ms/step - loss: 0.0266 + Code + Markdown model.save('my_model.keras') 0秒 + Code + Markdown train_loss = model.evaluate(X_train, y_train, verbose=0) test_loss = model.evaluate(X_test, y_test, verbose=0) 5秒 WARNING:tensorflow:AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x7f02cc2b5dc0> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Constant' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING: AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x7f02cc2b5dc0> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Constant' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert + Code + Markdown # 训练集上计算的损失值,数值越小表示模型在训练数据上的拟合效果越好 print(f"Training Loss: {train_loss:.4f}") # 测试集上计算的损失值,反映了模型在未见过的数据上的表现。测试损失略高于训练损失,但差距不大,说明模型在新数据上的表现依然良好。 print(f"Testing Loss: {test_loss:.4f}") 0秒 Training Loss: 0.0412 Testing Loss: 0.0480 + Code + Markdown model = load_model('my_model.keras') predictions = model.predict(X_test) predictions = scaler.inverse_transform(predictions) # 反归一化预测值 2秒 WARNING:tensorflow:AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x7f0276f14040> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. 
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['SimHei']  # use the SimHei font for Chinese text
plt.rcParams['axes.unicode_minus'] = False    # render the minus sign correctly

%matplotlib inline

train = stock_data[:training_data_len]
valid = stock_data[training_data_len:]
valid.loc[:, 'Predictions'] = predictions

# Plot the results (title, axis labels and legend entries are Chinese strings)
plt.figure(figsize=(14, 5))
plt.title('股票价格预测', fontsize=20)    # "stock-price prediction"
plt.xlabel('日期', fontsize=14)           # "date"
plt.ylabel('收盘价', fontsize=14)         # "closing price"
plt.plot(train['date'], train['Close'], label='训练数据', color='blue')       # "training data"
plt.plot(valid['date'], valid['Close'], label='真实价格', color='green')      # "actual price"
plt.plot(valid['date'], valid['Predictions'], label='预测价格', color='red')  # "predicted price"
plt.legend()
plt.savefig('stock_price_predictions.png')  # save the figure
plt.show()

# Compute and print the evaluation metrics
rmse = np.sqrt(np.mean(np.square(predictions - y_test)))
mae = np.mean(np.abs(predictions - y_test))
print(f'均方根误差 (RMSE): {rmse}, 平均绝对误差 (MAE): {mae}')

Output (the repeated warnings are condensed here):

/opt/conda/lib/python3.8/site-packages/pandas/core/indexing.py:1667: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead
findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans.
findfont: Generic family 'sans-serif' not found because none of the following families were found: SimHei
(one "Glyph ... missing from current font" UserWarning follows for every CJK character in the title, the axis labels and the legend, first from plt.savefig and then from the inline renderer; SimHei is not installed, so none of the Chinese glyphs can be drawn)

均方根误差 (RMSE): 105.03158114646604, 平均绝对误差 (MAE): 104.21013102460681

The errors are huge, and the Chinese text still fails to display.
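Both problems in that last comment have identifiable causes. The RMSE/MAE are so large because predictions was inverse-transformed back to the price scale while y_test is still on the normalized 0-1 scale, so the two arrays are compared in different units; on top of that, predictions has shape (n, 1) while y_test has shape (n,), so the subtraction broadcasts to an (n, n) matrix instead of an element-wise difference. The missing Chinese labels are simply because SimHei is not installed in this Linux container. A minimal sketch of both fixes; the font path is an assumption, so point it at whatever CJK font the image actually ships (Noto Sans CJK is common):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import font_manager

# 1) Compare like with like: bring y_test back to the price scale
#    and flatten both arrays so the difference is element-wise.
y_test_prices = scaler.inverse_transform(y_test.reshape(-1, 1)).flatten()
pred_prices = predictions.flatten()
rmse = np.sqrt(np.mean((pred_prices - y_test_prices) ** 2))
mae = np.mean(np.abs(pred_prices - y_test_prices))
print(f'RMSE: {rmse:.4f}, MAE: {mae:.4f}')

# 2) Register a CJK font that actually exists on this machine
#    (hypothetical path; adjust to your installation).
font_path = '/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc'
font_manager.fontManager.addfont(font_path)
plt.rcParams['font.sans-serif'] = [font_manager.FontProperties(fname=font_path).get_name()]
plt.rcParams['axes.unicode_minus'] = False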
"""Support for dynamic COM client support.

Introduction
 Dynamic COM client support is the ability to use a COM server without
 prior knowledge of the server. This can be used to talk to almost all
 COM servers, including much of MS Office.

 In general, you should not use this module directly - see below.

Example
 >>> import win32com.client
 >>> xl = win32com.client.Dispatch("Excel.Application")
 # The line above invokes the functionality of this class.
 # xl is now an object we can use to talk to Excel.
 >>> xl.Visible = 1 # The Excel window becomes visible.
"""

import traceback
from itertools import chain
from types import MethodType

import pythoncom  # Needed as code we eval() references it.
import win32com.client
import winerror
from pywintypes import IIDType

from . import build

debugging = 0  # General debugging
debugging_attr = 0  # Debugging dynamic attribute lookups.

LCID = 0x0

# These errors generally mean the property or method exists,
# but can't be used in this context - eg, property instead of a method, etc.
# Used to determine if we have a real error or not.
ERRORS_BAD_CONTEXT = [
    winerror.DISP_E_MEMBERNOTFOUND,
    winerror.DISP_E_BADPARAMCOUNT,
    winerror.DISP_E_PARAMNOTOPTIONAL,
    winerror.DISP_E_TYPEMISMATCH,
    winerror.E_INVALIDARG,
]

ALL_INVOKE_TYPES = [
    pythoncom.INVOKE_PROPERTYGET,
    pythoncom.INVOKE_PROPERTYPUT,
    pythoncom.INVOKE_PROPERTYPUTREF,
    pythoncom.INVOKE_FUNC,
]


def debug_print(*args):
    if debugging:
        for arg in args:
            print(arg, end=" ")
        print()


def debug_attr_print(*args):
    if debugging_attr:
        for arg in args:
            print(arg, end=" ")
        print()


# get the type objects for IDispatch and IUnknown
PyIDispatchType = pythoncom.TypeIIDs[pythoncom.IID_IDispatch]
PyIUnknownType = pythoncom.TypeIIDs[pythoncom.IID_IUnknown]

_GoodDispatchTypes = (str, IIDType)


def _GetGoodDispatch(IDispatch, clsctx=pythoncom.CLSCTX_SERVER):
    # quick return for most common case
    if isinstance(IDispatch, PyIDispatchType):
        return IDispatch
    if isinstance(IDispatch, _GoodDispatchTypes):
        try:
            IDispatch = pythoncom.connect(IDispatch)
        except pythoncom.ole_error:
            IDispatch = pythoncom.CoCreateInstance(
                IDispatch, None, clsctx, pythoncom.IID_IDispatch
            )
    else:
        # may already be a wrapped class.
        IDispatch = getattr(IDispatch, "_oleobj_", IDispatch)
    return IDispatch


def _GetGoodDispatchAndUserName(IDispatch, userName, clsctx):
    # Get a dispatch object, and a 'user name' (ie, the name as
    # displayed to the user in repr() etc.
    if userName is None:
        if isinstance(IDispatch, str):
            userName = IDispatch
        ## ??? else userName remains None ???
    else:
        userName = str(userName)
    return (_GetGoodDispatch(IDispatch, clsctx), userName)


def _GetDescInvokeType(entry, invoke_type):
    # determine the wFlags argument passed as input to IDispatch::Invoke
    # Only ever called by __getattr__ and __setattr__ from dynamic objects!
    # * `entry` is a MapEntry with whatever typeinfo we have about the property we are getting/setting.
    # * `invoke_type` is either INVOKE_PROPERTYGET | INVOKE_PROPERTYSET and really just
    #   means "called by __getattr__" or "called by __setattr__"
    if not entry or not entry.desc:
        return invoke_type
    if entry.desc.desckind == pythoncom.DESCKIND_VARDESC:
        return invoke_type
    # So it's a FUNCDESC - just use what it specifies.
    return entry.desc.invkind


def Dispatch(
    IDispatch,
    userName=None,
    createClass=None,
    typeinfo=None,
    clsctx=pythoncom.CLSCTX_SERVER,
):
    IDispatch, userName = _GetGoodDispatchAndUserName(IDispatch, userName, clsctx)
    if createClass is None:
        createClass = CDispatch
    lazydata = None
    try:
        if typeinfo is None:
            typeinfo = IDispatch.GetTypeInfo()
        if typeinfo is not None:
            try:
                # try for a typecomp
                typecomp = typeinfo.GetTypeComp()
                lazydata = typeinfo, typecomp
            except pythoncom.com_error:
                pass
    except pythoncom.com_error:
        typeinfo = None
    olerepr = MakeOleRepr(IDispatch, typeinfo, lazydata)
    return createClass(IDispatch, olerepr, userName, lazydata=lazydata)


def MakeOleRepr(IDispatch, typeinfo, typecomp):
    olerepr = None
    if typeinfo is not None:
        try:
            attr = typeinfo.GetTypeAttr()
            # If the type info is a special DUAL interface, magically turn it into
            # a DISPATCH typeinfo.
            if (
                attr[5] == pythoncom.TKIND_INTERFACE
                and attr[11] & pythoncom.TYPEFLAG_FDUAL
            ):
                # Get corresponding Disp interface;
                # -1 is a special value which does this for us.
                href = typeinfo.GetRefTypeOfImplType(-1)
                typeinfo = typeinfo.GetRefTypeInfo(href)
                attr = typeinfo.GetTypeAttr()
            if typecomp is None:
                olerepr = build.DispatchItem(typeinfo, attr, None, 0)
            else:
                olerepr = build.LazyDispatchItem(attr, None)
        except pythoncom.ole_error:
            pass
    if olerepr is None:
        olerepr = build.DispatchItem()
    return olerepr


def DumbDispatch(
    IDispatch,
    userName=None,
    createClass=None,
    clsctx=pythoncom.CLSCTX_SERVER,
):
    "Dispatch with no type info"
    IDispatch, userName = _GetGoodDispatchAndUserName(IDispatch, userName, clsctx)
    if createClass is None:
        createClass = CDispatch
    return createClass(IDispatch, build.DispatchItem(), userName)


class CDispatch:
    def __init__(self, IDispatch, olerepr, userName=None, lazydata=None):
        if userName is None:
            userName = "<unknown>"
        self.__dict__["_oleobj_"] = IDispatch
        self.__dict__["_username_"] = userName
        self.__dict__["_olerepr_"] = olerepr
        self.__dict__["_mapCachedItems_"] = {}
        self.__dict__["_builtMethods_"] = {}
        self.__dict__["_enum_"] = None
        self.__dict__["_unicode_to_string_"] = None
        self.__dict__["_lazydata_"] = lazydata

    def __call__(self, *args):
        "Provide 'default dispatch' COM functionality - allow instance to be called"
        if self._olerepr_.defaultDispatchName:
            invkind, dispid = self._find_dispatch_type_(
                self._olerepr_.defaultDispatchName
            )
        else:
            invkind, dispid = (
                pythoncom.DISPATCH_METHOD | pythoncom.DISPATCH_PROPERTYGET,
                pythoncom.DISPID_VALUE,
            )
        if invkind is not None:
            allArgs = (dispid, LCID, invkind, 1) + args
            return self._get_good_object_(
                self._oleobj_.Invoke(*allArgs), self._olerepr_.defaultDispatchName, None
            )
        raise TypeError("This dispatch object does not define a default method")

    def __bool__(self):
        return True  # ie "if object:" should always be "true" - without this, __len__ is tried.
        # _Possibly_ want to defer to __len__ if available, but I'm not sure this is
        # desirable???

    def __repr__(self):
        return "<COMObject %s>" % (self._username_)

    def __str__(self):
        # __str__ is used when the user does "print(object)", so we gracefully
        # fall back to the __repr__ if the object has no default method.
        try:
            return str(self.__call__())
        except pythoncom.com_error as details:
            if details.hresult not in ERRORS_BAD_CONTEXT:
                raise
            return self.__repr__()

    def __dir__(self):
        attributes = chain(self.__dict__, dir(self.__class__), self._dir_ole_())
        try:
            attributes = chain(attributes, [p.Name for p in self.Properties_])
        except AttributeError:
            pass
        return list(set(attributes))

    def _dir_ole_(self):
        items_dict = {}
        for iTI in range(0, self._oleobj_.GetTypeInfoCount()):
            typeInfo = self._oleobj_.GetTypeInfo(iTI)
            self._UpdateWithITypeInfo_(items_dict, typeInfo)
        return list(items_dict)

    def _UpdateWithITypeInfo_(self, items_dict, typeInfo):
        typeInfos = [typeInfo]
        # suppress IDispatch and IUnknown methods
        inspectedIIDs = {pythoncom.IID_IDispatch: None}

        while len(typeInfos) > 0:
            typeInfo = typeInfos.pop()
            typeAttr = typeInfo.GetTypeAttr()

            if typeAttr.iid not in inspectedIIDs:
                inspectedIIDs[typeAttr.iid] = None
                for iFun in range(0, typeAttr.cFuncs):
                    funDesc = typeInfo.GetFuncDesc(iFun)
                    funName = typeInfo.GetNames(funDesc.memid)[0]
                    if funName not in items_dict:
                        items_dict[funName] = None

                # Inspect the type info of all implemented types
                # E.g. IShellDispatch5 implements IShellDispatch4 which implements IShellDispatch3 ...
                for iImplType in range(0, typeAttr.cImplTypes):
                    iRefType = typeInfo.GetRefTypeOfImplType(iImplType)
                    refTypeInfo = typeInfo.GetRefTypeInfo(iRefType)
                    typeInfos.append(refTypeInfo)

    # Delegate comparison to the oleobjs, as they know how to do identity.
    def __eq__(self, other):
        other = getattr(other, "_oleobj_", other)
        return self._oleobj_ == other

    def __ne__(self, other):
        other = getattr(other, "_oleobj_", other)
        return self._oleobj_ != other

    def __int__(self):
        return int(self.__call__())

    def __len__(self):
        invkind, dispid = self._find_dispatch_type_("Count")
        if invkind:
            return self._oleobj_.Invoke(dispid, LCID, invkind, 1)
        raise TypeError("This dispatch object does not define a Count method")

    def _NewEnum(self):
        try:
            invkind = pythoncom.DISPATCH_METHOD | pythoncom.DISPATCH_PROPERTYGET
            enum = self._oleobj_.InvokeTypes(
                pythoncom.DISPID_NEWENUM, LCID, invkind, (13, 10), ()
            )
        except pythoncom.com_error:
            return None  # no enumerator for this object.
        from . import util

        return util.WrapEnum(enum, None)

    def __getitem__(self, index):  # syver modified
        # Improved __getitem__ courtesy Syver Enstad
        # Must check _NewEnum before Item, to ensure b/w compat.
        if isinstance(index, int):
            if self.__dict__["_enum_"] is None:
                self.__dict__["_enum_"] = self._NewEnum()
            if self.__dict__["_enum_"] is not None:
                return self._get_good_object_(self._enum_.__getitem__(index))
        # See if we have an "Item" method/property we can use (goes hand in hand with Count() above!)
        invkind, dispid = self._find_dispatch_type_("Item")
        if invkind is not None:
            return self._get_good_object_(
                self._oleobj_.Invoke(dispid, LCID, invkind, 1, index)
            )
        raise TypeError("This object does not support enumeration")

    def __setitem__(self, index, *args):
        # XXX - todo - We should support calling Item() here too!
        # print("__setitem__ with", index, args)
        if self._olerepr_.defaultDispatchName:
            invkind, dispid = self._find_dispatch_type_(
                self._olerepr_.defaultDispatchName
            )
        else:
            invkind, dispid = (
                pythoncom.DISPATCH_PROPERTYPUT | pythoncom.DISPATCH_PROPERTYPUTREF,
                pythoncom.DISPID_VALUE,
            )
        if invkind is not None:
            allArgs = (dispid, LCID, invkind, 0, index) + args
            return self._get_good_object_(
                self._oleobj_.Invoke(*allArgs), self._olerepr_.defaultDispatchName, None
            )
        raise TypeError("This dispatch object does not define a default method")

    def _find_dispatch_type_(self, methodName):
        if methodName in self._olerepr_.mapFuncs:
            item = self._olerepr_.mapFuncs[methodName]
            return item.desc[4], item.dispid

        if methodName in self._olerepr_.propMapGet:
            item = self._olerepr_.propMapGet[methodName]
            return item.desc[4], item.dispid

        try:
            dispid = self._oleobj_.GetIDsOfNames(0, methodName)
        except:  ### what error?
            return None, None
        return pythoncom.DISPATCH_METHOD | pythoncom.DISPATCH_PROPERTYGET, dispid

    def _ApplyTypes_(self, dispid, wFlags, retType, argTypes, user, resultCLSID, *args):
        result = self._oleobj_.InvokeTypes(
            *(dispid, LCID, wFlags, retType, argTypes) + args
        )
        return self._get_good_object_(result, user, resultCLSID)

    def _wrap_dispatch_(
        self,
        ob,
        userName=None,
        returnCLSID=None,
    ):
        # Given a dispatch object, wrap it in a class
        return Dispatch(ob, userName)

    def _get_good_single_object_(self, ob, userName=None, ReturnCLSID=None):
        if isinstance(ob, PyIDispatchType):
            # make a new instance of (probably this) class.
            return self._wrap_dispatch_(ob, userName, ReturnCLSID)
        if isinstance(ob, PyIUnknownType):
            try:
                ob = ob.QueryInterface(pythoncom.IID_IDispatch)
            except pythoncom.com_error:
                # It is an IUnknown, but not an IDispatch, so just let it through.
                return ob
            return self._wrap_dispatch_(ob, userName, ReturnCLSID)
        return ob

    def _get_good_object_(self, ob, userName=None, ReturnCLSID=None):
        """Given an object (usually the retval from a method), make it a good object to return.
        Basically checks if it is a COM object, and wraps it up.
        Also handles the fact that a retval may be a tuple of retvals"""
        if ob is None:  # Quick exit!
            return None
        elif isinstance(ob, tuple):
            return tuple(
                map(
                    lambda o, s=self, oun=userName, rc=ReturnCLSID: s._get_good_single_object_(o, oun, rc),
                    ob,
                )
            )
        else:
            return self._get_good_single_object_(ob)

    def _make_method_(self, name):
        "Make a method object - Assumes in olerepr funcmap"
        methodName = build.MakePublicAttributeName(name)  # translate keywords etc.
        methodCodeList = self._olerepr_.MakeFuncMethod(
            self._olerepr_.mapFuncs[name], methodName, 0
        )
        methodCode = "\n".join(methodCodeList)
        try:
            # print(f"Method code for {self._username_} is:\n", methodCode)
            # self._print_details_()
            codeObject = compile(methodCode, "<COMObject %s>" % self._username_, "exec")
            # Exec the code object
            tempNameSpace = {}
            # "Dispatch" in the exec'd code is win32com.client.Dispatch, not ours.
            globNameSpace = globals().copy()
            globNameSpace["Dispatch"] = win32com.client.Dispatch
            exec(
                codeObject, globNameSpace, tempNameSpace
            )  # self.__dict__, self.__dict__
            name = methodName
            # Save the function in map.
            fn = self._builtMethods_[name] = tempNameSpace[name]
            return MethodType(fn, self)
        except:
            debug_print("Error building OLE definition for code ", methodCode)
            traceback.print_exc()
        return None

    def _Release_(self):
        """Cleanup object - like a close - to force cleanup when you don't
        want to rely on Python's reference counting."""
        for childCont in self._mapCachedItems_.values():
            childCont._Release_()
        self._mapCachedItems_ = {}
        if self._oleobj_:
            self._oleobj_.Release()
            self.__dict__["_oleobj_"] = None
        if self._olerepr_:
            self.__dict__["_olerepr_"] = None
        self._enum_ = None

    def _proc_(self, name, *args):
        """Call the named method as a procedure, rather than function.
        Mainly used by Word.Basic, which whinges about such things."""
        try:
            item = self._olerepr_.mapFuncs[name]
            dispId = item.dispid
            return self._get_good_object_(
                self._oleobj_.Invoke(*(dispId, LCID, item.desc[4], 0) + (args))
            )
        except KeyError:
            raise AttributeError(name)

    def _print_details_(self):
        "Debug routine - dumps what it knows about an object."
        print("AxDispatch container", self._username_)
        try:
            print("Methods:")
            for method in self._olerepr_.mapFuncs:
                print("\t", method)
            print("Props:")
            for prop, entry in self._olerepr_.propMap.items():
                print(f"\t{prop} = 0x{entry.dispid:x} - {entry!r}")
            print("Get Props:")
            for prop, entry in self._olerepr_.propMapGet.items():
                print(f"\t{prop} = 0x{entry.dispid:x} - {entry!r}")
            print("Put Props:")
            for prop, entry in self._olerepr_.propMapPut.items():
                print(f"\t{prop} = 0x{entry.dispid:x} - {entry!r}")
        except:
            traceback.print_exc()

    def __LazyMap__(self, attr):
        try:
            if self._LazyAddAttr_(attr):
                debug_attr_print(
                    f"{self._username_}.__LazyMap__({attr}) added something"
                )
                return 1
        except AttributeError:
            return 0

    # Using the typecomp, lazily create a new attribute definition.
    def _LazyAddAttr_(self, attr):
        if self._lazydata_ is None:
            return 0
        res = 0
        typeinfo, typecomp = self._lazydata_
        olerepr = self._olerepr_
        # We need to explicitly check each invoke type individually - simply
        # specifying '0' will bind to "any member", which may not be the one
        # we are actually after (ie, we may be after prop_get, but returned
        # the info for the prop_put.)
        for i in ALL_INVOKE_TYPES:
            try:
                x, t = typecomp.Bind(attr, i)
                # Support 'Get' and 'Set' properties - see
                # bug 1587023
                if x == 0 and attr[:3] in ("Set", "Get"):
                    x, t = typecomp.Bind(attr[3:], i)
                if x == pythoncom.DESCKIND_FUNCDESC:
                    # it's a FUNCDESC
                    r = olerepr._AddFunc_(typeinfo, t, 0)
                elif x == pythoncom.DESCKIND_VARDESC:
                    # it's a VARDESC
                    r = olerepr._AddVar_(typeinfo, t, 0)
                else:
                    # not found or TYPEDESC/IMPLICITAPP
                    r = None
                if not r is None:
                    key, map = r[0], r[1]
                    item = map[key]
                    if map == olerepr.propMapPut:
                        olerepr._propMapPutCheck_(key, item)
                    elif map == olerepr.propMapGet:
                        olerepr._propMapGetCheck_(key, item)
                    res = 1
            except:
                pass
        return res

    def _FlagAsMethod(self, *methodNames):
        """Flag these attribute names as being methods.
        Some objects do not correctly differentiate methods and
        properties, leading to problems when calling these methods.

        Specifically, trying to say: ob.SomeFunc()
        may yield an exception "None object is not callable"
        In this case, an attempt to fetch the *property* has worked
        and returned None, rather than indicating it is really a method.
        Calling: ob._FlagAsMethod("SomeFunc")
        should then allow this to work.
        """
        for name in methodNames:
            details = build.MapEntry(self.__AttrToID__(name), (name,))
            self._olerepr_.mapFuncs[name] = details

    def __AttrToID__(self, attr):
        debug_attr_print(
            "Calling GetIDsOfNames for property {} in Dispatch container {}".format(
                attr, self._username_
            )
        )
        return self._oleobj_.GetIDsOfNames(0, attr)

    def __getattr__(self, attr):
        if attr == "__iter__":
            # We can't handle this as a normal method, as if the attribute
            # exists, then it must return an iterable object.
            try:
                invkind = pythoncom.DISPATCH_METHOD | pythoncom.DISPATCH_PROPERTYGET
                enum = self._oleobj_.InvokeTypes(
                    pythoncom.DISPID_NEWENUM, LCID, invkind, (13, 10), ()
                )
            except pythoncom.com_error:
                raise AttributeError("This object can not function as an iterator")

            # We must return a callable object.
            class Factory:
                def __init__(self, ob):
                    self.ob = ob

                def __call__(self):
                    import win32com.client.util

                    return win32com.client.util.Iterator(self.ob)

            return Factory(enum)

        if attr.startswith("_") and attr.endswith("_"):  # Fast-track.
            raise AttributeError(attr)
        # If a known method, create new instance and return.
        try:
            return MethodType(self._builtMethods_[attr], self)
        except KeyError:
            pass
        # XXX - Note that we current are case sensitive in the method.
        # debug_attr_print("GetAttr called for %s on DispatchContainer %s" % (attr,self._username_))
        # First check if it is in the method map. Note that an actual method
        # must not yet exist, (otherwise we would not be here). This
        # means we create the actual method object - which also means
        # this code will never be asked for that method name again.
        if attr in self._olerepr_.mapFuncs:
            return self._make_method_(attr)

        # Delegate to property maps/cached items
        retEntry = None
        if self._olerepr_ and self._oleobj_:
            # first check general property map, then specific "put" map.
            retEntry = self._olerepr_.propMap.get(attr)
            if retEntry is None:
                retEntry = self._olerepr_.propMapGet.get(attr)
            # Not found so far - See what COM says.
            if retEntry is None:
                try:
                    if self.__LazyMap__(attr):
                        if attr in self._olerepr_.mapFuncs:
                            return self._make_method_(attr)
                        retEntry = self._olerepr_.propMap.get(attr)
                        if retEntry is None:
                            retEntry = self._olerepr_.propMapGet.get(attr)
                    if retEntry is None:
                        retEntry = build.MapEntry(self.__AttrToID__(attr), (attr,))
                except pythoncom.ole_error:
                    pass  # No prop by that name - retEntry remains None.

        if retEntry is not None:  # see if in my cache
            try:
                ret = self._mapCachedItems_[retEntry.dispid]
                debug_attr_print("Cached items has attribute!", ret)
                return ret
            except (KeyError, AttributeError):
                debug_attr_print("Attribute %s not in cache" % attr)

        # If we are still here, and have a retEntry, get the OLE item
        if retEntry is not None:
            invoke_type = _GetDescInvokeType(retEntry, pythoncom.INVOKE_PROPERTYGET)
            debug_attr_print(
                "Getting property Id 0x%x from OLE object" % retEntry.dispid
            )
            try:
                ret = self._oleobj_.Invoke(retEntry.dispid, 0, invoke_type, 1)
            except pythoncom.com_error as details:
                if details.hresult in ERRORS_BAD_CONTEXT:
                    # May be a method.
                    self._olerepr_.mapFuncs[attr] = retEntry
                    return self._make_method_(attr)
                raise
            debug_attr_print("OLE returned ", ret)
            return self._get_good_object_(ret)

        # no where else to look.
        raise AttributeError(f"{self._username_}.{attr}")

    def __setattr__(self, attr, value):
        if (
            attr in self.__dict__
        ):  # Fast-track - if already in our dict, just make the assignment.
            # XXX - should maybe check method map - if someone assigns to a method,
            # it could mean something special (not sure what, tho!)
            self.__dict__[attr] = value
            return
        # Allow property assignment.
        debug_attr_print(
            f"SetAttr called for {self._username_}.{attr}={value!r} on DispatchContainer"
        )
        if self._olerepr_:
            # Check the "general" property map.
            if attr in self._olerepr_.propMap:
                entry = self._olerepr_.propMap[attr]
                invoke_type = _GetDescInvokeType(entry, pythoncom.INVOKE_PROPERTYPUT)
                self._oleobj_.Invoke(entry.dispid, 0, invoke_type, 0, value)
                return
            # Check the specific "put" map.
            if attr in self._olerepr_.propMapPut:
                entry = self._olerepr_.propMapPut[attr]
                invoke_type = _GetDescInvokeType(entry, pythoncom.INVOKE_PROPERTYPUT)
                self._oleobj_.Invoke(entry.dispid, 0, invoke_type, 0, value)
                return
        # Try the OLE Object
        if self._oleobj_:
            if self.__LazyMap__(attr):
                # Check the "general" property map.
                if attr in self._olerepr_.propMap:
                    entry = self._olerepr_.propMap[attr]
                    invoke_type = _GetDescInvokeType(
                        entry, pythoncom.INVOKE_PROPERTYPUT
                    )
                    self._oleobj_.Invoke(entry.dispid, 0, invoke_type, 0, value)
                    return
                # Check the specific "put" map.
                if attr in self._olerepr_.propMapPut:
                    entry = self._olerepr_.propMapPut[attr]
                    invoke_type = _GetDescInvokeType(
                        entry, pythoncom.INVOKE_PROPERTYPUT
                    )
                    self._oleobj_.Invoke(entry.dispid, 0, invoke_type, 0, value)
                    return
            try:
                entry = build.MapEntry(self.__AttrToID__(attr), (attr,))
            except pythoncom.com_error:
                # No attribute of that name
                entry = None
            if entry is not None:
                try:
                    invoke_type = _GetDescInvokeType(
                        entry, pythoncom.INVOKE_PROPERTYPUT
                    )
                    self._oleobj_.Invoke(entry.dispid, 0, invoke_type, 0, value)
                    self._olerepr_.propMap[attr] = entry
                    debug_attr_print(
                        "__setattr__ property {} (id=0x{:x}) in Dispatch container {}".format(
                            attr, entry.dispid, self._username_
                        )
                    )
                    return
                except pythoncom.com_error:
                    pass
        raise AttributeError(f"Property '{self._username_}.{attr}' can not be set.")

Please answer in combination with this file.
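For context, this file appears to be win32com/client/dynamic.py from pywin32, the late-bound ("dynamic") dispatch layer behind win32com.client.Dispatch. A minimal usage sketch based on the module's own docstrings; it assumes a Windows machine with Excel installed, and SomeFunc is a hypothetical member name:

import win32com.client

# Late-bound dispatch: attribute lookups go through CDispatch.__getattr__,
# which resolves names via the property maps, the lazy typecomp data, or GetIDsOfNames.
xl = win32com.client.Dispatch("Excel.Application")
xl.Visible = 1  # a property put, routed through CDispatch.__setattr__

# If a server mis-reports a method as a property (the call returns None instead
# of being callable), flag it explicitly, as _FlagAsMethod's docstring describes:
# xl._FlagAsMethod("SomeFunc")
# xl.SomeFunc()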