HoloLens Input API Walkthrough: InteractionSource / InteractionSourceLocation / InteractionSourceProperties / InteractionSourceState

This article introduces interaction sources (hands, controllers, and the user's voice) and their role in spatial interaction, covering key details such as how to obtain a source's position and velocity and how state snapshots are managed.


InteractionSource

Represents a detected instance of a hand, controller, or the user's voice that can cause interactions and gestures.

Variables

id    The ID of the hand, controller, or user voice.

kind  The kind of interaction source.
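
For illustration, here is a minimal sketch of polling the currently detected sources and filtering them by kind. It assumes the legacy `UnityEngine.XR.WSA.Input` namespace (older Unity versions expose the same types under `UnityEngine.VR.WSA.Input`) and the polling entry points `InteractionManager.numSourceStates` / `GetCurrentReading`; treat it as a sketch rather than a definitive usage pattern.

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input; // older Unity versions: UnityEngine.VR.WSA.Input

public class SourceLister : MonoBehaviour
{
    void Update()
    {
        // Take a snapshot of every interaction source currently being tracked.
        var states = new InteractionSourceState[InteractionManager.numSourceStates];
        InteractionManager.GetCurrentReading(states);

        foreach (var state in states)
        {
            InteractionSource source = state.source;

            // kind tells hands, controllers, and voice apart; id stays stable
            // while the same source remains tracked, so it can distinguish
            // one hand from the other across frames.
            if (source.kind == InteractionSourceKind.Hand)
            {
                Debug.Log("Hand source detected, id = " + source.id);
            }
        }
    }
}
```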

InteractionSourceLocation

Represents the position and velocity of a hand or controller.

Public Methods

TryGetPosition  Gets the position of the interaction source.
TryGetVelocity  Gets the velocity of the interaction source.
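
Both methods follow the try-get pattern: they return false when no data is available (for example while the hand has only just been detected or is poorly tracked), so the out value should only be used on success. A minimal sketch, assuming the legacy `UnityEngine.XR.WSA.Input` types:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;

public static class LocationReader
{
    // Returns the source position for this frame, or null if tracking
    // could not provide one.
    public static Vector3? ReadPosition(InteractionSourceLocation location)
    {
        Vector3 position;
        return location.TryGetPosition(out position) ? position : (Vector3?)null;
    }

    // Returns the source velocity for this frame, or null if unavailable.
    public static Vector3? ReadVelocity(InteractionSourceLocation location)
    {
        Vector3 velocity;
        return location.TryGetVelocity(out velocity) ? velocity : (Vector3?)null;
    }
}
```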


InteractionSourceProperties

Represents the set of properties available for exploring the current state of a hand or controller.

Variables

location  The position and velocity of the hand, expressed in the specified coordinate system.
sourceLossMitigationDirection  The direction in which you should suggest the user move their hand when it is approaching the edge of the detection area.
sourceLossRisk  The risk (from 0 to 1) of losing tracking of the hand.
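
To illustrate how these fields work together, the sketch below warns the user when tracking is about to be lost and points them back toward the detection area. The 0.8 threshold is an arbitrary value chosen for this example, not anything prescribed by the API.

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class TrackingLossHint : MonoBehaviour
{
    // Hypothetical threshold: start warning once the loss risk exceeds 80%.
    const double LossRiskThreshold = 0.8;

    void CheckTrackingRisk(InteractionSourceProperties properties)
    {
        // sourceLossRisk runs from 0 (tracking is safe) to 1 (about to be lost).
        if (properties.sourceLossRisk > LossRiskThreshold)
        {
            // Suggest moving the hand along the mitigation direction,
            // i.e. back toward the centre of the detection area.
            Vector3 direction = properties.sourceLossMitigationDirection;
            Debug.Log("Hand is near the edge of the tracked area; move it toward " + direction);
        }
    }
}
```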

InteractionSourceState

A snapshot of the state of a spatial interaction source (hand, voice, or controller) at a given point in time.

Variables

headRay  The head ray recorded at the moment this InteractionSourceState snapshot was taken.
pressed  True if the source is in a pressed state.
properties  Additional properties for exploring the state of the interaction source.
source  The interaction source that this state describes.
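
Putting the pieces together, here is a sketch that subscribes to the press event and reads the snapshot fields described above. It assumes the legacy event name `InteractionManager.SourcePressed`, whose handler receives an `InteractionSourceState` directly (newer Unity versions rename these events and pass event-args structs instead).

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input; // older Unity versions: UnityEngine.VR.WSA.Input

public class PressLogger : MonoBehaviour
{
    void OnEnable()  { InteractionManager.SourcePressed += OnSourcePressed; }
    void OnDisable() { InteractionManager.SourcePressed -= OnSourcePressed; }

    // Each callback receives an immutable snapshot of the source at press time.
    void OnSourcePressed(InteractionSourceState state)
    {
        Debug.Log("Source " + state.source.id + " (" + state.source.kind +
                  ") pressed: " + state.pressed);

        // The head ray captured at the same instant, useful for gaze targeting.
        Ray gaze = state.headRay;

        // Position of the hand/controller at the moment of the press, if known.
        Vector3 position;
        if (state.properties.location.TryGetPosition(out position))
        {
            Debug.Log("Press position " + position + ", gaze origin " + gaze.origin);
        }
    }
}
```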

