
Capturing forceful interaction with deformable objects using a deep learning-powered stretchable tactile array


Chunpeng Jiang^{1,7}, Wenqiang Xu^{2,7}, Yutong Li^{2}, Zhenjun Yu^{2}, Longchun Wang^{1}, Xiaotong Hu^{1,3}, Zhengyi Xie^{1,3}, Qingkun Liu, Bin Yang^{1}, Xiaolin Wang^{1}, Wenxin Du^{2}, Tutian Tang^{2}, Dongzhe Zheng^{2}, Siqiong Yao^{4}, Cewu Lu^{5,6} ✉ & Jingquan Liu^{1} ✉

Capturing forceful interaction with deformable objects during manipulation benefits applications like virtual reality, telemedicine, and robotics. Replicating full hand-object states with complete geometry is challenging because of the occluded object deformations. Here, we report a visual-tactile recording and tracking system for manipulation featuring a stretchable tactile glove with 1152 force-sensing channels and a visual-tactile joint learning framework to estimate dynamic hand-object states during manipulation. To overcome the strain interference caused by contact with deformable objects, an active suppression method based on symmetric response detection and adaptive calibration is proposed and achieves 97.6% accuracy in force measurement, contributing to an improvement of 45.3%. The learning framework processes the visual-tactile sequence and reconstructs hand-object states. We experiment on 24 objects from 6 categories, including both deformable and rigid ones, with an average reconstruction error of 1.8 cm for all sequences, demonstrating a universal ability to replicate human knowledge in manipulating objects with varying degrees of deformability.
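For a concrete picture of the active suppression idea, the sketch below illustrates one plausible way strain interference could be separated from contact force in a stretchable resistive array: taxel pairs that shift together (as a stretched substrate would load both members roughly equally) are flagged as a symmetric, strain-driven response, and their common-mode signal is folded into an adaptively updated calibration baseline rather than reported as force. The function name, pairing scheme, and thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's method): taxels are read as a
# 2-D resistance map; substrate strain produces a near-symmetric common-mode
# shift across paired taxels, while genuine contact force is localized.

SYMMETRY_THRESHOLD = 0.1   # relative difference below which a pair counts as symmetric
CALIB_RATE = 0.05          # adaptation rate of the per-taxel baseline

def suppress_strain(frame, baseline, pairs):
    """Return a strain-suppressed force map and the updated baseline.

    frame    : (H, W) raw readings of the current sample
    baseline : (H, W) adaptive zero-force calibration map (updated in place)
    pairs    : list of ((r1, c1), (r2, c2)) taxel pairs expected to deform
               together under pure stretching (hypothetical pairing)
    """
    corrected = frame - baseline
    for (r1, c1), (r2, c2) in pairs:
        a, b = corrected[r1, c1], corrected[r2, c2]
        denom = max(abs(a), abs(b), 1e-9)
        # Symmetric response detection: both taxels shifted by a similar amount.
        if abs(a - b) / denom < SYMMETRY_THRESHOLD:
            common = 0.5 * (frame[r1, c1] + frame[r2, c2])
            # Adaptive calibration: absorb the common-mode (strain) component
            # into the baseline instead of reporting it as contact force.
            baseline[r1, c1] += CALIB_RATE * (common - baseline[r1, c1])
            baseline[r2, c2] += CALIB_RATE * (common - baseline[r2, c2])
    return np.clip(frame - baseline, 0.0, None), baseline
```

In such a pipeline, each glove frame would pass through this correction before being fed, together with the synchronized visual stream, to the learning framework; how the 1152 channels are actually laid out and paired on the glove is not specified here and would determine the real pairing scheme.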


Human-machine interaction (HMI) systems serve as gateways to the metaverse, acting as bridges between the physical world and the digital realm. A natural user interface in HMI allows humans to perform natural and intuitive control^{1}. Although non-forceful interfaces such as hand gestures (Fig. 1A(i)) can be tracked using technologies like inertial measurement units (IMU)^{2}, electromyography (EMG) sensors^{3,4}, strain sensors^{5,6}, video recording^{7}, and triboelectric sensors^{8}, forceful interfaces such as interaction with objects, i.e., human manipulation, are less explored^{9,10}. Capturing forceful human manipulation has extensive potential applications, such as virtual reality (VR)^{11,12}, telemedicine^{13}, and robotics^{14,15}, and contributes to real-world understanding for large artificial intelligence (AI) models^{16}. Replicating the hand-object interplay is the first step to applying human manipulation knowledge in these applications. However, the hand-object states captured in previous research were far from complete. They mainly explore tasks like semantic recognition and spatial localization to predict object category and position (Fig. 1A(ii))^{17-20}.
