Online Sparse Gaussian Process Based Human Motion Intent Learning for an Electrically Actuated Lower Limb Exoskeleton

This article reviews an online sparse Gaussian process algorithm for inferring human motion intent with an electrically actuated lower limb exoskeleton. The method handles the nonlinear problem and reduces the computational complexity by means of grey relational analysis. Experiments verify that the algorithm acquires the human-robot interaction information in real time.


1) Abstract

The most important step for a lower limb exoskeleton is to infer the human motion intent (HMI), which helps achieve human-exoskeleton collaboration. Since the user is in the control loop, the human-robot interaction (HRI) information and the HMI are nonlinear, complex, and difficult to model mathematically. Machine learning methods can be used to learn a nonlinear approximation. Gaussian process (GP) regression is well suited to high-dimensional, small-sample nonlinear regression problems, but its computational complexity limits it on large datasets. This paper constructs an online sparse GP algorithm to learn the HRI. The raw training dataset is collected while the user wears the friction-compensated exoskeleton system and moves as unconstrainedly as possible. The dataset contains two types of data: (1) the physical HRI, collected by torque sensors placed at the interaction points of the active joint, i.e., the knee joint; and (2) the joint angular position, measured by optical position sensors. To reduce the computational complexity of the GP, grey relational analysis is used to select samples from the raw dataset and provide the final training dataset. The hyperparameters are optimized offline by maximizing the marginal likelihood and are then applied in the online GP regression algorithm. The human joint angular position learned from the HRI is regarded as the reference trajectory of the robotic leg. To verify the effectiveness of the algorithm, experiments were conducted on subjects walking at their natural speed. The results show that the HRI information can be obtained in real time and that the approach can be extended to similar exoskeleton systems.
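The pipeline described in the abstract (grey relational analysis to shrink the raw dataset, offline hyperparameter fitting by maximizing the marginal likelihood, then online prediction of the joint angle from the interaction torque) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic data, the choice of the mean sample as the grey relational reference sequence, the subset size M = 200, and the use of `gpflow.models.GPR` with a squared-exponential kernel are all assumptions, and the paper's actual online sparse GP update is not reproduced here.

```python
import numpy as np
import gpflow

def grey_relational_grade(candidates, reference, rho=0.5):
    """Grey relational grade of each candidate sample w.r.t. a reference sample.

    candidates: (N, D) array; reference: (D,) array; rho is the distinguishing
    coefficient, commonly 0.5. Sequences are usually normalized first (omitted here).
    """
    delta = np.abs(candidates - reference)                # absolute differences
    d_min, d_max = delta.min(), delta.max()               # two-level min/max
    xi = (d_min + rho * d_max) / (delta + rho * d_max)    # grey relational coefficients
    return xi.mean(axis=1)                                # grade = mean coefficient

# --- Offline stage: reduce the raw dataset and fit GP hyperparameters ---
rng = np.random.default_rng(0)
tau_raw = rng.normal(size=(2000, 1))                          # interaction torque (placeholder)
q_raw = np.sin(tau_raw) + 0.05 * rng.normal(size=(2000, 1))   # joint angle (placeholder)

# Rank raw samples against a reference (here: the mean sample) and keep the top M.
features = np.hstack([tau_raw, q_raw])
grade = grey_relational_grade(features, features.mean(axis=0))
idx = np.argsort(grade)[-200:]                            # M = 200 retained samples
X, Y = tau_raw[idx], q_raw[idx]

# Maximize the log marginal likelihood offline to obtain the hyperparameters.
model = gpflow.models.GPR((X, Y), kernel=gpflow.kernels.SquaredExponential())
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

# --- Online stage: predict the joint angle from a new interaction torque sample ---
tau_new = rng.normal(size=(1, 1))
mean, var = model.predict_f(tau_new)                      # mean -> reference trajectory
```

In the paper's setting, `tau_raw` would be the interaction torque measured at the knee joint and `q_raw` the optically measured joint angle; the predicted mean would serve as the reference trajectory for the robotic leg.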

### Sparse Variational Gaussian Process Implementation and Application in Multi-output Regression

In multi-output regression, sparse variational Gaussian processes (SVGP) aim to handle large datasets efficiently while maintaining predictive performance. The key idea is to approximate a full Gaussian process with a smaller set of inducing points that summarize the data distribution effectively.

The SVGP formulation introduces pseudo-inputs or inducing variables \( \mathbf{u} \): strategically chosen locations at which function values are evaluated, reducing the computational complexity from O(N³) to approximately O(M²N)[^1] and making it feasible to apply GPs to larger datasets.

For an SVGP model tailored to multi-output scenarios:

#### Model Definition

A common approach employs independent latent functions per output dimension while sharing some parameters across them. This setup captures correlations between outputs through shared kernels or other mechanisms such as coregionalization matrices.

```python
import gpflow

class MultiOutputSVGPR(gpflow.models.SVGP):
    """Multi-output SVGP with one independent latent GP per output."""

    def __init__(self, kernel_list, likelihood, Z, num_latent_gps=1):
        super().__init__(
            # SeparateIndependent takes one kernel per latent function;
            # use SharedIndependent(kernel, output_dim) to share a single kernel instead.
            kernel=gpflow.kernels.SeparateIndependent(kernel_list),
            likelihood=likelihood,
            # Share the same set of inducing points across all outputs.
            inducing_variable=gpflow.inducing_variables.SharedIndependentInducingVariables(
                gpflow.inducing_variables.InducingPoints(Z)
            ),
            q_diag=False,
            whiten=True,
            num_latent_gps=num_latent_gps,  # typically len(kernel_list)
        )
```

This snippet defines a custom class inheriting from `gpflow.models.SVGP` that supports multiple outputs by specifying one kernel per output in `kernel_list`.

#### Training Procedure

Training such models typically relies on stochastic optimization because of its scalability on large datasets. One method mentioned previously is Bayesian posterior sampling via stochastic gradient Fisher scoring[^3]; more commonly, the Adam optimizer is combined with mini-batch training, as sketched after this section.

#### Practical Considerations

When applying SVGPs to real-world problems with high-dimensional input/output spaces, several factors should be considered:

- selection criteria for the number M of inducing points, trading accuracy against speed;
- efficient placement of the inducing points, either sampled randomly from the dataset or optimized during inference;
- a covariance structure capable of modeling complex dependencies among the output dimensions without overfitting.
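Following the Training Procedure subsection, below is a minimal sketch of mini-batch training with the Adam optimizer, assuming the `MultiOutputSVGPR` class defined above and synthetic two-output data; the batch size, learning rate, number of inducing points, and step count are illustrative choices, not values from any referenced work.

```python
import numpy as np
import tensorflow as tf
import gpflow

# Synthetic two-output toy data (stand-ins for real multi-output measurements).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(5000, 1))
Y = np.hstack([np.sin(X), np.cos(X)]) + 0.1 * rng.normal(size=(5000, 2))

# One kernel per output and M = 50 inducing points sampled from the inputs.
kernel_list = [gpflow.kernels.SquaredExponential() for _ in range(2)]
Z = X[rng.choice(len(X), size=50, replace=False)].copy()

model = MultiOutputSVGPR(
    kernel_list,
    likelihood=gpflow.likelihoods.Gaussian(),
    Z=Z,
    num_latent_gps=2,  # one latent GP per output
)

# Mini-batch stream and Adam optimizer on the negative ELBO.
batches = iter(tf.data.Dataset.from_tensor_slices((X, Y)).shuffle(5000).batch(256).repeat())
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

@tf.function
def train_step(batch):
    with tf.GradientTape() as tape:
        loss = model.training_loss(batch)  # negative ELBO on this mini-batch
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(1000):
    train_step(next(batches))
```

After training, `model.predict_f(X_new)` returns the per-output posterior mean and variance for new inputs.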