Paper Review - Learning in High Dimension Always Amounts to Extrapolation

This paper examines the phenomenon that, on high-dimensional datasets, machine learning models are in fact always extrapolating rather than interpolating. The authors find that once the data dimension exceeds roughly 100, a new sample almost never falls inside the convex hull of the training set, so interpolation essentially never occurs. This challenges the common practice of using the interpolation/extrapolation distinction to reason about a model's generalization ability.

Title: Learning in High Dimension Always Amounts to Extrapolation

arXiv link: https://arxiv.org/abs/2110.09485

Reason for reading: recommended; from Yann LeCun and Facebook AI; machine learning theory.

Abstract: The notion of interpolation and extrapolation is fundamental in various fields from deep learning to function approximation. Interpolation occurs for a sample x whenever this sample falls inside or on the boundary of the given dataset's convex hull. Extrapolation occurs when x falls outside of that convex hull. One fundamental (mis)conception is that state-of-the-art algorithms work so well because of their ability to correctly interpolate training data. A second (mis)conception is that interpolation happens throughout tasks and datasets, in fact, many intuitions and theories rely on that assumption. We empirically and theoretically argue against those two points and demonstrate that on any high-dimensional (>100) dataset, interpolation almost surely never occurs.
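The paper's definition of interpolation is directly checkable: a point x lies in the convex hull of training samples x_1, ..., x_n iff there exist weights w_i >= 0 with sum(w_i) = 1 and sum(w_i * x_i) = x, which is a linear feasibility problem. Below is a minimal sketch (not the authors' code) that tests hull membership with `scipy.optimize.linprog` and illustrates the paper's claim: under a standard Gaussian, new samples often fall inside the hull in low dimension but essentially never in high dimension. The sample counts and dimensions here are illustrative choices, not the paper's experimental setup.

```python
import numpy as np
from scipy.optimize import linprog


def in_convex_hull(x, X):
    """True iff x is in the convex hull of the rows of X.

    Solves the feasibility LP: find w >= 0 with
    X.T @ w = x and sum(w) = 1 (zero objective).
    """
    n, d = X.shape
    A_eq = np.vstack([X.T, np.ones((1, n))])   # (d + 1, n)
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0  # 0 = feasible (optimal), 2 = infeasible


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for d in (2, 100):
        X = rng.standard_normal((500, d))      # 500 "training" samples
        trials = 50
        hits = sum(in_convex_hull(rng.standard_normal(d), X)
                   for _ in range(trials))
        # In d=2 most new samples land inside the hull;
        # in d=100 virtually none do (extrapolation regime).
        print(f"d={d}: {hits}/{trials} new samples inside the hull")
```

The exponential growth of volume with dimension is what drives the effect: keeping a fixed fraction of new samples inside the hull requires a number of training points that grows exponentially with d, which is unattainable for d > 100.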
