RL Exploration: Golden Quotes from BEBOLD

This post looks at a point raised in the BeBold paper: count-based exploration in reinforcement learning prefers long trajectories, so an agent that starts exploring a new trajectory τ2 may abandon the shorter τ1 (the detachment problem highlighted by Go-Explore). Go-Explore resolves this tension between dedication and exploration with a two-stage approach involving many hand-tuned parameters. The post also notes how this relates to NovelD.


The preference for long trajectories is real, and it also causes the agent to sometimes wriggle back and forth.

As mentioned in Go-Explore series (Ecoffet et al., 2019; 2020), count-based approaches also suffer from detachment: if the agent by chance starts exploring τ2 after briefly exploring the first few states of τ1, it would not return and explore τ1 further since τ1 is now “shorter” than τ2 and has lower IR than τ2 for a long period. Go-Explore tries to resolve this dilemma between “dedication” and “exploration” by using a two-stage approach with many hand-tuned parameters.
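To make the detachment concrete, here is a minimal Python sketch (not from the paper) using a count-based intrinsic reward of the common form IR(s) = 1/sqrt(N(s)); the trajectory names and visit counts below are made up for illustration. Once τ1 has been visited a few times, a freshly discovered but longer τ2 yields more total intrinsic reward, so an agent chasing IR keeps following τ2 and does not return to τ1.

```python
from collections import defaultdict
import math

# Count-based intrinsic reward: decays as a state is visited more often.
visit_counts = defaultdict(int)

def intrinsic_reward(state):
    """IR(s) = 1 / sqrt(N(s) + 1); unvisited states give the largest bonus."""
    return 1.0 / math.sqrt(visit_counts[state] + 1)

# tau_1: a short trajectory the agent has already explored a few times.
tau_1 = ["s0", "a1", "a2"]
# tau_2: a longer, freshly discovered trajectory sharing the same start state.
tau_2 = ["s0", "b1", "b2", "b3", "b4", "b5"]

# Pretend the agent briefly explored tau_1 before switching to tau_2.
for _ in range(5):
    for s in tau_1:
        visit_counts[s] += 1

# Total IR the agent would now collect along each trajectory.
ir_tau_1 = sum(intrinsic_reward(s) for s in tau_1)
ir_tau_2 = sum(intrinsic_reward(s) for s in tau_2)
print(f"IR along tau_1: {ir_tau_1:.2f}")  # small: states already counted
print(f"IR along tau_2: {ir_tau_2:.2f}")  # large: longer and mostly unvisited
# tau_2 dominates for a long time, so the agent "detaches" from tau_1.
```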

That's the main point: the preference for long trajectories. The rest works the same as NovelD, hmm...
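For reference, here is a hedged sketch of the NovelD-style intrinsic reward as I read it: the clipped difference of novelty between consecutive states, gated by an episodic first-visit indicator. The `novelty` function, `alpha`, and the toy states below are placeholders for illustration, not the paper's actual implementation (which measures novelty with an RND prediction error).

```python
from collections import defaultdict

# Sketch of a NovelD/BeBold-style reward (my reading, not the official code):
# r_t = max(novelty(s_next) - alpha * novelty(s), 0) * 1[first episodic visit to s_next]

episodic_counts = defaultdict(int)  # reset at the start of every episode

def noveld_reward(novelty, s, s_next, alpha=0.5):
    """Clipped novelty difference, only paid on the first episodic visit to s_next."""
    episodic_counts[s_next] += 1
    first_visit = 1.0 if episodic_counts[s_next] == 1 else 0.0
    return max(novelty(s_next) - alpha * novelty(s), 0.0) * first_visit

# Toy usage with a count-based novelty stand-in.
global_counts = defaultdict(int)

def toy_novelty(s):
    return 1.0 / (global_counts[s] + 1)

print(noveld_reward(toy_novelty, "s0", "s1"))  # positive: s1 is new this episode
print(noveld_reward(toy_novelty, "s0", "s1"))  # 0.0: repeated episodic visit to s1
```

The episodic gate is what keeps the agent from farming the same novelty difference by wriggling back and forth within an episode.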
