

It really does have a preference for long trajectories, and this is also why the agent sometimes wriggles back and forth between regions.
As mentioned in Go-Explore series (Ecoffet et al., 2019; 2020), count-based approaches also suffer from detachment: if the agent by chance starts exploring τ2 after briefly exploring the first few states of τ1, it would not return and explore τ1 further since τ1 is now “shorter” than τ2 and has lower IR than τ2 for a long period. Go-Explore tries to resolve this dilemma between “dedication” and “exploration” by using a two-stage approach with many hand-tuned parameters.
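A tiny numerical sketch of both effects (my own illustration, not from the paper): with a classic count-based bonus of the form 1/sqrt(N(s)+1), a longer fresh corridor sums to more intrinsic reward than a shorter one, and briefly visiting the start of τ1 drops its total bonus below τ2's, producing detachment. The corridor states a0…a9, b0…b7 and the exact bonus form are illustrative assumptions.

```python
# Toy sketch (not from the paper): count-based intrinsic reward
# r(s) = 1/sqrt(N(s)+1) on two "corridor" trajectories.
import math
from collections import defaultdict

visit_counts = defaultdict(int)          # N(s): how often each state was seen

def intrinsic_reward(state):
    # Classic count-based bonus: rarely visited states pay more.
    return 1.0 / math.sqrt(visit_counts[state] + 1)

def total_ir(trajectory):
    # Intrinsic reward collected along a trajectory
    # (counts are not updated here; a lookahead estimate).
    return sum(intrinsic_reward(s) for s in trajectory)

def visit(states):
    for s in states:
        visit_counts[s] += 1

tau1 = [f"a{i}" for i in range(10)]      # longer corridor, 10 states
tau2 = [f"b{i}" for i in range(8)]       # shorter corridor, 8 states

# Long-trajectory preference: both untouched, but tau1 sums to more bonus.
print(total_ir(tau1), total_ir(tau2))    # 10.0 vs 8.0

# Detachment: briefly explore only the start of tau1 a few times...
for _ in range(4):
    visit(tau1[:4])

# ...and tau1's total bonus falls below tau2's, so the agent switches to
# tau2 and, for a long while, has no incentive to return and finish tau1.
print(total_ir(tau1), total_ir(tau2))    # ~7.79 vs 8.0
```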
So the main point is really this preference for long trajectories; everything else is essentially the same as NovelD.
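For reference, NovelD's intrinsic reward (Zhang et al., 2021), as I recall it from the paper, is

$$
r_t^{\text{NovelD}} = \max\big[\mathrm{nov}(s_{t+1}) - \alpha \cdot \mathrm{nov}(s_t),\ 0\big] \cdot \mathbb{1}\{N_e(s_{t+1}) = 1\},
$$

where nov(·) is the RND-based novelty (prediction error), α is a scaling factor, and N_e is the per-episode visit count; the method here differs mainly in the trajectory-level preference discussed above.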



