Beijing Academy of Agriculture and Forestry Sciences — Institutional Knowledge Base

Eliminating Primacy Bias in Online Reinforcement Learning by Self-Distillation

Document type: Foreign-language journal article

Authors: Li, Jingchen 1; Shi, Haobin 2; Wu, Huarui 1; Zhao, Chunjiang 1; Hwang, Kao-Shing 3

Author affiliations: 1. Beijing Acad Agr & Forestry Sci, Informat Technol Res Ctr, Beijing 100079, Peoples R China

2. Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China

3. Natl Sun Yat-sen Univ, Dept Elect Engn, Kaohsiung 80424, Taiwan

Keywords: Online reinforcement learning; overfitting; reinforcement learning

Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (Impact factor: 10.4; 5-year impact factor: 11.2)

ISSN: 2162-237X

Year/Volume/Issue: 2024

Pages:

Indexed in: SCI

Abstract: Excessive invalid exploration at the beginning of training puts the deep reinforcement learning process at risk of overfitting, producing spurious decisions that hinder the agent in subsequent states and explorations. This phenomenon is termed primacy bias in online reinforcement learning. This work systematically investigates primacy bias in online reinforcement learning, discussing its causes and analyzing its characteristics. Furthermore, to learn a policy that generalizes to subsequent states and explorations, we develop an online reinforcement learning framework based on knowledge distillation, termed self-distillation reinforcement learning (SDRL), which allows the agent to transfer its learned knowledge into a randomly initialized policy at regular intervals; the new policy network then replaces the original one in subsequent training. The core idea of this work is that distilling knowledge from the trained policy into another policy can filter out biases, yielding a more generalized policy during learning. Moreover, to prevent the new policy from overfitting due to excessive distillation, we add an additional loss to the knowledge distillation process, using L2 regularization to improve generalization, and we introduce a self-imitation mechanism to accelerate learning from current experiences. The results of several experiments on DMC and Atari 100k suggest that the proposal can eliminate primacy bias in reinforcement learning methods, and that the policy obtained after knowledge distillation helps agents reach higher scores more quickly.
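The periodic self-distillation step described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes a simple linear-softmax policy, and the function name `distill`, the learning rate, and the L2 weight are all hypothetical choices. The fresh, randomly initialized student is trained to match the teacher's action distribution, with an L2 penalty on its weights to curb overfitting, and then replaces the teacher.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax, numerically stabilized."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill(teacher_W, states, lr=0.5, l2=1e-3, steps=500, rng=None):
    """Transfer the teacher's policy into a freshly (randomly) initialized
    linear-softmax student by gradient descent on the cross-entropy to the
    teacher's action distribution, plus an L2 penalty on the student weights.
    Returns the student weights, which would replace the teacher's."""
    rng = rng if rng is not None else np.random.default_rng(0)
    student_W = rng.normal(scale=0.01, size=teacher_W.shape)  # fresh policy
    targets = softmax(states @ teacher_W)  # teacher's action probabilities
    for _ in range(steps):
        probs = softmax(states @ student_W)
        # Gradient of cross-entropy w.r.t. logits is (probs - targets);
        # the l2 term is the gradient of (l2/2) * ||W||^2.
        grad = states.T @ (probs - targets) / len(states) + l2 * student_W
        student_W -= lr * grad
    return student_W
```

In this sketch, the distillation objective is convex in the student's weights, so the student closely recovers the teacher's behavior while the L2 term keeps its weights small; in SDRL this replacement would be repeated at regular intervals during online training.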
