
Cournot Policy Model: Rethinking centralized training in multi-agent reinforcement learning

Document type: Foreign-language journal article

Authors: Li, Jingchen 1; Yang, Yusen 1; He, Ziming 2; Wu, Huarui 1; Shi, Haobin 2; Chen, Wenbai 3

Author affiliations: 1. Beijing Acad Agr & Forestry Sci, Informat Technol Res Ctr, Beijing 100079, Peoples R China

2. Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China

3. Beijing Informat Sci & Technol Univ, Automat Sch, Beijing 100192, Peoples R China

4. Minist Agr & Rural Affairs, Key Lab Digital Village Technol, Beijing 100079, Peoples R China

Keywords: Multi-agent reinforcement learning; Machine learning; Multi-agent system

Journal: INFORMATION SCIENCES (Impact Factor: 6.8; 5-Year Impact Factor: 6.6)

ISSN: 0020-0255

Year/Volume: 2024, Vol. 677

Pages:

Indexed in: SCI

Abstract: This work studies Centralized Training and Decentralized Execution (CTDE), a powerful mechanism for easing multi-agent reinforcement learning. Although centralized evaluation ensures unbiased estimates of the Q-value, peers with unknown policies drive the decentralized policy far from its expectation. To obtain a more stable and effective joint policy, we develop a novel game framework, termed the Cournot Policy Model, to enhance CTDE-based multi-agent learning. Combining game theory and reinforcement learning, we treat the joint decision-making in a single time step as a Cournot duopoly model, and design a Hetero Variational Auto-Encoder to model the policies of peers during decentralized execution. With a conditional policy, each agent is guided to a stable mixed-strategy equilibrium even as the joint policy evolves over time. We further demonstrate that such an equilibrium must exist under centralized evaluation. We investigate the improvement our method brings to existing centralized learning methods. Experimental results on a comprehensive collection of benchmarks indicate that our approach consistently outperforms baseline methods.
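The abstract frames each single-step joint decision as a Cournot duopoly that settles at a stable equilibrium. The fixed-point intuition behind that claim can be sketched with the textbook linear-demand Cournot game, where iterated best responses converge to the Cournot-Nash equilibrium; this is a generic game-theory illustration, not the paper's learned-policy method, and the parameters `a`, `b`, `c` and the iteration scheme are illustrative assumptions:

```python
def best_response(q_other, a=100.0, b=1.0, c=10.0):
    """Profit-maximizing quantity given the rival's quantity.

    Assumes inverse demand P = a - b*(q1 + q2) and constant marginal
    cost c, so profit is q*(a - b*(q + q_other)) - c*q; setting the
    derivative to zero gives the best response below (floored at 0).
    """
    return max((a - c - b * q_other) / (2.0 * b), 0.0)


def cournot_iteration(a=100.0, b=1.0, c=10.0, iters=100):
    """Iterate simultaneous best responses from zero output.

    The best-response map is a contraction here (slope -1/2), so the
    pair converges to the Cournot-Nash equilibrium q* = (a - c) / (3b).
    """
    q1 = q2 = 0.0
    for _ in range(iters):
        q1, q2 = best_response(q2, a, b, c), best_response(q1, a, b, c)
    return q1, q2


q1, q2 = cournot_iteration()
print(q1, q2)  # both converge to (100 - 10) / 3 = 30
```

The paper's contribution, as summarized above, is to recover an analogous equilibrium for stochastic policies by conditioning each agent's policy on a learned model of its peers, rather than on a known demand curve.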
