<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ganapathi Subramanian, Sriram</style></author><author><style face="normal" font="default" size="100%">Taylor, Matthew</style></author><author><style face="normal" font="default" size="100%">Crowley, Mark</style></author><author><style face="normal" font="default" size="100%">Poupart, Pascal</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Partially Observable Mean Field Reinforcement Learning</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">AAMAS '21</style></tertiary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">mean field theory</style></keyword><keyword><style  face="normal" font="default" size="100%">Multi-Agent Reinforcement Learning</style></keyword><keyword><style  face="normal" font="default" size="100%">partial observation</style></keyword><keyword><style  face="normal" font="default" size="100%">reinforcement learning</style></keyword><keyword><style  face="normal" font="default" size="100%">showcase</style></keyword><keyword><style  face="normal" font="default" size="100%">year-in-review-2021</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2021</style></year><pub-dates><date><style  face="normal" font="default" size="100%">3–7 May</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">International Foundation for Autonomous Agents and Multiagent Systems</style></publisher><pub-location><style face="normal" font="default" size="100%">London, 
United Kingdom</style></pub-location><pages><style face="normal" font="default" size="100%">537–545</style></pages><isbn><style face="normal" font="default" size="100%">9781450383073</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Traditional multi-agent reinforcement learning algorithms are not scalable to environments with more than a few agents, since these algorithms are exponential in the number of agents. Recent research has introduced successful methods to scale multi-agent reinforcement learning algorithms to many-agent scenarios using mean field theory. Previous work in this field assumes that an agent has access to exact cumulative metrics regarding the mean field behaviour of the system, which it can then use to take its actions. In this paper, we relax this assumption and maintain a distribution to model the uncertainty regarding the mean field of the system. We consider two different settings for this problem. In the first setting, only agents in a fixed neighbourhood are visible, while in the second setting, the visibility of agents is determined at random based on distances. For each of these settings, we introduce a Q-learning based algorithm that can learn effectively. We prove that this Q-learning estimate stays very close to the Nash Q-value (under a common set of assumptions) for the first setting. We also empirically show our algorithms outperform multiple baselines in three different games in the MAgent framework, which supports large environments with many agents learning simultaneously to achieve possibly distinct goals.</style></abstract></record></records></xml>