Predictive PER: Balancing Priority and Diversity Towards Stable Deep Reinforcement Learning

Abstract

Prioritized experience replay (PER) samples important transitions, rather than sampling uniformly, to improve the data efficiency of a deep reinforcement learning agent. We claim that such prioritization must be balanced with sample diversity to stabilize the deep Q-network (DQN) and prevent severe forgetting. Our proposed improvement over PER, called Predictive PER (PPER), takes three countermeasures (TDInit, TDClip, TDPred) to (i) eliminate priority outliers and explosions and (ii) improve the diversity of samples and their distributions, weighted by priorities. Both contribute to stabilizing the learning process and thus to less forgetting. The most notable of the three is TDPred, a second DNN introduced to generalize in-distribution priorities. Ablation and experimental studies with Atari games show that each countermeasure in its own way, and PPER as a whole, successfully enhance stability and hence performance over PER.
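
To make the prioritization mechanism concrete, below is a minimal Python sketch of a priority-proportional replay buffer that caps TD-error magnitudes before converting them to priorities, loosely in the spirit of the TDClip countermeasure described above. The class name and the `alpha` and `td_clip` parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal proportional prioritized-replay sketch (not the paper's code).
# Priorities are min(|TD error|, td_clip) ** alpha; capping the TD error
# limits priority outliers so no single transition dominates sampling.

class PrioritizedReplay:
    def __init__(self, capacity, alpha=0.6, td_clip=1.0):
        self.capacity = capacity
        self.alpha = alpha          # priority exponent (0 = uniform sampling)
        self.td_clip = td_clip      # cap on |TD error| before prioritizing
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error):
        p = min(abs(td_error), self.td_clip) ** self.alpha
        if len(self.buffer) >= self.capacity:  # drop the oldest transition
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(p)

    def sample(self, batch_size):
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()            # priority-proportional distribution
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        return [self.buffer[i] for i in idx], idx
```

With `alpha` between 0 and 1, the sampling distribution interpolates between uniform (maximal diversity) and greedy prioritization, which is one simple way to view the priority/diversity balance the paper argues for.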

Year of Conference
2021
Publisher
IEEE
Conference Location
Shenzhen, China (virtual)
DOI
10.1109/IJCNN52387.2021.9534243