Recursive Constraints to Prevent Instability in Constrained Reinforcement Learning
Title | Recursive Constraints to Prevent Instability in Constrained Reinforcement Learning |
---|---|
Author | |
Keywords | |
Abstract | We consider the challenge of finding a deterministic policy for a Markov decision process that uniformly (in all states) maximizes one reward subject to a probabilistic constraint over a different reward. Existing solutions do not fully address our precise problem definition, which nevertheless arises naturally in the context of safety-critical robotic systems. This class of problem is known to be hard, but the combined requirements of determinism and uniform optimality can create learning instability. In this work, after describing and motivating our problem with a simple example, we present a suitable constrained reinforcement learning algorithm that prevents learning instability, using recursive constraints. Our proposed approach admits an approximative form that improves efficiency and is conservative w.r.t. the constraint. |
Year of Publication | 2021 |
Conference Name | Multi-Objective Decision Making Workshop (MODeM 2021) |
Conference Location | Online at http://modem2021.cs.nuigalway.ie/ |
URL | https://arxiv.org/abs/2201.07958 |
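The abstract describes a constrained optimization over deterministic policies: maximize one reward uniformly in every state while satisfying a probabilistic constraint on a second reward. The LaTeX below is a minimal sketch of that problem class; the symbols ($V_r^{\pi}$, $c$, $\tau$, $p$) are assumed notation for illustration and are not taken from the paper itself.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Hedged sketch of the problem class described in the abstract.
% Assumed notation (not from the paper): V_r^pi is the value of the
% task reward r under policy pi, c is the second (safety-related)
% reward, tau a threshold, and p the required probability level.
Find a deterministic policy $\pi^\ast \in \Pi_{\mathrm{det}}$ such that,
for every state $s \in S$,
\[
  \pi^\ast \in \arg\max_{\pi \in \Pi_{\mathrm{det}}} V_r^{\pi}(s)
  \qquad \text{subject to} \qquad
  \Pr\!\Bigl( \textstyle\sum_{t \ge 0} c(S_t, A_t) \ge \tau
  \;\Big|\; S_0 = s,\ \pi \Bigr) \ \ge \ p ,
\]
i.e.\ the policy must be optimal for the task reward $r$ uniformly in
all states, while in every state the probability of meeting the
requirement on the second reward $c$ stays at or above $p$.
\end{document}
```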