Date: Thursday, November 28, 2024
Time: 1:00 pm - 2:30 pm
Location: DC 1304
Title: Bridging the Reality Gap in Reinforcement Learning: Robust Algorithms and Techniques for Practical Applications
Abstract: Reinforcement Learning (RL) tackles complex decision-making problems by training computational models through interaction with an environment. However, direct training in real-world settings such as autonomous driving or medical procedures is often impractical due to the high risk of costly or dangerous errors. As a result, RL commonly relies on simulated environments or static offline datasets. This reliance, however, introduces a critical challenge known as the "reality gap": a discrepancy between the conditions seen during training and the dynamics encountered in real-world applications. This presentation covers strategies designed to bridge this gap by improving the effectiveness of RL policies:
- Robust RL Optimization: We examine the strategic use of perturbations during training to refine policies learned from simulators. The goal is to increase the adaptability and robustness of these policies so that they hold up in real-world applications, where variability and unexpected conditions are common (a rough illustrative sketch of this idea follows the list).
- Offline RL Optimization: We then explore the use of the Hamilton-Jacobi-Bellman (HJB) equation to improve the performance of policies trained on static datasets. This is crucial in scenarios where real-time interaction with the environment is not possible (the standard form of the equation is included after the list for reference).
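To make the first item concrete, here is a minimal, purely illustrative sketch of one common flavor of this idea: injecting bounded perturbations into the observations a simulator-trained policy sees during rollouts, so the policy must perform well under conditions that deviate from the nominal simulator. The environment interface (Gymnasium-style), the noise model, and all names below are assumptions for illustration, not details taken from the talk.

    import numpy as np

    def perturb(obs, epsilon=0.05, rng=None):
        """Add bounded uniform noise to an observation (domain-randomization style)."""
        rng = rng if rng is not None else np.random.default_rng()
        return obs + rng.uniform(-epsilon, epsilon, size=np.shape(obs))

    def robust_rollout(env, policy, epsilon=0.05, max_steps=200):
        """Run one episode in which the policy only ever sees perturbed observations,
        so training on such rollouts rewards robustness over simulator-specific behavior."""
        obs, _ = env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            action = policy(perturb(obs, epsilon))
            obs, reward, terminated, truncated, _ = env.step(action)
            total_reward += reward
            if terminated or truncated:
                break
        return total_reward

In practice the perturbation could instead be adversarial (chosen to degrade return) rather than random; the abstract does not specify which variant the talk covers.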
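For reference, the continuous-time HJB equation the second item refers to is commonly written, for discounted deterministic dynamics (standard notation, not taken from the abstract), as

    \rho V^*(s) = \max_{a} \big[ r(s, a) + \nabla_s V^*(s)^\top f(s, a) \big]

where V^* is the optimal value function, r the reward, f(s, a) the system dynamics, and \rho a discount rate. Offline methods in this spirit typically penalize the residual of this equation on the static dataset, regularizing value estimates without requiring further interaction with the environment.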
Bio: Yun-Hsuan Lien is a doctoral candidate in Computer Science at National Yang Ming Chiao Tung University, with a research focus on reinforcement learning. Her work centers on developing theoretical RL algorithms and applying them in dynamic, real-world environments. She is dedicated to reducing the reality gap by enhancing the robustness of RL models that leverage simulated or pre-collected data to circumvent the limitations of direct environmental interaction.