Webcast
“Instance-Adaptive and Optimal Offline Reinforcement Learning”
Speaker: Ming Yin (UCSB)
Abstract: Reinforcement learning is becoming the mainstay of sequential decision-making problems. In particular, offline reinforcement learning is considered the central framework for real-life applications in which online interaction is not permitted. This talk will present the main challenges of offline RL (including distribution shift, the curse of horizon, and suboptimal data) and offer our solutions for overcoming them. I will discuss how to improve sample efficiency using various techniques and show how they adapt to the hardness of individual problem instances. I will also briefly discuss the connections between these methodologies and their extensions to more general settings.
Remote presentation only.
Join from PC, Mac, Linux, iOS, or Android: https://yale.zoom.us/j/95770019076
Or telephone: 203-432-9666 (2-ZOOM if on campus) or 646-568-7788
One Tap Mobile: +12034329666,,95770019076# US (Bridgeport)
Meeting ID: 957 7001 9076
International numbers available: https://yale.zoom.us/u/adTjb3rkTu