Game-Theoretic Reinforcement Learning

Date: Friday, November 10, 2023, 1:30pm to 2:30pm

Location: SEC 1.413

Speaker: Stephen McAleer (CMU postdoc)

Title: Game-Theoretic Reinforcement Learning

Abstract: Game-theoretic reinforcement learning studies reinforcement learning algorithms that have guarantees of convergence to equilibrium. In this talk I give an overview of game-theoretic reinforcement learning, categorizing it into three main classes: double oracle-based methods, counterfactual regret minimization-based methods, and policy-gradient-based methods. I then introduce state-of-the-art approaches within each algorithmic class, and show how these algorithms can achieve expert-level performance on the challenge game of Stratego. Lastly, I show how game-theoretic reinforcement learning can be used to solve core problems in single-agent reinforcement learning.

Speaker Bio: Stephen McAleer is a postdoc at Carnegie Mellon University working with Tuomas Sandholm. His research has led to the first reinforcement learning algorithm to solve the Rubik's cube and the first algorithm to achieve expert-level performance on Stratego. His work has been published in Science, Nature Machine Intelligence, ICML, NeurIPS, and ICLR, and has been featured in news outlets such as the Washington Post, the LA Times, MIT Technology Review, and Forbes. He received a PhD in computer science from UC Irvine working with Pierre Baldi, and a BS in mathematics and economics from Arizona State University.