Bailey Flanigan & He Sun

Date: Friday, February 10, 2023, 1:00pm to 2:30pm

Location: SEC 1.413 (https://goo.gl/maps/UjUiWMGZCEGh5qMr9)

Bailey Flanigan (Carnegie Mellon)

Distortion under public-spirited voting

Abstract: A key promise of democratic voting is that, by accounting for all constituents’ preferences, it produces decisions that benefit the constituency overall. It is alarming, then, that all deterministic voting rules have unbounded distortion: all such rules — even under reasonable conditions — will sometimes select outcomes that yield essentially no value for constituents. In this talk — based on our paper Distortion under public-spirited voting — we show that this problem is mitigated by voters being public-spirited: that is, when deciding how to rank alternatives, voters weigh the common good in addition to their own interests. We first generalize the standard voting model to capture this public-spirited voting behavior. In this model, we then show that public-spirited voting can substantially — and in some senses, monotonically — reduce the distortion of several voting rules. Notably, these results include the finding that if voters are at all public-spirited, some voting rules have distortion that is constant in the number of alternatives. Further, we demonstrate that these benefits are robust to adversarial conditions likely to exist in practice. Taken together, our results suggest an implementable approach to improving the welfare outcomes of elections: democratic deliberation, an already-mainstream practice that is believed to increase voters’ public spirit.
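The abstract describes the public-spirited model only at a high level; as a rough, hedged illustration (the function names, the choice of plurality, and the toy profile below are invented for this sketch, not taken from the paper), the following Python snippet models public spirit as each voter evaluating alternatives by a blend of their own utility and the average utility across voters, then reports the welfare ratio of the plurality winner on one fixed profile. Distortion proper is the worst case of this ratio over all utility profiles consistent with the submitted rankings.

```python
def public_spirited_utils(utils, alpha):
    """Blend each voter's private utility with the population average.

    alpha = 0 recovers purely selfish voters; alpha = 1 means voters
    evaluate alternatives by average welfare alone. (Illustrative model only.)
    """
    n, m = len(utils), len(utils[0])
    avg = [sum(u[a] for u in utils) / n for a in range(m)]
    return [[(1 - alpha) * u[a] + alpha * avg[a] for a in range(m)] for u in utils]

def plurality_winner(utils):
    """Each voter votes for their top alternative; ties break by index."""
    m = len(utils[0])
    scores = [0] * m
    for u in utils:
        scores[max(range(m), key=lambda a: u[a])] += 1
    return max(range(m), key=lambda a: scores[a])

def welfare_ratio(true_utils, winner):
    """Optimal social welfare divided by the winner's social welfare."""
    m = len(true_utils[0])
    sw = [sum(u[a] for u in true_utils) for a in range(m)]
    return max(sw) / sw[winner]

# Toy profile: 3 voters, 2 alternatives, unit-sum utilities per voter.
utils = [[0.51, 0.49], [0.51, 0.49], [0.0, 1.0]]
for alpha in (0.0, 0.5):
    w = plurality_winner(public_spirited_utils(utils, alpha))
    print(f"alpha={alpha}: winner={w}, welfare ratio={welfare_ratio(utils, w):.2f}")
```

On this toy profile, selfish voting (alpha = 0) elects the welfare-suboptimal alternative (ratio of about 1.94), while a moderate public-spirit weight flips the winner to the welfare-optimal one (ratio 1), matching the abstract's qualitative claim that public spirit reduces distortion.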

 

He Sun (Harvard)

Reinforcement Learning with Stepwise Fairness Constraints

Abstract: AI methods are used in societally important settings, ranging from credit to employment to housing, and it is crucial to ensure fairness in algorithmic decision making. Moreover, many settings are dynamic, with populations responding to sequential decision policies. We introduce the study of reinforcement learning (RL) with stepwise fairness constraints, which require group fairness at each time step. Our focus is on tabular episodic RL, and we provide learning algorithms with strong theoretical guarantees on policy optimality and fairness violation. Our framework offers useful tools for studying the impact of fairness constraints in sequential settings and raises new challenges in RL.
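The abstract does not specify which group-fairness notion is enforced; as a hedged sketch (the function, its inputs, and the choice of demographic parity are assumptions for illustration, not the paper's algorithm), the snippet below measures how far a tabular episodic policy is from one natural stepwise constraint: at every time step, the action distribution induced on one group should be close to that induced on the other.

```python
import numpy as np

def stepwise_parity_gap(policy, occupancy, group_of_state):
    """Max demographic-parity gap over all time steps.

    policy[h, s, a]   : prob. of action a in state s at step h  (H x S x A)
    occupancy[h, s]   : prob. of visiting state s at step h under the policy
    group_of_state[s] : group label (0 or 1) of state s

    A stepwise fairness constraint bounds this gap at EVERY step h,
    rather than only on average over the episode.
    """
    H, S, A = policy.shape
    worst = 0.0
    for h in range(H):
        dists = []
        for g in (0, 1):
            mask = np.array([group_of_state[s] == g for s in range(S)], dtype=float)
            w = occupancy[h] * mask            # occupancy restricted to group g
            if w.sum() > 0:
                dists.append(w @ policy[h] / w.sum())  # action dist. for group g
        if len(dists) == 2:
            worst = max(worst, float(np.abs(dists[0] - dists[1]).max()))
    return worst
```

Because the gap must stay small at every step h, not just in aggregate over the episode, a learner cannot trade unfair early steps against fair later ones, which is one way to see why stepwise constraints make the learning problem harder.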