When an algorithm makes consequential decisions about people's lives, people have an incentive to respond to it strategically in order to obtain a more desirable outcome. Unless the algorithm adapts to this strategizing, it may end up making decisions that are incompatible with the original policy goal. This observation has motivated the rapidly growing research area of incentive-aware Machine Learning (ML). In this tutorial, we introduce this area to the broader ML community. After a primer on the necessary background, we introduce the audience to the four perspectives that have been studied so far: the robustness perspective (where the decision-maker designs algorithms that are robust to strategizing), the fairness perspective (where we study the inequalities that arise or are reinforced as a result of strategizing), the improvement perspective (where the learner tries to incentivize effort that genuinely improves individuals' outcomes), and the performativity perspective (where the decision-maker seeks a notion of stability in these settings).
The purpose of the accompanying panel is to broaden the discussion around incentive-aware ML and the direction of the field. The panelists are:
Avrim Blum
Meena Jagadeesan
Jon Kleinberg
Celestine Mendler-Dunner
Jennifer Wortman Vaughan