Incentive-Aware Machine Learning: A Tale of Robustness, Fairness, Improvement, and Performativity

Tutorial at NeurIPS 2022

Organizer: Chara Podimata


When an algorithm makes consequential decisions about people's lives, people have an incentive to respond to it strategically in order to obtain a more desirable outcome. Unless the algorithm accounts for this strategizing, it may end up inducing decisions that are incompatible with the original policy's goals. This observation is the mantra of the rapidly growing research area of incentive-aware Machine Learning (ML). In this tutorial, we introduce the area to the broader ML community. After a primer on the necessary background, we present the four perspectives that have been studied so far: the robustness perspective (where the decision-maker designs algorithms that are robust to strategizing), the fairness perspective (where we study the inequalities that arise or are reinforced as a result of strategizing), the improvement perspective (where the learner tries to incentivize agents to exert effort toward genuinely improving themselves rather than gaming the algorithm), and the performativity perspective (where the decision-maker seeks a notion of stability in settings where the deployed model itself shifts the data distribution).
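To make the strategizing concrete, here is a minimal toy sketch (our own construction, not material from the tutorial): the decision-maker commits to a one-dimensional threshold classifier, and each agent best-responds by moving their feature at a quadratic cost if the value of a positive classification outweighs that cost. All names and parameter values (`reward`, `cost`, the threshold) are illustrative assumptions.

```python
import numpy as np

# Leader: the decision-maker commits to f(x) = 1 if x >= theta, else 0.
# Followers: each agent values a positive classification at `reward` and
# pays cost * (x' - x)^2 to move their feature from x to x'.
def best_response(x, theta, reward=1.0, cost=10.0):
    """Agent moves exactly to the boundary iff the gain covers the moving cost."""
    if x >= theta:
        return x  # already classified positively; no reason to move
    move_cost = cost * (theta - x) ** 2
    return theta if move_cost <= reward else x

theta = 0.5
agents = np.array([0.1, 0.3, 0.45, 0.7])
responses = np.array([best_response(x, theta) for x in agents])
# Agents close enough to the threshold "game" the classifier by jumping
# to the boundary; agents far below it stay put.
print(responses.tolist())  # -> [0.1, 0.5, 0.5, 0.7]
```

The Stackelberg structure is the key point: the agents move *after* observing the classifier, so a learner that ignores this best-responding will misjudge who is truly above the threshold.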



Part I (50 mins)

  • Introduction
  • Strategic Classification as a Stackelberg Game
  • Strategic Classification vs Distribution Shift
  • Performative Prediction
  • A Bird's Eye View of the Different Perspectives in the Literature
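As a toy illustration of the performative prediction bullet above (again our own construction, with made-up numbers): suppose the deployed parameter theta shifts the data so that the induced distribution D(theta) has mean mu + eps * theta. Repeated retraining then iterates theta ← mu + eps * theta, which converges to the performatively stable fixed point mu / (1 - eps) whenever |eps| < 1.

```python
# Repeated risk minimization on a distribution that reacts to the model:
# D(theta) has mean mu + eps * theta, so the squared-loss minimizer on
# D(theta_t) is simply theta_{t+1} = mu + eps * theta_t.
mu, eps = 1.0, 0.5
theta = 0.0
for _ in range(50):
    theta = mu + eps * theta  # retrain on the distribution induced by theta
print(round(theta, 6))  # -> 2.0, the fixed point mu / (1 - eps)
```

At the fixed point, retraining no longer changes the model even though the data still depends on it; that is the notion of stability the performativity perspective studies.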

Part II (50 mins)

  • Robustness
  • Fairness
  • Improvement

Panel (30 mins)

The purpose of the panel is to broaden the discussion around incentive-aware ML and the direction of the field. The panelists are:

Avrim Blum

Meena Jagadeesan

Jon Kleinberg

Celestine Mendler-Dünner

Jennifer Wortman Vaughan