How fair do people perceive government decisions based on algorithmic predictions to be? And to what extent can the government delegate decisions to machines without sacrificing perceived procedural fairness? Using a set of vignettes in the contexts of predictive policing, school admissions, and refugee matching, we explore how different degrees of human–machine interaction affect fairness perceptions and procedural preferences. We implement four treatments that vary the extent of responsibility delegated to the machine and the degree of human involvement in the decision-making process, ranging from full human discretion through machine-based predictions with high or low human involvement to fully machine-based decisions. We find that machine-based predictions with high human involvement yield the highest fairness scores and fully machine-based decisions the lowest. Differences in accuracy assessments can partly explain these gaps. Fairness scores follow a similar pattern across contexts, with a negative level effect and lower fairness perceptions of human decisions in the context of predictive policing. Our results shed light on the behavioral foundations of several legal human-in-the-loop rules.
Public school choice often yields student assignments that are neither fair nor efficient. The efficiency-adjusted deferred acceptance mechanism (EADAM) allows students to consent to waive priorities that have no effect on their assignments. A burgeoning recent literature places EADAM at the centre of the trade-off between efficiency and fairness in school choice. Meanwhile, the Flemish Ministry of Education has taken the first steps to implement this algorithm in Belgium. We provide the first experimental evidence on the performance of EADAM against the celebrated deferred acceptance mechanism (DA). We find that both efficiency and truth-telling rates are higher under EADAM than under DA, even though EADAM is not strategy-proof. When the priority waiver is enforced, efficiency further increases, while truth-telling rates decrease relative to the EADAM variants where students can dodge the waiver. Our results challenge the importance of strategy-proofness as a prerequisite for truth-telling and portend a new trade-off between efficiency and vulnerability to preference manipulation.
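To make the mechanism under comparison concrete, here is a minimal sketch of the student-proposing deferred acceptance (DA) algorithm, the benchmark against which EADAM is tested. The function name and the toy data are illustrative, not taken from the paper:

```python
# Student-proposing deferred acceptance (Gale-Shapley), a minimal sketch.
def deferred_acceptance(student_prefs, school_priorities, capacities):
    """Match students to schools under the DA mechanism.

    student_prefs:     {student: [schools in order of preference]}
    school_priorities: {school: [students in priority order]}
    capacities:        {school: number of seats}
    """
    # rank[school][student] = position in that school's priority list
    rank = {s: {st: i for i, st in enumerate(order)}
            for s, order in school_priorities.items()}
    next_choice = {st: 0 for st in student_prefs}   # next school to propose to
    held = {s: [] for s in school_priorities}       # tentatively held students
    unmatched = list(student_prefs)

    while unmatched:
        st = unmatched.pop()
        prefs = student_prefs[st]
        if next_choice[st] >= len(prefs):
            continue                                # preference list exhausted
        school = prefs[next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        # keep only the highest-priority students up to capacity
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacities[school]:
            unmatched.append(held[school].pop())    # reject lowest priority

    return {st: s for s, studs in held.items() for st in studs}


# A standard three-student example in which DA is stable but inefficient:
match = deferred_acceptance(
    {'i1': ['s2', 's1', 's3'], 'i2': ['s1', 's2', 's3'],
     'i3': ['s1', 's2', 's3']},
    {'s1': ['i1', 'i3', 'i2'], 's2': ['i2', 'i1', 'i3'],
     's3': ['i1', 'i2', 'i3']},
    {'s1': 1, 's2': 1, 's3': 1},
)
print(match)  # → {'i1': 's1', 'i2': 's2', 'i3': 's3'}
```

In this example, i1 and i2 would both prefer to swap assignments, yet the swap would violate i3's priority at s1. This is exactly the kind of efficiency loss that EADAM addresses by letting students such as i3, whose assignment is unaffected, consent to waive that priority.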
When do students game the school admissions system? It depends. Complexity may push students to refrain from manipulating their school rankings, even when the admissions system is manipulable.
Fair Governance with Humans and Machines
Is an AI really perceived as less fair than a human decision-maker? Yes, there is a human–AI fairness gap. But an AI doing most of the work is considered as fair as a human doing all the work.
Which decisions are you OK with AI making?
Interview on a study conducted at the Max Planck Institute for Research on Collective Goods. A follow-up project, a collaboration between the University of Zurich, ETH Zurich, Georgetown University, the University of Hong Kong, and the Max Planck Institute, is underway.