"Algorithmic Decision-Making When Humans Disagree About Ends"

The title of this post is the title of this notable new paper authored by Kiel Brennan-Marquez and Vincent Chiao, available via SSRN. Here is its abstract:

Which interpretive tasks should be delegated to machines? This question has become a focal point of “tech governance” debates; one familiar answer is that machines are capable, in principle, of implementing tasks whose ends are uncontroversial, but machine delegation is inappropriate for tasks that elude human consensus.  After all, if even (human) experts cannot agree about the nature of a task, what hope is there for machines?

Here, we turn this position around. In fact, when humans disagree about the nature of a task, that should be prima facie grounds for machine delegation, not against it. The reason comes back to a fairness concern: affected parties should be able to predict the outcomes of particular cases. Indeterminate decision-making environments — those in which humans disagree about ends — are inherently unpredictable in the sense that, for any given case, the distribution of likely outcomes will depend on a specific decision-maker’s view of the relevant end. This injects an irreducible — and, we argue, intolerable — dynamic of randomization into the decision-making process from the perspective of non-repeat players. To the extent machine decisions aggregate across disparate views of a task’s relevant ends, they promise improvement on this specific dimension of predictability; whatever the other virtues and drawbacks of machine decision-making, this gain should be recognized and factored into governance.
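The predictability claim lends itself to a toy simulation. The sketch below is mine, not the authors': it models each judge as holding a different weight on two sentencing ends, so that under random assignment an identically situated case draws its outcome from a distribution over those views, while a rule that aggregates the views returns the same outcome every time. The judge count, the weights, and the sentencing formula are all illustrative assumptions.

```python
# Toy illustration (not from the paper) of the predictability point above.
import random
import statistics

random.seed(0)

# Each "judge" holds a different view of the relevant end, modeled here as a
# weight on retribution vs. rehabilitation (a hypothetical simplification).
judges = [random.uniform(0.0, 1.0) for _ in range(25)]  # weight on retribution

def sentence(judge_w, severity, prospects):
    """Sentence length in months under one judge's view of the task's end."""
    retributive = 60 * severity             # harsher with offense severity
    rehabilitative = 60 * (1 - prospects)   # shorter with good rehab prospects
    return judge_w * retributive + (1 - judge_w) * rehabilitative

case = {"severity": 0.7, "prospects": 0.6}

# Outcome distribution when the case is randomly assigned to a single judge:
outcomes = [sentence(w, **case) for w in judges]
print(f"random judge:   mean={statistics.mean(outcomes):.1f} months, "
      f"spread (stdev)={statistics.stdev(outcomes):.1f}")

# An "aggregating" machine rule: applies the average of the judges' views,
# so every identically situated case receives the same outcome.
w_bar = statistics.mean(judges)
print(f"aggregate rule: {sentence(w_bar, **case):.1f} months, spread=0.0")
```

The sketch isolates only the dimension the abstract flags: a litigant facing a random draw from heterogeneous views sees real outcome variance, while the aggregating rule is perfectly predictable for a given case. It says nothing about the other virtues or drawbacks of machine decision-making.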

The essay has two halves. In the first, we elaborate the formal point, drawing a distinction between determinacy and certainty as epistemic properties and fashioning a taxonomy of decision-types. In the second half, we bring the formal point to life through the case study of criminal sentencing.

