Humans, Machines, and Decision Responsibility
Focus Areas: News & Media, Social Services, Mobilities, Health
Research Program: Institutions, Machines
Automated decision-making provokes a range of anxieties around transparency, equality, and accountability. A key response has been the call to ‘re-humanise’ automated decisions, in the hope that human control of automated systems might defend human values from mindless technocracy. Regulation of automated decision-making and AI often embeds this form of human-centrism by prescribing a ‘human in the loop’ and requiring that automated decisions be ‘explained’. These requirements are central elements of the risk-based approaches to AI regulation currently in development.
Despite their intuitive appeal, empirical research is revealing the limitations and complexities of these approaches. AI explanations sometimes provide little that is useful to decision subjects or decision makers, and risk distracting from more meaningful interrogation of why decisions are made. A human in the loop sometimes functions as a rubber stamp for automated decisions, cleaving accountability away from the true sites of decision responsibility.