Who is the ‘human’ in human-centred AI? Reflections on the Simons Institute Summer Cluster for AI and Humanity at UC Berkeley

Authors Dr Thao Phan, Dr Chris O’Neill and Dr Kacper Sokol
Date 16 August 2022

In her classic paper ‘The Ironies of Automation’ (1983), human factors researcher Lisanne Bainbridge highlights one of the key contradictions at the heart of automated decision-making and control systems: despite being introduced to make up for the limitations and failures of human operators, it is these same flawed humans who are often tasked with monitoring such systems and providing assurance that they are working effectively. The unresolved tensions at the heart of Bainbridge’s argument continue to structure many of the contradictions we see in applied AI today: the more complex and high-stakes an AI system becomes (as with those used in criminal sentencing, medical diagnosis, and determining access to key social services), the more we cling to the idea of an irreplaceable human, or humanity, to safeguard against the extreme and unpredictable harms these systems can cause.

Approaches to AI accountability and responsibility increasingly look to the figure of the human to help alleviate the anxieties that haunt AI-driven risk societies. The language of “AI safety” dissimulates this tension through phrases like human-centred AI, human-compatible AI, and human-in-the-loop. But what model of the human is being mobilised within these configurations? And how might this figure be used to stabilise, smooth over, or even legitimate expanding forms of technocracy?

The aim of the Simons Institute Summer Cluster on AI and Humanity was to engage precisely with these questions and more. Hosted by the Simons Institute for the Theory of Computing at UC Berkeley, this interdisciplinary workshop brought together scholars from across law, philosophy, feminist STS, media history, computer science, data science, engineering, HCI, and design to critically interrogate how the historically exclusionary, unstable, and oppressive figure of the human is, ironically, today recuperated as an instrument to lend AI systems a sense of stability and to guard against racist, sexist, classist and other intersecting forms of harm.

Over the six weeks, we covered a range of topics, including:

  • Human-in-the-loop as a mechanism for distributing liability 
  • Recommender systems and the problems of optimising for engagement over values
  • How contemporary machines make humans legible and how humans must in turn conform to these regimes of legibility
  • The role of proxies in making categories like race, gender and class legible within AI-driven systems of recognition
  • Techniques of documentation in the field of AI fairness, accountability, and transparency
  • The potentials of natural language processing for intervening in toxic forms of communication
  • The history and limits of algorithmic risk assessment tools
  • The creative promise of error and its contemporary absorption into normative frameworks of cybernetic feedback and homeostasis

In addition to our twice-weekly seminars, the organisers also arranged a week-long public symposium hosted at the Simons Institute on the cluster’s theme of AI and Humanity. The symposium included invited speakers from universities across the U.S. and the world, including Cornell Tech, MIT, Stanford University, UC Berkeley, UC Santa Barbara, Georgia Tech, Drexel University, University of New Mexico, University of North Carolina, University of Helsinki, Kent Law School, Birkbeck College, the East-West Centre (University of Hawai’i), Melbourne Law School, Monash University, and RMIT.

As visiting scholars we were each given the opportunity to present and receive feedback on our research.

In her talk ‘Race Beyond Perception’, Thao Phan explored how algorithmic culture is transforming processes of race and racialisation today. Drawing on examples ranging from commercial platforms such as Facebook/Meta to the U.S. National Security Agency, she discussed how racial categories are being constituted in novel ways via behavioural data and machine learning techniques. She argued that these techniques present new challenges for critical race scholars. First, because these processes are inherently ‘invisual’, operating in ways that cannot be seen, either because their constitution occurs beyond human scrutiny (at scales imperceptible to the human) or because they are deliberately obscured and opaque (operating through proprietary processes). And second, because this invisuality demands new skills to identify, think through, and confront the forms of racism these systems perpetuate.
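Phan’s argument is conceptual, but the mechanics it gestures at are easy to sketch. Below is a loose, hypothetical illustration (not drawn from her talk; the synthetic features and cluster count are invented for the example) of how purely behavioural data can yield ‘affinity’ groupings that carry no explicit demographic labels:

```python
# A hypothetical sketch: segmenting users on behavioural signals alone.
# The clusters carry no demographic labels, yet can act as proxies for them,
# forming at scales and through processes that resist direct inspection.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Invented behavioural features, e.g. click rate, session length, topic mix.
users = rng.normal(size=(1000, 3))
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(users)
print(np.bincount(segments))  # opaque "affinity" segments, ready for targeting
```

Groupings like these can function as proxies for race, gender, or class while remaining formally blind to them, which is precisely the invisuality Phan describes.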

Chris O’Neill presented a genealogy of the ‘human-in-the-loop’, which aimed to expand the dominant narrative around the figure, especially through a consideration of the creative approach to understanding the human operator pursued in the French work science tradition. He also led a seminar on the status of ‘Error’ as a disruptive force in automated systems.

Kacper Sokol discussed the social and technical components that form the foundation of explainability in automated decision-making systems. In his talk, Kacper demonstrated a toy algorithmic explainer applied to a simple image classification task, and argued that the human understanding of a predictive model facilitated by such tools should be the primary metric of their success. This observation was based on a clear separation between the technical and social aspects of explainer systems, which enables a detailed analysis of their dimensions, properties, and roles. The discussion that followed the talk shed light on an interesting connection between the social and technical theories of explainability and understanding, and sparked an interdisciplinary collaboration aimed at bridging the two.
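Sokol’s actual demonstration is not reproduced here, but a minimal sketch can convey the flavour of such a toy explainer. The example below assumes an occlusion-based approach with a stand-in classifier (both invented for illustration): it greys out patches of an image and records how the prediction shifts, yielding a saliency map a human can inspect.

```python
# A minimal, hypothetical occlusion-based explainer for image classification.
import numpy as np

def toy_classifier(image: np.ndarray) -> float:
    """Stand-in model: scores an image by the mean brightness of its top half."""
    return float(image[: image.shape[0] // 2].mean())

def occlusion_explanation(image, predict, patch=8):
    """Grey out one patch at a time and record how much the prediction drops;
    large drops mark the regions the model relied on most."""
    baseline = predict(image)
    saliency = np.zeros_like(image, dtype=float)
    for r in range(0, image.shape[0], patch):
        for c in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[r : r + patch, c : c + patch] = image.mean()
            saliency[r : r + patch, c : c + patch] = baseline - predict(occluded)
    return saliency

image = np.random.default_rng(1).random((32, 32))
heatmap = occlusion_explanation(image, toy_classifier)
print(heatmap.round(3))  # high values: regions that drove the prediction
```

The heatmap is only the technical half of the story; whether it actually improves a viewer’s understanding of the model is the social question the talk foregrounded.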

The workshop provided us with an invaluable opportunity to share our research with world-leading experts, establish new connections with North American colleagues and institutions, lay the groundwork for new international collaborations, and generate new knowledge on the important topic of AI and Humanity.

The workshop brought together not just a highly interdisciplinary cohort of researchers but a highly heterogeneous set of approaches and conceptual languages. Translating between these different ways of knowing was perhaps one of the most challenging and most productive aspects of the meeting. Indeed, the challenge was never ‘merely’ one of language: questions of scale, style, goals, and intentions were also at play. In this sense, the key achievement of the workshop was to provide a space where this kind of dialogue could be pursued without collapsing the important differences between such approaches. As members of an interdisciplinary research centre like the ADM+S, we were able both to contribute productively to this project and to take away important lessons to develop in our future work.

We would like to acknowledge and express our sincere thanks to the organisers of the AI and Humanity cluster for their careful work in planning the workshop, in shaping and leading our weekly discussions, and for providing the resources, time, and space for us to gather together for these enlivening conversations: Professor Helen Nissenbaum (Cornell University), Dr Thomas Krendl Gilbert (Cornell University), Dr Jake Goldenfein (Melbourne Law School), Dr Connal Parsley (University of Kent) and Assistant Professor Qian Yang (Cornell University).

We would also like to thank the staff at the Simons Institute for the Theory of Computing for hosting and making us feel welcome for the duration of the summer cluster: Research Director Professor Peter Bartlett, Visitor Services Coordinator Atiya Rashid, and Events Coordinator Elizabeth Yuen.

Finally, we wish to thank the ARC Centre of Excellence for Automated Decision-Making and Society for providing us with the incredibly generous support to travel and participate in this unique opportunity. In particular, a special acknowledgement must go to our Research and Training Coordinator, the incomparable Sally Storey, for going above and beyond to reach across time zones and to help us survive the absolute chaos of pandemic travel – delayed flights, missed connections, accommodation fails, and of course, catching COVID.
