Dr José-Miguel Bello y Villarino (left) and Dr Henry Fraser (third from right) with colleagues from ADM+S, the European Centre for Algorithmic Transparency and the European Commission’s Joint Research Centre

Research informs leading AI risk management framework

Author  Kathy Nickels
Date 20 December 2023

Research undertaken by Dr Henry Fraser and Dr José-Miguel Bello y Villarino at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) has been cited in the US National Institute of Standards and Technology’s (NIST) Artificial Intelligence (AI) Risk Management Framework (RMF) Playbook, a leading standard on risk management for AI in the US and around the world.

The AI RMF is a go-to resource for AI developers, procurers and users to guide risk assessment and risk management. It is intended to promote safe, secure and trustworthy AI. The NIST AI RMF will also be the foundation of the approach to generative AI in the US, following a mandate in the US President’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence of 30 October 2023 to develop a companion resource for generative AI.

As a consensus resource, NIST’s AI Risk Management Framework was developed in an open, transparent, multidisciplinary, and multistakeholder manner over an 18-month time period and in collaboration with more than 240 contributing organisations from private industry, academia, civil society, and government.

“Where residual risk resides”, a working paper by Dr Henry Fraser (QUT) and Dr José-Miguel Bello y Villarino (University of Sydney), is cited several times in NIST’s AI RMF Playbook under the “Manage” function, which deals with how identified risks from AI systems should be managed.

The working paper analysed the approach to risk management in Europe’s AI Act, especially the requirement for risks from AI systems to be rendered “acceptable” through risk management. It argued that the most workable way to judge risk acceptability is through cost-benefit analysis, but recognised that this would unavoidably be a value-laden exercise.

“Risks and benefits from AI systems are not like risks from other products like toys or boats”, said Dr Fraser. “They are multi-dimensional, encompassing safety, human rights, the environment; and systems can be detrimental to some rights and interests while at the same time benefiting others”.

The paper maps the trade-offs, uses and limitations of various cost-benefit approaches in judging risks from AI systems. It proposes original and creative ways to balance competing interests and values in AI risk management, including innovation, efficiency, distributive justice and human rights.

The paper has already been praised in academic circles over the past year, receiving the inaugural Scotiabank AI + Regulation Emerging Scholar Award. Dr Fraser and Dr Bello y Villarino are currently developing the ideas from the paper into a book on AI risk.
