Australian Government proposes ‘Mandatory Guardrails’ for High-Risk AI
Author ADM+S Centre
Date 6 September 2024
The Australian Government has proposed ‘mandatory guardrails’ for high-risk AI, including human oversight and mechanisms to challenge AI decisions, in a discussion paper developed by the Department of Industry, Science and Resources with the assistance of a leading AI Expert Group.
Professor Kimberlee Weatherall from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at the University of Sydney is a member of the AI Expert Group.
“I’m really pleased to see this paper out now for discussion, with the government recognising that to make the most of the potential benefits of AI, we also need to ensure it is used responsibly”, said Professor Kimberlee Weatherall.
The Proposals paper for introducing mandatory guardrails for AI in high-risk settings, released by Minister for Industry and Science Ed Husic, outlines the options the Australian Government is considering to mandate guardrails on those developing and deploying AI in high-risk settings in Australia.
“Australians know AI can do great things, but people want to know there are protections in place if things go off the rails,” said the Minister for Industry and Science Ed Husic.
The proposal takes a risk-based approach, with an emphasis on measures including testing, transparency and accountability, consistent with developments in other jurisdictions.
It includes the following key elements:
- A proposed definition of high-risk AI.
- Ten proposed mandatory guardrails.
- Three regulatory options to mandate these guardrails.
The three regulatory options are:
- Adopting the guardrails within existing regulatory frameworks as needed.
- Introducing new framework legislation to adapt existing regulatory frameworks across the economy.
- Introducing a new cross-economy AI-specific law (for example, an Australian AI Act).
The AI Expert Group, which led the development of this proposal, was appointed earlier this year following an undertaking in the Government’s interim response to the Safe and Responsible AI in Australia report.
The group includes members from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), Professor Kimberlee Weatherall, Professor Nicolas Suzor and Professor Jeannie Paterson, as well as Bill Simpson-Young from ADM+S collaborating organisation Gradient.
Consultation on the Proposals paper for introducing mandatory guardrails for AI in high-risk settings is open for four weeks, closing 5pm AEST on Friday 4 October 2024. The Government has also released a Voluntary AI Safety Standard designed to help businesses ensure safe and responsible use of AI.
For more information on the Proposals Paper, including how to have your say, go to consult.industry.gov.au/ai-regulatory-guardrails.