Last week, one of Australia’s leading artificial intelligence (AI) researchers, Toby Walsh, warned Australia’s lack of guardrails for AI is putting young people at risk of being “sacrificed for the profits of big tech”.
Walsh’s remarks came after the government scrapped its own proposal to establish an advisory body of AI experts. Instead, the government offered its National AI Plan, which, among other things, stresses investment in data centres, telecommunications infrastructure, and workforce training.
The plan also envisages an “AI Safety Institute” (currently recruiting staff), as well as some internal AI transparency measures for the public sector. Transparency results so far have not been great.
What does it all add up to for AI regulation in Australia?
What are other countries doing?
The European Union has attracted attention for its AI Act, which already prohibits such things as using AI systems to exploit vulnerable groups or individuals. However, Europe is struggling to implement rules on high-risk AI uses that are not prohibited.
Several governments in Australia’s region are also passing AI laws, mainly to give themselves the powers to respond when they deem it necessary.
South Korea, Japan and Taiwan – none of them minor AI players – all have newly minted laws, which are meeting the expected pushback from industry.
Not everyone has comprehensive rules
There are countries without any kind of comprehensive AI regulation, including the United States and the United Kingdom.
In the US, President Donald Trump has even prohibited most state-based regulation in relation to private AI uses. Despite the anti-safeguards language, the government has quietly retained strong safeguards for federal use of AI.
The UK has followed an even more erratic path, only to end up in a similar place to Australia. Unable to decide what to do, it has tried to provide technical (non-legal) safeguards. This has been done through the creation of the first AI Safety (now Security) Institute, hailed by some, derided by others.
The dilemma of control
The differences in approach between countries are not surprising. Governments face the dilemma of control described by English technology scholar David Collingridge almost 50 years ago:
“when [regulatory] change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.”
What’s more, Australia has limited regulatory clout regarding AI. It is not a significant global AI player in the way it is, for example, in mining, so its influence is limited.
Facing these uncertainties, what should Australia be doing?
Australia’s plan for AI safety
One certainty is that erratic behaviour is not a great option. We have good evidence that regulatory predictability matters for innovation.
In a recent speech, Australia’s Assistant Minister for Science, Technology and the Digital Economy, Andrew Charlton, acknowledged this:
“one of the important insurance policies we have is regulatory certainty, underpinned by clear principles with broad buy-in.”
So, what is the government’s plan?
The official plan to keep Australians safe is a section (action 7) in the National AI Plan. It argues existing Australian frameworks “can apply to AI and other emerging technologies”.