Artificial intelligence (AI) is being touted as a way to boost lagging productivity growth.
The AI productivity push has some powerful multinational backers: the tech companies that make AI products and the consulting firms that sell AI-related services. Governments are also showing interest.
Next week, the federal government will hold a roundtable on economic reform, where AI will be a key part of the agenda.
However, the evidence that AI actually enhances productivity is far from clear.
To learn more about how AI is being used and procured in real organisations, we are interviewing senior bureaucrats in the Victorian Public Service. Our research is ongoing, but results from the first 12 participants already point to some shared key concerns.
Our interviewees are bureaucrats who buy, use and administer AI services. They told us that increasing productivity through AI requires difficult, complex and expensive organisational groundwork. The results are hard to measure, and AI use may create new risks and problems for workers.
Introducing AI can be slow and expensive
Public service workers told us introducing AI tools to existing workflows can be slow and expensive. Finding time and resources to research products and retrain staff presents a real challenge.
Not all organisations approach AI the same way. We found well-funded entities can afford to run “proof of concept” trials of different AI uses. Smaller ones with fewer resources struggle with the costs of implementing and maintaining AI tools.
In the words of one participant:
It’s like driving a Ferrari on a smaller budget […] Sometimes those solutions aren’t fit for purpose for those smaller operations, but they’re bloody expensive to run, they’re hard to support.
‘Data is the hard work’
Making an AI system useful may also involve a lot of groundwork.
Off-the-shelf AI tools such as Copilot and ChatGPT can make some relatively straightforward tasks easier and faster. Extracting information from large sets of documents or images is one example, and transcribing and summarising meetings is another. (Though our findings suggest staff may feel uncomfortable with AI transcription, particularly in internal and confidential situations.)
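As a rough illustration of how simple these “straightforward” uses can be at the code level, here is a minimal sketch of meeting summarisation against a commercial API. It uses the OpenAI Python SDK; the model choice and prompt are our illustrative assumptions, not anything our participants described.

```python
# A minimal sketch of meeting summarisation via a commercial LLM API.
# Assumes the OpenAI Python SDK is installed and the OPENAI_API_KEY
# environment variable is set; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()

def summarise_meeting(transcript: str) -> str:
    """Ask the model for a short dot-point summary of a meeting transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarise this meeting transcript in five dot points."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

Note that even this simple call sends the full transcript to an external server, a point we return to below.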
But more complex use cases, such as call centre chatbots or internal information retrieval tools, involve running an AI model over internal data describing business details and policies. Good results will depend on high-quality, well-structured data, and organisations may be liable for mistakes.
However, few organisations have invested enough in the quality and structure of their data, and without this foundational work, commercial AI products won't perform as advertised. As one person told us, “data is the hard work”.
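To see why data quality matters so much, consider a minimal sketch of the retrieval step behind a hypothetical internal question-answering tool. The records and function names are invented; production systems use embedding models and vector databases, but the dependence on clean, current records is the same.

```python
# A minimal sketch of the retrieval step behind an internal Q&A tool.
# Hypothetical records; the answer can only be as good as the data.
import re
from dataclasses import dataclass

@dataclass
class PolicyRecord:
    title: str
    body: str
    last_reviewed: str  # empty string means the record's currency is unknown

RECORDS = [
    PolicyRecord("Travel approvals",
                 "Interstate travel needs director sign-off.", "2024-11-01"),
    PolicyRecord("Travel approval (old)",
                 "Travel needs branch-head sign-off.", ""),  # stale duplicate
]

def tokenize(text: str) -> set[str]:
    """Lowercase and split text into a set of words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str) -> list[PolicyRecord]:
    """Naive keyword retrieval: return every record sharing a word with the query."""
    words = tokenize(query)
    return [r for r in RECORDS if words & tokenize(r.title + " " + r.body)]

# Both the current and the stale record match the query, so a model
# answering from these passages can confidently cite outdated policy.
print(retrieve("who approves travel?"))
```

Because the stale record matches just as well as the current one, whatever model answers from these passages can cite outdated policy with full confidence. Fixing that is data curation work, not AI work.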
Privacy and cybersecurity risks are real
Using AI creates complex data flows between an organisation and servers controlled by giant multinational tech companies. Large AI providers promise these data flows comply with laws about, for instance, keeping organisational and personal data in Australia and not using it to train their systems.
However, we found users were cautious about the reliability of these promises. There was also considerable concern that vendors could quietly add new AI functions to existing products without organisations knowing. Using those capabilities may create new data flows that bypass the necessary risk assessments and compliance checks.
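One practical response is to audit which external AI endpoints an organisation's software actually contacts, so that new data flows are at least visible. Here is a purely hypothetical sketch of such a check; the host names, log format and approval list are all invented for illustration.

```python
# A hypothetical sketch of an outbound-traffic audit: flag calls to AI
# endpoints that were never risk-assessed. All names are invented.
APPROVED_AI_HOSTS = {
    "api.openai.example",       # assessed: data-residency contract in place
    "copilot.internal.example", # assessed: routed via internal proxy
}

def flag_unassessed_calls(outbound_log: list[dict]) -> list[dict]:
    """Return log entries whose destination host is not on the approved list."""
    return [entry for entry in outbound_log
            if entry["host"] not in APPROVED_AI_HOSTS]

log = [
    {"host": "api.openai.example", "app": "minutes-summariser"},
    {"host": "new-ai-feature.vendor.example", "app": "office-suite"},  # added by a product update
]

for entry in flag_unassessed_calls(log):
    print(f"Unassessed AI data flow: {entry['app']} -> {entry['host']}")
```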