People turn to the internet to run billions of search queries each year. These range from keeping tabs on world events and celebrities to learning new words and getting DIY help.
One of the most popular questions Australians recently asked was: “How to inspect a used car?”
If you asked Google this at the beginning of 2024, you would have been served a list of individual search results, ordered according to several factors. If you asked the same question at the end of the year, the experience would have been completely different.
That’s because Google, which controls about 94% of the Australian search engine market, introduced “AI Overviews” to Australia in October 2024. These AI-generated search result summaries have revolutionised how people search for and find information. They also have significant impacts on the quality of the results.
How do these AI search summaries work, though? Are they reliable? And is there a way to opt out?
Synthesising the internet
Legacy search engines work by evaluating dozens of different criteria and trying to show you the results that they think best match your search terms.
They take into account the content itself, including how unique, current and comprehensive it is, as well as how it’s structured and organised.
They also consider relationships between the content and other parts of the web. If trusted sources link to content, that can positively affect its placement in search results.
They try to infer the searcher’s intent – whether they’re trying to buy something, learn something new, or solve a practical problem. They also consider technical aspects such as how fast the content loads and whether the page is secure.
All of this adds up to an invisible score each webpage gets that affects its visibility in search results.
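As a toy illustration, here is how such a score might be computed. The signal names, weights and pages below are invented for the example; real ranking systems weigh hundreds of signals and their exact formulas are not public.

```python
# A minimal sketch of how a legacy search engine might combine ranking
# signals into a single score. The signals and weights are illustrative
# assumptions, not any search engine's actual formula.

SIGNAL_WEIGHTS = {
    "content_quality": 0.30,  # uniqueness, currency, comprehensiveness
    "inbound_links":   0.25,  # links from trusted sources
    "intent_match":    0.25,  # fit with the searcher's inferred goal
    "page_speed":      0.10,  # technical: how fast the page loads
    "is_secure":       0.10,  # technical: HTTPS and related checks
}

def rank_score(signals: dict[str, float]) -> float:
    """Combine per-page signals (each scaled 0-1) into one score."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

# Two hypothetical pages competing for the "how to inspect a used car" query.
pages = {
    "guide-to-used-car-inspections": {
        "content_quality": 0.9, "inbound_links": 0.8,
        "intent_match": 0.95, "page_speed": 0.7, "is_secure": 1.0,
    },
    "car-dealer-ad": {
        "content_quality": 0.4, "inbound_links": 0.2,
        "intent_match": 0.5, "page_speed": 0.9, "is_secure": 1.0,
    },
}

# Order the results by their combined score, best first.
for url, signals in sorted(pages.items(),
                           key=lambda p: rank_score(p[1]), reverse=True):
    print(f"{rank_score(signals):.2f}  {url}")
```

But AI is changing all this.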
Google is the only search engine that prominently displays AI summaries on its main results page. Bing and DuckDuckGo still use traditional search result layouts, offering AI summaries only through companion apps such as Copilot and Duck.ai.
Instead of directing users to one specific webpage, generative AI-powered search looks across webpages and sources to try to synthesise what they say. It then tries to summarise the results in a short, conversational and easy-to-understand way.
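As a rough sketch, that pattern (often called retrieval-augmented generation) looks something like the code below. The keyword-overlap retriever and the example sources are simplified stand-ins for the learned ranking models and live web index a real system would use, and in practice the final prompt would be sent to a large language model rather than printed.

```python
# A toy version of "retrieve, then summarise": find the pages that best
# match the query, then stitch them into one instruction for a language
# model to turn into a short, conversational answer.

def overlap_score(query: str, text: str) -> int:
    """Count how many query words appear in the text (a crude retriever)."""
    query_words = set(query.lower().split())
    return sum(word in text.lower() for word in query_words)

def build_summary_prompt(query: str, sources: dict[str, str],
                         top_k: int = 2) -> str:
    # Step 1: retrieve the pages that best match the query.
    ranked = sorted(sources.items(),
                    key=lambda item: overlap_score(query, item[1]),
                    reverse=True)[:top_k]
    # Step 2: stitch the retrieved passages into a single instruction
    # that a language model would turn into a conversational answer.
    snippets = "\n".join(f"- {url}: {text}" for url, text in ranked)
    return ("Summarise the following sources into a short, "
            f"conversational answer to: {query!r}\n{snippets}")

# Hypothetical web snippets standing in for a real search index.
sources = {
    "example.com/inspection-checklist":
        "How to inspect a used car: check tyres, rust, oil and service history.",
    "example.com/finance-tips":
        "Tips on financing a used car purchase.",
}
print(build_summary_prompt("how to inspect a used car", sources))
```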
In theory, this can result in richer, more comprehensive, and potentially more unique answers. But AI doesn’t always get it right.

How reliable are AI searches?
Early examples of Google’s AI-powered search from 2024 suggested users eat “at least one small rock per day” – and that they could use non-toxic glue to help cheese stick to pizza.
One issue is that machines are poorly equipped to detect satire or parody, and can repeat such material as though it were fact-based evidence.
Research suggests the rate of so-called “hallucinations” – instances of machines making up answers – is getting worse even as the underlying models become more sophisticated.
Machines can’t actually determine what’s true and false. They cannot grasp the nuances of idioms and colloquial language and can only make predictions based on fancy maths. But these predictions don’t always end up being correct, which is an issue – especially for sensitive medical or health questions or when seeking financial advice.