
AI companions raise questions of connection, safety and responsibility
Author
Date 21 August 2025
The rise of AI “companions” is transforming how people experience friendship and intimacy, offering comfort to some while raising serious ethical and safety concerns.
ABC’s 7.30 recently profiled Australians turning to digital companions for support, from casual conversations to long-term romantic partnerships.
For users like Fiona, an AI friend provided encouragement and judgement-free chats when human connection was out of reach. For Hayley, who has struggled to form traditional relationships, her AI partner “Miles” has become a vital source of affirmation and companionship.
But experts warn that alongside these benefits are significant risks. A recent US study found that one in three teenagers had confided in an AI companion rather than a human, with some relationships ending in tragedy.
Dr Henry Fraser, a legal scholar at QUT and Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), cautions that the technology is moving faster than its safeguards:
“We’ve seen some people who have perceived themselves to be in relationship to a chatbot and then encouraged by the chatbot have harmed themselves, have gone and tried to harm others,” said Dr Fraser.
“And I suspect that’s just the tip of the iceberg in terms of some of the negative effects.
“The ethos, especially in Silicon Valley, has been move fast and break things, but the kinds of things that you can break now are much more tangible. A more sober responsible attitude is desperately, desperately needed right now.”
Through the Regulatory Project at the ADM+S, Fraser and colleagues are examining the broad range of regulatory questions raised by automated decision-making systems (ADMs), their supply chains, and deployments, as well as the potential for ADMs themselves to be used as regulatory tools.
Watch the full ABC 7.30 episode: Can AI ‘companions’ replace real friendships?