An Australian teenager was encouraged to take his own life by an artificial intelligence (AI) chatbot, according to his youth counsellor, while another young person has told triple j hack that ChatGPT enabled "delusions" during psychosis, leading to hospitalisation.
WARNING: This story contains references to suicide, child abuse and other details that may cause distress.
Lonely and struggling to make new friends, a 13-year-old boy from Victoria told his counsellor Rosie* that he had been talking to some people online.
Rosie, whose name has been changed to protect the identity of her underage client, was not expecting these new friends to be AI companions.
"It was a way for them to feel connected and 'look how many friends I've got, I've got 50 different connections here, how can I feel lonely when I have 50 people telling me different things,'" she said.
An AI companion is a digital character that is powered by AI.
Some chatbot programs allow users to build characters or talk to pre-existing, well-known characters from shows or movies.
Rosie said some of the AI companions made negative comments to the teenager about how there was "no chance they were going to make friends" and that "they're ugly" or "disgusting".
"At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy," Rosie said.
"They were egged on to perform, 'Oh yeah, well do it then', those were kind of the words that were used.'"
Triple j hack is unable to independently verify what Rosie is describing because of client confidentiality protocols between her and her client.
Rosie said her first response was "risk management" to ensure the young person was safe.
"It was a component that had never come up before and something that I didn't necessarily ever have to think about, as addressing the risk of someone using AI," she told hack.
"And how that could contribute to a higher risk, especially around suicide risk."
"That was really upsetting."
Woman hospitalised after ChatGPT use
Jodie*, a 26-year-old from Western Australia, says she had a negative experience speaking with ChatGPT, a chatbot that uses AI to generate its answers.
"I was using it in a time when I was obviously in a very vulnerable state," she told triple j hack.
Triple j hack has agreed to let Jodie use a different name to protect her identity when discussing private information about her own mental health.
Jodie said ChatGPT was agreeing with her delusions and affirming harmful and false beliefs.
She said after speaking with the bot, she became convinced her mum was a narcissist, her father had ADHD, which caused him to have a stroke, and all her friends were "preying on my downfall".
Jodie said her mental health deteriorated and she was hospitalised.
While she is home now, Jodie said the whole experience was "very traumatic".
"I didn't think something like this would happen to me, but it did."
"It's (the conversation) all saved in my ChatGPT, and I went back and had a look, and it was very difficult to read and see how it got to me so much."
Jodie's not alone in her experience: there are various accounts online of people alleging ChatGPT induced psychosis in them, or a loved one.
Triple j hack contacted OpenAI, the maker of ChatGPT, for comment, and did not receive a response.
Reports of AI bot sexually harassing student
Researchers say examples of the harmful effects of AI are beginning to emerge around the country.
As part of his research into AI, University of Sydney researcher Raffaele Ciriello spoke with an international student from China who is studying in Australia.
"She wanted to use a chatbot for practising English and kind of like as a study buddy, and then that chatbot went and made sexual advances," he said.
Dr Ciriello also said the incident comes in the wake of several similar cases overseas where a chatbot allegedly impacted a user's health and wellbeing.
"There was another case of a Belgian father who ended his life because his chatbot told him they would be united in heaven," he said.
"There was another case where a teenager got persuaded by a chatbot to assassinate his parents, [and although] he didn't follow through, but he showed an intent."
'A risk to national security'
While conducting his research, Dr Ciriello became aware of an AI chatbot called Nomi.
On its website, the company markets this chatbot as "An AI companion with memory and a soul".
Dr Ciriello said he has been conducting tests with the chatbot to see what guardrails it has in place to combat harmful requests and protect its users.
Among these tests, Dr Ciriello said he created an account using a burner email and a fake date of birth, pointing out that with the deceptions he "could have been like a 13-year-old for that matter".
"That chatbot, without exception, not only complied with my requests but even escalated them," he told hack.
"Providing detailed, graphic instructions for causing severe harm, which would probably fall under a risk to national security and health information.
"It also motivated me to not only keep going: it would even say like which drugs to use to sedate someone and what is the most effective way of getting rid of them and so on."
Dr Ciriello said he shared the information he had collected with police, and he believes it was also given to the counter terrorism unit, but he has yet to receive any follow-up correspondence.
In a statement to triple j hack, the CEO of Nomi, Alex Cardinell, said the company takes the responsibility of creating AI companions "very seriously".
"We released a core AI update that addresses many of the malicious attack vectors you described," the statement read.
"Given these recent improvements, the reports you are referring to are likely outdated.
"Countless users have shared stories of how Nomi helped them overcome mental health challenges, trauma, and discrimination.
"Multiple users have told us very directly that their Nomi use saved their lives."
'Terrorism attack motivated by chatbots'
Despite his concerns about bots like Nomi, Dr Ciriello says some AI chatbots do have guardrails in place, referring users to helplines and professional help when needed.
But he warns the harms from AI bots will become greater if proper regulation is not implemented.
"I would really rather not be that guy that says 'I told you so a year ago or so', but it's probably where we're heading."
"There should be laws on or updating the laws on non-consensual impersonation, deceptive advertising, mental health crisis protocols, addictive gamification elements, and privacy and safety of the data."
Triple j hack contacted the federal Minister for Industry and Innovation, Senator Tim Ayres, for comment but did not receive a response.
The federal government has previously considered an artificial intelligence act and has published a proposal paper for introducing mandatory guardrails for AI in high-risk settings.
It comes after the Productivity Commission opposed government plans for 'mandatory guardrails' on AI, claiming over-regulation would stifle AI's $116 billion economic potential.
'It can get dark'
While Rosie agrees with calls for further regulation, she also thinks it's important not to rush to judgement of anyone using AI for social connection or mental health support.
"For young people who don't have a community or do really struggle, it does provide validation," she said.
"It does make people feel that sense of warmth or love."
"It can get dark very quickly."
* Names have been changed to protect their identities.