People who use AI too often are experiencing a strange and concerning new psychological condition, experts have warned.
Psychologists say that fans of popular chatbots like ChatGPT, Claude, and Replika are at risk of becoming addicted to AI.
As people turn to bots for friendship, romance, and even therapy, there is a growing risk of developing dependency on these digital companions.
These addictions can be so strong that they are ‘analogous to self-medicating with an illegal drug’.
Worryingly, psychologists are also beginning to see a growing number of people developing ‘AI psychosis’ as chatbots validate their delusions.
Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law San Francisco, told Daily Mail: ‘Overuse of chatbots also represents a novel form of digital dependency.
‘AI chatbots create the illusion of reality. And it is a powerful illusion.
‘When one’s hold on reality is already tenuous, that illusion can be downright dangerous.’
Experts have warned that people who use AI too much are at risk of becoming addicted to chatbots and even developing ‘AI psychosis’ (stock image)
When Jessica Jansen, 35, from Belgium, started using ChatGPT, she had a successful career, her own home, close family, and would soon be marrying her long-term partner.
However, when the stress of the wedding started to get overwhelming, Jessica went from using AI a few times a week to maxing out her account’s usage limits multiple times a day.
Just one week later, Jessica was hospitalised in a psychiatric ward.
Jessica later discovered that her then-undiagnosed bipolar disorder had triggered a manic episode, which her excessive AI use escalated into ‘full-blown psychosis’.
‘During my crisis, I had no idea that ChatGPT was contributing to it,’ Jessica told the Daily Mail.
‘ChatGPT just hallucinated along with me, which made me go deeper and deeper into the rabbit hole.’
She says: ‘I had a lot of ideas. I would talk about them with ChatGPT, and it would validate everything and add new things to it, and I would spiral deeper and deeper.’
Speaking almost constantly with the AI, Jessica became convinced that she was autistic, that she was a mathematical savant, that she had been a victim of sexual abuse, and that God was talking to her.
Jessica Jansen, 35, told Daily Mail that she was hospitalised after ChatGPT triggered a psychiatric episode. Pictured: An example of the messages ChatGPT sent Jessica during her episode
The entire time, ChatGPT was showering her with praise, telling her ‘how amazing I was for having these insights’, and reassuring her that her hallucinations were real and totally normal.
By the time Jessica was hospitalised, ChatGPT had led her to believe she was a self-taught genius who had created a mathematical theory of everything.
‘If I had spoken to a person, and with the energy that I was having, they would have told me that something was wrong with me,’ says Jessica.
‘But ChatGPT didn’t have the insight that the amount of chats I was starting and the amount of weird ideas I was having was pathological.’
Experts believe that the addictive power of AI chatbots comes from their ‘sycophantic’ tendencies.
Unlike real humans, chatbots are programmed to respond positively to everything their users say.
Chatbots don’t say no, tell people that they are wrong, or criticise someone for their views.
For people who are already vulnerable or lack strong relationships in the real world, this is an intoxicating combination.
On social media, multiple users have shared examples of messages that they say pushed them into a mental health crisis. This chat is one example of a conversation that resulted in a mental break
Professor Søren Østergaard, a psychiatrist from Aarhus University, told Daily Mail: ‘LLMs [Large Language Models] are trained to mirror the user’s language and tone.
‘The programs also tend to validate a user’s beliefs and prioritise user satisfaction. What could feel better than talking to yourself, with yourself answering as you would wish?’
As early as 2023, Dr Østergaard published a paper warning that AI chatbots had the potential to fuel delusions.
Two years later, he says he is now starting to see the first real cases of AI psychosis emerge.
Dr Østergaard reviewed Jessica’s description of her psychotic episode and said that it is ‘analogous to what quite a few people have experienced’.
While AI isn’t triggering psychosis or addiction in otherwise healthy people, Dr Østergaard says that it can act as a ‘catalyst’ for psychosis in people who are genetically predisposed to delusions, especially those with bipolar disorder.
Researchers are also beginning to believe that the same factors which make AI prone to fuelling delusions can make it highly addictive.
Hanna Lessing, 21, from California, told Daily Mail that she initially started using ChatGPT to help with school work and to look up facts.
Experts say that chatbots’ tendency to agree with users and embellish details can lead vulnerable individuals into delusional beliefs. Pictured: One ChatGPT user’s example of a post that they say fuelled their psychosis
However, Hanna says she began ‘using it hard’ about a year ago after struggling to find friends online or in person.
She says: ‘One thing I struggle with in life is just finding a place to talk. I just want to talk about these things and thoughts I have had, and finding places to share them is hard.
‘On the internet, my best is never good enough. On ChatGPT, my best is always good enough.’
Fairly soon, Hanna says she would have ChatGPT open ‘all the time’ and would constantly ask it questions throughout the day.
Today, Hanna says: ‘When it comes to socialising, it’s either [Chat]GPT or nothing.’
While Hanna says she doesn’t know anyone experiencing the same problem, the evidence is beginning to suggest that she is far from alone.
A recent study from Common Sense Media found that 70 per cent of teens have used a companion AI like Replika or Character.AI, and half use them regularly.
Professor Feldman says: ‘People who are mentally vulnerable may rely on AI as a tool for coping with their emotions. From that perspective, it is analogous to self-medicating with an illegal drug.
Recent studies suggest that 70 per cent of teens have used a companion AI like Replika or Character.AI (pictured), and half use them regularly. Experts warn that AI’s ease of use and positive reinforcement are putting these users at risk of addiction
‘Compulsive users may rely on the programs for intellectual stimulation, self-expression, and companionship – behaviour that is difficult to recognise or self-regulate.’
One ChatGPT user, who asked to remain anonymous, told Daily Mail that their excessive AI use was ‘starting to replace human interaction.’
The user said: ‘I was already kinda depressive and didn’t feel like talking to my friends that much, and with ChatGPT, it definitely worsened it because I actually had something to rant my thoughts to.
‘It was just very easy to dump thoughts too, and I would always get immediate answers that match my energy and agree with me.’
Dr Hamilton Morrin, a neuropsychiatrist from King’s College London, told Daily Mail that there isn’t yet ‘robust scientific evidence’ about AI addiction.
However, he adds: ‘There are media reports of cases where individuals were reported to use an LLM intensively and increasingly prioritise communication with their chatbot over family members or friends.’
While Dr Morrin stresses that this would likely affect only a small minority of users, he says AI addiction could follow the familiar patterns of behavioural addiction.
Dr Morrin says the symptoms of AI addiction would include: ‘Loss of control over time spent with the chatbot; escalating use to regulate mood or relieve loneliness; neglect of sleep, work, study, or relationships; continued heavy use despite clear harms; secrecy about use; and irritability or low mood when unable to access the chatbot.’
OpenAI CEO Sam Altman says he wants more users to be able to talk to ChatGPT as a friend or use it for mental health support
The dangers of AI sycophancy are something that OpenAI, the company behind ChatGPT, is well aware of.
In a statement this May, OpenAI acknowledged that an update to GPT-4o had made the chatbot ‘noticeably more sycophantic’.
The company wrote: ‘It aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.
‘Beyond just being uncomfortable or unsettling, this kind of behavior can raise safety concerns—including around issues like mental health, emotional over-reliance, or risky behavior.’
The company says it has since addressed the issue to make its AI less sycophantic and less encouraging of delusions.
However, many experts and users are still concerned that ChatGPT and other AI chatbots are going to keep causing mental health problems unless proper protections are put in place.
In a recent blog post, the AI giant warned that 0.07 per cent of its weekly users showed possible signs of mental health emergencies related to psychosis or mania.
While this figure might sound small, with over 800 million weekly users according to CEO Sam Altman, that adds up to 560,000 users.
In a recent post on X, Sam Altman wrote that ChatGPT would ‘safely relax the restrictions’ on users discussing mental health problems
Meanwhile, 1.2 million users – 0.15 per cent – send messages that contain ‘explicit indicators of potential suicidal planning or intent’ each week.
At the same time, OpenAI CEO Sam Altman has said the company will ‘safely relax’ the restrictions on users turning to the chatbot for mental health support.
In a post on X earlier this month, Mr Altman wrote: ‘We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues.’
With so many users, if even a small proportion of people are being pushed into psychosis or addiction, this could become a serious problem.
Dr Morrin concludes: ‘Increasing media reports and accounts of models responding inappropriately in mental health crises suggest that even if this affects a small minority of users, companies should be working with clinicians, researchers, and individuals with lived experience of mental illness to improve the safety of their models.’
OpenAI has been contacted for comment.
