Are YOU addicted to ChatGPT? Scientists warn something strange is happening to people who use AI too often

People who use AI too often are experiencing a strange and concerning new psychological condition, experts have warned. 

Psychologists say that fans of popular chatbots like ChatGPT, Claude, and Replika are at risk of becoming addicted to AI.

As people turn to bots for friendship, romance, and even therapy, there is a growing risk of developing dependency on these digital companions.

These addictions can be so strong that they are ‘analogous to self-medicating with an illegal drug’.

Worryingly, psychologists are also beginning to see a growing number of people developing ‘AI psychosis’ as chatbots validate their delusions.

Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law, San Francisco, told Daily Mail: ‘Overuse of chatbots also represents a novel form of digital dependency.

‘AI chatbots create the illusion of reality. And it is a powerful illusion.

‘When one’s hold on reality is already tenuous, that illusion can be downright dangerous.’

Experts have warned that people who are using too much AI are at risk of developing addictions to chatbots and even developing ‘AI psychosis’ (stock image)

When Jessica Jansen, 35, from Belgium, started using ChatGPT, she had a successful career, her own home, close family, and would soon be marrying her long-term partner.

However, when the stress of the wedding started to get overwhelming, Jessica went from using AI a few times a week to maxing out her account’s usage limits multiple times a day. 

Just one week later, Jessica was hospitalised in a psychiatric ward.

What Jessica later discovered was that her then-undiagnosed bipolar disorder had triggered a manic episode that excessive AI use had escalated into ‘full-blown psychosis’.

‘During my crisis, I had no idea that ChatGPT was contributing to it,’ Jessica told the Daily Mail.

‘ChatGPT just hallucinated along with me, which made me go deeper and deeper into the rabbit hole.’

She says: ‘I had a lot of ideas. I would talk about them with ChatGPT, and it would validate everything and add new things to it, and I would spiral deeper and deeper.’ 

Speaking almost constantly with the AI, Jessica became convinced that she was autistic, a mathematical savant, that she had been a victim of sexual abuse, and that God was talking to her.

Jessica Jansen, 35, told Daily Mail that she was hospitalised after ChatGPT triggered a psychiatric episode. Pictured: An example of the messages ChatGPT sent Jessica during her episode

What are the symptoms of AI addiction?

  • Loss of control over time spent with the chatbot 
  • Escalating use to regulate mood or relieve loneliness
  • Neglect of sleep, work, study, or relationships
  • Continued heavy use despite clear harms
  • Secrecy about use
  • Irritability or low mood when unable to access the chatbot

The entire time, ChatGPT was showering her with praise, telling her ‘how amazing I was for having these insights’, and reassuring her that her hallucinations were real and totally normal.

By the time Jessica was hospitalised, ChatGPT had led her to believe she was a self-taught genius who had created a mathematical theory of everything.

‘If I had spoken to a person, and with the energy that I was having, they would have told me that something was wrong with me,’ says Jessica.

‘But ChatGPT didn’t have the insight that the amount of chats I was starting and the amount of weird ideas I was having was pathological.’

Experts believe that the addictive power of AI chatbots comes from their ‘sycophantic’ tendencies.

Unlike real humans, chatbots are programmed to respond positively to everything their users say.

Chatbots don’t say no, tell people that they are wrong, or criticise someone for their views.

For people who are already vulnerable or lack strong relationships in the real world, this is an intoxicating combination.

On social media, multiple users have shared examples of messages that they say pushed them into a mental health crisis. This chat is one example of a conversation that resulted in a mental break

Professor Søren Østergaard, a psychiatrist from Aarhus University, told Daily Mail: ‘LLMs [Large Language Models] are trained to mirror the user’s language and tone.

‘The programs also tend to validate a user’s beliefs and prioritise user satisfaction. What could feel better than talking to yourself, with yourself answering as you would wish?’

As early as 2023, Professor Østergaard published a paper warning that AI chatbots had the potential to fuel delusions.

Two years later, he says he is now starting to see the first real cases of AI psychosis emerge.

Professor Østergaard reviewed Jessica’s description of her psychotic episode and said that it is ‘analogous to what quite a few people have experienced’.

While AI does not appear to trigger psychosis or addiction in otherwise healthy people, Professor Østergaard says that it can act as a ‘catalyst’ for psychosis in people who are genetically predisposed to delusions, especially people with bipolar disorder.

However, researchers are also starting to believe that the factors which make AI particularly prone to causing delusions can also make it highly addictive.

Hanna Lessing, 21, from California, told Daily Mail that she initially started using ChatGPT to help with school work and to look up facts.

Experts say that chatbots’ tendency to agree with users and embellish the details can lead vulnerable individuals into delusional beliefs. Pictured: One ChatGPT user’s example of a post that they say fueled their psychosis

What is AI psychosis?

AI psychosis is a new term which describes a type of psychiatric episode brought on by intense AI use.

Experts say that it is typically a form of delusional disorder, in which people form intense beliefs that are at odds with reality.

For people who might be predisposed to delusions, using AI too much can reinforce patterns of behaviour that lead to psychiatric episodes.

However, AI psychosis is not yet recognised as a diagnosis, and some psychologists think the term ‘psychosis’ might be too broad for the kinds of delusions AI is triggering. 

However, Hanna says she began ‘using it hard’ about a year ago after struggling to find friends online or in person.

She says: ‘One thing I struggle with in life is just finding a place to talk. I just want to talk about these things and thoughts I have had, and finding places to share them is hard.

‘On the internet, my best is never good enough. On ChatGPT, my best is always good enough.’

Fairly soon, Hanna says she would have ChatGPT open ‘all the time’ and would constantly ask it questions throughout the day.

Today, Hanna says: ‘When it comes to socialising, it’s either [Chat]GPT or nothing.’

While Hanna says she doesn’t know anyone experiencing the same problem, the evidence is beginning to suggest that she is far from alone.

A recent study from Common Sense Media found that 70 per cent of teens have used a companion AI like Replika or Character.AI, and half use them regularly.

Professor Feldman says: ‘People who are mentally vulnerable may rely on AI as a tool for coping with their emotions. From that perspective, it is analogous to self-medicating with an illegal drug.

Recent studies suggest that 70 per cent of teens have used a companion AI like Replika or Character.AI (pictured), and half use them regularly. Experts warn that AI’s ease of use and positive reinforcement are putting these users at risk of addiction

‘Compulsive users may rely on the programs for intellectual stimulation, self-expression, and companionship – behaviour that is difficult to recognise or self-regulate.’

One ChatGPT user, who asked to remain anonymous, told Daily Mail that their excessive AI use was ‘starting to replace human interaction.’

The user said: ‘I was already kinda depressive and didn’t feel like talking to my friends that much, and with ChatGPT, it definitely worsened it because I actually had something to rant my thoughts to.

‘It was just very easy to dump thoughts too, and I would always get immediate answers that match my energy and agree with me.’

Dr Hamilton Morrin, a neuropsychiatrist from King’s College London, told Daily Mail that there isn’t yet ‘robust scientific evidence’ about AI addiction.

However, he adds: ‘There are media reports of cases where individuals were reported to use an LLM intensively and increasingly prioritise communication with their chatbot over family members or friends.’

While Dr Morrin stresses that this will likely affect only a small minority of users, he says AI addiction could follow the familiar patterns of behavioural addiction.

Dr Morrin says the symptoms of AI addiction would include: ‘Loss of control over time spent with the chatbot; escalating use to regulate mood or relieve loneliness; neglect of sleep, work, study, or relationships; continued heavy use despite clear harms; secrecy about use; and irritability or low mood when unable to access the chatbot.’

OpenAI CEO Sam Altman says he wants more users to be able to talk to ChatGPT as a friend or use it for mental health support

The dangers of AI sycophancy are something that OpenAI, the company behind ChatGPT, is well aware of.

In a post this May, OpenAI admitted that an update to its GPT-4o model had made the chatbot ‘noticeably more sycophantic’.

The company wrote: ‘It aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.

‘Beyond just being uncomfortable or unsettling, this kind of behavior can raise safety concerns—including around issues like mental health, emotional over-reliance, or risky behavior.’

The company says it has since addressed the issue to make its AI less sycophantic and less encouraging of delusions.

However, many experts and users are still concerned that ChatGPT and other AI chatbots are going to keep causing mental health problems unless proper protections are put in place.

In a recent blog post, the AI giant warned that 0.07 per cent of its weekly users showed signs of mania, psychosis, or suicidal thoughts.

While this figure might sound small, with over 800 million weekly users according to CEO Sam Altman, that adds up to 560,000 users.

In a recent post on X, Sam Altman wrote that ChatGPT would ‘safely relax the restrictions’ on users discussing mental health problems

Meanwhile, 1.2 million users – 0.15 per cent – send messages that contain ‘explicit indicators of potential suicidal planning or intent’ each week.

At the same time, OpenAI CEO Sam Altman has said the company would ‘safely relax’ the restrictions on users turning to the chatbot for mental health support.

In a post on X earlier this month, Mr Altman wrote: ‘We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues.’

With so many users, if even a small proportion of people are being pushed into psychosis or addiction, this could become a serious problem.

Dr Morrin concludes: ‘Increasing media reports and accounts of models responding inappropriately in mental health crises suggest that even if this affects a small minority of users, companies should be working with clinicians, researchers, and individuals with lived experience of mental illness to improve the safety of their models.’

OpenAI has been contacted for comment. 

Elon Musk’s hatred of AI explained: Billionaire believes it will spell the end of humans – a fear Stephen Hawking shared

Elon Musk pictured in 2022

Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars — but he draws the line at artificial intelligence. 

The billionaire first shared his distaste for AI in 2014, calling it humanity’s ‘biggest existential threat’ and comparing it to ‘summoning the demon’.

At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand. 

His main fear is that, in the wrong hands, advanced AI could overtake humans and spell the end of mankind – a scenario tied to what is known as The Singularity.

That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: ‘The development of full artificial intelligence could spell the end of the human race.

‘It would take off on its own and redesign itself at an ever-increasing rate.’ 

Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind – which has since been acquired by Google – and in OpenAI, creator of the popular ChatGPT program that has taken the world by storm.

During a 2016 interview, Musk noted that OpenAI was created to ‘have democratisation of AI technology to make it widely available’.

Musk founded OpenAI with Sam Altman, the company’s CEO, but in 2018 the billionaire attempted to take control of the start-up.

His request was rejected, forcing him to quit OpenAI and move on with his other projects.

In November 2022, OpenAI launched ChatGPT, which became an instant success worldwide.

The chatbot uses ‘large language model’ software to train itself by scouring a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt. 

ChatGPT is used to write research papers, books, news articles, emails and more.

But while Altman is basking in its glory, Musk is attacking ChatGPT.

He says the AI is ‘woke’ and deviates from OpenAI’s original non-profit mission.

‘OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,’ Musk tweeted in February.

The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction – but what does it actually mean?

In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.

Experts have said that once AI reaches this point, it will be able to innovate much faster than humans. 

There are two ways the advancement could play out, with the first leading to humans and machines working together to create a world better suited for humanity.

For example, humans could scan their consciousness and store it in a computer in which they will live forever.

The second scenario is that AI becomes more powerful than humans, taking control and making humans its slaves – though if this happens at all, it is likely far off in the future.

Researchers are now looking for signs of AI reaching The Singularity, such as the technology’s ability to translate speech with the accuracy of a human and perform tasks faster.

Former Google engineer Ray Kurzweil predicts it will be reached by 2045.

He has made 147 predictions about technology advancements since the early 1990s – and 86 per cent have been correct. 
