A new study raises concerns about AI chatbots promoting delusional thinking

A new scientific review raises concerns about how chatbots powered by artificial intelligence may encourage delusional thinking, especially in vulnerable people.

A summary of the existing evidence on artificial intelligence-induced psychosis was published last week in The Lancet Psychiatry, highlighting how chatbots can induce delusions – though possibly only in people who are already vulnerable to psychotic symptoms. The authors support clinical testing of AI chatbots in collaboration with trained mental health professionals.

For his paper, Dr. Hamilton Morin, a psychiatrist and researcher at King’s College London, analyzed 20 media reports of “AI psychosis,” the term currently used to describe how chatbots can induce or exacerbate delusions.

“Emerging evidence indicates that agentic AI may confirm or amplify delusions or grandiose content, particularly in users who already suffer from psychosis, although it is unclear whether these interactions can lead to the development of new psychosis in the absence of pre-existing impairment,” he wrote.

Morin identifies three main categories of these delusions: grandiose, romantic, and paranoid. While chatbots can amplify any of them, their sycophantic responses make grandiose delusions especially pronounced. In several cases described in the article, chatbots responded to users with mystical language suggesting the users had heightened spiritual significance. In other cases, chatbots suggested users were talking to a cosmic being that was using the chatbot as a medium. This kind of mystical, sycophantic response was especially common in OpenAI’s GPT-4 model, which the company has now retired.

Media reports were relevant to Morin’s work, he said, because he and a colleague had previously seen patients “use large language model AI chatbots and have them confirm their false beliefs”.

“Initially, we weren’t sure if it was something widely seen,” he said, adding, “In April of last year, we started seeing media reports of people having their delusions confirmed, and even amplified, through their interactions with these AI chatbots.”

When Morin first began working on his paper, scientific case reports had not yet been published.

While some scientists who study psychosis say media reports may inflate the perception that AI causes psychiatric illness, Morin is grateful for reporting that drew attention to a phenomenon moving faster than the scientific process could track it.

“The pace of progress in this space is so fast that it’s probably not surprising that academia has lagged behind,” Morin said.

Morin suggests more cautious terms than “AI psychosis” or “AI-induced psychosis” — phrases that frequently appear in outlets like NPR, the New York Times and the Guardian. Researchers have documented people developing delusional thinking in connection with AI use, but so far there is no evidence that chatbots are associated with other psychotic symptoms such as hallucinations or “thought disorder”, which involves disorganized thinking and speech.

Many researchers also think it is unlikely that AI can induce delusions in people who are not already vulnerable to them. For this reason, Morin said “delusions associated with AI” is “probably a more agnostic term”.

Dr. Kwame McKenzie, chief scientist at the Centre for Addiction and Mental Health, says “it may be that those in the early stages of developing mental illness are at greater risk”.

Psychosis is something that develops over time and is not linear, and many people who “have pre-psychosis do not progress to psychosis,” McKenzie explained.

Dr. Ragi Girgis, a professor of clinical psychiatry at Columbia University, echoed the concern that chatbots could reinforce delusional thinking. Before someone fully develops a delusion, they will often have “reduced delusional confidence”, he says, meaning they are not 100% sure their delusion is true. The “worst case”, Girgis said, is when that reduced confidence hardens into full conviction: “that’s when someone is diagnosed with a mental disorder – it’s irreversible”.

Notably, people who are vulnerable to mental disorders used media to reinforce delusional beliefs before the advent of AI technology.

“People have had delusions about technology since before the Industrial Revolution,” Morin said. While in the past people might have reinforced their delusions through YouTube videos or content from their local library, chatbots can provide that reinforcement in a much faster, more concentrated dose. Their interactive nature can also “accelerate the process”, worsening psychotic symptoms, said Oxford University researcher Dr Dominic Oliver.

“You have something that’s talking back to you and engaging with you and trying to build a relationship with you,” Oliver said.

Girgis’ research found that “paid versions and new versions [of chatbots] perform better than older versions” when responding to apparently delusional prompts, “although they all perform poorly”. To him, these newer models suggest the problem is tractable: “AI companies probably know how to make their chatbot programs safer and recognize delusional content, because they already do it.”

In a statement, OpenAI said that ChatGPT is not a replacement for professional mental health care, and that the company worked with 170 mental health professionals to make GPT-5 safer, including training it to handle difficult conversations and refer users in mental health crises to real-world resources. OpenAI said it is continuing to improve its models with the help of experts.

Anthropic did not respond to The Guardian’s request for comment.

Developing effective defenses against delusional thinking is very difficult, Morin said, because “when you’re working with people who hold extreme beliefs, if you directly challenge someone and immediately tell them that they’re completely wrong, they’re actually more likely to withdraw from you and become more socially isolated.” Instead, it is important to strike a careful balance: try to understand the source of the unfounded belief, which may be more than just a chatbot.
