ChatGPT adds mental health guardrails after bot ‘fell short in recognizing signs of delusion’

OpenAI wants ChatGPT to stop enabling its users’ unhealthy behaviors.

Starting Monday, the popular chatbot app will prompt users to take breaks from lengthy conversations. The tool will also soon shy away from giving direct advice about personal challenges, instead aiming to help users decide for themselves by asking questions or weighing pros and cons.

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI wrote in an announcement. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”

The updates appear to be a continuation of OpenAI’s attempt to keep users, particularly those who view ChatGPT as a therapist or a friend, from becoming too reliant on the emotionally validating responses ChatGPT has gained a reputation for.

A helpful ChatGPT conversation, according to OpenAI, might involve practice scenarios for a tough conversation, a “tailored pep talk” or suggested questions to ask an expert.

Earlier this year, the AI giant rolled back an update to GPT-4o that made the bot so overly agreeable that it stirred mockery and concern online. Users shared conversations where GPT-4o, in one instance, praised them for believing their family was responsible for “radio signals coming in through the walls” and, in another instance, endorsed and gave instructions for terrorism.

These behaviors led OpenAI to announce in April that it had revised its training techniques to “explicitly steer the model away from sycophancy,” or flattery.

Now, OpenAI says it has engaged experts to help ChatGPT respond more appropriately in sensitive situations, such as when a user is showing signs of mental or emotional distress.

The company wrote in its blog post that it worked with more than 90 physicians across dozens of countries to craft custom rubrics for “evaluating complex, multi-turn conversations.” It’s also seeking feedback from researchers and clinicians who, according to the post, are helping to refine evaluation methods and stress-test safeguards for ChatGPT.

And the company is forming an advisory group made up of experts in mental health, youth development and human-computer interaction. More information will be released as the work progresses, OpenAI wrote.

In a recent interview with podcaster Theo Von, OpenAI CEO Sam Altman expressed some concern over people using ChatGPT as a therapist or life coach.

He said that legal confidentiality protections between doctors and their patients, or between lawyers and their clients, don’t apply in the same way to chatbots.

“So if you go talk to ChatGPT about your most sensitive stuff, and then there’s a lawsuit or whatever, we could be required to produce that. And I think that’s very screwed up,” Altman said. “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever. And no one had to think about that even a year ago.”

These updates came during a buzzy time for ChatGPT: It just rolled out an agent mode, which can complete online tasks like making an appointment or summarizing an email inbox, and many online are now speculating about the highly anticipated release of GPT-5. Head of ChatGPT Nick Turley shared on Monday that the chatbot is on track to reach 700 million weekly active users this week.

As OpenAI continues to jockey in the global race for AI dominance, the company noted that less time spent in ChatGPT could actually be a sign that its product did its job.

“Instead of measuring success by time spent or clicks, we care more about whether you leave the product having done what you came for,” OpenAI wrote. “We also pay attention to whether you return daily, weekly, or monthly, because that shows ChatGPT is useful enough to come back to.”
