OpenAI says over a million people talk to ChatGPT about suicide weekly

Published October 28, 2025

OpenAI released new data on Monday illustrating how many of ChatGPT’s users are struggling with mental health issues and talking to the AI chatbot about it. The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.” Given that ChatGPT has more than 800 million weekly active users, that translates to more than a million people a week.
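The arithmetic behind that estimate is straightforward, using OpenAI's own figures:

0.0015 × 800,000,000 = 1,200,000

or roughly 1.2 million people in a given week.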

The company says a similar percentage of users show “heightened levels of emotional attachment to ChatGPT,” and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the AI chatbot.

OpenAI says these types of conversations in ChatGPT are “extremely rare,” and thus difficult to measure. That said, the company estimates these issues affect hundreds of thousands of people every week.

OpenAI shared the information as part of a broader announcement about its recent efforts to improve how models respond to users with mental health issues. The company claims its latest work on ChatGPT involved consulting with more than 170 mental health experts. OpenAI says these clinicians observed that the latest version of ChatGPT “responds more appropriately and consistently than earlier versions.”

In recent months, several stories have shed light on how AI chatbots can adversely affect users struggling with mental health challenges. Researchers have previously found that AI chatbots can lead some users down delusional rabbit holes, largely by reinforcing dangerous beliefs through sycophantic behavior.

Addressing mental health concerns in ChatGPT is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT in the weeks leading up to his suicide. State attorneys general in California and Delaware, who could block the company's planned restructuring, have also warned OpenAI that it needs to protect young people who use its products.

Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company has “been able to mitigate the serious mental health issues” in ChatGPT, though he did not provide specifics. The data shared on Monday appears to be evidence for that claim, though it raises broader issues about how widespread the problem is. Nevertheless, Altman said OpenAI would be relaxing some restrictions, even allowing adult users to start having erotic conversations with the AI chatbot.

In the Monday announcement, OpenAI claims the recently updated version of GPT-5 delivers “desirable responses” to mental health issues roughly 65% more often than the previous version. On an evaluation testing AI responses around suicidal conversations, OpenAI says its new GPT-5 model is 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT-5 model.
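Those two figures come from OpenAI's own evaluations, but a back-of-the-envelope calculation (our arithmetic, not OpenAI's stated methodology) shows how a compliance gain of that size translates into fewer bad answers:

(1 − 0.91) / (1 − 0.77) = 0.09 / 0.23 ≈ 0.39

In other words, the share of non-compliant responses falls from 23% to 9%, a reduction of roughly 61%, in the same ballpark as the 65% figure OpenAI cites.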

The company also says the latest version of GPT-5 adheres to OpenAI’s safeguards more reliably in long conversations. OpenAI has previously flagged that its safeguards could become less effective as conversations grew longer.

On top of these efforts, OpenAI says it’s adding new evaluations to measure some of the most serious mental health challenges facing ChatGPT users. The company says its baseline safety testing for AI models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.

OpenAI has also recently rolled out more controls for parents of children who use ChatGPT. The company says it’s building an age prediction system to automatically detect children using ChatGPT and impose a stricter set of safeguards.

Still, it’s unclear how persistent the mental health challenges around ChatGPT will be. While GPT-5 seems to be an improvement over previous AI models in terms of safety, a share of ChatGPT’s responses still falls into what OpenAI deems “undesirable.” OpenAI also still makes its older, less safe AI models, including GPT-4o, available to millions of its paying subscribers.
