Users relying heavily on ChatGPT might be lonelier, more emotionally dependent: OpenAI study

A joint study by OpenAI and MIT Media Lab has raised concerns that frequent and sustained use of ChatGPT may be linked to increased loneliness and emotional dependence.
The findings suggest that users who engage deeply with the AI chatbot, particularly those who trust or form an emotional bond with it, are more likely to experience social isolation.
The study comes as ChatGPT, which was launched over two years ago, continues to see widespread adoption, with over 400 million weekly active users worldwide. While the chatbot was not designed as an AI companion, a growing number of users engage with it for emotional support and personal conversations, prompting researchers to examine its psychological impact.
The researchers used a two-pronged approach. First, they analysed nearly 40 million ChatGPT interactions while surveying over 4,000 users about their self-reported behaviour. Additionally, the MIT Media Lab conducted a randomised controlled trial (RCT) with 1,000 participants, who used ChatGPT for a minimum of five minutes daily over four weeks.
The results indicated a clear trend: individuals who relied more heavily on ChatGPT, particularly those who perceived it as a “friend” or attributed humanlike emotions to it, reported higher levels of loneliness.
“Overall, higher daily usage, across all modalities and conversation types, correlated with higher loneliness, dependence, and problematic use, and lower socialisation,” the researchers noted in their report.
Notably, users who frequently engaged in personal or emotionally charged conversations with the chatbot were more likely to feel lonely. The study also found that those with stronger emotional attachment tendencies experienced greater loneliness, while those with a higher level of trust in the chatbot demonstrated increased emotional dependence.
The researchers also conducted a detailed examination of ChatGPT’s Advanced Voice Mode, a speech-to-speech interface that allows users to converse with the bot in real time. Participants interacted with ChatGPT in two modes: a neutral mode, in which the chatbot maintained a steady, emotionless tone, and an engaging mode, in which it responded expressively.
While voice-based chat initially appeared to mitigate loneliness compared to text-based interactions, the benefits diminished at high usage levels. Users who frequently engaged with the neutral-voice chatbot, in particular, were more likely to feel isolated.
“Results showed that while voice-based chatbots initially appeared beneficial in mitigating loneliness and dependence compared with text-based chatbots, these advantages diminished at high usage levels, especially with a neutral-voice chatbot,” the study stated.
The study comes amid growing concerns over the psychological effects of AI chatbots, particularly those marketed for companionship. While ChatGPT is not explicitly designed for emotional support, other AI companies, such as Replika and Character.ai, have built platforms around virtual companionship. However, these companies have faced scrutiny, with Character.ai currently facing two separate lawsuits involving interactions with minors and Replika drawing the attention of Italian regulators.
Despite these concerns, AI chatbots remain a popular alternative for those seeking companionship, with some users even turning to them for mental health support. A 2024 YouGov survey revealed that just over half of young Americans aged 18 to 29 felt comfortable discussing mental health concerns with an AI. Another study suggested that OpenAI’s chatbot provided better personal advice than professional columnists.
Researchers stressed that while the study is still in its early stages, it highlights the complex relationship between chatbot interactions and emotional well-being.
“A lot of what we’re doing here is preliminary, but we’re trying to start the conversation with the field about the kinds of things that we can start to measure, and to start thinking about what the long-term impact on users is,” said Jason Phang, an OpenAI safety researcher who worked on the project.
The findings come just as OpenAI has released GPT-4.5, an updated model that it claims is more intuitive and emotionally intelligent than its predecessors.