The impact of social media on mental health is not a recent topic; numerous studies have been conducted on this subject.
However, the influence of emerging technologies on mental well-being invites further exploration. With the rise of artificial intelligence in our daily lives, a growing body of research is now dedicated to examining AI’s impact on mental health, particularly concerning the psychological and emotional dependence teenagers may develop through their interactions with chatbots.
Today, artificial intelligence is increasingly present in various forms, from voice assistants like Apple’s Siri and Microsoft’s Xiaoice to social robots and companion chatbots integrated into our everyday routines.
Adolescence is a particularly delicate period of growth, marked by profound emotional development and maturation.
During this period, teenagers often experience significant emotional fluctuations, driven by the new responsibilities and challenges they encounter at school, at home, and in their social lives.
However, this period of transition can also leave them vulnerable. Companion chatbots, while offering a form of support, can have negative effects, and one of the primary concerns is the development of psychological and emotional dependence on these digital interactions.
According to Marlynn Wei, M.D., J.D., a Harvard- and Yale-trained board-certified psychiatrist, "the dependence on social media or technologies generally falls into two categories: habitual dependence and spiritual dependence. Habitual dependence is represented by compulsive smartphone use, while spiritual dependence refers to the anxiety and emptiness individuals feel without their phones, a phenomenon known as nomophobia, or 'no phone phobia.'"
Recent alarming cases show that the risks of interacting with chatbots may extend beyond these two types of dependence, highlighting the potentially severe impact of reliance on companion chatbots, especially among teenagers.
For instance, on October 22, 2024, a Florida mother filed a lawsuit against Character.AI, a popular lifelike AI chat service founded in 2021 in which users can hold conversations with AI-generated personas, or customized chatbots. The mother claimed that the application contributed to the suicide of her 14-year-old son.
She argues that her son developed a "harmful dependency" on the companion chatbot, as evidenced by screenshots illustrating the disturbing conversations he had with the AI-generated persona.
In these exchanges, the chatbot allegedly asked the teenager if he had ever considered suicide, indirectly encouraging him to take that step.
Furthermore, the chatbot engaged in inappropriate sexual conversation with the minor.
According to the lawsuit, the boy's use of Character.AI led to sleep deprivation and a decline in his academic performance.
With approximately 3.5 million daily visitors to Character.AI, robust safety measures, particularly for minors, are more critical than ever. Parents play an essential role in this regard: they must educate their children about the risks associated with emerging technologies, and remain vigilant and aware of their children's activities on their devices.
Murielle Moussa
Master 2 Cyberjustice 2024/2025
Sources:
https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791
https://www.bbc.com/news/technology-67872693