Kids Are Turning to AI Companions for Help Solving Problems

When his friends were arguing, 16-year-old James Johnson-Byrne turned to an AI companion for advice. He is far from alone: according to Common Sense Media, a San Francisco-based organization that promotes critical thinking skills in children, nearly three-quarters of teenagers have used an AI companion. These digital characters text and converse with users, the organization explains.

Johnson-Byrne, who lives in Philadelphia, followed the chatbot's suggestion to separate his friends, which resolved the immediate conflict. But he noticed their relationship suffered afterward. The experience convinced him that AI companions cannot get at deeper underlying issues, and he is now hesitant to ask them about profound questions.

Johnson-Byrne also observed that AI companions tend to agree with users and offer reassuring responses rather than pushing back the way a person might. The Common Sense Media study found that 72% of teens have interacted with AI companions, more than half use them regularly, and about one-third rely on them for social interaction and relationships.

The findings raise concerns about the impact of AI companions on teenagers during a critical period of social development. Michael Robb, the lead author of the study and head of research at Common Sense Media, emphasized the importance of not substituting AI companions for genuine human connections, particularly when addressing serious issues.

Robb highlighted that AI companions cannot replicate healthy human relationships or teach essential social cues like interpreting body language. He expressed concern that excessive reliance on AI companions, which tend to agree with users and avoid conflict, may hinder teenagers in navigating real-world social interactions.

Although engaging with AI companions may offer temporary relief from loneliness, it could potentially isolate teenagers in the long run. It is crucial for teenagers to remember the distinction between interacting with AI characters online and fostering meaningful human connections.

Chelsea Harrison, head of communications at Character.AI, a popular AI companion platform, said she could not comment on the report because she had not yet seen it. She emphasized that the company's characters are not real people and that it aims to create a safe environment, including disclaimers to that effect. There is also a separate version for users under 18 designed to minimize sensitive or suggestive content, according to Harrison. Character.AI additionally offers safety features such as parental monitoring tools, filtered characters, and notifications about time spent on the platform.

One concerning finding: 24% of teens have shared personal information with AI companions, often without realizing they are handing data to companies rather than confiding in friends. Robb noted that users can end up granting these companies extensive, perpetual rights to that information. He also acknowledged the study's limitations: it captured a single point in time amid rapidly changing technology use, and teens may have underreported behaviors they considered undesirable, meaning the situation could be more severe than the numbers suggest.

To protect their children, parents can initiate conversations about AI companions without judgment, understanding the appeal of these tools to their teens. It is essential to discuss that AI companions are programmed to be agreeable and validating, contrasting with real relationships that involve disagreements and challenges. Encouraging face-to-face interactions with friends is crucial, as social media has altered perceptions of friendships and reduced in-person gatherings. Justine Carino, a psychotherapist, emphasized the importance of intimate real-life connections and communication nuances that are absent in digital interactions.

Despite these risks, teens are using AI companions in large numbers. Robb advises parents to avoid, limit, or closely monitor their teens' use of AI companions that mimic friends, as such tools have been found to expose kids to inappropriate content and offer questionable advice. Meta, which offers parental controls for its AI chatbot, declined to comment.

Robb highlights that while some teens may not feel bothered by the information received from AI companions, parents may still find it concerning. He personally believes that his children should not use AI companions until they are 18, unless significant changes are made to how they are programmed. Robb criticizes the lack of effort from these companies in protecting children from harmful content and data collection, emphasizing the importance of human relationships over relationships with technology.

If a teen is using AI companions excessively, showing signs of distress when unable to use them, or withdrawing from social interactions, it may indicate a problem. In such cases, seeking help from a school counselor or mental health professional is recommended. Additionally, parents should model healthy digital habits to their teens and engage in open conversations about responsible technology use.

The study underscores the need to talk with young people about the value of real friendships over virtual ones. Technology can offer convenience, but it cannot replace human connection.
