In the United States, accessing mental healthcare remains a formidable challenge, with gaps in insurance coverage and a shortage of mental health professionals contributing to lengthy wait times and steep costs. Amid this landscape, artificial intelligence (AI) has emerged as a potential stopgap, powering a range of mental health applications from mood trackers to chatbots designed to emulate human therapists. While these AI tools promise affordable and accessible mental health support, especially for children, they also raise significant ethical concerns.
Dr. Bryanna Moore, an assistant professor specializing in Health Humanities and Bioethics at the University of Rochester Medical Center, is among those urging a thorough exploration of these concerns. In a recent commentary published in the Journal of Pediatrics, Moore underscores the importance of recognizing the unique needs of children in these discussions. “No one is talking about what is different about kids—how their minds work, how they’re embedded within their family unit, how their decision making is different,” Moore asserts. She highlights the vulnerability of children, whose social, emotional, and cognitive development vastly differs from that of adults.
A major concern is the risk of AI mental health chatbots impairing children’s social development. Research indicates that children often perceive robots as possessing moral standing and mental life, raising alarms that they might form attachments to AI chatbots instead of developing healthy interpersonal relationships. In traditional pediatric therapy, a child’s social context—including interactions with family and peers—is crucial for effective treatment. AI chatbots, however, lack the ability to perceive or integrate this environmental context, potentially missing critical cues when a child may be in danger.
Compounding these issues, AI systems frequently exacerbate existing health disparities. Jonathan Herington, an assistant professor in the departments of Philosophy and of Health Humanities and Bioethics, warns that AI systems are only as good as the data they are trained on. “Without really careful efforts to build representative datasets, these AI chatbots won’t be able to serve everyone,” Herington emphasizes. A child’s gender, race, socioeconomic status, and family circumstances all significantly shape their mental health needs, and AI systems that fail to reflect this diversity could leave the most vulnerable children underserved.
Children from economically disadvantaged families, Herington notes, may find themselves particularly reliant on AI chatbots if they cannot afford traditional therapy. Though promising as supplemental tools, AI chatbots should never wholly replace human therapists. To date, the U.S. Food and Drug Administration has approved only a single AI-based mental health application, and only for treating major depression in adults. Beyond that, AI-based therapy tools remain largely unregulated, raising concerns about potential misuse, unrepresentative training data, and unequal access.
“There are so many open questions that haven’t been answered or clearly articulated,” Moore reflects. She clarifies that their aim is not to eliminate AI-driven solutions but to advocate for mindful deployment, especially where children’s mental health is concerned.
Moore and her collaborators, including Şerife Tekin, an expert in bioethics and the philosophy of psychiatry at SUNY Upstate Medical University, plan to engage with AI developers to better understand how ethical and safety considerations are built into these chatbots. Their goal is to ensure that the models incorporate insights from research and from engagement with stakeholders such as children, adolescents, parents, pediatricians, and therapists. Only then can AI-based mental health tools hope to support, rather than hinder, children’s development.