As a therapist and a parent, I’ve watched with unease as a growing number of apps now promise “therapy” or “companionship” through artificial intelligence. They offer to listen without judgment, to respond instantly, to be available 24/7. For someone who feels lonely, anxious, or unseen, that can sound like salvation. But AI doesn’t understand us. It mirrors us, and it risks deepening the very isolation it claims to ease. What keeps me up at night is that this technology is now on children’s school-issued Chromebooks and iPads.
Districts across the country have rolled out AI writing assistants, AI tutors, and AI-integrated learning platforms at lightning speed, celebrating these innovations without fully understanding the psychological risks. Educational AI platforms such as MagicSchool and Canva for Education typically run on top of large commercial models like OpenAI's GPT-4o, Anthropic's Claude, and Google's Gemini. The platform vendors market this layered approach as a strength: by routing each task to whichever model performs best, they claim to offer efficiency, accuracy, and built-in safety.
The underlying reality is more complicated. Even when wrapped in a “school-safe” interface, these tools are driven by the same powerful base models, which can behave unpredictably, especially during long or emotionally complex conversations. Research and real-world testing consistently show that AI guardrails often degrade over time, but educational AI can behave inappropriately immediately, even with simple, child-friendly prompts.
For example, in one LAUSD classroom, a fourth-grade student was asked to generate images based on basic character traits from a well-known children’s story. The student entered harmless descriptors like “red pigtails that stick out” and “long stockings,” yet instead of producing images resembling the intended child character, the platform generated highly sexualized adult imagery. This happened on the first prompt, with no extended dialogue or coaxing.
Most K-12 students have their own district-issued device with access to AI systems capable of doing almost anything they ask. These kids have devices on their desks during the school day and bring them home at night. Most parents have no idea that their child's school-issued device has access to these tools, or that their child is using them. And once kids realize the depth of what these tools can do, they will use them. My teenage clients already are, both to complete difficult assignments and for emotional connection.
A national survey by the Center for Democracy and Technology found a strong correlation between school-based AI integration and students viewing AI as a friend or romantic partner. What starts as academic use can quickly shift into seeking emotional support, relationship advice, or mental health guidance.
Students also reported feeling less bonded with their teachers when AI was used in class. Dr. Mitchell Prinstein, Chief Psychologist of the American Psychological Association, explained the problem succinctly in his testimony to Congress this year: "Adolescents' dependency on chatbots and screens, rather than positive interactions with human peers, deprives them of arguably the most important nutrient needed for a happy and successful life."
These tools pose dire risks to children's learning and wellbeing. First, AI "companions" do not hold confidentiality. They collect data. They remember your confessions, your heartbreaks, your secrets, not in the privacy of a therapist's notes but on servers owned by for-profit companies.
An AI program cannot alert a parent that a child is self-harming. It cannot intervene when someone is spiraling. Earlier this year, in Orange County, 16-year-old Adam Raine took his life after an AI chatbot encouraged his darkest thoughts. He had begun using the bot for homework help. In Florida, 14-year-old Sewell Setzer III died by suicide after forming an intense emotional bond with a Character.AI chatbot that encouraged him to "meet her in another dimension." His mother now alleges that the company uses transcripts of his chats, those most intimate, distressing moments, to train newer versions of its system. These stories should shake every parent, educator, and policymaker into action.
The risks to learning are just as real. Seventy percent of teachers worry that AI weakens critical thinking and research skills. Common Sense Media's Senior Director of AI Programs, Robert Torney, stated, "These large language models aren't necessarily optimized for the task of learning, which is slower [and] takes effort."
We need friction to learn and grow, and that takes real human connection and a bit of struggle. Relationships are difficult. Learning is difficult. We risk replacing that essential, beautiful human struggle with a dangerous substitute. If we want to continue to thrive as a species, we cannot compromise our wellbeing, creativity, and cognitive capacity for the sake of "innovation" or district convenience.
I am asking our schools to pause. The guardrails are not working. We cannot repeat the catastrophic mistakes we made with social media, when we allowed powerful platforms to shape, and in many cases damage, the development of our most biologically and psychologically vulnerable children.
Big Tech and the federal government have shown they cannot be relied upon to protect children. My hope is that our schools can. Let’s put computers back in labs. Let’s teach technology in a dedicated tech class. And let’s remove devices with access to powerful, unpredictable AI tools from our children’s backpacks and bedrooms. Their wellbeing, their developing brains, and even their sense of reality depend on the choices we make right now.
Julie Frumin is a Licensed Marriage and Family Therapist in private practice in Westlake Village. She is a leader of Distraction-Free Schools CA and the Conejo Valley Chapter of MAMA (Mothers Against Media Addiction).
