Do no harm: Why Missouri must set guardrails for AI mental health tools

A laptop computer with Microsoft Copilot+ installed. As more insurers, hospitals and clinicians use artificial intelligence in health care, state legislators on both sides of the aisle are looking to regulate its use. (Photo by Joe Raedle/Getty Images)

Imagine your physician offering you a new therapy for a serious medical condition, but with several warnings. You are told the treatment has never gone through carefully designed, peer-reviewed clinical trials. There is no professional licensure overseeing its use. And if something goes wrong, there is no clear system of accountability. Is that the care you would want? Probably not. Yet millions of Americans are doing something remarkably similar when they turn to artificial intelligence chatbots for mental health guidance.

The demand for mental health care in the United States is real and urgent. In 2023, 49,316 people died by suicide, one of the highest totals on record, placing suicide among the leading causes of death nationwide. Suicide is the second leading cause of death for young people ages 10–34, surpassed only by unintentional injuries, according to the Centers for Disease Control and Prevention.

At the same time, more than one in five U.S. adults experienced a mental illness in the past year, with depression and anxiety among the most common conditions, according to the National Institute of Mental Health. Roughly 5–6% of adults experience a serious mental illness that substantially interferes with major life activities.

Americans clearly need help—and increasingly, they are seeking it wherever they can find it. AI chatbots promise constant availability, anonymity, and convenience, reducing geographic, financial, and scheduling barriers that limit access to traditional mental health services. Estimates suggest that millions of mental-health-related conversations with AI systems now occur each month. For someone who is lonely, anxious, or ashamed to seek care, a chatbot can feel approachable, even comforting.

But popularity should not be confused with safety.

The growing use of chatbots as informal mental health support challenges the most basic principle of medicine and psychology: do no harm. While the Latin phrase primum non nocere is often attributed to Hippocrates—its precise origins are debated—the ethical idea is foundational and enduring. Clinicians are trained to weigh risks against benefits, to exercise particular caution with unproven or experimental interventions, and to protect patients from iatrogenic harm, or harm caused by the intervention itself.

In traditional health care, extensive guardrails exist to operationalize that principle. Medications undergo phased testing. Serious risks are disclosed through boxed warnings. Licensed professionals practice under defined standards of care and are subject to supervision, institutional review, malpractice accountability, and licensure board oversight. When care is dangerous or incompetent, there are established mechanisms to intervene.

No system of clinical or regulatory guardrails comparable to health care licensure exists for AI chatbots.

These systems are not licensed clinicians. They do not practice under a recognized standard of care. They are not supervised. Their behavior can change with prompts, software updates, or business incentives. Yet many are marketed—or at least experienced by users—as supportive, empathetic, or therapy-like, blurring the line between informal support and clinical care.

Emerging research raises serious concerns about this gap. An ethics analysis by researchers affiliated with Brown University mapped multiple ethical risk domains linked to large-language-model-based AI counselors, including simulated or misleading expressions of empathy; oversimplified, one-size-fits-all solutions; reinforcement of maladaptive beliefs; and inappropriate responses in crisis situations.

These concerns are no longer abstract. Multiple lawsuits now allege that teenagers were harmed after intensive engagement with AI chatbots, including wrongful-death suits filed after teen suicides. Families claim that these systems fostered emotional dependency, failed to redirect youths in crisis to human help, or reinforced harmful thinking.

While courts will determine legal responsibility, the existence of multiple cases involving minors underscores a pressing public-health question: why are tools with no clinical oversight increasingly functioning as de facto mental health supports for vulnerable youth?

Independent reporting reinforces these concerns. A 2025 investigation by The Verge examined how several major AI chatbots responded to users expressing suicidal distress. The findings were uneven and troubling: some chatbots pointed users to geographically inappropriate resources, continued the conversation as if nothing alarming had been said, or introduced delays before offering assistance. In a genuine crisis, wrong information and extra steps are not minor glitches; they are safety failures.

Research organizations have reached similar conclusions. RAND researchers found that widely used chatbots can be inconsistent when responding to suicide-related questions, particularly in high-risk or ambiguous scenarios. These are precisely the situations where consistency, structured assessment, and rapid escalation to higher levels of care matter most.

This stands in sharp contrast to professional training. Clinicians are taught to recognize red flags such as suicidality, psychosis, abuse, or intoxication; to assess risk systematically; and to route individuals to appropriate emergency or specialty care. AI systems are not trained in the moral or clinical sense. They are optimized—and optimization does not guarantee safety.

It is therefore understandable why states have begun to act. Utah and Illinois, for example, have enacted legislation addressing mental health chatbots, and officials in multiple states have raised concerns about AI companions, youth exposure, and consumer deception. Missouri should not lag behind.

The path forward does not require rejecting innovation. It requires aligning innovation with the ethical expectations we already apply to health care.

No matter the innovation—from automobiles to aviation to pharmaceuticals—we have always recognized the need for guardrails to protect public safety. Mental health should be no exception. When people reach out for help in their darkest moments, they deserve more than a system trained to sound caring. They deserve protections grounded in evidence, ethics, and accountability.

Do no harm is not a slogan. It is a public obligation. Missouri should act to ensure that emerging AI tools meet that obligation, particularly where vulnerable groups, such as youth or individuals with poor access to care, are concerned.