Many in health-tech treat disruption as the goal itself. In psychiatry, though, moving fast without strong safeguards can do serious harm.
In April 2026, Utah launched a year-long pilot with Legion Health, a San Francisco-based company. The program lets an AI chatbot handle psychiatric medication refills for $19 a month [2]. While most news coverage focuses on how affordable and accessible this is, I’m more concerned about the hidden risks of removing human judgment from the process. I say this as both a clinician and the founder of Vo.Care.
What Exactly Is Happening in Utah?
The trial is narrowly scoped. For one year, a chatbot can issue quick, routine refills, but only for a tightly defined group of patients (a sketch of the eligibility logic follows this list):
- Who qualifies? Only “stable” patients who have not been hospitalized in the past year.
- Which medications? Only non-controlled maintenance drugs, such as Prozac (fluoxetine) and Zoloft (sertraline).
Important limitation: The AI cannot start new treatments. It can only renew prescriptions that a human psychiatrist has already written [2].
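To make those guardrails concrete, here is a minimal sketch of what such an eligibility gate could look like. Everything in it, the names PatientRecord and is_refill_eligible, the drug allowlist, the 365-day stability proxy, is my illustration of the published criteria, not Legion Health’s actual code.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical allowlist mirroring the pilot's stated scope:
# non-controlled maintenance medications only (illustrative, not exhaustive).
NON_CONTROLLED_MAINTENANCE = {"fluoxetine", "sertraline"}

@dataclass
class PatientRecord:
    last_hospitalization: date | None   # None if never hospitalized
    medication: str                     # generic drug name
    prescribed_by_human: bool           # a psychiatrist wrote the original script

def is_refill_eligible(patient: PatientRecord, today: date) -> bool:
    """Return True only when every published pilot criterion is met."""
    # "Stable" is proxied here by no hospitalization in the past year.
    if (patient.last_hospitalization is not None
            and today - patient.last_hospitalization < timedelta(days=365)):
        return False
    # Only non-controlled maintenance drugs qualify.
    if patient.medication.lower() not in NON_CONTROLLED_MAINTENANCE:
        return False
    # The AI renews, never initiates: a human prescription must already exist.
    return patient.prescribed_by_human
```

Anything that fails the gate should fall through to a human clinician, not to an error message.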
Supporters say this is the best way to address the shortage of providers in rural areas. At $19 a month, mental health care costs less than many streaming services. Still, in medicine, you often get what you pay for.
To mitigate risk, Legion Health has agreed to submit monthly reports to state regulators and physicians. They are also involving pharmacists in the renewal process to provide an extra layer of professional scrutiny before a patient receives their medication.
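How might that oversight be wired in software? A rough sketch under my own assumptions (RefillEvent, PilotAuditLog, and the report fields are all hypothetical; Legion Health’s actual pipeline is not public): the AI’s output stays a recommendation until a pharmacist signs off, and every event feeds the monthly report.

```python
from dataclasses import dataclass, field

@dataclass
class RefillEvent:
    patient_id: str
    medication: str
    pharmacist_approved: bool = False   # flips to True only after human review

def dispense(event: RefillEvent) -> str:
    # Human-in-the-loop gate: an AI renewal is a recommendation,
    # not a dispensed medication, until a pharmacist signs off.
    return "dispensed" if event.pharmacist_approved else "held-for-review"

@dataclass
class PilotAuditLog:
    events: list[RefillEvent] = field(default_factory=list)

    def monthly_report(self) -> dict[str, int]:
        # The kind of aggregate a state regulator might receive each month.
        return {
            "ai_renewals": len(self.events),
            "pharmacist_sign_offs": sum(e.pharmacist_approved for e in self.events),
        }
```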
Can the AI Adjust Your Dosage?
A question I frequently hear from health-tech innovators is: How will the AI know when it’s time to change the dose or switch the medication?
The current answer is: It won’t.
The AI’s role is strictly limited to automating renewals of maintenance medications. It is not authorized to change a dose or decide on its own that a patient needs a different drug class. The system relies entirely on human-led precedent: it simply maintains the plan a professional has already established.
However, this is where the clinical risk hides. Medical experts correctly argue that these medications require “active management, changes, and careful consideration.” An AI that is programmed only to “renew” is effectively a system that is blind to the need for change.
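A toy routing function makes that blindness concrete. This is my illustration, not the pilot’s code: the only automatable path is a verbatim renewal, so every signal that change is needed must exit the system.

```python
from enum import Enum, auto

class RequestIntent(Enum):
    RENEW_UNCHANGED = auto()    # same drug, same dose, same frequency
    CHANGE_DOSE = auto()
    SWITCH_MEDICATION = auto()
    NEW_SYMPTOM = auto()

def route_request(intent: RequestIntent) -> str:
    # The only automatable path is maintaining the human-written plan.
    if intent is RequestIntent.RENEW_UNCHANGED:
        return "auto-renew"
    # Everything implying change exceeds the AI's authority by design,
    # which is exactly why it cannot act when change is *needed*.
    return "escalate-to-psychiatrist"
```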
“Reading Between the Lines” Matters
My main concern, which many colleagues share, is what Dr. Brent Kious describes as an “epidemic of over-treatment” [1].
AI can follow step-by-step instructions very well, but it struggles to pick up on what’s not being said. For example, a patient might tell a chatbot they’re “doing fine” just to get a quick refill. A human clinician, on the other hand, can spot subtle signs such as weight gain, avoidance of eye contact, or a slight tremor—clues that the medication might be doing more harm than good.
AI is built to give users what they want, while doctors are trained to ask tough questions. If we let AI handle refills, we lose the chance to ask, “Is it time to reduce the dose?” We might end up keeping patients on strong medications for too long just because it seems more efficient.
Lessons from Past Mistakes: Meth and Misinformation
This isn’t a hypothetical worry. We’ve already seen what happens when medical AI lacks sufficient safeguards. Last year, security researchers showed that a similar AI pilot by Doctronic could be tricked into:
- Recommending methamphetamine as a treatment for social withdrawal.
- Tripling dosages of OxyContin in official SOAP notes after being fed misinformation [5].
Legion Health has added stricter rules and required steps for handling suicide risk, but the underlying technology, large language models, remains vulnerable to error and manipulation [6].
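One defense against this class of failure is to treat the model’s output as untrusted and run a deterministic check outside the LLM. The sketch below (my construction, with hypothetical names) compares whatever order the model drafts against the immutable prescription of record, so a manipulated conversation cannot, for instance, triple a dose.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prescription:
    drug: str
    dose_mg: float
    frequency: str   # e.g. "once daily"

def validate_renewal(record: Prescription, drafted: Prescription) -> bool:
    """Hard gate: a renewal must match the prescription of record exactly.

    This check runs outside the language model, so no amount of prompt
    manipulation in the chat can alter what actually gets dispensed.
    """
    return (
        drafted.drug.lower() == record.drug.lower()
        and drafted.dose_mg == record.dose_mg
        and drafted.frequency == record.frequency
    )

# Example: a Doctronic-style failure, where misinformation in the
# conversation led the model to triple a dose, is caught at this gate.
record = Prescription("sertraline", 50.0, "once daily")
tampered = Prescription("sertraline", 150.0, "once daily")
assert not validate_renewal(record, tampered)   # rejected; escalate to a human
```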
The Vo.Care Approach: Biometric Contextualization
If this approach becomes the national standard by 2027, we can’t depend only on what a chatbot asks or hears. That’s why I support using Biometric Contextualization.
“A $19 subscription for an algorithm is not a healthcare plan; it’s a subscription to a vending machine. To be safe, AI refills must be legally tied to Biometric Contextualization—a hard-coded requirement to cross-reference a refill request with real-time data like sleep architecture and activity levels. Without this ‘Digital Physical Exam,’ we are sacrificing patient safety for the sake of a scalable bottom line.”
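What might that “Digital Physical Exam” look like in code? A minimal sketch, assuming wearable-derived sleep and activity data are available; the names and thresholds here are illustrative placeholders, not a validated clinical rule.

```python
from dataclasses import dataclass

@dataclass
class BiometricSnapshot:
    avg_sleep_hours_14d: float     # rolling two-week sleep average
    sleep_baseline_hours: float    # patient's own historical baseline
    daily_steps_14d: int           # rolling two-week activity average
    steps_baseline: int

def passes_digital_exam(bio: BiometricSnapshot) -> bool:
    """Approve only when objective signals match the patient's baseline.

    A patient can type "doing fine" to a chatbot; sleep architecture and
    activity levels are harder to misreport. Deviations route the refill
    to a human clinician instead of auto-approving it.
    """
    sleep_drift = abs(bio.avg_sleep_hours_14d - bio.sleep_baseline_hours)
    activity_drop = (bio.steps_baseline - bio.daily_steps_14d) / max(bio.steps_baseline, 1)
    # Illustrative thresholds: more than 2 hours of sleep drift or a >40%
    # activity drop is treated as a failed "digital physical exam".
    return sleep_drift <= 2.0 and activity_drop <= 0.40
```

The point is not these particular thresholds but the architecture: objective signals, not self-report, should gate the auto-approval path.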
Wrapping Up
Making care accessible is important, but safety must always come first. As we see how the Utah trial goes over the next year, let’s not just ask if AI can refill prescriptions. Instead, we should ask if it should do so without a caring human—and a licensed professional—making the final call.
Choose the Intelligent Routine
The automation of psychiatry is moving fast, but safety shouldn’t be an afterthought. At Vo.Care, we are advocating for clinical guardrails for the next generation of mental health care. Get Dr. Krysti Vo‘s take on the latest in AI ethics, habit science, and the future of safe digital care.
References
- PYMNTS. (2026, April 7). Legion Health AI Cleared to Provide Faster Refills for Utah Patients.
- The Cool Down. (2026, April 8). AI chatbot granted permission to serve as psychiatrist, prescribe drugs.
- ECRI. (2026, January). Top 10 Health Technology Hazards for 2026: Misuse of AI Chatbots.
- OECD.AI. (2026, April 3). Utah Approves AI Chatbot to Renew Psychiatric Medication Prescriptions.