Are AI Prescriptions a Step Forward or a Risk for Patients?

Many in health-tech see disruption as the main goal. In psychiatry, though, making big changes without strong safeguards can lead to serious problems.

In April 2026, Utah launched a year-long pilot with Legion Health, a San Francisco-based company. The program lets an AI chatbot handle psychiatric medication refills for $19 a month. While most news coverage focuses on how affordable and accessible this is, I’m more concerned about the hidden risks of removing human judgment from the process. I say this as both a clinician and the founder of Vo.Care.

What Exactly Is Happening in Utah?

The trial is tightly scoped. For one year, the chatbot can issue quick, routine refills, but only for a defined group of patients:
  • Who qualifies? Only “stable” patients who have not been hospitalized in the past year.
  • Which medications? Only non-controlled maintenance drugs, such as Prozac (fluoxetine) and Zoloft (sertraline).

Important limitation: The AI cannot start new treatments. It can only renew prescriptions that a human psychiatrist has already written.
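
To make that scope concrete, here is a minimal sketch of the kind of eligibility gate the pilot describes. This is my own illustration in Python; the field names, drug list, and logic are hypothetical, not Legion Health’s actual system.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical whitelist of non-controlled maintenance drugs (illustrative only).
NON_CONTROLLED_MAINTENANCE = {"fluoxetine", "sertraline"}

@dataclass
class RefillRequest:
    drug: str                          # generic name, lowercased
    last_hospitalization: date | None  # None if never hospitalized
    prescribed_by_human: bool          # a psychiatrist wrote the original script

def eligible_for_ai_refill(req: RefillRequest, today: date) -> bool:
    """Mirror the pilot's stated gate: stable patients, non-controlled
    maintenance drugs, and renewals only (never new treatments)."""
    stable = (
        req.last_hospitalization is None
        or (today - req.last_hospitalization) > timedelta(days=365)
    )
    return (
        stable
        and req.drug in NON_CONTROLLED_MAINTENANCE
        and req.prescribed_by_human  # the AI renews; it never initiates
    )
```

Anything that fails a gate like this would presumably be routed back to a human clinician rather than renewed automatically.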

Supporters say this is the best way to address the shortage of providers in rural areas. At $19 a month, mental health care costs less than many streaming services. Still, in medicine, you often get what you pay for.

To mitigate risk, Legion Health has agreed to submit monthly reports to state regulators and physicians. They are also involving pharmacists in the renewal process to provide an extra layer of professional scrutiny before a patient receives their medication.

Can the AI Adjust Your Dosage?

A question I frequently hear from health-tech innovators is: How will the AI know when it’s time to change the dose or switch the medication?
 
The current answer is: It won’t.
 
The AI’s role is strictly limited to automating renewals for maintenance medications. It is not authorized to change a dose or to decide on its own that a patient needs a different drug class. The system relies entirely on human-led precedent: it simply maintains the plan a professional has already established.

However, this is where the clinical risk hides. Medical experts correctly argue that these medications require “active management, changes, and careful consideration.” An AI that is programmed only to “renew” is effectively a system that is blind to the need for change.
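
One way to picture that blindness (again, my own sketch, not the vendor’s design): if the agent’s entire action space is “renew” or “escalate,” a needed dose change simply has no code path.

```python
from enum import Enum, auto

class RefillAction(Enum):
    RENEW = auto()     # re-issue the existing prescription unchanged
    ESCALATE = auto()  # hand off to a human clinician
    # Conspicuously absent: ADJUST_DOSE, SWITCH_DRUG, TAPER.

def decide(patient_reports_feeling_ok: bool) -> RefillAction:
    # A renew-only agent can maintain the human-written plan or defer;
    # a patient whose self-report is unreliable still gets a renewal.
    return RefillAction.RENEW if patient_reports_feeling_ok else RefillAction.ESCALATE
```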

“Reading Between the Lines” Matters

My main concern, which many colleagues share, is what Dr. Brent Kious describes as an “epidemic of over-treatment.”

AI follows step-by-step instructions perfectly, but it struggles to pick up on what remains unsaid. Clinical intuition isn’t a luxury; it is a safety mechanism. A human clinician spots subtle signs—weight gain, avoidance of eye contact, or a slight tremor—that suggest a medication might be doing more harm than good.

Beyond these physical signs, there are two critical “blind spots” an algorithm simply cannot see:

The Insight Gap (Induced Mania/Psychosis)

Certain medications can inadvertently trigger mania or psychosis. In these states, patients often lose the ability to accurately report their symptoms. An AI asking “Are you feeling okay?” will receive a “Yes” from a patient who is currently experiencing a manic episode, potentially leading the bot to renew a prescription that is actively fueling a psychiatric crisis.

The “Sub-optimal Baseline” Trap

Patients often settle for “feeling okay” because they don’t realize how much better they could actually feel. A doctor knows how to push for total wellness, whereas a “maintenance” AI will keep a patient on a low, sub-optimal dose indefinitely. Without a human to ask, “Could we be doing better?” or “Is it time to taper?”, we miss the chance to reach an optimal therapeutic outcome.

Maintaining a Delicate Balance

Psychiatry is a constant search for the right dose. The goal is to provide effective treatment while strictly avoiding the risks of overprescribing and life-altering side effects. AI is built to give users what they want, but doctors are trained to provide what the patient actually needs. If we let AI handle refills without oversight, we lose the chance to optimize care, leaving patients stuck at a “stable” but sub-par version of health.

Doctronic Regulatory Sandbox

A separate pilot program with the developer Doctronic operates within a state-sanctioned “regulatory sandbox.” This initiative authorizes an autonomous AI agent to renew prescriptions for 192 drugs tied to chronic conditions. To run the experiment, Utah suspended certain unprofessional-conduct laws, citing structural healthcare failures such as rural clinician shortages and the administrative load of unreimbursed renewal requests. Unlike the subscription-based chatbot model, this program is a formal, at-scale deployment of an agentic system operating under a specific legal framework.

“Many prescriptions for treating chronic conditions change little over time.”

The core logic of the autonomous approach is that many prescriptions for chronic conditions change little over time. Because these regimens often stay consistent for years, they are natural candidates for automation. The experiment aims to determine whether autonomous agents can safely manage routine maintenance, freeing human providers to focus on complex clinical needs.

Wrapping Up

Accessibility is important, but safety must always come first. Monthly reports and pharmacist check-ins are administrative steps, but they do not replace the clinical relationship. As we watch the Utah experiments unfold, let’s not just ask if AI can refill a prescription. Instead, let’s ask if it should do so without a licensed professional making the final call on whether that medication is still the right choice for the patient’s evolving life.

Advocating for Clinical Guardrails in the Age of AI Psychiatry

The automation of psychiatry is moving fast, but safety shouldn’t be an afterthought. At Vo.Care, we are advocating for clinical guardrails for the next generation of mental health care. Get Dr. Krysti Vo’s take on the latest in AI ethics, habit science, and the future of safe digital care.

Stay connected to our latest work.

New articles and upcoming event details are delivered directly to you. Joining the Vo.Care list ensures you stay informed on our newest insights and community opportunities without needing to check back.
