Will AI Be Allowed to Prescribe Your Medication?


Most Americans picture a doctor when they think of getting a prescription: someone licensed, trained, and accountable for the decisions they make. Few would imagine a software system making that call instead.

In recent years, artificial intelligence has become intertwined with many people's lives. Despite controversy over its reliability, its economics, and the legality of how it is built, it has permeated nearly every facet of modern society. AI's continued growth is likely inevitable.

With machine learning expected to advance at unprecedented rates, 2026 may be the most pivotal year yet for AI's future. Americans should recognize that decisions made this year may fundamentally shape how AI is developed and integrated.

House of Representatives Bill 238 (H.R. 238) is yet another push for AI's integration into everyday life.

The text of the bill is as follows:

In this subsection, the term 'practitioner licensed by law to administer such drug' includes artificial intelligence and machine learning technology that are "(A) authorized pursuant to a statute of the State involved to prescribe the drug involved; and (B) approved, cleared, or authorized under section 510(k), 513, 515, or 564."

It is important to note that the bill does not mention specific drug classes or clinical settings.

Robotic Prescriptions

The increasing use of AI in healthcare has become a controversial topic: despite attractive gains in efficiency, AI's unreliable performance has drawn significant criticism. This tension has led to strict government regulation of AI in medical fields.


To put this caution into context, the AI market is expected to reach nearly 188 billion USD by 2030, more than a seventeen-fold increase from 2021. In the workforce, AI is projected to automate roughly 10% of all jobs.

Against this backdrop, the Healthy Technology Act of 2025 would fundamentally shift this political landscape. H.R.238 allows AI to prescribe drugs when two conditions are met:

1. The AI system must be authorized under state law to prescribe the specific drug.
2. The system must be approved, cleared, or authorized by the FDA under the relevant regulatory pathways.

As for accountability, the FDA pathways named in the bill, such as 510(k) clearance, were designed to evaluate medical devices. The Food and Drug Administration itself notes that clearance often hinges on "substantial equivalence" to existing technologies, not on long-term safety.

The Upsides

In terms of raw analytical capability, AI can far exceed humans on well-defined tasks. This holds in healthcare, where medical AI support systems have been shown to improve the accuracy of medical decisions by processing far larger datasets than any clinician could review.

For instance, studies indicate that AI systems can reduce prescription errors by up to 55%. Given the lethality of medical errors, any improvement in accuracy could prove critical, saving lives and preventing avoidable harm in the United States.

Beyond accuracy, AI offers significant gains in speed and consistency. Unlike human clinicians, AI systems do not experience fatigue, cognitive overload, or attentional decline over long shifts. AI systems could bring healthcare to rural areas, while also reducing the risk of burnout for doctors.

AI-driven medical diagnostic programs could "bridge healthcare gaps," ensuring access to prescriptions for individuals in underserved areas. Data from 2025 indicates that basic healthcare is accessible to only 51% of Americans, with 38% reporting experiences of being unable to pay for necessary care.

AI could also generate an estimated $200–$360 billion in annual value across U.S. healthcare, where administrative costs account for 25–30% of spending. As for speed, physicians currently spend nearly half their working hours on administrative tasks that AI could automate.

However, H.R. 238 stands a long way from achieving the benefits of AI’s integration in healthcare.

Reliability Concerns

In early 2023, a popular internet trend involved “gaslighting” the AI model ChatGPT into admitting that 2 + 2 = 5. Despite numerous improvements in accuracy and intelligence since then, AI continues to hallucinate. In the context of AI in healthcare, the question of reliability has remained persistent among critics.

AI “hallucinations” persist in medical applications, leading to a notable risk of inaccurate prescriptions. In fact, a study published in late 2025 warns that AI is not yet ready for clinical deployment, pointing to a litany of systemic flaws that cause misdiagnosis.

Across six large language models evaluated on clinical case tasks, hallucination rates exceeded 80% in certain categories; even heavily funded models hallucinated almost 50% of the time. And as high-quality training data grows scarcer, the hallucination problem risks becoming faster-growing and more dangerous.

These hallucinations, alongside the underrepresentation of many ethnic and racial groups in biomedical AI training datasets, exacerbate existing biases in healthcare. For instance, one study found that AI was 26% more likely to diagnose Black participants with chronic diseases than their white counterparts.

Therefore, along with increasing the probability of misdiagnosis for these excluded populations, AI integration through H.R. 238 may normalize and sanction medical discrimination.

Implications for People with Disabilities

Perhaps one of the most vulnerable populations concerning healthcare is the disabled community. Accounting for more than 12% of the US population, the disabled community is often underprioritized in public health policies.

The exclusionary nature of AI training datasets implicitly defines the parameters of a "proper" human body. Widespread use of AI in healthcare could therefore prove devastating for disabled individuals across the country: AI systems may interpret disability-related variation as error, anomaly, or pathology rather than legitimate difference.

Should H.R. 238 be enacted, the government would effectively legislate this problematization of disabled bodies.

Legality and Accountability

While it is important to acknowledge that humans make mistakes too, artificial intelligence raises a distinct accountability problem. When an AI system contributes to a clinical decision that causes harm, assigning liability becomes difficult: traditional malpractice regimes are designed around human negligence, and it is not settled whether developers, hospitals, or clinicians should be held responsible for mistakes made by algorithmic systems.


A Cautious Optimism

Although the potential benefits of AI in healthcare are evident, the risks today are simply too high. AI offers an undeniable edge in data processing, but given the probability of inaccurate prescriptions and the bias of AI datasets against minorities and disabled people, enacting H.R. 238 would prove reckless and harmful.

However, there is reason to look optimistically to the future. AI is an ever-evolving technology with the potential to mitigate both errors and bias, substantially improving healthcare. For now, however, that future is a far cry from what H.R. 238 would deliver.
