Everyone is debating whether MedVI is an AI success story, a marketing story, or a billion-dollar fraud. They’re all missing the real story. This is a payments and regulatory enforcement story — and I can tell you exactly how it ends, regardless of whether the allegations are true.
The coverage has been breathless. Forbes, TechCrunch, and a hundred LinkedIn threads are fighting about whether the AI actually worked, whether the results were real, whether the growth metrics were manufactured. That debate is interesting. It is not the important debate.
The AI community got caught in the crossfire of a story that was never fundamentally about AI. It was about a company that found a real market signal, moved fast, and didn’t govern what that speed was generating. The AI made the speed possible. The absence of governance made the exposure inevitable.
In Regulated Markets, Distribution Is Not the Moat
What MedVI actually built was a distribution layer for healthcare products — a system that could acquire patients, generate compelling content at scale, route them through intake flows, and convert them into paying customers. That is a real capability. AI made it fast and cheap in ways that weren’t possible three years ago.
That works. Until it doesn’t.
In unregulated consumer markets, distribution is the moat. Build the audience, own the channel, and the product almost doesn’t matter. In regulated healthcare markets, distribution is the exposure surface. Every patient acquired is a documented interaction. Every claim made in the acquisition funnel is a marketing statement subject to FTC and FDA scrutiny. Every transaction is a data point in a pattern that payment networks, regulators, and upstream partners are continuously analyzing.
Healthcare payments companies don’t fail because their technology stops working. They fail because they trigger three enforcement layers simultaneously — and those layers don’t negotiate.
Layer 1: the regulators. If claims are misleading, it is not a branding issue. It is an enforcement action. The FTC and FDA don’t stop at warning letters. When the pattern is clear — and AI-generated marketing at scale makes patterns very clear, very fast — they act. Consent decrees, civil penalties, injunctions. The business doesn’t pause while they investigate. The investigation is the pause.
Layer 2: the payment networks. If deceptive marketing is proven, the payment rails don’t negotiate. They terminate. Visa and Mastercard maintain their own compliance programs for healthcare merchants. High chargeback ratios, regulatory flags, and media exposure all feed risk models that operate independently of whatever legal proceedings are underway. A merchant category code violation or a network-level fraud determination doesn’t require a court ruling. It requires a threshold being crossed.
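To make the mechanics concrete, here is a minimal sketch of threshold-based flagging in Python. Every number and input in it is an illustrative assumption (the card brands publish their own monitoring-program figures, and real risk models ingest far more signals), but the shape is the point: the decision is arithmetic, not deliberation.

```python
# A minimal sketch of threshold-based merchant risk flagging.
# All thresholds and inputs are illustrative assumptions, not the
# card brands' actual published monitoring-program figures.

def chargeback_ratio(chargebacks: int, transactions: int) -> float:
    """Monthly chargebacks divided by monthly transactions."""
    return chargebacks / transactions if transactions else 0.0

def merchant_risk_flagged(chargebacks: int, transactions: int,
                          regulatory_actions: int, adverse_media_hits: int) -> bool:
    """No court ruling required: the flag fires when a threshold is crossed."""
    ratio = chargeback_ratio(chargebacks, transactions)
    return (
        (ratio >= 0.009 and chargebacks >= 100)  # illustrative ratio + count floor
        or regulatory_actions > 0                # e.g. an FTC or FDA action on file
        or adverse_media_hits >= 5               # adverse-media feed crossing a limit
    )

# Scaling at machine speed crosses these limits at machine speed:
print(merchant_risk_flagged(chargebacks=240, transactions=20_000,
                            regulatory_actions=0, adverse_media_hits=2))  # True
```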
Layer 3: the supply chain. Telehealth providers, compounding pharmacies, and payment processors don’t absorb regulatory risk. They cut it off. When Layers 1 and 2 start moving, every upstream partner reassesses their exposure. Contracts have termination-for-cause clauses. Relationships that took years to build dissolve in the time it takes for a regulatory press release to circulate in a compliance Slack channel. The product can’t ship without the supply chain. When the supply chain withdraws, the distribution layer is worthless.
Healthcare merchants operating online are expected to meet certification and ongoing monitoring standards. LegitScript’s Healthcare Merchant Certification is not optional guidance — it is the bar that payment processors and card brands use to assess risk. If your company is acquiring healthcare patients and processing payments for healthcare products, this certification is the baseline, not a nice-to-have.
→ LegitScript Healthcare Certification Requirements
This Wasn’t an AI Failure. It Was a Governance Failure.
MedVI had access to the rules. Every company operating in this space has access to the rules. The FTC Act is public. The FDA’s guidance on health claims is public. The card brand merchant compliance requirements are documented. LegitScript’s certification standards are published. None of this was hidden.
They weren’t operating in a regulatory gray area that required specialized legal interpretation. They were operating in a well-lit space with clearly posted boundaries, at high velocity, generating output that touched those boundaries thousands of times per day.
Speed and conversion are what AI optimizes for when you don’t tell it what else to optimize for. If you deploy an AI content system into a regulated market and your only success metrics are acquisition cost and conversion rate, you will get content that is very good at acquiring and converting — and completely indifferent to the regulatory environment it’s operating in.
“AI fluency is not domain fluency. Just because you can generate at scale doesn’t mean you understand the system you’re operating inside.”
What AI optimizes for and what regulated markets require are not the same list.
What AI optimizes for:
- Conversion and engagement
- Speed of output
- Scale and volume
- Stylistic consistency
- Audience resonance
What regulated markets require:
- Regulatory boundaries
- Evidentiary standards for claims
- Acceptable pharmaceutical marketing
- Enforcement risk thresholds
- Domain-specific legal exposure
So unless you explicitly govern it — unless you build intent verification into the system before content is generated, not after — you will produce fast, polished, high-converting content that is wrong in ways that only matter when a regulator, a card network, or a supply chain partner decides they’ve seen enough.
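To make "intent verification" concrete, here is a minimal sketch of that gate in Python. The Policy structure, the extract_claims placeholder, and every phrase in it are hypothetical illustrations, not a real library's API; a production gate would use real claim extraction and a policy written with counsel.

```python
# A minimal sketch of an intent-verification gate. Policy, extract_claims,
# and every field name here are hypothetical illustrations, not a real API.

from dataclasses import dataclass

@dataclass
class Policy:
    prohibited: set       # phrases that never ship, e.g. {"cures", "fda approved"}
    needs_evidence: set   # claim phrases that require substantiation on file

def extract_claims(content: str):
    """Placeholder: in practice this is an NLP claim-extraction step."""
    return [c.strip().lower() for c in content.split(".") if c.strip()]

def verify_intent(content: str, policy: Policy, evidence_on_file: set) -> bool:
    """The gate runs before content reaches any acquisition channel."""
    for claim in extract_claims(content):
        if any(p in claim for p in policy.prohibited):
            return False  # hard stop: prohibited claim
        if any(s in claim and s not in evidence_on_file
               for s in policy.needs_evidence):
            return False  # claim requires evidence the company does not hold
    return True

policy = Policy(prohibited={"cures", "fda approved"},
                needs_evidence={"clinically shown"})
draft = "Clinically shown to reduce symptoms. Ships in two days."
print(verify_intent(draft, policy, evidence_on_file=set()))  # False: blocked
```

The design choice that matters is where the check sits: a False here means the content never reaches a channel, rather than being flagged for cleanup after it has already converted patients.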
The Uncomfortable Signal Everyone Is Ignoring
MedVI found a legitimate signal. There is genuine demand in the market they were serving. Patients want accessible, affordable healthcare. The friction in traditional healthcare delivery is real and the opportunity to reduce it with technology is real. None of that is in dispute.
The mistake wasn’t identifying the signal. The mistake was treating a regulated market like a consumer internet market and assuming that if the product worked and people wanted it, the compliance details would sort themselves out or could be addressed after scale was achieved.
Operating without governance in a regulated market and not getting caught doesn’t mean you won’t get caught. It means you’re accumulating a debt that compounds every day you operate. The enforcement mechanisms aren’t watching in real time — they’re watching in aggregate. By the time they respond, the pattern is already large enough to define the narrative and large enough to make remediation extremely difficult.
AI compressed the timeline. A human-operated marketing team building the same non-compliant content would have produced it slowly enough that the pattern might have been caught and corrected internally. An AI system producing the same content at scale built the pattern at machine speed. The enforcement response runs on its own clock. The accumulation of exposure doesn’t wait for the response.
The Real Lesson for Builders Using AI
The one-person unicorn narrative is seductive right now, and it’s not wrong. AI genuinely does compress execution timelines in ways that change the economics of building. A small team can now build what used to require hundreds of people. That capability is real.
What that narrative skips is the other side of compression. You can “Uber-ify” the front end of a regulated market — the acquisition, the intake, the experience layer — without Uber-ifying the back end. The back end of healthcare is still CMS billing rules, FDA approval pathways, HIPAA requirements, state medical board regulations, and card brand merchant compliance. AI didn’t change any of that. It just made it possible to generate exposure across all of it faster than any previous technology allowed.
The businesses that will win in AI-enabled regulated markets are not the ones that move fastest. They’re the ones that move with governed intent — where the AI is executing a direction that has been explicitly defined against the constraints of the market it’s operating in, verified before output reaches a channel, and monitored continuously against the signals that enforcement systems are tracking.
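As a sketch of what "monitored continuously" can mean in practice, here is a rolling-window drift check in Python. The window size, the violation budget, and the idea of a per-output compliance flag are all illustrative assumptions.

```python
# A minimal sketch of continuous drift monitoring over a rolling window.
# The window size and violation budget are illustrative assumptions.

from collections import deque

class DriftMonitor:
    """Track the same aggregate signal enforcement systems watch, in real time."""
    def __init__(self, window: int = 1000, violation_budget: float = 0.005):
        self.recent = deque(maxlen=window)       # rolling record of recent outputs
        self.violation_budget = violation_budget

    def record(self, output_flagged: bool) -> None:
        self.recent.append(output_flagged)

    def drifting(self) -> bool:
        """Fire while the pattern is still small, not after it defines the narrative."""
        if not self.recent:
            return False
        return sum(self.recent) / len(self.recent) > self.violation_budget

monitor = DriftMonitor(window=1000, violation_budget=0.005)
for flagged in [False] * 990 + [True] * 10:   # ten flagged outputs in a thousand
    monitor.record(flagged)
print(monitor.drifting())  # True: a 1% violation rate exceeds the 0.5% budget
```

The specific check matters less than the clock it runs on: the system halts itself while the pattern is still small, instead of waiting for the aggregate view a regulator or card network eventually assembles.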
MedVI isn’t a cautionary tale about AI. It’s a cautionary tale about ungoverned execution in a market that has no tolerance for it. The AI worked. The governance didn’t exist. Those are different problems, and conflating them leads to the wrong conclusion — that the solution is a more careful AI, when the actual solution is an explicit governance layer between the AI and the market it’s operating in.
The question every builder should be asking before they deploy AI into a regulated market isn’t “can this AI do what I need it to do?” It’s “have I defined what this AI is allowed to do, verified that its outputs conform to those definitions, and built detection for when it drifts?”
AI is not the risk. Ungoverned intent is.
This is precisely the gap Intent Engineering is designed to close — structural verification between the AI output and the human decision it’s supposed to inform.
For a deeper look at why the model itself isn’t the fix, read You Can’t Fix the Model.
Intent governance for regulated markets.
VertixIQ verifies AI output against source truth before it becomes exposure. Built for legal, financial services, and healthcare.
Try Preflight →