[h] home [b] blog [n] notebook

the DoNotPay settlement

last year the FTC settled with DoNotPay — the company that marketed its AI as "the world's first robot lawyer" — for $193,000 and a requirement to notify customers that its AI couldn't actually do most of what it claimed. the monetary penalty was small enough that most of the coverage treated it as a slap on the wrist. i think that framing misses the point entirely.

the $193,000 isn't the story. the story is the legal theory. the FTC brought the case under Section 5 of the FTC Act — unfair or deceptive acts or practices — and the core allegation was not that DoNotPay's software crashed or leaked data. it was that the AI couldn't reliably do what the company said it could do. that's a capability claim enforced as a consumer protection matter. that's new, and it has implications that go well beyond one legal-tech startup.

the capability claim problem

almost every AI company makes capability claims. this is unavoidable — you can't sell a product without describing what it does. but AI capability claims are different from traditional software capability claims in a way that creates legal exposure most founders aren't tracking.

when a traditional software company says "our product does X," there's usually a fairly binary answer: it does or it doesn't. if it doesn't, that's a bug. the company fixes it or gets sued for misrepresentation. the epistemic situation is clear enough.

when an AI company says "our AI does X," the answer is usually probabilistic. the AI does X most of the time, in most contexts, on most inputs. sometimes it doesn't. sometimes it confidently claims to have done X when it actually produced something adjacent to X that looks like X if you're not paying close attention. there's no bright line between "working" and "broken." there's a distribution of outputs across a distribution of inputs, and somewhere in that distribution there are cases where the product clearly doesn't deliver what was claimed.

DoNotPay's specific problem was that it marketed its AI as capable of providing legal representation — drafting legal documents, arguing cases, navigating courts — at a quality level it couldn't consistently achieve. the FTC's position was that this is deceptive regardless of whether any specific user was harmed. the marketing created an impression the product couldn't reliably live up to. that's the violation.

what this means for AI product marketing

the immediate implication is that "our AI can do X" is now a legally significant statement in a way it wasn't before. the FTC action establishes a precedent — still soft, still early, but real — that overstating AI capabilities is actionable under consumer protection law. not just in the "we might get sued by a customer who relied on a wrong output" sense, but in the "the regulator can come after us for the marketing itself" sense.

this creates a strange incentive structure. the companies most likely to be exposed are the ones that marketed hardest. if you raised a Series A on the back of "AI that can replace your lawyer" or "AI that handles your medical decisions" and your product can't consistently do that, you have a gap between your public claims and your actual performance. that gap is now a regulatory surface.

the fix is not to stop making capability claims. the fix is to make accurate ones. which sounds obvious but requires actually knowing what your AI can and can't do, in what contexts, at what reliability level. most AI companies don't have a rigorous answer to this. they have benchmark numbers that were computed on a curated eval set, and vibes about how it performs in the wild. that's not enough anymore.
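what would a rigorous answer look like? one minimal version: log evaluation outcomes per context, and report a conservative lower bound on the success rate instead of the raw number. a sketch (the context labels and function names here are illustrative, not any standard eval framework):

```python
import math
from collections import defaultdict

def wilson_lower_bound(successes, trials, z=1.96):
    """lower bound of the Wilson score interval for a success rate.

    deliberately conservative: with small samples, the bound stays
    well below the raw rate. this is the honest number to market,
    not the raw benchmark score.
    """
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z**2 / trials
    center = p + z**2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (center - margin) / denom

def reliability_by_context(outcomes):
    """outcomes: iterable of (context, passed) pairs from logged evals.

    returns a per-context lower bound, because "works 90% of the time
    overall" can hide a context where it works 40% of the time.
    """
    tally = defaultdict(lambda: [0, 0])  # context -> [successes, trials]
    for context, passed in outcomes:
        tally[context][1] += 1
        if passed:
            tally[context][0] += 1
    return {ctx: wilson_lower_bound(s, n) for ctx, (s, n) in tally.items()}
```

a product that passes 90 of 100 logged cases in one context gets a defensible claim of roughly "over 80% reliable in that context" from this, not "our AI handles X" full stop. the gap between those two statements is exactly the regulatory surface.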

the insurance angle

the DoNotPay settlement sits in an interesting place for insurance purposes. it's not quite professional liability — DoNotPay wasn't a law firm and wasn't providing legal representation in the traditional sense. it's not quite product liability — nothing malfunctioned in the defective-product sense. it's not quite a data breach — no customer data was exfiltrated. it's a regulatory action against a company for marketing an AI product it couldn't back up with performance.

this category — call it regulatory and reputational exposure from AI capability gaps — doesn't fit cleanly into any existing coverage. a standard technology E&O policy covers claims by customers who suffered harm from a specific failure of your software. it doesn't obviously cover an FTC investigation into whether your website copy was accurate. a media liability policy might cover some of the marketing exposure, but media liability wasn't written with "our AI claims exceeded what our model can reliably deliver" in mind.

there's a product here for someone: AI-specific regulatory defense coverage. cover the legal costs of responding to a regulatory investigation into AI capability claims, the cost of any resulting fines or settlement, and the cost of any required customer notification or remediation. price it based on what the company claims its AI can do and whether those claims are independently verifiable. companies that have been through third-party capability testing get lower premiums. companies that are making aggressive claims they can't back up with evidence get priced accordingly.
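to make the pricing idea concrete, here's a toy sketch — the factor names and multipliers are made up for illustration, not actuarial numbers:

```python
def annual_premium(base_rate, claim_aggressiveness, third_party_tested):
    """toy premium model for AI regulatory defense coverage.

    base_rate: baseline annual premium for the company's size/sector.
    claim_aggressiveness: 0.0 (modest, verifiable marketing claims)
        to 1.0 (marketing far ahead of measured capability).
    third_party_tested: whether capability claims passed independent
        third-party testing.
    """
    # aggressive, unverifiable claims scale the premium up to 3x
    multiplier = 1.0 + 2.0 * claim_aggressiveness
    # independently verified claims earn a discount
    if third_party_tested:
        multiplier *= 0.7
    return base_rate * multiplier
```

the point of the sketch is the shape, not the numbers: the premium is a function of the gap between what the company claims and what it can prove, which is the same gap the FTC is now policing.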

the shape of what's coming

the FTC is not done here. the DoNotPay settlement was an early case against an easy target — a company that made very specific, very audacious claims about a product that couldn't deliver them. but the underlying theory is applicable broadly. any AI company that markets capabilities it can't reliably achieve is potentially exposed.

the EU AI Act creates a parallel dynamic on the other side of the Atlantic. it requires risk assessments, transparency, and documentation of AI system capabilities, particularly for high-risk applications. the record-keeping requirements alone create liability exposure: if you documented that your AI can do something, and it then demonstrably fails to do it in a specific case, you've created evidence against yourself.

none of this means AI companies should stop building or stop marketing. it means the legal environment around AI capability claims is hardening, faster than most founders are tracking. the ones who get ahead of it — with rigorous internal testing, honest external marketing, and some form of insurance against the exposure they're taking on — will be in a better position when the next FTC action lands and it's not DoNotPay.


DoNotPay's founder responded to the settlement by saying the FTC was "going after the little guy." maybe. but the FTC was also establishing a legal framework that will apply to the big guys when it reaches them. $193,000 was the price of the first case. the next settlement will cost more and land on a company with more users and more aggressive claims. that's how regulatory precedent works.