
New York Bill Would Let Users Sue AI Chatbots for Medical and Legal Advice

NY Senate Bill S7263 targets 14 licensed professions and creates a private right of action with fee-shifting.

Liza Chan, AI & Emerging Tech Correspondent
March 5, 2026 · 4 min read
[Image: New York State Capitol building with abstract AI interface elements overlaid, representing technology regulation legislation]

A bill heading to the New York State Senate floor would make chatbot operators civilly liable when their AI dispenses advice in fields that require a professional license. Senate Bill S7263, sponsored by Senator Kristen Gonzalez, reached the Senate floor calendar on February 26, 2026, after advancing through the Internet and Technology Committee as part of a broader AI regulation package.

The bill covers 14 licensed professions, including medicine, law, dentistry, psychology, nursing, social work, and engineering, and also incorporates the unauthorized practice of law under the state Judiciary Law. If a chatbot gives a response that would constitute unauthorized professional practice had a human said it, the operator is on the hook.

The real weapon is the lawsuit mechanism

Plenty of proposed AI regulations land on AG enforcement or agency rulemaking. S7263 does something different: it creates a private right of action with fee-shifting. Winning plaintiffs collect legal costs from the defendant. That changes the economics in a way that anyone familiar with ADA web accessibility litigation will immediately recognize.

A legal analysis by Folding Sky drew the comparison directly: ADA digital accessibility cases produced over 5,000 lawsuits in 2025, with roughly 45% targeting repeat defendants and smaller businesses absorbing most of the early settlement pressure. S7263 has the same ingredients: private lawsuits, damages, fee-shifting, and an ambiguous standard that favors plaintiffs. The template-complaint-and-settle model practically writes itself.

Nobody defined "substantive"

The bill prohibits chatbots from providing any "substantive response, information, or advice" in covered domains. It never defines what "substantive" means. Is telling someone that ibuprofen is an anti-inflammatory substantive? Probably not. What about suggesting a dosage? Almost certainly yes. Explaining what an eviction notice is versus telling a tenant what to file and when? The bill doesn't say.

That ambiguity is simultaneously the bill's biggest vulnerability and its greatest source of leverage. Vague standards invite litigation. They also invite constitutional challenges: restricting speech based on its content tends to draw First Amendment scrutiny. Kevin Frazier, a research fellow at the Cato Institute, told Reason the bill limits access to information in a way that is unconstitutional and "contrary to both democratic values and a free market economy," which is a predictable objection from Cato but not necessarily wrong.

Taylor Barkley at the Abundance Institute called it "shortsighted at best and protectionist at worst." Both critics land on the same point: people who can't afford a lawyer or a doctor are the ones currently using chatbots as a substitute. Cutting off that access doesn't create more licensed professionals.

Who pays? Not OpenAI (probably)

One detail worth pulling out of the bill text: liability falls on the "proprietor," defined as whoever owns, operates, or deploys the chatbot. Third-party developers who license their technology to a proprietor are explicitly excluded. So if a hospital builds a triage tool on top of an API, the hospital holds the risk, not the company that built the model. A nonprofit running a tenant-rights chatbot faces the same exposure as a Fortune 500 company deploying a customer service bot.

The bill also says disclaimers don't help. Operators cannot escape liability by telling users they're talking to a bot. That kills the industry's current playbook of posting a small-print warning and moving on.

Timing and context

The bill landed on the floor calendar about three weeks after Character.AI and Google settled multiple lawsuits from families alleging chatbots contributed to teenage suicides. Gonzalez cited warnings from the American Psychological Association about chatbot therapists potentially driving vulnerable users toward self-harm. The broader legislative package includes protections for minors and synthetic content labeling requirements.

S7263 still needs a full Senate vote, and its companion bill in the Assembly (A6545) hasn't cleared committee. Even if it passes both chambers and gets the governor's signature, operators would have 90 days to comply. That's a short window to audit chatbot responses across 14 professional domains and build filters that don't gut the product's usefulness in the process.
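
What "building filters" would mean in practice is anyone's guess, but a first-pass compliance layer might screen draft responses for covered-domain content before they reach the user. The sketch below is a minimal illustration of that idea in Python; the domain patterns, the `screen_response` function, and the covered-profession subset are all hypothetical placeholders, not drawn from the bill text.

```python
# Hypothetical pre-release guardrail: flag draft chatbot responses that
# match patterns associated with licensed-profession advice. The patterns
# and domain names below are illustrative, not the bill's definitions.
import re
from dataclasses import dataclass

# Illustrative subset; S7263 covers 14 licensed professions.
DOMAIN_PATTERNS = {
    "medicine": [
        r"\byou should take \d+\s*(mg|milligrams)\b",
        r"\b(diagnos(e|is)|prescri(be|ption))\b",
    ],
    "law": [
        r"\byou should file\b",
        r"\bstatute of limitations\b",
    ],
    "psychology": [
        r"\byour symptoms (indicate|suggest)\b",
    ],
}

@dataclass
class ScreenResult:
    flagged: bool
    domains: list

def screen_response(text: str) -> ScreenResult:
    """Return which covered domains, if any, a draft response touches."""
    lowered = text.lower()
    hits = [
        domain
        for domain, patterns in DOMAIN_PATTERNS.items()
        if any(re.search(p, lowered) for p in patterns)
    ]
    return ScreenResult(flagged=bool(hits), domains=hits)

if __name__ == "__main__":
    draft = "For that swelling, you should take 800 mg of ibuprofen daily."
    result = screen_response(draft)
    if result.flagged:
        # A cautious operator would route this to a refusal or a
        # general-information template instead of sending it verbatim.
        print(f"Blocked; touches covered domains: {result.domains}")
```

Keyword matching like this would almost certainly be both over- and under-inclusive given the bill's undefined "substantive" standard; a real deployment would likely need a trained classifier and human review, which is part of why the 90-day window looks tight.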

The lawyers, at least, should do well regardless of outcome. If the bill passes, plaintiffs' firms get a new revenue stream. If it doesn't, the lobbying fees alone make the fight worthwhile.

Tags: AI regulation, New York, chatbot liability, AI legislation, healthcare AI, legal tech, private right of action, professional licensing, S7263, AI policy
Liza Chan
AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.

