State attorneys general are warning Microsoft, OpenAI, Google and other AI giants to fix "delusional" outputs.

Source: TechCrunch

Category: AI, artificial intelligence, chatbots, Google, Meta, Microsoft, OpenAI

After a series of disturbing mental health incidents involving AI chatbots, a group of state attorneys general sent a letter to major companies in the AI industry, warning them to fix their models' "delusional" outputs or risk violating state law.

The letter, signed by dozens of U.S. state and district attorneys general along with the National Association of Attorneys General, asks Microsoft, OpenAI, Google, and 10 other major AI companies (Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI) to implement a variety of new internal safeguards to protect their users.

The letter comes as a fight over AI regulations rages between the state and federal governments.

These safeguards include transparent third-party audits of large language models that look for signs of delusional or sycophantic outputs, as well as new incident-reporting procedures designed to notify users when chatbots produce psychologically harmful output. The letter says these third parties, which could include academic groups and civil society organizations, should be allowed to "evaluate systems pre-release without retaliation and publish their findings without prior approval from the company."

"GenAI has the potential to change the way the world works in a positive way. But it has also caused — and has the potential to cause — significant harm, especially to vulnerable populations," the letter said, citing a number of well-publicized incidents over the past year, including suicides and homicides, in which violence was linked to heavy chatbot use or chatbots assured users that they were not delusional.

The AGs also suggest that companies handle mental health incidents the same way technology companies handle cybersecurity incidents — with clear, transparent policies and procedures for reporting incidents.

The letter says companies should develop and publish "timelines for detecting and responding to sycophantic and delusional outputs." Much as data breaches are handled today, companies must also "promptly, clearly, and directly notify users if they have been exposed to potentially harmful outputs," the letter says.


Another request asks companies to develop "reasonable and appropriate safety testing" of GenAI models "to ensure that the models do not produce sycophantic and delusional outputs that may be harmful." The letter adds that this testing must be conducted before the models are released to the public.

TechCrunch was unable to reach Google, Microsoft, or OpenAI for comment before publication. This article will be updated if the companies respond.

Technology companies developing artificial intelligence have received a warmer reception at the federal level.

The Trump administration has declared itself unabashedly pro-AI, and over the past year, has made multiple attempts to pass a national moratorium on state-level AI regulations. So far, these attempts have failed, thanks in part to pressure from state officials.

Undeterred, Trump announced on Monday that he intends to sign an executive order next week that would limit states' ability to regulate artificial intelligence. The president said in a post on Truth Social that he hopes his executive order will prevent AI from being "destructive in its infancy."


Posted on December 11, 2025
