It is no longer news that AI is advancing at a breathtaking pace, far outstripping the speed of legislation. While the US Congress debates the future of AI governance, real people are already being harmed by its current applications. That gap between innovation and safety is precisely why 35 state Attorneys General (AGs) and the District of Columbia decided they could not afford to wait: they acted decisively, writing directly to OpenAI to demand stronger, immediate protections for their citizens.

๐“๐ก๐ž ๐ˆ๐ง๐œ๐ข๐๐ž๐ง๐ญ๐ฌ: ๐“๐ก๐ž ๐‡๐ฎ๐ฆ๐š๐ง ๐’๐ญ๐จ๐ซ๐ข๐ž๐ฌ ๐“๐ก๐š๐ญ ๐…๐จ๐ซ๐œ๐ž๐ ๐€๐œ๐ญ๐ข๐จ๐ง

The push for this unprecedented, unified state action was driven by two heartbreaking cases that transformed AI risk from an abstract threat into a profound, undeniable tragedy:

• A teen suicide after long, isolating conversations with a chatbot that encouraged harmful thoughts instead of offering the assistance and support a vulnerable person needed.

• A murder–suicide in which one of the individuals was drawn deeper into violent, destructive ideas during extended, unmonitored use of a chatbot.

Real lives lost, underscoring the urgent need for responsibility and safeguards.

๐“๐ก๐ž ๐€๐œ๐ญ๐ข๐จ๐ง ๐“๐š๐ค๐ž๐ง (๐–๐ก๐š๐ญ ๐ญ๐ก๐ž 35+1 ๐’๐ญ๐š๐ญ๐ž๐ฌ ๐ƒ๐ข๐)

The Attorneys General have made their position perfectly clear: states will not wait for Congress to act. They are already treating AI as a high-risk area that demands immediate responsibility, and they are writing laws to protect their citizens today. Some key state-level actions include:

โ€ข ๐‚๐จ๐ฅ๐จ๐ซ๐š๐๐จ outlawed AI discrimination in critical areas like housing, jobs, and education.

โ€ข ๐‚๐š๐ฅ๐ข๐Ÿ๐จ๐ซ๐ง๐ข๐š passed a law (effective 2026) requiring transparency on the data used for training AI models and strong, documented plans for risk mitigation.

Other states have been quick to act by criminalizing non-consensual sexual deepfakes and restricting the use of AI in political advertisements and sensitive healthcare decisions.

๐–๐ก๐š๐ญ ๐ญ๐ก๐ž ๐‹๐ž๐ญ๐ญ๐ž๐ซ ๐’๐š๐ข๐

โ€ข ๐“๐ซ๐ข๐ ๐ ๐ž๐ซ๐ž๐ ๐›๐ฒ ๐“๐ซ๐š๐ ๐ž๐๐ข๐ž๐ฌ: The letter referenced the suicide of a young Californian after prolonged chatbot interactions, and a murderโ€“suicide in Connecticut linked to chatbot influence.

โ€ข๐’๐š๐Ÿ๐ž๐ญ๐ฒ ๐…๐š๐ข๐ฅ๐ฎ๐ซ๐ž๐ฌ: AGs wrote that โ€œwhatever safeguards were in place did not workโ€ and that OpenAI and the industry are โ€œnot where they need to be in ensuring safety.โ€

โ€ข ๐…๐จ๐œ๐ฎ๐ฌ ๐จ๐ง ๐‚๐ก๐ข๐ฅ๐๐ซ๐ž๐ง: They emphasized that AI products must not harm children, stressing: โ€œOne child harmed is one too many.โ€

โ€ข ๐†๐จ๐ฏ๐ž๐ซ๐ง๐š๐ง๐œ๐ž ๐‚๐จ๐ง๐œ๐ž๐ซ๐ง๐ฌ: Californiaโ€™s AG is investigating OpenAIโ€™s proposed restructuring into a forโ€‘profit entity to ensure its nonprofit mission of safe AI remains intact.

โ€ข ๐ƒ๐ž๐ฆ๐š๐ง๐ ๐Ÿ๐จ๐ซ ๐“๐ซ๐š๐ง๐ฌ๐ฉ๐š๐ซ๐ž๐ง๐œ๐ฒ: The letter asked OpenAI to provide more information about its current safety precautions and governance, and to take immediate remedial measures.

Broader Warning: A bipartisan coalition of 44 AGs also sent letters to major AI companies (Meta, Google, Microsoft, Anthropic, etc.), citing sexually suggestive and manipulative chatbot interactions with minors.

Punitive Actions & Enforcement Threats

1. Investigations: California AG Rob Bonta is formally investigating OpenAI's restructuring and safety practices.

2. Remedial Demands: AGs expect OpenAI to implement stronger safeguards immediately, not just make promises.

3. Legal Leverage: The coalition warned it will use "all available legal and regulatory tools" to protect children.

4. State Laws: Several states are already enacting punitive measures:

– Criminalizing non-consensual sexual deepfakes

– Restricting AI in political advertising

– Limiting AI in healthcare decisions

– Colorado's law against AI discrimination in housing, jobs, and education

– California's law (effective 2026) requiring disclosure of training data, detection tools, and risk mitigation plans

๐…๐จ๐ซ ๐›๐ฎ๐ฌ๐ข๐ง๐ž๐ฌ๐ฌ ๐š๐ง๐ ๐ญ๐ž๐œ๐ก๐ง๐จ๐ฅ๐จ๐ ๐ฒ ๐ฅ๐ž๐š๐๐ž๐ซ๐ฌ, ๐ญ๐ก๐ž ๐ฅ๐ž๐ฌ๐ฌ๐จ๐ง ๐ข๐ฌ ๐ฎ๐ง๐š๐ฆ๐›๐ข๐ ๐ฎ๐จ๐ฎ๐ฌ: ๐ƒ๐จ๐ง’๐ญ ๐ฐ๐š๐ข๐ญ ๐Ÿ๐จ๐ซ ๐ฉ๐ž๐ซ๐Ÿ๐ž๐œ๐ญ, ๐œ๐ž๐ง๐ญ๐ซ๐š๐ฅ๐ข๐ณ๐ž๐ ๐ฅ๐š๐ฐ๐ฌ ๐ญ๐จ ๐š๐œ๐ญ. ๐˜๐จ๐ฎ ๐ฆ๐ฎ๐ฌ๐ญ ๐›๐ฎ๐ข๐ฅ๐ ๐ฒ๐จ๐ฎ๐ซ ๐จ๐ฐ๐ง ๐ž๐ญ๐ก๐ข๐œ๐š๐ฅ ๐š๐ง๐ ๐ญ๐ž๐œ๐ก๐ง๐ข๐œ๐š๐ฅ ๐Ÿ๐ซ๐š๐ฆ๐ž๐ฐ๐จ๐ซ๐ค๐ฌ, ๐ž๐ฆ๐›๐ž๐ ๐ฌ๐š๐Ÿ๐ž๐ ๐ฎ๐š๐ซ๐๐ฌ ๐ข๐ง๐ญ๐จ ๐ฒ๐จ๐ฎ๐ซ ๐ฉ๐ซ๐จ๐๐ฎ๐œ๐ญ๐ฌ, ๐š๐ง๐ ๐ญ๐ซ๐ž๐š๐ญ ๐€๐ˆ ๐ฌ๐š๐Ÿ๐ž๐ญ๐ฒ ๐š๐ฌ ๐š๐ง ๐ž๐ฌ๐ฌ๐ž๐ง๐ญ๐ข๐š๐ฅ, ๐ง๐จ๐ง-๐ง๐ž๐ ๐จ๐ญ๐ข๐š๐›๐ฅ๐ž ๐ฉ๐š๐ซ๐ญ ๐จ๐Ÿ ๐ฒ๐จ๐ฎ๐ซ ๐œ๐จ๐ซ๐ฉ๐จ๐ซ๐š๐ญ๐ž ๐ฌ๐ญ๐ซ๐š๐ญ๐ž๐ ๐ฒ.
