We just delivered a major accountability win: The AI No Defense Act (AB 316), co-sponsored by the Organization for Social Media Safety, is now law in California!

When AI-based features on social media cause harm, companies can no longer blame “the AI” and escape liability. Because of our coalition and co-sponsorship, the AI No Defense Act puts responsibility on Big Social when social media platforms choose to deploy this new technology.

When AI steers our children to sexual content, drugs, or self-harm on social media, platforms now cannot simply shrug and say “the AI did it.” They will be held accountable, and our families will be safer.
— Marc Berkman, CEO, Organization for Social Media Safety

WHY THIS MATTERS

Newly deployed AI companions and chatbots accessed through social media have already caused harm, feeding children sexually explicit content, promoting illicit drug use, and encouraging self-harm. Recently, in California, 16-year-old Adam Raine tragically died by suicide. His parents “say that he had been using [an] artificial intelligence chatbot as a substitute for human companionship in his final weeks, discussing his issues with anxiety and trouble talking with his family, and that the chat logs show how the bot went from helping Adam with his homework to becoming his ‘suicide coach.’” (Source)

AB 316 makes clear: companies deploying dangerous AI chatbots will be held accountable in California.

A special thank-you to Assemblymember Maggy Krell for authoring the legislation.

MORE PROTECTIONS WE HELPED ADVANCE THIS YEAR IN CALIFORNIA

  • Cyberbullying (Off-Campus) Model Policy (AB 772): The California Department of Education must adopt a model policy for off-campus cyberbullying; school districts must adopt it or a locally developed equivalent. Our Teen Board of Directors Member Aerin Cohn bravely testified in support of this legislation, helping secure its passage. Thank you to Assemblymember Josh Lowenthal for authoring the legislation.
  • Stronger protections against non-consensual pornographic deepfakes (AB 621): Builds on the fight against malicious pornographic deepfakes that the Organization for Social Media Safety began in 2019, adding new enforcement tools to deter the creation and distribution of sexually explicit videos depicting an individual without consent. Thank you to Assemblymembers Bauer-Kahan and Berman for authoring the legislation.

WE ARE ON A MISSION

OFSMS is leading the fight to make social media safer. Our policy, education, research, and technology initiatives are built to eliminate social media-related harm.
