Welcome to the Organization for Social Media Safety’s May newsletter. Thank you for taking the time out of your day to learn the latest tips, news, and best practices to keep you and your family safe on social media.
Our mission is to make social media safe for everyone. That includes not only protecting your family from current social media-related dangers, but also fighting to preempt emerging threats. One of our biggest concerns right now stems from new, so-called deepfake technology.
The seriousness of the threat posed by deepfake technology stands in stark contrast to the fact that most people do not yet realize what this technology is and what it will be able to do. Deepfakes are forged or fake videos created via artificial intelligence where a person’s likeness, including their face and voice, is realistically swapped with someone else’s. (Click here to view a sample.)
Deepfake technology could not better exemplify the Organization for Social Media Safety’s reason for being. This is technology that was born and raised on social media. It first appeared in November 2017, when an anonymous user on the social media platform Reddit posted code that leveraged existing artificial intelligence techniques to create realistic fake videos. Other users then shared the code on GitHub, a major online code-sharing service, where it became freely and publicly available. Applications like FakeApp soon appeared, simplifying the process for non-programmers. And now the technology continues to improve while its accessibility increases.
The ability of the everyday person to create realistic fake videos is unlike anything society has seen before. And, to be very clear, deepfake videos look real. It is precisely because of this hyperrealism that deepfake technology is so dangerous. Deepfake videos will be used to:
- Interfere with elections
- And more
These fears have already been justified. Since the introduction of deepfakes, they have been used extensively to insert women’s likenesses into pornographic films without consent, a practice known as malicious deepfake pornography. Many of these victims have been celebrities, but non-public persons have also been targeted and left with ongoing mental anguish, emotional distress, and long-term reputational damage:
- A California woman with a young child was targeted with pornographic deepfakes. She became wracked with fear and anguish over possibly losing custody of her child to her ex-spouse because of the videos.
- A Texas woman had pornographic deepfakes posted to her business pages, causing her lost income and serious reputational damage.
- An investigative reporter, Rana Ayyub, was targeted with pornographic deepfakes because of her work as a journalist and suffered severe emotional anguish as a result.
We believe there is an immediate need to act to protect against deepfakes. That is why, working with California State Assemblymember Timothy Grayson, the Organization for Social Media Safety has sponsored AB 1280 in the California State Assembly to protect against malicious deepfake pornography. AB 1280 would criminalize the creation and distribution of malicious pornographic deepfakes and provide a grant to the University of California to develop technology that protects against the dangers associated with deepfakes.
We hope this is an instance where the government acts to preempt a threat before it becomes widespread and causes serious harm to vulnerable women and girls. That is why we need your help. Please consider signing our petition for AB 1280 as a show of your support. With your help, we are hoping to pass this vital legislation in California next year.