Pass AB 1280: Stop Malicious Pornographic Deepfakes
- Target: California State Assembly
- Region: United States of America
- Website: www.ofsms.org
The seriousness of the threat posed by deepfake technology stands in stark contrast to the fact that most people do not yet realize what this technology is and what it will be able to do. Deepfakes are forged videos created with artificial intelligence in which a person's likeness, including their face and voice, is realistically swapped with someone else's.
Deepfake technology could not better exemplify the Organization for Social Media Safety's reason for being. This is technology that was born and raised on social media. It first appeared in November 2017, when an anonymous user on the social media platform Reddit posted code that leveraged existing artificial intelligence algorithms to create realistic fake videos. Other users then shared the code on GitHub, a major online code-sharing service, where it became freely and publicly available. Applications like FakeApp soon appeared, simplifying the process for non-programmers. And the technology continues to improve even as it becomes more accessible.
The ability of everyday people to create realistic fake videos is unlike anything society has seen before. And, to be very clear, deepfake videos look real. It is precisely because of this hyperrealism that deepfake technology is so dangerous. Deepfake videos will be used to cyberbully, harass, defraud, defame, and interfere with elections.
And these fears have already been justified. Since the introduction of deepfakes, the technology has been used extensively to insert women's likenesses into pornographic films without their consent. Many of these victims have been celebrities, but non-public persons have also been targeted and left with ongoing mental anguish, emotional distress, and long-term reputational damage:
- A California woman with a young child was targeted with pornographic deepfakes. She became wracked with fear and anguish over possibly losing custody of her child to her ex-spouse because of the videos.
- A Texas woman had pornographic deepfakes posted to her business pages, causing her lost income and serious reputational damage.
- An investigative reporter, Rana Ayyub, was targeted with pornographic deepfakes because of her work as a journalist and suffered severe emotional anguish as a result.
We believe there is an immediate need to act to protect against deepfakes. That is why, working with California State Assemblymember Timothy Grayson, the Organization for Social Media Safety has sponsored AB 1280 in the California State Assembly to protect against malicious deepfake pornography: deepfake videos with explicit sexual content that use an individual's likeness without consent or that use a minor's likeness. AB 1280 will criminalize the creation and distribution of malicious pornographic deepfakes and provide a grant to the University of California to develop technology to protect against the dangers associated with deepfake technology.
We hope this is an instance where the government acts to preempt a threat before it becomes widespread and causes serious harm to vulnerable women and girls. That is why we need your help. Please consider signing our petition for AB 1280 as a show of your support to the members of the California Assembly who will be considering it.
We, the undersigned, call on the Public Safety Committee of the California State Assembly to pass AB 1280 to protect against malicious pornographic deepfakes.