Oversight Board Recommends Meta Refine AI-Generated Image Policies
The Oversight Board has called on Meta to overhaul its policies regarding AI-generated explicit images. This move follows recent controversies involving AI-generated content.
The Board suggests changing language from “derogatory” to “non-consensual” and shifting policies to the Sexual Exploitation Community Standards. These recommendations aim to better protect victims of AI-generated content.
Current Policies and Areas of Concern
Meta’s current rules classify AI-generated explicit images under the “derogatory sexualized photoshop” rule in the Bullying and Harassment section. The Oversight Board believes this terminology does not adequately represent the non-consensual nature of these images.
Additionally, the Board recommends that Meta use a more general term for manipulated media than “photoshop,” and suggests that images should not need to be non-commercial or produced in private settings in order to be banned or removed.
The Board’s suggestions come after two high-profile cases of AI-generated explicit images of public figures were mishandled by Meta, highlighting the need for policy revisions.
High-Profile Incidents and Their Impact
One incident involved an AI-generated nude image of an Indian public figure on Instagram, which was not promptly removed despite user reports. Meta only acted after the Oversight Board intervened.
A similar situation occurred with an image of a U.S. public figure on Facebook. Though this image was quickly removed, the differing responses reveal inconsistencies in Meta’s handling of such content.
The inclusion of these images in Meta’s Media Matching Service repository occurred only after Board intervention, raising concerns about the proactive measures taken by the company.
Cultural Implications and Victimization
Breakthrough Trust, an Indian organization, emphasizes the cultural impact of non-consensual imagery, noting that it is often trivialized as identity theft rather than recognized as gender-based violence.
Victims who report such content often face secondary victimization, being questioned about their own actions, which makes the reporting process even more distressing.
Barsha Chakraborty from Breakthrough highlighted the rapid spread of such images across platforms, arguing that merely removing content from the source platform is insufficient to protect victims from harm.
User Reporting and Policy Challenges
Devika Malik, a platform policy expert, pointed out that relying on user reporting is not a reliable approach for managing non-consensual AI-generated media. This method places an unfair burden on victims to prove their identity and lack of consent.
Malik also mentioned that verifying these external signals could cause delays, allowing harmful content to gain traction. This challenge is exacerbated by the nature of AI-generated media.
The need for Meta to build more user awareness and improve reporting systems was stressed to ensure better handling of such cases.
Suggestions for Policy Improvements
Aparajita Bharti from The Quantum Hub suggested that Meta should offer more context and flexibility in reporting channels to help users accurately report content violations.
Meta’s current system can cause genuine violations to be dismissed on technicalities. Bharti advocates for reporting systems that prevent such oversights and ensure all harmful content is addressed.
The Oversight Board’s recommendations also include longer review times for reports and better communication with users regarding their report status.
Community Standards and Enforcement
Moving AI-generated explicit content policies to the Sexual Exploitation Community Standards section aligns with the nature of the content and ensures stricter enforcement.
Meta’s prohibition on non-consensual imagery should be clear and enforced consistently, regardless of the commercial or private nature of the content.
These policy changes are crucial for protecting individuals from the proliferation of harmful AI-generated images.
Conclusion
The Oversight Board’s recommendations are a crucial step towards refining Meta’s policies on AI-generated explicit content. Implementing them would better protect victims and ensure a more consistent approach to non-consensual imagery across Meta’s platforms.
These recommendations underscore the importance of clear, enforceable policies in meeting the challenges posed by AI-generated media and in providing a safer online environment.