Meta to adjust AI content policies after board said they were "incoherent and confusing"

Meta will adjust its policies on manipulated and AI-generated content, and will begin labeling such material ahead of the fall elections, after an independent body overseeing the company's content moderation found that its previous policies were "incoherent and confusing" and said they should be "reconsidered."

The changes stem from recommendations the Meta Oversight Board issued earlier this year in its review of a heavily edited video of President Biden that appeared on Facebook. The video had been manipulated to make it appear as if Mr. Biden was repeatedly and inappropriately touching his adult granddaughter's chest.

In the original video, taken in 2022, the president places an "I voted" sticker on his granddaughter after voting in the midterm elections. But the video under review by Meta's Oversight Board was looped and edited into a seven-second clip that critics said left a misleading impression.

The Oversight Board said that the video did not violate Meta's policies because it had not been manipulated with artificial intelligence and did not show Mr. Biden "saying words he did not say" or "doing something he did not do."

But the board added that the company's current policy on the issue was "incoherent, lacking in persuasive justification and inappropriately focused on how content is created, rather than on which specific harms it aims to prevent, such as disrupting electoral processes." 

In a blog post published on Friday, Meta's Vice President of Content Policy Monika Bickert wrote that the company would begin labeling AI-generated content in May and would adjust its policies to mark manipulated media with "informational labels and context" rather than removing it, unless the post otherwise violates Meta's community standards, which include bans on voter interference, bullying and harassment, and violence and incitement.

"The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling," Bickert wrote. "If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context."

Meta conceded the Oversight Board's point that the social media giant's approach to manipulated videos had been "too narrow" because it covered only those "that are created or altered by AI to make a person appear to say something they didn't say."

Bickert said that the company's policy was written in 2020, "when realistic AI-generated content was rare and the overarching concern was about videos." She noted that AI technology has evolved to the point where "people have developed other kinds of realistic AI-generated content like audio and photos," and she agreed with the board that it's "important to address manipulation that shows a person doing something they didn't do."

"We welcome these commitments which represent significant changes in how Meta treats manipulated content," the Oversight Board wrote on X in response to the policy announcement.

This decision comes as AI and other editing tools make it easier than ever for users to alter or fabricate realistic-seeming video and audio clips. Ahead of the New Hampshire presidential primary in January, a fake robocall impersonating President Biden encouraged Democrats not to vote, raising concerns about misinformation and voter suppression going into November's general election. AI-generated content about former President Trump and Mr. Biden continues to spread online.

