Meta says it will label AI-generated images posted to Facebook, Instagram, and Threads.
The company already labels AI images produced by its own tools, and says it hopes the new technology it is developing will create “momentum” for the industry to tackle AI fakery.
However, an AI expert told the BBC that such tools are “easily evadable”.
In an interview with Reuters, Sir Nick Clegg, Meta’s president of global affairs, acknowledged that the technology was “not yet fully mature”, but said the company aimed to “create a sense of momentum and incentive for the rest of the industry to follow”.
‘Easy to evade’
However, Professor Soheil Feizi of the University of Maryland’s Reliable AI Lab said such a system could be easy to evade.
“They may be able to train their detector to be able to flag some images specifically generated by some specific models,” he told the BBC.
But he added that such detectors can have a high false-positive rate and can be easily evaded by applying simple, light processing to the images.
“So I don’t think that it’s possible for a broad range of applications.”
Although much of the concern about AI fakery focuses on audio and video, Meta says its tool will not cover those types of media.
Instead, the company says it will ask users to label their own posts containing AI-generated audio or video, and it “may apply penalties if they fail to do so”.
Sir Nick Clegg also acknowledged that it would be difficult to test for text produced by tools such as ChatGPT.

“That ship has sailed,” he told Reuters.
‘Incoherent’ media policy
On Monday, Meta’s Oversight Board criticized the company’s policy on manipulated media, describing it as “incoherent, lacking in persuasive justification, and inappropriately focused on how content has been created”.
The Oversight Board is funded by Meta but operates independently of the company.
The criticism followed the Board’s ruling on an edited video of US President Joe Biden. The clip was not removed because it did not violate Meta’s manipulated media policy: it had not been manipulated with AI, and it showed Mr. Biden doing something he did not do rather than saying something he did not say.

While the Board agreed that the video did not breach Meta’s current rules on fake media, it recommended that those rules be updated.
According to Reuters, Sir Nick generally agreed with the decision.
The current Meta policy, he said, “is just simply not fit for purpose in an environment where you’re going to have way more synthetic content and hybrid content than before.”
In January, the company introduced a rule requiring political advertisements to disclose when they contain digitally altered photos or video.