Meta’s plan to fight AI-generated misinformation and deepfakes is too little, too late

Meta announced on Tuesday it’s taking steps to label AI-generated content, including misinformation and deepfakes, on its Facebook, Instagram, and Threads social platforms. But its mitigation strategy has some major holes, and it’s arriving long after the threat of deepfakes has become real.

Meta said that “in the coming months” (when we’re in the thick of the 2024 presidential election), it will be able to detect AI-generated content on its platforms created by tools from the likes of Adobe and Microsoft. It’ll rely on the toolmakers to inject cryptographically signed metadata into AI-generated content, according to the specifications of an industry standards body, the C2PA. Meta points out that it already adds “visible markers, invisible watermarks, and metadata” to identify and label images generated by its own AI tools.

But those labeling tools are for the good guys. Bad actors who spread AI-generated mis/disinformation use lesser-known, open-source tools that produce content that’s hard to trace back to the tool or the creator, or they pick tools that make it easy to skip the metadata and watermarks entirely.
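How easy is it to defeat metadata-based labeling? As a minimal illustration (assuming Python with the Pillow imaging library; the file names are hypothetical), simply re-saving an image discards any embedded metadata unless the saver explicitly carries it forward, so a provenance label can vanish in a couple of lines of code:

```python
# Minimal sketch: metadata-based provenance labels do not survive a plain re-save.
# Assumes the Pillow library (pip install Pillow); file names are hypothetical.
from PIL import Image

img = Image.open("ai_generated.jpg")
print(dict(img.getexif()))  # whatever EXIF provenance data the tool embedded

# Pillow only writes EXIF/XMP forward if you pass it explicitly (e.g. exif=...),
# so a plain re-save yields a visually identical file with no metadata attached.
img.save("stripped.jpg", quality=95)
```

Signed schemes like the C2PA’s manifests can reveal that a still-attached label was tampered with, but they can’t stop the label from simply being removed in this way.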

There’s little evidence to suggest that Meta has the technology to detect and label that kind of content at scale. The company says it’s “working hard” to develop AI classifiers that can detect AI-generated content lacking watermarks or metadata. It also concedes that it isn’t yet able to detect AI-generated video or audio recordings at all. Instead, Meta says it’s relying on users to label “photorealistic video or realistic-sounding audio that was digitally created or altered” when they post it, and it may “apply penalties” to those who don’t.

In a blog post, Meta’s president of global affairs, Nick Clegg, portrays AI-generated mis/disinformation as an industry problem, a society-wide problem, and a problem of the future: “As it becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content.” But Meta controls, by far, the biggest distribution network for such content now, and the need to detect and label deepfakes is now, not a few months from now. Just ask Joe Biden or Taylor Swift. It’s too late to be talking about future plans and approaches when another high-stakes election cycle is already upon us.

“Meta has been a pioneer in AI development for more than a decade,” Clegg says. “We know that progress and responsibility can and must go hand in hand.”

Meta has been developing its own generative AI tools for years now. Can the company really say that it has devoted equal time, resources, and brain power to mitigating the disinformation risk of the technology?

Since its Facebook days more than a decade ago, the company has played a profound role in blurring the line between truth and misinformation, and now it appears to be slow-walking its response to the next major threat to truth, trust, and authenticity.

Source: Fast Company
