Facebook Is Already Mistakenly Tagging Real Photos as "Made With AI"

Not Made With AI

Photographers are furious, accusing Meta of mistakenly slapping a "Made with AI" label on their legitimate work across Facebook, Instagram, and Threads.

As TechCrunch reports, the social media giant's efforts to tamp down the proliferation of unlabeled, AI-generated images are having plenty of undesirable consequences.

The false labels come despite an urgent need for such a system: tampered, and at times deceptively photorealistic, media is spreading misinformation online largely unchecked, especially on Facebook.

The false positives also highlight just how difficult it has become even for a top tech company to tell what's real and what isn't. Even OpenAI has struggled greatly to develop an effective detection tool that can distinguish between AI-generated and human-authored text.

Insult to Injury

Former White House photographer Pete Souza told TechCrunch that he believes changes Adobe made to its photo editing tools may be leading to false positives on Meta's end.

"What’s annoying is that the post forced me to include the 'Made with AI' even though I unchecked it," he said.

In February, Meta made a concerted PR push, announcing that it was working on "common technical standards for identifying AI content, including video and audio."

"As the difference between human and synthetic content gets blurred, people want to know where the boundary lies," the company's president of global affairs Nick Clegg wrote in a blog post at the time. "People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology."

The effort may be well-meaning, but the company's algorithms now appear to be falsely accusing photographers of uploading AI-generated content.

"My first brush with the dreaded 'made with AI' tag," one film photographer wrote in a Threads post. "Not a single bit of generative fill used anywhere on any of these (because I f**king hate AI) so that's a tad concerning."

"How is this photo 'Made with AI'?" photographer Peter Yan wrote in a separate post, tagging Instagram head Adam Mosseri. "Btw, I did not use generative AI, only Photoshop to clean up some spots. This 'Made with AI' was auto-labeled by Instagram when I posted it, I did not select this option."

It's a troubling new development, highlighting Meta's faltering efforts to combat a tidal wave of AI-generated garbage. Meanwhile, Facebook is being flooded with AI-generated fever dreams that go viral and rack up hundreds of likes.

Meta, however, maintains that its faulty AI labeling algorithm is simply a work in progress.

"We rely on industry-standard indicators that other companies include in content from their tools, so we’re actively working with these companies to improve the process so our labeling approach matches our intent," the company told TechCrunch.

More on AI labeling: Those "AI Influencers" Are Deepfaking Fake Faces Onto Real Women's Bodies Without Permission