Really, you made this without AI? Prove it

Why it matters: The ineffectual implementation of content authentication standards like C2PA allows platforms to profit from unlabelled AI content.
- Human creators are seeking an "AI-free" label for their work but face a crowded field of labeling alternatives, many of them unreliable.
- Instagram head Adam Mosseri has suggested it will be "more practical to fingerprint real media than fake media" as AI advances.
- The C2PA content credentials standard, already used by Meta, was intended to authenticate human-made works but has been "wholly ineffectual," as platforms are motivated to hide AI content origins for clicks and cash.
- A Reuters Institute survey indicates a widespread perception that news sites, social media, and search results are "rife" with AI-generated content.
- Various solutions like Proudly Human, Not by AI, Made by Human, and No-AI-Icon exist, but suffer from questionable verification processes, reliance on trust, or use of notoriously unreliable AI detection services.
As generative AI blurs the line between human- and machine-made content, creators are desperately seeking a universal "AI-free" label to authenticate their work, but a lack of consensus on standards and an abundance of unreliable solutions are hindering widespread adoption. Instagram's Adam Mosseri suggests fingerprinting real media, and the C2PA standard already exists, but its implementation has been ineffectual because platforms have financial incentives to obscure the origins of AI content.

