AI disclosure labels may do more harm than good, study warns

Why it matters: AI disclosure labels could skew health decisions and erode public trust in science.
- Teng Lin and Yiqing Zhang of the University of Chinese Academy of Social Sciences (UCASS) ran a controlled experiment with 433 online participants to test how AI labels affect trust in scientific posts.
- The findings, published in JCOM (Journal of Science Communication), highlight a "truth‑falsity crossover effect": AI labels diminish trust in accurate content while inflating trust in misinformation.
- Regulators and platforms are mandating disclosure of AI‑generated content to curb misinformation (e.g., the EU AI Act, US FTC guidance), but the study suggests these rules may backfire.
- Public‑health implications: if false claims gain credibility under AI labels, they could sway vaccine uptake and treatment choices.
- AI developers and fact‑checking groups stand to benefit from clearer guidelines that avoid counterproductive labeling.
A new JCOM study by UCASS researchers shows that disclosure labels on AI‑generated social‑media science posts can paradoxically lower the credibility of true information while boosting that of false claims, undermining the transparency rules many governments and platforms are now implementing.