PARIS — A sweeping multinational study coordinated by the European Broadcasting Union (EBU) and spearheaded by the BBC has delivered a blunt warning: Artificial Intelligence (AI) assistants are consistently unreliable sources for news and current events. The research, which brought together 22 public service media organizations across 18 countries, found systemic failures in how leading AI tools process and present journalistic content, regardless of the language or the specific platform being tested.
The study involved professional journalists evaluating more than 3,000 responses from four major AI assistants (ChatGPT, Copilot, Gemini, and Perplexity) against key criteria for journalistic integrity. The findings are stark: 45% of all AI answers contained at least one significant issue, pointing to deep-seated problems with accuracy and attribution across these fast-growing tools.

Key Findings: Systemic Flaws Threaten Public Trust
The research determined that the errors were systemic rather than isolated, spanning factual inaccuracies, outdated information, and outright fabrication, and collectively undermining the public's ability to trust its information sources. A startling 20% of responses contained major accuracy issues, including entirely hallucinated details and the presentation of long-outdated news as current. In one case cited by the study, an AI assistant incorrectly claimed that a deceased religious figure was still alive.
The single largest problem area was sourcing: 31% of responses showed serious issues such as missing, misleading, or incorrect attributions. The failure to provide verifiable links, or worse, the false attribution of incorrect information to legitimate news brands, makes it nearly impossible for users to check the credibility of an AI's summary. Of the platforms tested, Gemini performed the worst, with significant issues found in 76% of its responses, a result largely attributed to its poor sourcing. Compounding these problems, the AI assistants sometimes failed to distinguish genuine news from satirical columns or parody, presenting invented events as fact.

Danger to the News Ecosystem
The distortion of news content is seen as a critical threat, especially as AI assistants rapidly take over the role of traditional search engines for many users. The EBU and the BBC stressed that these systemic, cross-border, and multilingual failures endanger public trust; when people cannot trust the information they receive, democratic participation and media literacy ultimately suffer. The study concludes that AI assistants are simply "still not a reliable way to access and consume news" and urges AI companies, regulators, and news organizations to work together to improve transparency and strengthen sourcing mechanisms to protect the integrity of the news ecosystem.
With additional reporting from abs-cbn.com