Article on Slashdot by BeauHD, 2/12/25
Headline: “AI Summaries Turn Real News Into Nonsense, BBC Finds”
“A BBC study published yesterday found that AI news summarization tools frequently generate inaccurate or misleading summaries, with 51% of responses containing significant issues.
“The Register reports: The research focused on OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity assistants, assessing their ability to provide ‘accurate responses to questions about the news; and if their answers faithfully represented BBC news stories used as sources.’ The assistants were granted access to the BBC website for the duration of the research and asked 100 questions about the news, being prompted to draw from BBC News articles as sources where possible. … Overall:
– 51 percent of all AI answers to questions about the news were judged to have significant issues of some form.
– 19 percent of AI answers which cited BBC content introduced factual errors — incorrect factual statements, numbers, and dates.
– 13 percent of the quotes sourced from BBC articles were either altered from the original source or not present in the article cited.”
https://news.slashdot.org/story/25/02/12/2139233/ai-summaries-turn-real-news-into-nonsense-bbc-finds