bbc[.]com/news/articles/c0m17d8827ko
Four major artificial intelligence (AI) chatbots are inaccurately summarising news stories, according to research carried out by the BBC. The BBC gave OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini and Perplexity AI content from the BBC website, then asked them questions about the news.
It said the resulting answers contained "significant inaccuracies" and distortions.
In a blog, Deborah Turness, the CEO of BBC News and Current Affairs, said AI brought "endless opportunities" but the companies developing the tools were "playing with fire".
"We live in troubled times, and how long will it be before an AI-distorted headline causes significant real world harm?", she asked.
***
In the study, the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer. Journalists with relevant expertise in the subject of each article rated the quality of the answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect statements, numbers and dates.
***
The report said that as well as containing factual inaccuracies, the chatbots "struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context".