People’s view on AI in media

Worried about ‘stories of consequence’
Data shows audiences are still deeply ambivalent about the use of these technologies, which means publishers need to be extremely cautious about where and how they deploy them. Amy Ross Arguedas, a postdoctoral research fellow at the Reuters Institute for the Study of Journalism at the University of Oxford in the UK, and Nic Newman, a senior research associate at the same institution, unpack the issue.
Advances in artificial intelligence (AI) are disrupting many aspects of modern life, and the news industry is no exception.
In a year with a record-breaking number of elections worldwide, there has been considerable soul searching about the potential effect of so-called “deepfakes”, and other synthetic content, on democracies.
There have also been further disruptions to the business models and trust underpinning independent journalism.
Most audiences are just starting to form opinions about AI and news, but in this year’s Digital News Report survey, which we produced at the University of Oxford’s Reuters Institute for the Study of Journalism, we included questions about the subject in 28 markets, backed up with in-depth interviews in the UK, US and Mexico.

Ambivalence
Our findings reveal a high level of ambivalence about the use of these technologies.
They also offer insights for publishers looking to implement the technologies without further eroding trust in news, which has fallen in many countries in recent years.
It is important to keep in mind that awareness of AI is still relatively low, with around half of our sample (49% globally and 56% in the UK) having read little or nothing about it.
However, concerns about the accuracy of information and the potential for misinformation are top of the list when talking to those who are better informed.
Manipulated images and videos, for example around the war in Gaza, are increasingly common on social media and are already causing confusion.
As one male participant said: “I have seen many examples before, and they can sometimes be very good. Thankfully, they are still pretty easy to detect but within five years they will be indistinguishable.”
Some participants felt widespread use of generative AI technologies – those that can produce content for users in text, images and video – would probably make identifying misinformation harder, which is especially worrying when it comes to important subjects, such as politics and elections.
Across 47 countries, 59% say they are worried about being able to tell what is real and fake on the internet, up three percentage points on last year.
Others took a more optimistic view, noting that these technologies could be used to provide more relevant and useful content.

News industry
The news industry is turning to AI for two reasons.
First, publishers hope that automating behind-the-scenes processes such as transcription, copy editing and layout will reduce costs.
Second, AI technologies could help personalise the content itself, making it more appealing for audiences.
In the last year, we have seen media companies deploying a range of AI solutions, with varying degrees of human oversight, from AI-generated summaries and illustrations to AI-written stories and even AI-generated newsreaders.
How do audiences feel about all of this?
Across 28 markets, our survey respondents were mostly uncomfortable with news content created mostly by AI, even with some human oversight.
By contrast, there is less discomfort when AI is used to assist (human) journalists, for example in transcribing interviews or summarising materials for research.
Here, respondents are broadly more comfortable than uncomfortable.
However, we see country-level differences, possibly linked to cues people are getting from the media.
British press coverage of AI, for example, has been characterised as largely negative and sensationalist, while US media narratives are shaped by the leading role of US companies and the opportunities for jobs and growth.

Subject matter
Comfort with AI is also closely related to the importance and seriousness of the subject being discussed. People say they feel less comfortable with AI-generated news on topics such as politics and crime, and more comfortable with sports or entertainment news, subjects where mistakes tend to have less serious consequences.
“Chatbots really shouldn’t be used for more important news like war or politics as the potential misinformation could be the reason someone votes for a candidate over another one,” a 20-year-old man in the UK told us.
Our research also shows that people who tend to trust the news in general are more likely than those who don't to be comfortable with uses of AI where humans (journalists) remain in control.
This is because those who tend to trust the news also tend to have greater faith in publishers’ ability to responsibly use AI.
Interviews we conducted show a similar pattern at the level of specific news outlets: people who trust specific news organisations, especially those they describe as most reputable, also tend to be more comfortable with them using AI.
On the flipside, audiences who are already sceptical of or cynical about news organisations may see their trust further eroded by the implementation of these technologies.
As one woman from the US put it: “If any news organisation was caught using fake images or videos in any way it should be held accountable and I’d lose trust with them, even if they were being transparent that the content was created with AI.”

Disclosure
Thinking carefully about when disclosure is necessary, and how to communicate it, will be crucial for maintaining trust, especially in these early stages when AI is still unfamiliar to many people.
This is particularly so when AI is used to create new content that audiences will come into direct contact with. Our interviews tell us this is what audiences are most suspicious of.
Overall, journalists' use of AI is still in its early stages, and this makes it a time of maximum risk for news organisations.
Our data shows audiences are still deeply ambivalent about the use of these technologies, which means publishers need to be extremely cautious about where and how they deploy them.
Wider concerns about synthetic content flooding online platforms mean trusted brands that use the technologies responsibly could be rewarded.
But get things wrong and that trust could be easily lost. - The Conversation