NEWSTECHFORUM 2023

Gen AI Will Transform News. Experts Say The Rulebook Must Be Written Now

Leading technology executives from the BBC, AP, the Partnership on AI and Adobe said news organizations won’t be able to avoid the profound changes being ushered in by generative AI, and that the time to establish ethical and safe guidelines for its use is now.

Generative AI is expected to radically transform the way newsroom personnel work — that is, if it doesn’t take their jobs away completely. Discussion of the dangers generative AI may pose to journalism ran through TVNewsCheck’s NewsTECHForum last week, particularly during the panel “Chasing AI: Threatening Or Enhancing The News?” But before acknowledging some of the work being done to address potential doom-and-gloom scenarios, the panelists first outlined how the technology is already improving news production.

Ray Thompson, senior director of partners and alliances at Avid, said AI’s assistance with transcript creation has already proven valuable. Not only does it construct word-for-word transcripts, but it also produces summaries of interviews, allowing TV news producers to make quicker decisions about which portions of the footage to use in stories. Thompson added that the tech can then churn out a new transcript for a finished package, locate where key phrases were said within it and drop permanent markers onto it to aid in searches. He also said that Avid recently added a “mix searches” feature to its MediaCentral platform, allowing users to combine metadata and phonetic searches into one. 
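To make the “mix search” idea concrete, here is a minimal, hypothetical sketch of combining a metadata filter with a phonetic match over transcript segments. Every name in it — the segment layout, the function names, the use of Soundex as the phonetic code — is an assumption for illustration, not Avid’s MediaCentral implementation. The value of the phonetic leg is that a search can still hit even when the query or the transcript spells a word differently.

```python
# Illustrative sketch of a combined metadata + phonetic transcript search.
# All names and data structures here are hypothetical, not Avid's API.

def soundex(word: str) -> str:
    """Simplified classic Soundex: a rough phonetic code (letter + 3 digits)."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = "".join(c for c in word.lower() if c.isalpha())
    if not word:
        return ""
    encoded, last = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != last:   # skip vowels; collapse doubled codes
            encoded += code
        last = code
    return (encoded + "000")[:4]

def mix_search(segments, phrase, **metadata):
    """Return segments matching all metadata fields AND sounding like the phrase."""
    targets = {soundex(w) for w in phrase.split()}
    hits = []
    for seg in segments:
        if all(seg.get(k) == v for k, v in metadata.items()):
            if targets & {soundex(w) for w in seg["text"].split()}:
                hits.append(seg)
    return hits

segments = [
    {"start": "00:01:12", "reporter": "Smith", "text": "The mayor announced the budget"},
    {"start": "00:04:55", "reporter": "Jones", "text": "Fans cheered at the stadium"},
]
# Phonetic matching still finds the first segment despite the misspellings.
print(mix_search(segments, "mair anounced", reporter="Smith"))
```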

“It’s driving efficiencies,” said Thompson about gen AI. “It’s basically making things go faster, and hopefully allowing you to … deliver more content and deliver at scale and do so much faster.”

From the publisher’s perspective, Aimee Rinehart, senior project manager for AI strategy at the Associated Press, said that her organization has leveraged AI for nearly a decade. Starting in 2014, the AP used the tech to build earnings reports, growing the number of reports that year tenfold, from 300 (written by humans) to 3,000. 
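For readers curious how automated earnings stories work mechanically, here is a minimal template-fill sketch. The template, field names and figures are invented for illustration; the AP’s production system is far more sophisticated than a single format string.

```python
# Toy sketch of template-driven earnings-report generation.
# Template wording, field names, and the sample filing are all invented.

TEMPLATE = (
    "{company} on {date} reported {direction} of {eps_change:.0%} in "
    "earnings per share, posting EPS of ${eps:.2f} on revenue of "
    "${revenue:,.0f}."
)

def earnings_story(filing: dict) -> str:
    prior = filing["prior_eps"]
    change = (filing["eps"] - prior) / prior
    return TEMPLATE.format(
        company=filing["company"],
        date=filing["date"],
        direction="growth" if change >= 0 else "a decline",
        eps_change=abs(change),
        eps=filing["eps"],
        revenue=filing["revenue"],
    )

filing = {"company": "Acme Corp", "date": "Oct. 24", "eps": 1.32,
          "prior_eps": 1.10, "revenue": 412_000_000}
print(earnings_story(filing))
# Acme Corp on Oct. 24 reported growth of 20% in earnings per share,
# posting EPS of $1.32 on revenue of $412,000,000.
```

The appeal of this approach for a wire service is obvious from the sketch: once the template and the data feed exist, generating a story for the 2,700 previously uncovered companies costs essentially nothing per additional report.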

“There was actually a white paper that indicated that there was an uptick in the stock market around the time those were released because there were 2,700 companies that had never been written about and suddenly they had some visibility,” Rinehart recalled. “So those continue to run today and we’ll keep experimenting around workflow efficiencies.”

Over at the BBC, Laura Ellis, the broadcaster’s head of technology forecasting, said the company is using generative AI for “lots of language work: translation, transcription and personalization.” She said the tech is helping dyslexic workers at the company write more quickly, and it’s also being used to generate story headlines — with human oversight.

Ellis went so far as to say that the BBC is “a technology organization as well as a content organization,” one that is not “cutting humans out of the loop.” However, she also noted that the company is not blind to the potential pitfalls of AI. 

“We’re now trying to work out how all these new capabilities [can help us] create new things, new products for our audience, and how we can do that ethically and safely, because if we lose trust with our audiences, we lose everything,” Ellis said. “Generative AI has its moments and has its foibles, so [we’re] just trying to create a cross-organization conversation about where we want to go with it, what we want to do with it, and how we do that safely.”

Adobe is one organization trying to figure out ways to help news organizations navigate this minefield. Also sitting on the panel was Santiago Lyon, head of advocacy and education for the Content Authenticity Initiative at Adobe, who said the effort he oversees involves a “community of over 2,000 media and technology companies and others, working to set and implement an open standard around provenance,” referring to the origin of digital files and the tracking of potential manipulation. Such information, Lyon said, can be shared with viewers, boosting transparency about the content in front of them.

“You can think about it sort of like a digital nutrition label, in the same way that you might look at a food product in the supermarket and understand what’s in it,” Lyon said. “We’re also doing this work with hardware manufacturers, so it’s already in production cameras out there, working with smartphone manufacturers, working with editing tools, working with publishers and CMS manufacturers, and the whole initiative [is] underwritten by Adobe and [the tech] is incorporated into Adobe products.”
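The “nutrition label” metaphor maps onto a simple underlying mechanism: a signed manifest (who made the file, with what tool) bound to a hash of the content, so any later manipulation becomes detectable. The standard-library sketch below is a toy stand-in for that principle only — it assumes an HMAC shared key and a detached manifest, whereas the actual C2PA standard specifies certificate-based signatures and manifests embedded in the file itself.

```python
# Toy provenance "nutrition label": sign a manifest bound to the file's hash.
# HMAC with a shared key stands in for C2PA's certificate-based signatures.

import hashlib, hmac, json

SIGNING_KEY = b"newsroom-secret"  # placeholder for a real signing credential

def make_manifest(data: bytes, creator: str, tool: str) -> dict:
    manifest = {
        "creator": creator,
        "tool": tool,
        "content_sha256": hashlib.sha256(data).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify(data: bytes, manifest: dict) -> bool:
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, "sha256").hexdigest())
    # Valid only if the manifest is untampered AND the content still matches.
    return sig_ok and claimed["content_sha256"] == hashlib.sha256(data).hexdigest()

photo = b"...raw image bytes..."
m = make_manifest(photo, creator="A. Photographer", tool="Camera X firmware 1.2")
print(verify(photo, m))              # True: file untouched since signing
print(verify(photo + b"edit", m))    # False: content changed after signing
```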

Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit coalition committed to the responsible use of artificial intelligence, said that there will certainly be “unintended consequences” with greater deployment of gen AI in journalism. However, her group is developing guidance for AI use in newsrooms to help address those issues, including suggestions for newsroom leaders about how to approach AI integration, what problems to look out for and how to talk to production teams about its use.

“Journalistic standards are the helpful conduit to making a decision about what you disclose,” Leibowicz said. “What do you have as a starting place in terms of journalistic ethics about what requires a correction or what requires explanation of methods? … There’s certain precedent that we do have on our side to help us in the AI moment, in the sense of what we do in terms of disclosure.”

Another ongoing effort aims to produce guidelines for the use of generative AI and the transparency surrounding it. Lyon said that literature from the Coalition for Content Provenance and Authenticity (C2PA) has emerged as a “best-in-class” standard for such guidance. He advocated for its use in newsrooms, saying it is even consulted by legislators charged with building policy on the matter.

But in-house discourse among publishers and other decision makers will only be impactful if consumers understand gen AI and what role it plays in content production. “We’ve done a fair amount of research and consumers get confused quite quickly, depending on the demographic, as to what they are looking at, what is being conveyed,” Lyon said. “So you’re trying to find that balance to simplify it and make it effective.”

Consumers are also going to have to learn to trust gen AI. Leibowicz cited an Axios poll that said half of Americans believe AI will be used to create content that will impact the upcoming presidential election. 

“If you’re implementing AI in your newsroom, that’s amidst an ecosystem of public literacy that AI is kind of going to be infused in all types of content,” she said. “So this question of, what responsibility do you have to both meet people where they are in terms of thinking that most content is AI generated to date, but also not to induce a degree of skepticism, that’s going to make an already distrusting population more distrusting of the news media — and there’s no perfect answer.”

Citing research, Leibowicz said consumers don’t just want a label saying that news producers use AI; they want to understand where it is being used and for what. “There’s even a question of how noticeable they are” in broadcasts, she added. “That’s a design consideration more so than it is a linguistic one, but you want to make sure people can see these things.”

On top of all that, news stakeholders and consumers have varying perspectives on the threshold for disclosing AI use, Leibowicz said. Some people might want to know whether a writer used Google to generate ideas for headlines, while others might not consider it crucial at all.

Education initiatives can’t arrive quickly enough. Not only is AI already used in newsrooms on a growing scale, it’s also relied upon in spaces covered by journalists, creating yet another impetus to stay in lockstep with the tech.

Rinehart said, “If you live in a town with a bank or school or a police bureau, chances are they’re using some type of algorithm to determine where they do arrests, or how many people are going to graduate,” as well as other data that informs policy. She said local reporters should ask community leaders what is being used and how, and noted the Tampa Bay Times recently won a Pulitzer Prize for a story about a sheriff executing preemptive policing based on AI forecasting. 

“So, AI is happening at your level, no matter what happens in Silicon Valley,” Rinehart said.


Read more coverage of NewsTECHForum 2023 here.

Watch this session and all the NewsTECHForum 2023 videos here.

