AI Lightens The Newsgathering Load

Broadcasters continue to turn to artificial intelligence for help with a widening array of tasks heavy on manual process: surfacing trending topics and content, generating transcriptions, tagging media with metadata, performing facial and object recognition, and assisting with clips, rights management and moderation.

To beat the competition, broadcasters are increasingly taking advantage of tools that use artificial intelligence (AI) to speed up and improve the newsgathering and content creation processes.

While AI and related technologies can remove manual processes from broadcast workflows, vendors are largely focusing on making it easier for journalists to use the media they have. AI-powered tools are helping journalists efficiently find trending topics that audiences care about, as well as surface content on those stories.

Not all AI engines are created equal, but they can generate automated transcriptions and perform facial or object recognition on fresh or archival footage, tagging that media with metadata to make it easily searchable. That technology can also help with things like generating highlights clips, caption compliance, rights management and content moderation.

AI Finds Trends

Tim Wolff, Futuri’s VP for TV and digital publishing innovation, says he’s mostly seeing local stations use AI tools to help replace mundane repetitive tasks typically done by humans. AI-driven transcription, for example, is a “big time and money saver” because “logging sound bites used to take forever,” he says.

Futuri’s TopicPulse is intended to help journalists find news relevant to their audience. Based on “10 years of AI crunching data,” it predicts the stories an audience will care about, he says.


“It brings a massive amount of data to bear for you to use,” Wolff says. It tells a journalist that “people care about this and will keep caring about it for the next few hours.”

It also helps a journalist tell whether a topic is truly trending with the audience or if there’s just a single social media post that took off, he adds.
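One simple way to make that distinction is to check whether a topic’s mentions come from many independent sources rather than a single dominant post. The sketch below is a hypothetical heuristic in Python, not Futuri’s actual method; the field names and thresholds are assumptions for illustration.

```python
def is_broad_trend(mentions, min_sources=5, max_share=0.5):
    """Heuristic: treat a topic as truly trending only if its mentions
    come from several distinct sources and no single post accounts for
    most of the engagement. `mentions` is a list of dicts with
    hypothetical "source" and "engagement" fields."""
    if not mentions:
        return False
    sources = {m["source"] for m in mentions}
    total = sum(m["engagement"] for m in mentions)
    top_post = max(m["engagement"] for m in mentions)
    # Many distinct voices, and the biggest single post stays under
    # max_share of total engagement.
    return len(sources) >= min_sources and top_post <= max_share * total
```

A real system would weight sources by credibility and track momentum over time; this only captures the many-voices-versus-one-viral-post test Wolff describes.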

X.News’ Conceptor, launched in April 2021, automatically analyzes the assets coming into the system and tells the user what is “dynamically growing and bubbling,” says Andreas Pongratz, CEO at X.News.

Emerging Topics from X.News

“They don’t need to search if they don’t want to,” Pongratz says. “They are getting told what’s happening in their area of interest.”

He says the system provides game-changing support for journalists.

“There’s no search needed. The system tells them what’s happening. They select the topic, click on premium search, and get those articles,” he says.

TVU Networks CEO Paul Shen says AI-powered tools can help a media organization “be the first one to bring the news to their audience” because they make it possible to help news people find relevant raw material quickly.

“If you talk about AI, at the end of the day, it’s about speed and efficiency,” Shen says.

TVU Media Mind, for instance, helps find relevant content on a given topic and push that out over social media quickly, he says. In the past, he adds, such an activity could take an hour or more, but now it takes 10 minutes or less.

Removing The Grunt Work

News production is full of time-consuming tasks, notes Rob Gonsalves, engineering fellow at Avid.

“Someone putting together a story could spend a lot of time doing grunt work,” Gonsalves says. “AI is good at that.”

For example, he says, it takes a long time for humans to sift through footage when seeking specific content, but AI makes it possible to efficiently find content by tagging it with metadata.

“It’s a lot easier to search for things if it’s been transcribed,” he says.

Raoul Cospen, director of business development at Dalet, says speech-to-text tools continue to improve.

“Out-of-the-box engines like speech-to-text are really major. They work with many different languages that can provide good out-of-the-box captions,” he says.

Dalet Media Cortex

Dalet already offers a tool that analyzes stories and transcripts of videos in the system to generate keywords that can be used to recommend fresh content or content from the archive to be used in a new story, he says.

“It’s like making a search based on keywords in an automated way. We do that thanks to what we call natural language processing, which is a text-based technology to extract keywords and give meaning to them,” he says.
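As a rough illustration of the kind of keyword extraction Cospen describes, here is a minimal Python sketch. It simply counts word frequency after stopword filtering; production NLP pipelines use part-of-speech tagging, entity recognition and semantic models, so treat this as a toy stand-in rather than Dalet’s method.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on",
             "for", "is", "are", "was", "were", "that", "this", "it",
             "with", "at", "as", "by", "be"}

def extract_keywords(text, top_n=5):
    """Return the most frequent non-stopword terms in a transcript,
    usable as search keys for recommending related content."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words
                     if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]
```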

The next iteration of this technology, he says, will likely be based mainly on semantic encoding of text content to improve the accuracy of the recommendations.

Octopus Newsroom, which produces newsroom computer systems, also provides a speech-to-text feature to free up journalistic resources that would be better spent elsewhere.

“We actually have an open platform to work with different engines because some tools are very good with English, but not necessarily good with other languages, so we have support for many different language tools that can do speech to text,” says Gabriel Janko, Octopus COO, who notes that automatic translation services are also in the pipeline.

“All the text coming back from the speech-to-text engine is indexed by our search engine, so if you are searching for anything that may have been said, it will show you the video,” he adds. “You can click on the particular word, and it will show you exactly where the person said it.”
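The click-to-seek behavior Janko describes rests on an index from spoken words to timestamps. A minimal sketch in Python, assuming the speech-to-text engine returns per-word timing; the data shapes here are illustrative, not Octopus’s API:

```python
def build_word_index(transcript):
    """Map each spoken word to the timestamps where it occurs, so a
    search hit can jump straight to that moment in the video."""
    index = {}
    for word, start_seconds in transcript:
        index.setdefault(word.lower(), []).append(start_seconds)
    return index

# Hypothetical per-word timing of the kind speech-to-text engines return.
transcript = [("breaking", 0.0), ("news", 0.4), ("from", 0.8),
              ("city", 1.1), ("hall", 1.4), ("news", 12.7)]
index = build_word_index(transcript)
# index["news"] lists every moment the word was spoken.
```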

Octopus also offers facial and object recognition to identify the presence of a person or object in video content even before staffers watch it. That recognition can range from specific people and things — President Biden, for instance — to more abstract examples like fires or floods, Janko says. He notes the service can also integrate with speech to text, allowing a user to click on a word or proper noun in the transcript and jump right to the moment(s) in the video where it appears.

Processing Archives

Archives are also a lot easier to sort through when content has gone through facial or object recognition, transcriptions and other AI engines, tagging that content with metadata to make it more searchable.

Tim Camara, senior product manager at Veritone, says AI engines are not created equally, so the company works with a range of first- and third-party “best of breed” AI engines to meet customer needs. For example, one celebrity recognition engine may work better for news figures while another one works better for sports figures, he says.

Camara says Veritone can automatically run content through AI engines during ingest.

Jacob Leveton, director of product management at Veritone, says the company is working with CBS News and other news agencies to process current content. This is helpful for creating and licensing clips of news hosts talking about certain topics, he says.

Archives projects can turn up valuable footage, he adds.

“Digitalizing everything, making it digital and searchable can uncover a lot of opportunity for them,” Leveton says.

As far as Avid’s Gonsalves is concerned, it’s all in the metadata. In the past, humans would have to tag content with metadata, but now AI systems “are really smart” at tagging content with metadata, and such approaches typically append more metadata to content than a human would.

“In the last four to five years, it can recognize famous people, transcribe the words that they’re saying, populate a database with who was at a press conference and what words were spoken,” he says.

Avid’s Media Composer uses Phrase Find, and Media Central uses a similar tool called Dialogue Search, to help find specific phrases in content. The tools use AI to search for phrases phonetically, he says.

Currently, he says, the systems are only able to look for verbatim content, but the ability to search for paraphrased content is an area of research, and Gonsalves has submitted a paper on the topic for the October SMPTE Media Technology Summit.
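Phonetic search works by comparing sound codes rather than spellings, so a query survives transcription and spelling variants. The sketch below uses a simplified Soundex encoding to illustrate the idea; Avid’s actual phonetic indexing is proprietary and far more sophisticated.

```python
def soundex(word):
    """Simplified Soundex: first letter plus up to three digits encoding
    the following consonant sounds. (Omits the full h/w separator rule
    of the classic algorithm, for brevity.)"""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out = word[0].upper()
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:   # collapse adjacent identical codes
            out += code
        prev = code
    return (out + "000")[:4]

def phonetic_match(query, transcript_words):
    """Return indices of transcript words that sound like the query,
    so a search for 'Smith' also finds 'Smyth' or a misrecognition."""
    target = soundex(query)
    return [i for i, w in enumerate(transcript_words)
            if soundex(w) == target]
```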

Moderation, Clips And Highlights Help

Saket Dandotia, COO of VideoVerse/Magnifi, says AI helps with tasks like content moderation and ad insertion. And in the world of sports, it can make it possible to “watch an entire match in five minutes,” he says.

VideoVerse’s Magnifi technology uses AI to detect key moments in games, such as goals or fouls, and to track a ball or player, he says.

In addition to flagging and clipping highlights in sports, it can also predict the outcome of a match based on analysis of data, he says.
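Once key moments are detected, turning them into clips is largely a matter of cutting padded windows around each event. A toy Python sketch, with event shapes, padding values and event types that are purely illustrative rather than Magnifi’s:

```python
def highlight_windows(events, pre=5.0, post=10.0, kinds=("goal", "foul")):
    """Turn detected key moments into clip in/out points (in seconds)
    by padding a window around each event of interest."""
    return [(e["t"] - pre, e["t"] + post)
            for e in events if e["kind"] in kinds]

# Hypothetical detector output: timestamped, labeled events.
events = [{"t": 30.0, "kind": "goal"}, {"t": 45.0, "kind": "pass"}]
clips = highlight_windows(events)  # only the goal becomes a clip
```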

Veritone’s Camara says that clients still tend to prefer to “have a human in the loop at some point” when it comes to automated clip creation, but that the company is working to make it easier for AI to be part of the workflow.

He also says AI can perform copyright checks on outgoing content to ensure a media organization doesn’t expose itself to legal risk.

Futuri’s Wolff says media organizations should not seek out AI-powered technology merely to do tasks faster, but to provide better news coverage and make journalism better. When it comes to the AI tools that are being created, he says, “the end goal needs to be, for all of us, how are we improving journalism and how are we serving our viewers better?”

As technologies come to the market, workers often worry that such technologies will leave them without a job. TVU’s Shen doesn’t believe that’s the case with AI, which he says won’t be smarter than people for a long time.

“I don’t believe that AI can replace people. AI makes the process more efficient. For now, people will be the one who make the story. The AI makes the media easy to use,” Shen says. “Customers who don’t use AI become much less competitive.”
