As Gen AI Inches Closer To TV News, Stations Build ‘Policy Groups’ To Be Ready
Recently, leaders at Gray Television conducted a training session to help employees better understand the applications — and pitfalls — of generative artificial intelligence, or “gen AI.” With many of her colleagues watching, Claire Ferguson, the company’s assistant general counsel, put herself in the crosshairs of a gen AI platform.
“I wanted to sort of scare folks, so I asked AI to write a biography about me,” she says. “I gave it a ton of information about me, which I probably shouldn’t have done, but I wanted it to find my digital footprint.”
Ferguson ultimately asked the gen AI platform to craft a bio about her three times. On each occasion she fed it her full name, her birth year and other personal details. The results?
“In two of the three times I asked it to do it, it said I had died in a car accident the year before,” Ferguson says. Though the facts within the text were “just wholesale made up,” she says the copy read with an authoritative, convincing tone.
Such horror stories illustrate why gen AI cannot completely replace reporters who work for reputable news organizations, at least for now. But that hasn’t stopped the technology from infiltrating newsrooms to smooth out workflows for its human overseers. (This very article was made possible with help from gen AI, which was used to transcribe interviews.)
To address the many worms emerging from the now-open can of gen AI, a number of news organizations are mobilizing focused task forces. Gray Television, E.W. Scripps and the Graham Media Group provided TVNewsCheck with details about the groups they’ve formed: who’s included, what issues they discuss, what policies they’ve established so far and more. Here’s what they disclosed.
Proactive Disclosure At Gray
Gray formed what James Finch, the company’s VP of news services, calls its “AI policy group” in spring 2023. Both he and Ferguson are members, as are, according to Finch, multiple GMs and executives from news, sales and marketing. The collective also includes engineers and representatives from HR, as well as Gray’s COO Sandy Breland.
They initially met up to three times a week to establish company guidelines on AI, which were published in June 2023 on Gray stations’ websites. (Ferguson believes Gray was the first major news corporation to take such a transparent step.) The guidelines are found in the footer of each station site and, for now, run a single page long.
The document acknowledges “the significant potential for generative AI and other emerging technologies to be powerful tools in the editorial, creative and business processes.” However, it also points out that “AI-generated content can distort and misrepresent information, lack context and create convincing but false statements.”
The policy page goes on to say that such content may infringe on intellectual property rights and create security risks; thus, it must be “used responsibly and helpfully, consistent with standards applied to our core business of impactful and trustworthy journalism.” In the document, Gray also promises to prioritize people — “viewers, employees, and communities” — over the benefits of AI use and to distribute only news content “created by humans.”
“With media, whether it’s well-founded or not, [in consumers] there’s an eroding sense of, ‘Can I really believe that is true?’ even for stations that have been on the air for 60 years,” Finch says. “We don’t want to lose equity with the audience thinking that there are bots out there generating the content that is valuable to them.”
The Gray policy outline also says — in boldface — that all AI vendors with whom company employees may engage must be vetted and approved by someone from the Gray legal and technology teams. Providing an example, Finch says a transcription-production service used by Gray personnel was approved in part because it guarantees that any files sent by Gray employees, as well as the documents it produces, won’t be leaked. Once deleted by a user, the files will also disappear from all data caches permanently.
Gray’s AI policy group now meets on at least a quarterly basis, though Finch says if a pressing issue arises they’ll discuss it as soon as possible. The team regularly fields questions and hears feedback from Gray’s stations; they’ve also conducted gen AI-focused training webinars and produced training videos with curated content pertinent to different departments.
There is a chance that gen AI might leave a fingerprint on a Gray newscast at some point. Ferguson says the policy group is discussing a hypothetical circumstance where a station report about rising sea levels might feature an AI-generated visual representation of what a specified area could look like under, say, six feet of water.
Should such a package be developed, Ferguson says it would invoke many considerations and require “high, high level managerial approval.”
She adds: “Importantly, it has to be disclosed.”
At Scripps, Checks And Balances
According to Christina Hartman, VP of news standards and editorial operations for Scripps News and Court TV, the origin story of her company’s AI “governance committee” begins with an email sent to employees by Scripps’ Chief Ethics Officer Dave Giles in February 2023. In it, he encouraged workers to familiarize themselves with gen AI technology. He also wrote that Scripps journalists are responsible for the facts they gather, and that the use of gen AI for article and image generation is prohibited.
Since then, Hartman, who co-chairs the governance committee, says the group has met numerous times to shape decisions about how the company will be both “proactive and reactive” to gen AI issues.
“It started with a review of use cases already in practice, but also perspective, like: ‘What are people’s ideas? Where do folks who are real enthusiasts see it going?’” Hartman says. The committee announced itself as a resource to employees, telling them to freely ask questions and present their concerns. They also issued initial company guidelines for gen AI use, which Hartman says will soon be updated.
The group is exploring the launch of a compliance module associated with the guidelines as well, to ensure employees are familiar with them. Hartman also says the committee may develop programming like “hackathons” to help personnel learn how to deal with problems associated with gen AI, while also highlighting the ways it can make certain tasks easier for them.
The Scripps AI governance committee’s other co-chair is CIO Pat Browning. Giles is also among its ranks, as are reps from development, data and legal teams, per Hartman. Specialists in privacy and copyright are included, too.
Stephen Turnham, Washington bureau chief and managing editor of Scripps News, is also on the team. He worries that news personnel will come to “trust” gen AI too much and won’t see “hidden manipulation coming our way.
“It’s the smartest, most experienced person in the room that you can bounce ideas off of,” Turnham says. “It has everything at its fingertips, and that could be pretty intoxicating.”
But while “cautious” about using gen AI, Turnham says he can’t turn a blind eye to its potential “tremendous applications” in TV.
“We will have people asking questions with an earpiece in their ear, while somebody with an AI could very quickly ask, say, ‘What did this politician say in the past about this issue?’” Turnham says. “And the follow up question could come very quickly out of the mouth of the anchor: ‘But you said in 2010, this.’ That’s the kind of work that’s currently being done by researchers, anticipating what the likely answer to a question might be and what the follow ups are.”
One area where Scripps is experimenting with gen AI now, though, is versioning stories.
“You take a broadcast package script, pop it in and AI gives you a digital text, a web version,” Hartman explains. “Of course, if we were to put it in practice it would still require a human set of eyes before publication.”
Turnham says gen AI can also format these texts for social media posting.
“Right now that kind of work is done by a physical editor, changing something into vertical, shortening it, doing whatever,” he says. “That might be a use case where it does impact what the viewer sees, and we would probably want to disclose that, depending on how meaningful it was. I think we’d have to talk about it.”
Though there are “a lot of things to be on guard for,” Hartman says today’s ethical concerns around gen AI use can probably be addressed by long-held, basic journalistic standards. For newsies, she says, those standards are quite accessible “checks and balances.”
Careful Deliberation At Graham
Around the same time Scripps’ chief ethics officer sent his email about gen AI out to his company, Jeremy Allen, director of content innovation at Graham, interacted with an AI-powered image generator. Startled by its efficacy, but concerned about associated ethics issues that would likely be raised if the news team integrated such an image into its content, he reached out to his group’s news directors and organized a powwow to discuss the emerging technology.
“We really leaned into, ‘OK, this is a really cool tool,’” says Michael Newman, director of transformation at Graham, a position where he focuses on digital revenue operations and product development, among other areas. “There’s also risks and everything involved with that, but at the same time, quickly we realized as a group that there was a lot of excitement.”
Newman says a few months later, a survey focusing on how company workers were already leveraging gen AI was sent to Graham personnel. The survey’s response rate was among the highest they’d ever seen.
“We were talking with news directors and lawyers, and we realized we need to put some guardrails in place so we don’t end up getting ourselves in trouble or putting the business at risk in any way,” Newman says.
Newman also highlights the resulting guidelines’ warning to employees about using gen AI in the making of “crucial decisions.” This should be avoided because, he points out, the tech “will have its own biases based on the programming.”
Since publishing those guidelines, another group led by Dustin Block, Graham’s audience development lead, has been “pushing and developing use cases,” per Newman. They’ve developed an internal gen AI tool dubbed “First Dibs” that creates a prospective initial comment an editor can post underneath a web-based story. Doing so, Newman says, fights against “blank-page syndrome,” helping encourage consumers to comment further. More page interactions mean favorable treatment from algorithms and greater web exposure.
“It’s sort of like writing a first draft of the comment, but it’s ultimately up to the editor — who, by the way, has signed our AI policy — to post it,” Block says. “Then it actually goes on the site as a comment from that editor. So, it’s a writing assist, but it’s not [actually] writing a comment.”
Block believes very few of the editors who use First Dibs post the gen AI results verbatim, but he says the tool is helping them find inspiration.
Newman says that Graham has up to 50 employees meeting in various groups focused on different topics associated with gen AI, with a primary focus on ways the company may leverage the tech to enhance workflows. But Allen, the director of content innovation, emphasizes how deliberately Graham’s decision makers are moving before pushing forward with higher-impact uses.
“Allison McGinley, the news director at WKMG [Orlando, Fla.,] said, in one of our first calls, ‘We want to make sure we’re on the right side of history when it comes to AI,’” Allen recalls. “Moving cautiously forward is the stance that every single one of our newsrooms takes, and I think that we all want to use it as a tool, we all want to implement it as much as possible, but no one’s rushing to do anything.”