TVN’S MANAGING MEDIA BY MARY COLLINS

Deep Concerns Emerge In Generative AI’s Media Applications

While AI has been a powerful tool for media across multiple functions, generative AI poses numerous threats that should give all companies pause as they assess its potential upsides.

Mary Collins

Artificial intelligence (AI), a current hot topic, has been around for some time. Think about functions including spell check, autocorrect, dictating text messages or even voice assistants such as Amazon’s Alexa and Apple’s Siri. In recent years media companies have been using AI applications to create written transcripts of stories (both audio and video); produce closed captioning and subtitles; generate metadata for digital assets; write routine stories such as earnings reports for small companies or news stories about local sporting events; and even power chatbots that answer routine customer questions or automate rote accounting department functions.

It’s the evolution to generative AI that is really the cause for concern, or at least discussion. These are programs that can generate seemingly informed answers to questions.

In February, not long after the free test version of ChatGPT, then built on GPT-3.5, was released, I wrote a column about these chatbots and their potential downsides for media. It turns out that I barely scratched the surface. As I wrote then, the key thing to know about these programs is that they’ve been trained by being fed hundreds of billions of words and images. The biggest developers, including OpenAI, which makes ChatGPT, have been very secretive about the specific data their programs ingest.

What I now understand is that the real secret to using generative AI programs is in writing the query, the request for output. The more specific the query, the more useful the response. With the right prompt, such programs can generate compelling copy and images for marketing communications, including direct response mail and email, and social media postings complete with hashtags. They can even be trained to use a company’s brand images and communications style. Further, feeding response rate data back into the program allows additional refinement and personalization to improve future results.
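To make that concrete, here is a minimal sketch of the specific-beats-vague principle, assuming OpenAI’s Python client; the model name, show title and brand details are my own illustrative inventions, not drawn from any real campaign.

```python
# A minimal sketch: a specific prompt versus a vague one.
# Everything below (model choice, show name, brand voice) is an
# illustrative assumption, not a real campaign or a recommendation.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

vague_prompt = "Write a social media post about our new show."

specific_prompt = (
    "Write a 40-word Instagram post announcing the season premiere of "
    "'Harbor Lights', a hypothetical family drama, airing Sunday at 9 p.m. ET. "
    "Use a warm, conversational brand voice, end with a tune-in call to "
    "action and include three relevant hashtags."
)

for prompt in (vague_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the widely available model at this writing
        messages=[
            {"role": "system", "content": "You are a TV network's marketing copywriter."},
            {"role": "user", "content": prompt},
        ],
    )
    print(response.choices[0].message.content, "\n---")
```

Run side by side, the vague prompt tends to return generic filler while the specific one comes back close to ready to post; appending past response rate numbers to the prompt is one simple way to fold performance data back into future copy.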

It’s important to remember that generative AI applications do not create anything new. They are simply trained to recombine the information they’ve been fed in order to respond to queries. The better the prompt, the better the output, keeping in mind the very real problem of what is euphemistically called “hallucination.” Hallucinations occur when the program lacks the data to answer a query and instead spits out verifiably false information. See, for example, the lawyer who cited precedents made up by ChatGPT.


The Use Case For Media

Google, which operates the Bard AI program, recently published an online article touting three ways media leaders can use generative AI to improve their businesses. The first suggested use is content creation and improved content management. The second is enhancing and personalizing audience experiences; a big part of Netflix’s success is said to stem from its proprietary content recommendation algorithm.

The third recommendation from Google is the one I find the most compelling and most concerning. It essentially combines the first two points to improve monetization, and the company suggests three ways to do it. I’ll take them in reverse order. The third is driving ad revenues with increased personalization; I wonder whether this runs afoul of privacy regulations. The second is using such applications to curate and assemble personalized content, which seems a restatement of the audience experience point above. It’s the first monetization opportunity that I find the most disconcerting: “to free writers, artists, editors, and many others from the tedious and mundane aspects of their work.” While I don’t know exactly what is being suggested here, it sounds like the search giant is advocating for ways to eliminate payments to creative talent.

This last point, the one about using generative AI to produce creative content for media companies, is also one of the areas of contention in the current Writers Guild of America (WGA) strike. The Guild doesn’t want AI trained on member writers’ output, nor does it want AI used to create either draft scripts or new scripts. Clearly such actions must be under consideration or the Alliance of Motion Picture and Television Producers wouldn’t have counter-offered “annual meetings to discuss advancements in technology.”

Follow The Money

None of these discussions would be unfolding without a business case for generative AI. As I pointed out in February, these services are hideously expensive to support — then estimated at $100,000 a day.

The current thinking is that the majority of revenues will come from software-as-a-service developers. Not only does this allow the creators to capture substantial application programming interface (API) fees from developers, but it also transfers the risks of using the data to third parties. There’s also speculation that, because these programs involve so much data that needs to be stored and manipulated, the big cloud providers, including Amazon, Microsoft and Google, will see significant new revenues.

Despite disconcerting experiments to the contrary, it seems unlikely that Google will fully pursue a strategy of AI-generated summaries in web search. While such summaries would significantly cut traffic to publishers’ sites, and thus their ad revenues, they also have real potential to reduce Google’s own search ad revenues, which were $162 billion in 2022.

Dangers From Generative AI

There are real concerns about the dangers posed by generative AI. In a blog post dated June 4, Shelly Palmer, Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and a technology and media consultant, lists what he sees as 10 threats specific to the technology. Among them: personalization that can lead to various types of discrimination and to extremely persuasive marketing; hacking and cyber warfare; financial market manipulation; and hyper-realistic misinformation.

We began to see the results of such misinformation in the 2020 presidential election with the arrival of so-called “deepfakes.” Newly released generative AI software means such misleading images and information can be created quickly and with minimal effort. False content already circulating for the 2024 campaigns includes an image of Donald Trump hugging former National Institute of Allergy and Infectious Diseases Director Anthony Fauci and a video of a Ron DeSantis speech modified to include fighter jets flying overhead; both were reportedly released by the DeSantis campaign. The Republican National Committee responded to Joe Biden’s 2024 campaign announcement with a chilling video purporting to show the state of the world if Biden is reelected; admittedly, that video was marked as AI-generated, albeit in small type.

The concern, at least among some politicians, is enough to have persuaded one group to ask the Federal Election Commission (FEC) whether false videos can be regulated under the law that bans impersonation of other candidates. The FEC has yet to respond.

Clearly something must be done about generative AI to prevent the worst outcomes. Naturally, the AI industry recommends self-regulation. At the same time, the Biden administration is asking for solutions to help manage the situation. It has formed a working group to evaluate the technology and is supporting efforts such as requiring disclaimers on generated materials.

Overseas, the European Union is proposing legislation requiring disclosure of the data sources used to train generative AI programs; it would take effect in 2025 at the earliest. Other countries are proposing legislation and solutions of their own.

If all of this frightens you, it should. Generative AI has the potential to dramatically change our business and our world, and not for the better. We are already living in a country divided by deeply entrenched beliefs, reinforced by the digital information each side consumes. The U.S. is also seeing great wealth disparities, which can only be exacerbated if jobs are replaced or degraded by AI. Without real intervention, we stand to lose the innovation and creativity that define humanity.


Former president and CEO of the Media Financial Management Association and its BCCA subsidiary, Mary M. Collins is a change agent, entrepreneur and senior management executive. She can be reached at [email protected].

