TVN’S MANAGING MEDIA BY MARY COLLINS

Chatbots’ Downside For Media Is Already Clear

ChatGPT may be grabbing attention and seeing lots of experimental early use, but the technology’s limitations are already widely evident. Media ought to take stock of the caveats, especially as chatbots are destined to play a larger role in the business.

Mary Collins

The other day I prompted ChatGPT to write five paragraphs about its value for media businesses. Frankly, the results were disappointing.

All five paragraphs focused on responding to customers and analyzing customer data. They included such things as: “Enhanced Customer Experience,” described as offering quick and accurate responses to inquiries; “Improved Productivity,” explained as freeing up employee time by answering routine customer inquiries; “Data Collection and Analysis,” focused on compiling data from customer interactions; and “Cost Effective,” which essentially meant automating customer support. Even the paragraph headed “Increased Engagement” simply described using the chatbot to answer routine customer questions posed via social media.

When I refined my request to focus specifically on television businesses, the only real change in the response was that most of the supporting paragraphs included the phrase “television businesses.” They still centered on providing a cost-effective way to handle routine customer inquiries and on collecting data from those interactions. All in all, the responses didn’t convince me that such a tool could be valuable for managing a media business.

Chatbots Aren’t Media Business Experts

Given that experience, I was understandably concerned when I read that media buyers had been querying ChatGPT about planning for a recession. According to MediaPost’s Joe Mandese, at least one “major media holding company executive” has been asking the chatbot how the company’s clients should respond to a recession. ChatGPT keeps responding: “reducing advertising.”

Clearly, if my experience is any indication, ChatGPT is not an expert on media and marketing, which makes sense. The application simply uses all of the data it has ingested to predict the next word in a sequence. The programming on which it’s based — GPT-3 — is Generative Pretrained Transformer 3, the operative word being “pretrained.” Internet databases through 2021 were fed into the system and used to train the program. According to BBC’s Science Focus, it ingested “300 billion words.” Then a team tested and improved the program by feeding it questions and correcting or ranking the answers. The free version is technically in test mode. The query screen includes this disclaimer — “ChatGPT Jan 30 Version. Free Research Preview. Our goal is to make AI [Artificial Intelligence] systems more natural and safe to interact with. Your feedback will help us improve.”
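To make “predict the next word” concrete, here is a minimal sketch of the mechanism using GPT-2, a small, openly available predecessor of GPT-3, through the Hugging Face transformers library. The model choice and the prompt are illustrative assumptions, not details OpenAI has published about ChatGPT itself.

```python
# A minimal sketch of "predict the next word": the model scores every possible
# next token, and the most likely one is appended to the text. GPT-2 (an open,
# much smaller predecessor of GPT-3) stands in for ChatGPT here; the model and
# the prompt are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Chatbots can help television businesses by"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits         # one score per vocabulary word, per position

next_token_id = int(logits[0, -1].argmax())  # the single most likely next word piece
print(prompt + tokenizer.decode([next_token_id]))
```

A full paragraph is just this step repeated, with each chosen word fed back in as part of the prompt, which is why the output reflects whatever patterns dominate the training data rather than any genuine expertise in media.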

What Chatbots Can Do

While the program itself cannot currently explain it, ChatGPT has the potential to be a useful tool for media business executives. I assume the same will be true of Google’s Bard app when it becomes more widely available.

Several writers have noted chatbots’ ability to do preliminary or rote work. The Palmer Group’s Shelly Palmer writes about using ChatGPT multiple times a day to do things such as summarizing a meeting, drafting preliminary bullet points for a presentation, or simplifying a complex topic. An article about ChatGPT for journalists suggests the tool can be helpful for drafting interview questions or as a subeditor to ensure that an article conforms to a specific format. However, nearly everyone who suggests business uses for the tool also stresses the importance of fact-checking before publishing the finished piece.
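For readers curious what scripting that kind of rote work looks like, here is a minimal sketch of a meeting-summary request using OpenAI’s Python client. The model name and the sample notes are assumptions for illustration, not a recommendation of any particular setup.

```python
# A hypothetical meeting-summary request, the kind of rote drafting task
# described above. Model name and notes are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

notes = (
    "Attendees: sales, programming, finance. "
    "Decisions: test a 7 p.m. newscast in Q3; revisit CTV ad packages in May."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Summarize meeting notes as short bullet points."},
        {"role": "user", "content": notes},
    ],
)

# Treat the result as a first draft: fact-check before circulating it.
print(response.choices[0].message.content)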

Another suggested use, as reinforced by my own experiment with ChatGPT, is routine customer service inquiries. My cable operator already uses something similar; it’s nearly impossible to talk to a person without first engaging with the virtual customer service assistant.

The Downsides

Discussions about the downsides of deploying an AI app to do routine work remind me that, not long ago, the finance and accounting side of the media industry was alternately excited and concerned about the potential to hand routine tasks over to bots. One of the greatest fears was that, without having to complete those tasks manually, newer employees wouldn’t have the knowledge necessary to spot incorrect outputs.

I have first-hand experience with how that works, having once had a boss who could spot the one incorrect number in a full-page spreadsheet. That’s not surprising given that he’d begun his career as an accountant doing spreadsheets by hand. Presumably, he would not have been so quick to spot my errors if he hadn’t had that experience.

The same theory applies to writing. An author friend once commented that he needs to read 100 books before he’s ready to write one of his own. Where do you get the background knowledge that informs your thoughts if you rely on a chatbot to do the preliminary work? Additionally, I’d say it’s the actual practice of drafting, rewriting and editing that makes me a better writer.

We already know that tools such as ChatGPT or Google’s Bard are not infallible. That point was reinforced very publicly by the beating Google’s stock took immediately after the company announced Bard. The problem: a promotional screenshot highlighting the chatbot’s capabilities included a sample query and response featuring the James Webb Space Telescope (JWST). The answer incorrectly credited the JWST with a photo actually taken in 2004, nearly two decades before the telescope went into service.

Also consider that chat applications such as ChatGPT and Bard have their foundations in the contents of the internet itself. Have you heard the story of the Twitch-based, Seinfeld-esque comedy “Nothing, Forever”? It launched in mid-December of last year, “streaming a live, endless AI-generated cartoon.” Late on Feb. 5, the lead character, Larry Feinberg, launched into a “transphobic rant,” violating Twitch community guidelines and earning the program a 14-day suspension. In addition to being an interesting experiment in AI’s ability to generate scripts and visual content, this is an object lesson about using unfiltered internet content.

There are other things to consider before unabashedly deploying chatbots or other AI applications. In addition to the potential for overly general, incorrect or nonsensical output, there have been some reports of outright plagiarism. That’s one of the reasons many experienced users recommend employing these tools for drafts and not for final copy. It stands to reason that the more specific the query, the smaller the pool of available sources, and the greater the risk that the output tracks one of them too closely.

This leads to the question of current and future legal issues. Programs that generate images based on text prompts are already being sued for copyright infringement. My guess is that it won’t be long before we see the first suits against chatbot owners.

Finally, the current free-to-users business model for ChatGPT is not sustainable. By some estimates, it costs $100,000 a day to keep it running. That figure excludes the millions (or billions) spent on research and development. Even with ChatGPT Pro subscriptions at $20 a month, I don’t see a short-term breakeven: $100,000 a day works out to roughly $3 million a month, which would take about 150,000 paying subscribers just to cover operating costs, before a dollar goes toward development. Backers must be factoring in future revenue from search, advertising and other sources to justify their investments. Those considerations will certainly play a role in shaping future versions.

Media’s Future Undoubtedly Includes Chatbots

Despite the pitfalls and concerns, many of which I haven’t named, media professionals must familiarize themselves with this technology and explore the ways it can help with their jobs. I know you will find a number of uses beyond those ChatGPT provided in response to my initial queries. Just be sure to use AI responsibly. Love it or hate it, this technology is going to affect media businesses’ future.

P.S. Thank you to the media finance friends and the “really cool financial professional” who suggested topics for this column. I hope to use your other ideas in the future.


Former president and CEO of the Media Financial Management Association and its BCCA subsidiary, Mary M. Collins is a change agent, entrepreneur and senior management executive. She can be reached at [email protected].


Comments

RustbeltAlumnus2 says:

February 15, 2023 at 9:40 am

This criticism fits into the oh-it’s-wonderful-but-it’s-not-yet-perfect box. As if we should just focus on the flaws of version 3 or 3.5 and ignore versions 4 and 5 and 6. Chatbots can learn and their growth seems exponential. Will they put everyone out of their job? Of course not. Will they put a lot of people out of work? Well, it looks that way now and we worry about tomorrow. Not so fast with the naysaying. The robots have arrived and they learn much faster than humans can dream. Ask any grandmaster chess player.
