Talking TV: How The BBC Is Grappling With Generative AI

Laura Ellis, head of technology forecasting at the BBC, says the time is nigh for news organizations to confront the manifold opportunities — and dangers — that generative AI has ushered in. A full transcript of the conversation is included.

It’s impossible to overstate the transformative impact that generative AI will have on newsrooms.

In its best applications, it can help dramatically lighten the load of journalists’ more mundane and time-consuming tasks, freeing them up to do more reporting in the field. Think versioning content across multimedia, for instance. And for on-air journalists who feel shakier in their writing skills, it can aid in writing versions of their stories, helping them level up.

On the flip side, this fast-learning technology can also potentially elbow staffers out of newsrooms. AI-composed content can, unlabeled, deceive audiences. And in AI, bad actors looking to propagate misinformation have a tool that greatly expedites — and improves the quality of — their nefarious work.

Laura Ellis, head of technology forecasting at the BBC, is one of the media industry’s best-informed experts on the minute-by-minute developments in generative AI and how they will impact the industry. In this Talking TV conversation, she lays out the likeliest ways in which it will transform newsrooms and recast the role of the journalist. She also shares her greatest concerns for the harm it can do and how news organizations should best position themselves to keep informed, test new AI applications and ultimately implement — or veer away from — the array of capacities it can offer.

Episode transcript below, edited for clarity.

Michael Depp: AI has been woven into broadcast technology for years, but the development of generative AI has taken it to an entirely new level. I’m talking here about the ChatGPT variety: AI that can synthesize massive bodies of knowledge and textual, audio and video material, and produce whole-cloth new content. A news story, for instance, a promo or a whole package, among many other things.


The implications of generative AI for broadcasting and numerous other industries and professions are absolutely massive. This has news organizations scrambling to keep up with a technology that continuously improves by orders of magnitude. There are business, ethical and even existential considerations to weigh as generative AI presents a step change on par with the emergence of the internet.

I’m Michael Depp, editor of TVNewsCheck, and this is Talking TV. Today, a conversation with Laura Ellis, head of technology forecasting at the BBC. We’ll talk about what that interesting position entails, along with the developments in generative AI and what they mean for the business of media and the practice of journalism. We’ll also look at what news organizations can and should be doing to meet the moment that AI has brought upon us. We’ll be right back.

Welcome, Laura Ellis.

Laura Ellis: Nice to see you.

Laura, let’s start by clarifying what your position is at the BBC. Head of technology forecasting. What does that mean, and what does it entail?

It’s a great title, right? It’s a job which has many facets. Some of it is a balance: getting out into the business and seeing what the technology that is coming is going to mean for the BBC, how we respond to that, and how, as an organization, we get everybody involved and get them having conversations with us.

I run something called The Blue Room, which is a tech engagement space, and we bring people in, and we say to them, you know, this is happening, this new technological development, whatever it is. That might be generative AI, it might be digital identity, it might be something like picture quality. It’s a really big range of things. And we say to them, how is it going to affect where you work? What can we use it to do that you can’t do already? And how can we hear from you about your concerns and your thoughts?

It’s a very big conversation that we try to have with the organization, just linking up the kind of capabilities that are coming with their potential use to serve audiences. That’s a big part of the role. And then as another part of the role, I do some work on things like disinformation. When I started the role, we had begun a media provenance project looking at how we put signals into media to track it back to the source, which, with the advent of generative AI, has become even more important. So, it’s wide ranging. I speak to a lot of people every single day, inside the organization and outside. It’s never, never dull. You’re reading these things every minute. It’s exhausting, and I love it.

I want to just ask you a little bit more specifically about the process by which you go about keeping yourself informed. How do you do that? I mean, what are you scanning continuously in order to stay abreast?

Everything. I’ve got a stash of newsletters, and I’ve become very adept at scanning very quickly to see whether new stuff is [there]. It’s quite reassuring and comforting when you see the same thing two or three times; you think, yep, I’ve got that now. Some things are just a flash in the pan, and it’s catching those flash-in-the-pan moments. Other things you think, hang on a minute, is that a new trend? Is that something that’s starting that we haven’t heard about?

And one of the things I do to kind of process it is write a newsletter, which sounds like a really odd way to address things because that’s even more time being used up. But I find the discipline good. A colleague of mine [and I are] writing it at the moment. It’s a generative AI newsletter. Every Monday we publish to about 200 people within the organization. It’s a great discipline because it makes you keep across it, and it makes you absolutely scan those newsletters and those kinds of chunks of content that you’re bringing in.

I go to lots of events. I’m lucky enough to be invited to some things; I turn up to others. I have conversations with colleagues. Last week we set up a little salon, as we called it, in London with some interesting broadcasters just to kind of get our heads around what we were thinking and share where that was appropriate. So, there’s lots of different ways, and it does feel incredibly full on. It’s a proper fire hose, gen AI, at the moment.

That seems so. Well, let’s talk about generative AI and how deeply, as of this recording, it can wend its way into the newsgathering and compositional processes of journalism. Catch us up on where it presently and competently can play a role.

So, before we go into where it can play a role, I just want to talk a little bit about why we’re doing this carefully. And I think the reason we need to say that is — you’ll be only too aware of this — but there are so many issues around copyright, around legalities, around our own data and making sure that that’s safe. And there’s accuracy. All of these things have the potential to impact trust, and trust in news is absolutely our number one thing. If we lose that, we lose everything. So, we’re approaching this as quickly as we can, but also with incredible caution when it comes to making sure that we don’t breach trust by messing up on any of those fronts.

The primary thing we find genuinely useful is things like summarizing. So, in the first instance, we’ve not had any of this audience-facing [AI]. But what we have started to do is look at how we can take pieces of material and genuinely kind of reconstitute them to give us different experiences of them.

Now, longer term (we’re not doing this yet), people I’ve spoken to believe that you could perhaps create experiences of content for different audience groupings incredibly quickly as a result, something it would take a team of journalists an awful lot of time to do: writing a whole new set of content for, say, under-35, English-language audiences. We’re still investigating those things. And one of the things we’re very keen to do is to make sure that whatever we do with this technology, and the same applies to any new technology, we don’t get carried away with capabilities before we’ve really understood how we can use them strategically. The BBC is a public service. Getting that groundwork in is where we are at the moment, and that’s really important to us.

You posit that generative AI can help journalists level up. How so?

I have a lot of journalist colleagues who talk about this a lot, and it’s something we consider in quite a lot of detail. There are two elements to it. One of them is that some people, even journalists (some of our broadcasters, for example, have said this to me), don’t actually like writing. They don’t really enjoy writing and getting stuff down. For our online journalists, it’s their bread and butter, but broadcast journalists will sometimes say, it’s not what I do. I speak.

And what gen AI can do is help you. If you’ve got some ideas, a rough assemblage of ideas, it can help you write them in different ways: if you want to write an email to somebody, if you want to write a story. And I think that is a really clever technology. One of the things I think we could do in society generally is use generative AI as a kind of exoskeleton to help people level up.

Imagine if you were struggling, because of your literacy or maybe the language, to write a letter to the local council or your child’s school. Generative AI can help you do that, and with a little bit of input it can give you the perfect way of communicating, which I think is an incredible piece of leveling-up technology.

Might there be a problematic side to that as well? I mean, you know, if you have somebody who isn’t very good at cutting people open and messing around inside, do you want them to be your surgeon?

I think this is more about personal use cases. Let’s say, for example, that I have a debt issue and I want to write to my bank or to my credit card provider, and I just don’t know how to do that. I don’t know how to find the words. Sure, I can go online, and I can look at the Citizens Advice Bureau or whatever it is, and I can find a way of doing that. But if I can use generative AI to take my knowledge and match it up with some skills around language, that gives me a leg up; it gives me something I can then take to that credit card agency, to that bank, and say, you know, this is what I want to say to you. So, the whole kind of literacy aspect of it…

I know of a case of a guy who runs a small business, and he’s using it to write his customer emails. And he says, you know, I got to GCSE level in school at 16, I left school without any qualifications, and I struggle to write. When I write stuff, I’m aware that people think, oh, that’s not grammatical, that’s not correct. And he’s using these tools to communicate with his customers. That’s one of the things I’ve been thinking about when we were looking at how we might use it in newsrooms. How can we use this incredible capacity to give people skills they don’t currently have? I can see the benefit for a lot of people.

Just to that point, quickly: do you think, right from the jump, that it’s important to let the audience know that AI has been used, that there might be some sort of notation in the story, or maybe at the end of it, that indicates that? Because otherwise we assume that if a piece has a byline, that person composed it.

I really do. I think it’s absolutely essential. Ever since the first days of us writing automated journalism (just, you know, marrying up a database with a story which had gaps in it, so that we were able to write a story once and publish it a thousand times in a few different areas of the U.K.), we thought it was essential to put on the bottom of the story that it was written partly by a machine. Basically, we didn’t want to mislead audiences and let them think any other way.

And I do think that’s a principle which in maybe a decade’s time we might find quaint and old fashioned, but for now I think is really important. And there’s a fantastic piece of work that’s been done by the Partnership on AI, an organization [that has] a framework for the use of synthetic media, which we worked on with them. And I think there’s deep thinking going into how you communicate with audiences without being intrusive and kind of awkward.

Especially if there’s so much versioning work going on, as you were just citing through examples of all the ways in which one bit of material might be versioned for various types of audiences, it seems there needs to be some acknowledgment of that for transparency’s sake. I wonder, how might generative AI effectively redefine what a journalist does and what consumers expect of a journalist?

I have strong views about this as well, which is that, at its very simplest, journalism is a human response to a situation; it’s interpreting something that’s happening. I’m going to use the broadest sense of this and put you into Turkey, where the earthquake was taking place. Having a journalist who’d had a really terrible journey to get to a place and was then reporting from the front line of this terrible tragedy is something which you cannot replicate, I don’t believe, using AI.

So, let’s say in years to come, we just had pictures shot by somebody else and we tried to put a generative AI voice over that. For me, that doesn’t cut it. And for me, there is a specific piece of trust that we need to have in our media, which is that you have a real person (whatever their prejudices, and we all have them, whatever the situation) arriving and giving you that story through their eyes and ears and sentences. That is, for me, something we should be very, very careful we don’t lose in this process, because it is precious and it is the soul of journalism and what we do.

I think having said that, there’s lots of ways that this is an assistive technology. It isn’t a replacement technology. I don’t think it ever will be. But there are all kinds of people constructing all kinds of scenarios which mean that we might just need to guard against them.

We’ve teed up the very next thing I want to ask you, which is, you know, at least a couple of different companies have developed or are developing wholly AI-based media organizations. What is your take on the viability and the ethics of what they’re able to produce?

I think if you explain to your audiences that this is what you’re doing, and that this is something which is generated by AI, there’s nothing wrong with that. Some organizations will see it as a fit for their business and others won’t. If you’re in the business of providing real journalism, of telling stories from places where things are happening that the world wants to know about, that’s never going to be a business model. But it might be a business model if you’re a celebrity gossip site and you know that you’ve got a corpus of material. You don’t have to be on a red carpet, potentially. You know, you’re pulling stuff together.

I worry about two things. I worry about the proliferation of automatically generated material sort of swamping the whole zone where our news lives. And I also worry a little bit about just a sense of losing track of the sort of human creativity that we talked about, the observant nature of journalism, but also there’s a creativity involved in it. And however wonderful an output is, is it as valuable as having a human in the loop?

I always think about my favorite soap opera in the U.K., which is called EastEnders, and I think to myself, would I love it as much if I knew it had been written by an AI? And I tell myself that I wouldn’t. Now you might say, how would you know? How would you know? You might not know. And I might not know. And I would feel quite cheated if I didn’t know and then found out. But I like to see and feel the sense of a human hand and a human brain in the things that I’m consuming now. Maybe that’ll feel like an impossibly old-fashioned view in a decade or 20 years’ time. But I do think it’s something we need to bear in mind. And I think the preciousness of human creativity is something we should really work to discuss and to preserve.

Of course, in a decade’s time that preciousness of human creativity may be something that AI can effectively replicate.

It may. I mean, it is very clever. And I was musing only this week on the retirement of OpenAI’s tool, which was designed to detect whether content had been generated by AI. There are many ways, people will tell you: you can put metadata in, there are all sorts of things you can do, little visual signals. A lot of this is still being thought about. But if the person who’s created it is not minded to tell you, then, yes, there is a chance that you might see something and you might consume something, and it would have been created entirely by an AI. You might enjoy it. You might go to bed at night thinking, that is really good. That was fantastic. What’s wrong with that? In my view, the only thing that’s wrong with it is I do think you should be told, because we have lived with consuming the sum of human creativity for quite a long time.

And this is a big jump for us. And I think we need to ease people into it and see how we do feel about it, because there will be issues and there will be things that we need to deal with.

The BBC is among a number of news organizations developing guidance for newsroom implementation of AI. Can you share some of the outlines of that guidance? I realize it’s very nascent right now.

Yeah, I think that in common with most organizations, some of it feels quite superficial at the moment. It will develop and it will iterate along with these very fast-learning tools, which we’re seeing new capabilities from every week. For us, there is a bond of trust with the audience, which means that explainability is a really big part of what we are asking people to do.

Secondly, there is an absolute ban, if you like, on putting BBC data into these tools. We don’t know where that data is going, and we don’t want to risk our own or third-party data washing up somewhere it shouldn’t. So, there’s a real sense that we need to be very careful what the inputs to these tools are.

Thirdly, there is this accuracy issue, which is really pernicious because the tools are designed to please, and they’re designed to make you believe that they’re having a great conversation with you and telling you stuff that works. Let’s just use ChatGPT as an example. And that problem is why I always say to people: when you first start using these tools, ask them about something you know, because then you’ll see. Ask them detailed questions. Do that before you ask them about things you don’t know.

I know a lot of organizations are trying to address it. The fact is, that’s something we’re going to need to keep a very close eye on. You will see in the court cases… there was a lawyer who used ChatGPT to try to create legal arguments, and it was completely wrong. And the judge was pretty angry. There are consequences.

And I think that’s one of the things we’re putting out in our guidance, you know, that we need to think about the consequences. Nothing is risk free. No technology is risk free. We need to be very cognizant of the risks. And finally, training: just making sure that we provide adequate training to the people who are using, or will be using, these tools, to give them the best chance of understanding what they’re doing and using them to the best extent and in the most appropriate way.

Organizationally, what do you recommend is a good process for continuing to iterate that guidance as the technology obviously continues to rapidly improve? How should organizations set themselves up for that?

That’s a good question, and I can draw on some of the first things we did in this space, which was something called the Machine Learning Engine Principles, which we developed back in, gosh, it must have been 2019 now. These were designed in the early days to, first of all, examine what the use case was and how the particular bit of machine learning being used was going to help with that, and then to look at the consequences, the potential unintended consequences.

And then, once people had used it, to look at what actually happened and go back and reflect on it. So, it’s very much a cyclical tool, examining why somebody had done something, what it had achieved and what the outcomes were, maybe some of them unintended. And then: do we need to revisit the whole process again, and how do we take this on?

I think having that kind of process, where you’ve got learnings being specifically captured and red flags being raised where they exist (because obviously they will at some point), and having a proper process for acting on those and making sure that the problems they have surfaced are addressed, is going to be really crucial. Most organizations I speak to are thinking about having some kind of body in place, or potentially a function attributed to an existing body. We have an editorial policy [panel] who are absolutely brilliant and have worked tirelessly for years on answering questions and dealing with this sort of thing. It might be an offshoot of them. It might be a separate team. I don’t think we know yet. But somebody will need to be making sure that we learn as we go along, because this stuff moves quickly. And I think there’ll be things that it throws up that we’re not even thinking about yet.

That’s why I’m wondering who should be sitting at that table evaluating everything. Anecdotally, I’ve been talking to some broadcasters who in some cases seem to have a point person who is assimilating everything, reading up and testing things out. They don’t necessarily have fully fleshed-out committees yet per se, but it seems like that would be the direction in which things should be moving — a panel of people. Ideally, at this point, what should that panel be composed of, across the different levels or different aspects of a broadcast organization?

This is a really good bit of news, because it throws into a room exactly the right people who should be talking about a lot of stuff like this: your lawyers, your data protection experts, your ethicists, your machine learning experts, your editorial people and your product people. And I think the more you get those people into a room together, the better your organization is.

Now technology is the medium and the message. Depending on your choice of social platform and the way you’re going to deliver something, technology has much more of an impact on how you’re speaking to your audiences and the decisions and the choices you’re making. I think having those teams working collectively will bring all their expertise to the table. It sounds unwieldy, but I think it’s something that we really do need to get used to doing, because I don’t think there is a single person who can cover it all.

And what about the C-suite? How often should they be briefed? How closely should the people there be brought into this discussion at this point?

It’s incredibly difficult for anyone working at that level to understand this fast-moving stuff very quickly unless they’re a specialist themselves. So, I think what it relies on is really good briefings and a real openness to receiving those briefings, both of which, fortunately, we have in the BBC. Our team is called the Chief Technology Advisor team, and we do weekly videos with the latest things you need to know. We do briefings, we do updates, we’ll do stuff ad hoc. A lot of it is push, but it can also be pull from the executives: we want to find out more about this particular thing, we need to know about this company, whatever it is.

OK, we could go on for hours here, and I hope to pick up this conversation again with you very soon. But let me ask you one final question for now: Where are your current areas of greatest concern in terms of generative AI being weaponized for the creation and dissemination of disinformation?

My areas of concern are numerous, because gen AI literally does things in seconds that previously would have taken a bad actor who wanted to create something horrible hours. One of the things that we really need to look out for is how we make sure that trustworthy content, however you define that — and it’s very much up to the audience who they choose to trust, so I don’t want to make any value judgments on any organizations… The technology we’ve been looking at in our media provenance work is agnostic. Anybody could use it.

But what it does do is flag up where somebody is trying to spoof you. So, if somebody is trying to spoof a piece of BBC content, the signals that we have in that content normally will not show up, or they will show up in a kind of corrupted way. And I think that in this great expanding sea of content, raising the boats that are going to give people a route through to trustworthy news, from providers that they understand and know and want to trust, is going to be one of the key issues.

And that’s going to be something we have to work [on] with the platforms, and we have to work on with our audiences, because there’s a media literacy, news literacy, tech literacy aspect to this which is really important. And I think it’s going to be probably the foundational conversation in the news space in the next five to 10 years.

I think you’re probably right. Laura Ellis, you have probably one of the most interesting jobs in all of media today. Thank you so much for being here.

Thank you.

You can watch all of our past episodes of Talking TV, including quite a few that tackle AI at TVNewsCheck.com and on our YouTube page. We also have an audio version of the podcast that’s available most places you get your podcasts. We have a new episode most Fridays. Thanks for tuning into this one and see you next time.

