EXECUTIVE SESSION WITH BRUCE MACCORMACK

TVN Executive Session | Deep Fake Videos Pose Growing Threat To News

CBC adviser and news technologist Bruce MacCormack warns that deep fake videos have gotten more sophisticated and difficult to detect. Their creators are also proliferating, he says, and news organizations need to begin arming themselves against what could be “an existential threat” to their legitimacy.

Deep fake videos have, for the most part, been relegated to corners of the internet outside the mainstream. But inevitable advances in video editing software will sharpen the deep fake product and make it more widely available, warns Bruce MacCormack, senior adviser to the Canadian Broadcasting Corp. and Radio-Canada.

MacCormack, who specializes in deep fake defense initiatives, says this will boost the chances of deep fake videos infiltrating media organizations.

MacCormack, previously the CBC’s head of business strategy for media technology and infrastructure services, has worked with other media organizations to analyze deep fakes and decipher what media companies can do to address their development, distribution and potential impact.

In an interview with TVNewsCheck writer Michael Stahl, he says deep fake videos are both an inflow and outflow problem for news organizations. A three-pronged approach including media literacy, detection and serial numbers on news content will be required to combat such videos, and various media outlets and tech companies have begun collaborating on industry-wide efforts to that end.

An edited transcript.

How sophisticated and widespread of a threat are deep fakes to the integrity of news right now?

It is an annoyance now with the potential to become an existential threat as it goes forward. Worst-case scenarios are realistic impersonations of a media company’s brand. By that I mean not just the logos, but fake hosts and sets, fake voices in audio. If those sorts of things start to become prevalent, then trust in the basic product starts to erode. This is technically feasible; it’s just not widely dispersed right now. But the view of the industry is that this technology will become far more accessible in the coming years, both to state actors and to 14-year-old kids who could also do a lot of damage.

How rapidly has the technology improved over the last few years and how close are we to deep fake videos passing the eye test and appearing authentic?

Right now, they can pass the eye test; it’s just the difference between a sophisticated audience and a general audience. There is an inflow problem and an outflow problem for news agencies. How do you make sure that everything flowing out of a legitimate newsroom has a Tylenol safety seal, proof that the product you received came from my newsroom and has not been tampered with at all? That is the outflow problem. The inflow problem is: how do I make sure that this video I have received of an event is truthful and is something I want to rely on as I build my news story?
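To make the “safety seal” metaphor concrete, here is a minimal Python sketch of tamper evidence for an outgoing file: the newsroom publishes a cryptographic digest alongside the video, and anyone can recompute it to confirm the bytes are unchanged. The function names and the choice of SHA-256 are illustrative assumptions, not a description of any newsroom’s actual system.

```python
import hashlib

def seal(video_path: str) -> str:
    """Compute a SHA-256 digest of an outgoing video file. Publishing
    this digest alongside the video acts like a tamper seal: any change
    to the bytes changes the digest."""
    h = hashlib.sha256()
    with open(video_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(video_path: str, published_digest: str) -> bool:
    """Recompute the digest and compare it to the one the newsroom published."""
    return seal(video_path) == published_digest
```

A digest alone proves the file was not altered, but not who published it; binding it to an identity takes a signature, as discussed below.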

What are the current detectable markers for deep fake videos? Who has these tools for detection?

If you think about the problem with these fakes, there are really three solutions.

One is media literacy. You can make audiences aware that these things exist, get them to cast a critical eye over what they see, and tell them that if something doesn’t seem real, maybe it isn’t, and they should check the source.

The second line of defense is deep fake detection. There has been work done by Facebook and Microsoft with the participation of the Partnership on AI. But there are some ethical challenges around researching deep fake technology, and around whether you should put a deep fake detection tool into generally available space, because that could help make the deep fakes better.

The third is probably the long-term solution, and it’s putting a serial number on pieces of news that leave our organizations. We have internal serial numbers, and we will add them in so that we can say this piece of news came from us. And we will pass along enough information that you can validate it came from us.

Microsoft has AMP — Authentication of Media via Provenance. Adobe has the Content Authenticity Initiative. The New York Times has The News Provenance Project. All of those projects have the same basic philosophy, which is put in place an identifier and then a way to validate that identifier. There is no absolute science on this, but in the opinion of the people I have been talking to, that seems to be the long-term fix, though it will take a while to put in place.
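AMP, the Content Authenticity Initiative and The News Provenance Project each define their own formats, none of which is reproduced here. As a rough sketch of the shared philosophy (an identifier plus a way to validate it), a newsroom could sign each item’s serial number and content digest with a private key, using the Python cryptography package; every name below is illustrative rather than drawn from any of those specifications.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical newsroom key pair: the private key stays inside the
# newsroom; the public key is published so anyone can validate items.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_item(serial_number: str, content_digest: str) -> bytes:
    """Bind a serial number to a content digest under the newsroom's key."""
    return private_key.sign(f"{serial_number}:{content_digest}".encode())

def validate_item(serial_number: str, content_digest: str,
                  signature: bytes) -> bool:
    """Anyone holding the public key can check the item came from the
    newsroom and was not altered after signing."""
    try:
        public_key.verify(signature, f"{serial_number}:{content_digest}".encode())
        return True
    except InvalidSignature:
        return False
```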

How can broadcasters work to advance media literacy in a tangible way with their audiences, especially in regard to deep fakes?

Show them what’s possible. Without creating alarm, produce stories that show people that it is possible to create synthetic video. The more people are aware of what the technology can do, the more people can make their own judgements about what is going on in their information environment.

How can broadcast companies identify deep fake videos from a technical standpoint right now?

Not easily is the way I would say it. There is no surefire way to do that at scale at the moment.

There are little things like lip-sync versus audio, and people are looking for those little telltale technical clues that something has been adjusted. It’s just that they are very subtle, and finding a way to detect them is computationally intensive, not the sort of thing that is easy to do.

That being said, if people are suspicious of something, it’s manageable to investigate. The trick is to use an automated system to flag things that are suspicious and then use manual fact checkers or manual validation beyond that point. You just can’t do that for a million things a day.
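That flag-then-review pattern might look something like the following sketch; the Clip structure, the detector scores and the 0.7 threshold are invented for illustration, since no specific system is described here.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    detector_score: float  # 0.0 = looks authentic, 1.0 = looks synthetic

REVIEW_THRESHOLD = 0.7  # illustrative cutoff; a real system would tune this

def triage(clips: list[Clip]) -> list[Clip]:
    """Automated first pass: keep only the clips suspicious enough to
    be worth a human fact checker's time."""
    return [c for c in clips if c.detector_score >= REVIEW_THRESHOLD]

# Out of a huge daily inflow, only the flagged few reach the manual queue.
incoming = [Clip("a1", 0.12), Clip("b2", 0.91), Clip("c3", 0.68)]
for clip in triage(incoming):
    print(f"send {clip.clip_id} to manual review")
```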

And it’s fair to speculate that those people trying to create the deep fakes would continuously look to enhance their technology as well, right?

Not only would their technology get better; there is also something called adversarial networks, where the two sides keep throwing challenges back and forth at each other. If someone is trying to improve a deep fake, they throw it at a detector and adjust the algorithm until the deep fake detector can’t find it.
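As a toy sketch of that escalation loop (not a real generative adversarial network, just its back-and-forth dynamic), each side below improves whenever the other wins a round; all numbers are invented.

```python
import random

random.seed(0)  # deterministic toy run

generator_skill = 0.2   # how convincing the fakes are
detector_skill = 0.6    # how good the detector is at catching them

# Each round, the detector catches the fake with probability proportional
# to its edge over the generator. Whichever side loses the round improves.
for round_no in range(1, 21):
    caught = random.random() < max(0.0, detector_skill - generator_skill)
    if caught:
        generator_skill = min(1.0, generator_skill + 0.05)  # fake gets better
    else:
        detector_skill = min(1.0, detector_skill + 0.05)    # detector retrains
    print(f"round {round_no}: generator={generator_skill:.2f} "
          f"detector={detector_skill:.2f} caught={caught}")
```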

Have we seen a deep fake make it to the air of a legitimate news outlet? 

No, but that is the fear.

Are we seeing deep fakes being used for individual or company gain?

There are some that have been humorous. I can tell you that there are amateurish attempts that have shown up in the Canadian election, but they were amateur enough that they weren’t that believable. They are examples of intent, though. If you take those amateur attempts to hijack the brand, and in the next election cycle you give the same people a more sophisticated tool, the intent hasn’t gone away, and you end up with a problem.

Even if news companies somehow developed such great technology that they could snuff out every deep fake video that they come across, that would not stop people from posting them on social media. And a lot of people get their news from social media. So, what are social media platforms doing to address this issue?

Through the Partnership on AI, where I’m a member of its media integrity group, we are having these conversations. The social platforms cannot cope with different forms of validation information coming in from different organizations. Say CBS used one encoding standard, NBC used a different one and the BBC used something else; it becomes an impossible challenge for them. We want to get the validity signals straight and consistent across the industry so that we can present the social platforms with a solved problem.
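One way to picture “consistent validity signals” is a single record shape that every outlet emits, whatever its internal systems. The field names and spec_version below are hypothetical, not an actual industry standard.

```python
import json

def validity_signal(outlet: str, serial_number: str,
                    digest: str, signature_hex: str) -> str:
    """Emit a provenance record in one agreed shape, whichever newsroom
    produced it, so a platform only ever parses a single format."""
    return json.dumps({
        "outlet": outlet,
        "serial_number": serial_number,
        "sha256": digest,
        "signature": signature_hex,
        "spec_version": "0.1",
    })

# Different outlets, identical structure:
print(validity_signal("CBC", "cbc-2020-000123", "ab12...", "9f3e..."))
print(validity_signal("BBC", "bbc-2020-000456", "cd34...", "7a1b..."))
```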

How can news organizations get involved and be kept abreast of this problem?

That is getting a little ahead of us right now, so read a lot is the answer in the short term. There is public information about the work all of those organizations I mentioned are doing. Stay tuned.

