Turner’s Bob Hesskamp is overseeing the company’s shift to IP-based operations that’s happening concurrently with the construction of a new CNN headquarters in New York City, a new next-generation master control facility at Turner’s Techwood campus in Atlanta and a new CNN bureau in London. The move to IP, he says, “is the most significant change we have undertaken. We are changing everything, from the infrastructure up, and every system that rides on top of it and how they talk to each other.”
Bob Hesskamp is a busy man. As head of engineering for Turner Broadcasting System Inc., he is overseeing the construction of a new CNN headquarters at Hudson Yards in New York City, a new next-generation master control facility at Turner’s Techwood campus in Atlanta and a new CNN bureau in London.
All of this construction is occurring amid Turner’s enterprise-wide shift to IP-based operations, which began back in 2015.
Technology change is nothing new for Hesskamp, who in his previous role as SVP for CNN Worldwide Broadcast Technology oversaw the creation of CNN’s MediaSource content management system and the launch of CNN HD.
A graduate of the University of Missouri, where he majored in radio, television and film, Hesskamp began his CNN tenure in 1983 as a video journalist for Headline News.
TVNewsCheck recently spoke with Hesskamp about all the technology change afoot at Turner.
An edited transcript follows:
When did Turner’s transition to IP technology begin, and what’s been your overall strategy behind that change?
As we came together under Jeremy [Legg, Turner CTO] when our technology groups reorganized in late 2015 and 2016, we started an exercise to look at our technology to see how prepared we were for the future, for the changing media business. We determined that if we were going to be successful, we needed to build a more flexible broadcast infrastructure to deliver across all platforms.
It was at that point that we decided the only way we could really do that was to move to an IP routing infrastructure. So we began the work of issuing RFPs and investigating what was going on, made our decision [on the router vendor] in 2016, and started implementing and installing right away once we chose Evertz as our main partner for this.
What we did first was to put this in our Atlanta campuses, Turner Studios and our network operations playout facilities — the master control and playout and all the incomings for master control for the entertainment networks and sports, and then the CNN Center as well. And we linked all these together with our fiber. So really we have a virtual production and virtual routing infrastructure across all of our Atlanta campuses, where every source is available to everybody.
We just completed our user acceptance, and we are moving our sources into that infrastructure now. As new facilities are built, we will build them all IP. Like our next-generation master control [in Atlanta], Hudson Yards and London, they will be built IP, and as the replacement cycle comes up for control rooms they will be built IP and attached directly into this IP infrastructure.
If you are a facility looking at this, somebody who’s just starting, is the router the key piece and everything flows from that?
That’s exactly right — the router and the control — how you are controlling the router, how you are looking across all your facilities. In IP, you don’t have one signal on one cable, so you really need to be able to get a good view into what sources are going where, where you might be having a problem, how you double-route into some places so that you never lose anything. And we got everything that we required when we made that choice. But really, it’s a solid routing infrastructure and then the control on top of that, the routing control and management of all your sources, that are key to being successful.
Have there been any big surprises along the way?
Nothing huge. We knew we were an early adopter into this, and so we knew that not everything was going to be perfect. We commissioned everything with Evertz ASPEN [that company’s IP transport protocol], just to get everything set up and make sure that we were passing signals between our facilities successfully and that the control infrastructure was working, all the routing infrastructure.
We had to build the data routing infrastructure around the routers to control and manage them, and we made sure all that was working. And then when 2110 [SMPTE ST 2110, the IP transport standard] was available, a little later than we had hoped, we migrated to 2110 and began our testing kind of all over again. But it gave our engineers a lot of insight, and time on the router, and we developed some really good expertise as a result. And now we are still in the process of checking interoperability with different vendors across the board in 2110.
When you look at this compared to other technology shifts, like moving from tape-based operations for news to files, have the vendors been pretty responsive?
Yeah, I think they have been. I think everybody knows that to be successful in this space they have to be able to play in the IP world. Because people were developing their systems independently, there are different approaches to multicast, unicast, timing, all those things. We had to kind of broker between some of our vendors because we are so early into this process.
But it’s going well and they are engaged, and I think they are appreciating the conversation and us bringing different vendors together into this space to work out some of the inconsistencies we have seen and some of the problems that we are experiencing. But there haven’t been a lot of problems for something that is so new.
Fundamentally, since I have been in the broadcast business, this is the most significant change we have undertaken. I think file-based workflows was a big one; there had to be a lot of trust in getting rid of tape, but you saw great efficiencies in how you worked across multiple cities and multiple groups and how you allowed journalists to have easier and better access to all video. So that was a big fundamental change that evolved over time.
But really, we are kind of changing everything here, from the infrastructure up: every system that rides on top of it, how they talk to each other, and how important your overall network is to your successful operation. Not just your routing network but the data control networks are absolutely critical to your production and playout operation. So that's been a big learning curve for us.
Have you had to adjust what type of personnel you hire, or offer your current engineers training to work in an IP world?
Both of those are true. As I mentioned before, the overall data network that surrounds your video infrastructure routing is absolutely critical for it to work. So we have had to hire more people in the networking space, and also be way more in tune with our CSO [chief security officer] on security issues. Because obviously this is all IP, we are firewalling this off, we are looking at different ways to protect these systems and how our data network is architected so that we protect our routing infrastructure. Those things are incredibly important.
So we have hired more people with networking experience. Since we moved to a file-based workflow we have been hiring engineers with more networking, computer science and [software] development skills, and we have been doing that for a while.
But we still hire broadcast engineers because we still use a lot of SDI and it’s going to be around for a long time. We have found that, for the most part, all of our engineers are really eager to learn. We have invested a lot in training and our vendors have been very helpful in that effort as well, and I have been really impressed with the way that our team has taken to this.
How much are you relying on cloud services today, whether it’s on the production side or on the distribution side of content?
Right now our cloud efforts are focused on our media content supply chain, from the acquisition of video from our distributors to shows that we produce internally. We are doing that with Amazon and SDVI, as well as some internally developed software.
So when we can get a movie or a series from a supplier, we can order it through the system, receive it and make sure we get the right video. The distributors can upload it to a portal, it goes into Amazon, we have automated QC around it, and then through that process we have automated versioning for linear, SVOD, VOD or distribution to one of our over-the-top platforms, whether one of our internal over-the-top platforms or to a partner, in the correct format.
How long have you been pulling your content that way?
We are in the process of still building it and it’s in UAT [user acceptance testing] right now, but it is scheduled to go live this fall. We are putting video up there and are testing it, and it’s working well so far.
How were you getting the content before?
You would get a file, but it was a very manual workflow across a myriad of organically grown on-prem systems for storage, file movement and encoding. I hate to say this, but the business changed so fast that we grew around it in an organic way rather than a strategic way. What this does is replace a lot of those aging on-prem systems, which we just kind of enabled to work and deliver video in these new ways, with a system that is strategically designed and automates much of the process.
It still takes human interaction at a number of levels and in a number of places for content customization, changing [ad] break structures and things like that, and we are going to be able to do simple edits directly in the cloud. But it really streamlines the process and makes it much easier and faster for us to get video in and distribute it to our monetization platforms.
Your biggest current project is probably the new headquarters for CNN at Hudson Yards in New York. I believe you are moving into that in March?
Yes, we hopefully get our certificate of occupancy in March, and we hope to start moving shows and teams over toward the end of March. So we will get people in there, do shot-blocking, rehearsals and training, and move the different show teams in over time. We are still working on that process.
I was just up there last week, and the IP network, the data network and the router are installed, and a lot of the equipment is in the racks. We expect a bunch of equipment to be coming in over the next few weeks, and then installation will continue this fall. It's all fibered, and we expect control rooms and studios and everything to be built out this fall and winter so that we can get in real solid system testing of the entire thing early next year.
What have been some of the biggest challenges there?
It’s a big facility and we are building IP from the ground up, so the interoperability I talked about earlier is absolutely crucial for us to be successful here and to build as much of this IP as we can. We will have little pockets of SDI, but we are really trying to limit that because we want to have a facility that is as flexible as possible.
Along with the next-gen master control, that's kind of where our focus is: Is this flexible, scalable, changeable? We want it to feel like a broadcast facility for our editorial teams, so when they walk into a control room it feels like a control room they are used to. But the underlying infrastructure is absolutely flexible if we have to change formats, if we have to add or subtract. Whatever we have to do going down the road, we can do.
Being an early adopter [of IP] has been one of the biggest challenges, but I will tell you, it’s paid off. We lowered the floors by a foot from our original plan, and we shrank our gear rooms by a third, because we are fibered so much. What you see on the backplane of the router and what you see in terms of fiber around the facility, it’s so much less than we even anticipated. We could have shrunk it a little bit more, but we were being conservative. We had to make these decisions a year ago, a year-and-a-half ago, so we didn’t want to be too aggressive.
What are you doing in master control operations to make them more efficient?
In addition to Hudson Yards, one of the biggest projects we are undertaking is what we call our next-generation master control, and it’s built completely on an IP infrastructure. We are moving the news playout master control functions over to our Techwood [Ga.] facilities. So we will have a common technology stack infrastructure for news and entertainment for the first time ever.
We weren’t aiming for efficiency in terms of operations, but we were aiming for efficiency in terms of the technology infrastructure by doing that. And, again, our goal is to create new capabilities and flexibilities on top of this infrastructure and on top of this master control so it’s easier to add channels, it’s easier to spin things up and try things.
Where things would have taken months and hundreds of thousands of dollars, we think we can do things faster and adapt to new requirements more quickly over the long term. We hope to have that online early next year. It's a pretty cool facility where everybody across news and entertainment is working in the same place, and we have live breakout rooms for news and sports, although news will be in those rooms most of the time because of the live nature of the content.
You mentioned an overall goal of making things more flexible because the business is changing so much. Obviously, you are undergoing a change in terms of the [pending] merger with AT&T. How do you expect Turner’s overall technology operations to change after that is complete?
You know, I don't know yet, and because of the appeal, we haven't been working [with] AT&T as much as we probably would like to. I am very optimistic about it and excited about the potential, but at this early stage we have been so focused on these internal things and our move to Hudson Yards that we haven't spent a ton of time really contemplating that integration.
You mentioned security as a big thing to focus on as you move to IP. Is there anything else for broadcast engineers to be thinking about in terms of new technology over the next five years?
I think we will all be looking at new formats. I think that HDR is promising as a real improvement. But really, we are so new in this IP world I think that the further development of NMOS [Networked Media Open Specifications] and things that make interoperability and plug-and-play across an IP infrastructure easier is a real opportunity.
On a pragmatic level for broadcast engineers, it's that all these things are interconnected now. There are so many dependencies that didn't exist before. So everything is a system, and we all have to bring more of a systems approach and systems thinking to our equipment, our technology and our gear.
You can’t just be worried about graphics or the control room, you have got to think across the network and what’s controlling what and how all these things operate together, and the potential and the pitfalls that may occur because of those things.