Broadcasters Rethink Cloud With Hybrid Approach
As they upgrade their technical infrastructures for an IP world, broadcasters today are often looking to software-based systems that run on public cloud platforms to replace their legacy on-premise hardware. But they are also finding that the cloud may not be ready to handle all of their operations, particularly at the local station level.
A hybrid approach that combines on-prem hardware with cloud-based applications appears to be the best way forward.
Most broadcast operations are technically feasible to support in the cloud today, said cloud experts from major station groups and technology vendors who gathered last week for the TVNewsCheck webinar “Hybrid Cloud Strategies.” But that doesn’t mean it makes operational or financial sense to do so, particularly if a station is considering a “lift and shift” to the cloud of an operation currently performed on hardware without any fundamental change in the workflow.
NBCU Local’s Vanguard In Albuquerque
“That makes sense almost none of the time,” said Matt Varney, VP of media technology, NBCUniversal Local, who has overseen the launch of cloud-based news production at Telemundo stations in Albuquerque, N.M. and El Paso, Tex., and plans to eventually implement it across all of the NBCU-owned stations.
The station group has been the most aggressive with KASA Albuquerque, N.M., a newly acquired station that had to quickly start up local news production and launched an end-to-end cloud-based workflow a little over a year ago. The station is operating a production control room (PCR) in the cloud based on Vizrt’s Viz Vectar live production switcher. It includes software “appliances” to handle live field contribution, like LiveU and Dejero bonded feeds, along with Ross XPression graphics and Dalet’s Galaxy newsroom system.
The station relies on the cloud for playout too, handing off its signal to Amagi for playout and then using Harmonic software to perform encoding and distribution within the cloud. (Varney said cloud playout is eventually planned for other stations as well.)
“For us, we’re trying to push our live production into cloud so that we can deduplicate the hardware side of it,” Varney said. “We’ve got a pretty large footprint, with stations everywhere. And many of those stations have multiple control rooms, but not all of them are in use. But we’re paying to refresh, maintain and operate those things on a regular basis. So, we’re trying to deduplicate. We’re trying to drive consistency in our operating models and using a common platform as a way to do that is really enabling us.”
Priming The Content Pump At Fox Television Stations
Fox has made a big investment in public cloud technology for ingest and playout of the Fox network and its digital properties, controlled out of its new technical hub in Tempe, Ariz. But the Fox Television Stations (FTS) are still relying largely on on-premise hardware. Tim Joyce, SVP of engineering at FTS, said that network playout is certainly a proven model in the cloud, but challenges still remain for call-letter stations with daily live newscasts to produce and local transmitters to feed.
“We’re still working out what’s the best use of the cloud and how we’re going to do it,” Joyce said.
Setting up disaster recovery (DR) capability will probably be the group’s first foray into cloud playout and is something that Joyce is actively exploring.
“Having something readily available that you can turn on at a moment’s notice and getting everything, that for us is a huge benefit,” Joyce said. “Also, the cost benefit, we don’t have to be constantly running it and paying a cloud charge for having all of our content up there. We can use it when we need it and then turn it back off.”
Producing news in the cloud is far more complex, he said, particularly since Fox’s stations vary in size from the No. 1 market in New York to No. 160 in Ocala, Fla., and have different requirements.
“We don’t know when a story is going to come,” Joyce said. “Where we feel pushing in the cloud gets a little bit tricky is some of the integrations we have in our newsroom, our automated robotic cameras, things like that, is where it gets a little bit trickier for us. Not to say we can’t do it. Just right now, we’re focusing on the things now we can get out of the cloud.”
FTS is starting to take advantage of the AWS cloud today through a centralized content ingest, prep and quality control (QC) system called the “Fox MediaCloud” that was created as part of the Tempe project.
“That system is central to all of the Fox businesses,” Joyce said. “And [at FTS] right now, we’re in the process of really integrating our workflows into that system. But getting the content in a place in the cloud and being able to distribute simultaneously to multiple stations, that obviously is a huge advantage, and something we looked at a lot when we were doing this.”
Pursuing Local Playout At Sinclair
Sinclair began playing out and distributing its diginets from the cloud several years ago, and then built its “Content Media Pipeline,” a cloud-based centralized content ingest, prep and distribution system for syndicated content and commercials, after a ransomware attack hit the group in October 2021 and severely impacted operations. That “Pipeline” system became fully operational last year, greatly streamlining workflows across the group’s 85 markets.
“Putting all your media into one place to increase discoverability is very good,” said Mike Palmer, senior director, media management for Sinclair. “What we found there also is all the wonderfully imaginative workflows that produce duplicate content that each one of these stations had been doing. When you multiply that by the number of channels that we’re producing, we’re finding much more duplicate content than we expected. So I think we’re going to realize additional savings as we collapse duplicate workflows.”
Sinclair is now in the process of moving playout of its local stations to the cloud, using Amagi software and the AWS cloud to replace aging on-prem hardware and automation software. Last week, Palmer was part of the launch team at the first station shifting to cloud playout, in Raleigh, N.C.
“That is going well, and we’re on pace to do 20 of these by the middle of the first quarter of next year,” Palmer said. “And then the balance of our 190 or so channels will go very quickly after that.”
The way Sinclair’s cloud playout is architected, the group has East and West systems up and running at all times. Master control operators are still on-premise at the stations and are remotely operating the cloud playout. Most of the rack equipment at the stations themselves will go away.
“Probably the largest challenge we’ve had so far is networking,” Palmer said, “making sure that we can bring that content from the station up to the cloud, process it, both file and streams, and then bring it back down and push it out to the transmitter sites.”
Like Fox’s Joyce, Palmer acknowledged Discovery’s vanguard role in first moving to cloud playout back in 2017. But he said that compared to a large station group like Sinclair, Discovery had it relatively easy as playout was confined to a small number of locations.
“Whenever you’re trying to do contribution from all of your channel locations to all of your transmitters, that problem becomes geometrically larger,” Palmer said. “If we underestimated anything, it was in the amount of effort and complexity we had to put into networking to support this.”
Sinclair would like to get its network feeds delivered directly into the cloud via terrestrial IP, said Palmer, but is generally taking them down via traditional satellite. Then it is reencoding the feed and sending it up to the cloud, where it goes into playout. It then comes back down to the station, and goes through another decoder, as well as through the EAS chain, before it heads to the transmitter. The process becomes more complex for satellite receivers that are doing regionalized content, such as market blackouts for live sports programming.
“It’s not as simple as a network simply providing us a live feed if they’re handling live programming that they have regionalization of,” Palmer said.
As for costs, Sinclair does expect to realize savings, Palmer said, but getting there required “a very detailed conversation” with its cloud provider.
“This is achievable at scale,” he said. “It may be much more difficult to achieve if you’re not operating at scale.”
New Life For On-Premise Hardware
Imagine Communications sells on-premise solutions, including SMPTE ST 2110 routing software and HD-SDI hardware, as well as software to drive cloud-based workflows. While there has been a lot of interest in the past few years in the cloud, Imagine is still selling more SDI hardware today than IP technology, said Andy Warman, SVP of product, Imagine Communications. In fact, on the playout side SDI still represents 60% of I/Os (inputs/outputs), and Warman isn’t expecting on-premise hardware to go away anytime soon.
“What’s interesting lately is we’ve actually seen a renewed interest in on-prem facilities, which I’m not sure was quite what we anticipated,” Warman said. “Even though that is happening, there is still a lot of momentum behind cloud. Cloud itself is still moving forward at a pretty rapid pace, just probably not quite at the pace we had seen up until six months ago.”
Warman attributes that shift to customers’ increased interest in investing in hybrid workflows, particularly as it relates to tackling the “ground-to-cloud question.” Imagine is looking to help by developing hybrid routing topologies that can combine on-premise routing, like 2110 infrastructures, with cloud routing workflows under one common control interface. That includes the ability to support multicast routing functionality in the cloud, whose networking is inherently unicast.
“Hybrid done well allows you to mix playout, automation, all your video, all your data flows, all your media processing, in an effective way so you get the best leverage of what makes sense at any time,” Warman said. “But also, you can change your mind.”
Bitcentral provides software solutions for news production and master control workflows, using both on-premise and cloud compute. And Bitcentral COO Sam Peterson said his company is also seeing strong customer interest in hybrid solutions that combine on-premise hardware with cloud resources.
“It’s a both, it’s not an either/or,” Peterson said. “Almost every conversation today, it’s how do we use them together and use them effectively.”
Peterson doesn’t view customers’ change in mindset as a retreat from cloud technology. Instead, he thinks it reflects a more realistic assessment of a viable technology strategy going forward.
“For a long time, we were hearing people use the word ‘move,’ ‘I’m going to move to the cloud,’” Peterson said. “And I don’t hear ‘move to the cloud’ anymore. The term is we want to ‘leverage’ the cloud, we want to ‘use’ it. It doesn’t mean that everything necessarily wholesale is going to go to the cloud. For as long as there are transmitters, there is an endpoint that I’ve got to feed, and there are probably some local resources that I have to interface with. I think that is what got missed — there are pieces on the ground that we’re going to interface to. So, don’t forget about those.”