TVN TECH

Cloud Switching Floats In Proof-Of-Concept Stage

Sinclair’s Ernie Ensign says that while there are numerous long-term benefits to adopting cloud switching, the group is cautiously taking its time, running multiple proofs-of-concept for at least another year. Meanwhile, at least one vendor is pursuing a dramatically different approach to cloud switching. Above, Vizrt hosted a day at a beach house to show local producers how easy live production in the cloud can be.

As broadcasters seek to employ public cloud technology to replace on-premise hardware in their operations, live production has been described as the area with perhaps the greatest potential benefit but also the most technical difficulty, given the latency of moving video and audio feeds to the cloud and back.

Vendors and broadcasters have been working together to tackle the live production challenge, particularly in the area of switching live sources in the cloud from a “virtual production control room” that can be run by an operator from anywhere with a solid broadband connection.

Switching vendors say there is a lot of interest from major broadcasters in their cloud products as they look for alternatives to traditional HD-SDI on-premise equipment or new IP routing gear based on the SMPTE 2110 uncompressed standard. Broadcasters are looking to use protocols like SRT (Secure Reliable Transport) and NDI (Network Device Interface) to send compressed feeds to the cloud and back down, thus minimizing compute and egress costs.
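To make that contribution path concrete, here is a minimal, hedged sketch of pushing a compressed feed to a cloud ingest point over SRT by driving FFmpeg (built with libsrt and libx264) from Python. The test sources, encoder settings and the ingest endpoint are illustrative placeholders, not any broadcaster’s or vendor’s actual configuration.

```python
# Minimal sketch: push a compressed contribution feed to a cloud switcher over SRT
# by driving FFmpeg (built with libsrt and libx264) from Python. Everything here
# is a placeholder: test sources stand in for a studio camera and program audio,
# and the ingest URL is invented for illustration.
import subprocess

cmd = [
    "ffmpeg",
    "-re",                                                        # pace input at real time
    "-f", "lavfi", "-i", "testsrc2=size=1920x1080:rate=30",       # stand-in for a camera feed
    "-f", "lavfi", "-i", "sine=frequency=440:sample_rate=48000",  # stand-in for program audio
    "-c:v", "libx264", "-b:v", "8M",                              # compress the feed; bit rate illustrative
    "-c:a", "aac",
    "-f", "mpegts",                                               # SRT carries an MPEG transport stream
    "srt://cloud-switcher.example.com:9000?mode=caller",          # placeholder cloud ingest endpoint
]
subprocess.run(cmd, check=True)
```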

“With all of the content these broadcasters are looking at making, it’s a way to get stuff done,” says Jon Raidel, Vizrt’s global lead for cloud live production.


Vizrt’s NDI-native cloud switcher, Viz Vectar Plus, can deal with any kind of incoming IP stream and transcode it to NDI for live switching. In the sports arena, it has been used by BT Sport in the U.K. to produce a UEFA Youth League soccer match and by the Drone Racing League in the U.S. for its virtual events. On the news front, it is supporting day-to-day news production in the cloud for Telemundo station KASA Albuquerque, N.M., which launched live newscasts in January 2022.


“Their workflow is consistent with what it would be on prem,” says Raidel of Vizrt’s early cloud switching customers. “They have the same tools, and, in some cases, they have more resources, honestly. Our production switcher has eight M/Es [mix/effects buses], it’s huge. It has 68 keyers.”

Starting From Ground Zero

Ross Video offers cloud switching among other applications in its Graphite CPC (Cloud Production Center) product, which includes a software version of the Carbonite switcher. The system is currently in a number of proofs-of-concept (POCs), says Chris Kelly, solutions manager, production workflow for Ross Video.


Customers often have very different reasons for pursuing production in the cloud. A common thread, however, is rethinking how to handle traditional tasks, like monitoring and return feeds, given the latency of getting compressed signals up to the cloud and back.

“All the assumptions you’ve ever made about production kind of get thrown out the window and you kind of have to start from ground zero,” says Kelly. “As a person who did this from a producer perspective — how am I going to get that graphic into that monitor or that reporter into that monitor? Doing that in the cloud is a different story.”

One of those customers is Sinclair Broadcast Group, which is already successfully using the public cloud for content ingest and preparation with plans to transition playout there as well through 2023. Sinclair has been exploring how it can use the cloud to transform local news production and has tested cloud switching with major vendors including Ross, Grass Valley and Vizrt.

While it is not ready to move news production to the cloud anytime soon, Sinclair thinks the public cloud could provide long-term benefit in two ways. One is as a cost-effective replacement for aging HD-SDI infrastructure. While Sinclair plans to implement on-premise 2110 routing at its larger stations when their current HD-SDI infrastructure is due for replacement, 2110 may not make sense for smaller outlets, particularly if they are unlikely to produce 4K content.

“With the 2110 situation, there’s a big financial investment there, and in a lot of cases in news production it’s potentially overkill,” says Ernie Ensign, AVP, news technology & operations for Sinclair.

Equivalent functionality may instead be provided by making an “opex” investment in cloud computing. The concept is to bring compressed feeds from the station up to the cloud, switch the show there, and then bring a compressed feed back to the station, where it would be converted back to baseband before being re-encoded and sent to the transmitter.
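The return leg of that concept can be sketched the same way: listen for the compressed program feed coming back from the cloud over SRT, decode it, and re-encode it for the transmission chain. This is only a hedged illustration; a real plant would more likely land the baseband step on SDI hardware rather than the placeholder UDP hand-off shown here, and the addresses and bit rates are assumptions.

```python
# Hedged sketch of the return leg: listen for the compressed program feed coming
# back from the cloud switcher over SRT, decode it, and re-encode it for the
# downstream transmission chain. Addresses, codecs and bit rates are illustrative.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "srt://0.0.0.0:9001?mode=listener",  # wait for the cloud return feed
    "-c:v", "libx264", "-b:v", "8M",           # re-encode for the transmitter path (placeholder)
    "-c:a", "aac",
    "-f", "mpegts",
    "udp://239.10.10.1:5000",                  # placeholder hand-off toward the transmitter
]
subprocess.run(cmd, check=True)
```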


Another possible efficiency of the cloud would be in reusing control room infrastructure across multiple stations. Sinclair and other station groups have mused about the potential of using a single control room and director to produce newscasts across multiple markets and time zones, a capability already supported by the latest generation of on-premise production switchers from vendors like Sony. But the same concept could be applied more broadly in the cloud.

“At scale, you can get some efficiencies if you are sharing resources,” Ensign says. “We have 70 news-producing stations today, but only 45 of them are live at any one time. If we only need 45 cloud resources, and we can share them between the 70 stations, then I think you can start to make better financial models. It’s going to take you a little bit to ramp up and get there, but I think it will get cheaper over time.”
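Ensign’s sharing argument reduces to simple arithmetic. The sketch below uses only the 70-station and 45-concurrent figures from his comment; the hourly rate and daily hours are invented placeholders meant to show the shape of the model, not real cloud pricing.

```python
# Back-of-envelope sketch of the resource-sharing math Ensign describes.
# Only the 70-station and 45-concurrent figures come from the article; the
# hourly rate and daily hours are hypothetical placeholders.
STATIONS = 70
PEAK_CONCURRENT = 45
HOURLY_RATE = 12.0    # hypothetical $/hour for one cloud production chain
HOURS_PER_DAY = 6     # hypothetical hours a chain runs each day

dedicated = STATIONS * HOURLY_RATE * HOURS_PER_DAY * 365        # one chain per station
shared = PEAK_CONCURRENT * HOURLY_RATE * HOURS_PER_DAY * 365    # pool sized to peak load

print(f"Dedicated per-station chains: ${dedicated:,.0f}/year")
print(f"Shared pool at peak load:     ${shared:,.0f}/year")
print(f"Savings from sharing:         {1 - shared / dedicated:.0%}")
```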

Sinclair’s Measured Approach

While Ensign thinks a Sinclair station could produce live news in the cloud today if it wanted to, he says any move beyond a POC is at least a year away. As long as Sinclair’s existing on-prem gear continues to work, the traditional model of news production is still more cost-effective than the cloud. That is due to the costs of cloud egress, particularly for duplicating return feeds for monitoring in the studio, as well as the costs of encoders, the additional hardware required to embed audio into compressed streams like NDI or SRT, and upgraded connectivity at the stations.

There are some workarounds that can reduce cost, such as using media players or multiviewer instances in the cloud to handle monitoring instead of bringing return feeds all the way back down to the station for display on traditional monitors via SDI. But replicating all of the monitoring that currently happens in a studio probably wouldn’t be feasible.

“When you start to add everything up and compare it to what we do today, it’s hard to make the numbers work,” Ensign says.

Sinclair has tested cloud switching using both the SRT and NDI protocols to send compressed feeds to the AWS cloud at bit rates of 6 to 10 Mbps. It hasn’t gone to higher bit rates because it hasn’t invested in beefing up connectivity for the tests and didn’t want to interfere with existing station operations, though it may experiment with higher rates in the future.
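A rough sense of why egress is a sticking point can be had with simple arithmetic. Only the 6-10 Mbps bit-rate range comes from Sinclair’s tests; the per-gigabyte egress price and daily hours below are placeholder assumptions, not quoted cloud pricing.

```python
# Rough arithmetic for one compressed return feed at the bit rates Sinclair has
# tested. Only the 6-10 Mbps range comes from the article; the egress price and
# daily hours are placeholder assumptions.
BITRATE_MBPS = 8        # within the 6-10 Mbps test range
HOURS_PER_DAY = 6       # hypothetical newscast and monitoring hours
EGRESS_PER_GB = 0.09    # placeholder $/GB egress price

gb_per_day = BITRATE_MBPS / 8 * 3600 * HOURS_PER_DAY / 1000   # Mbps -> MB/s -> GB/day
monthly = gb_per_day * 30 * EGRESS_PER_GB

print(f"One return feed: ~{gb_per_day:.0f} GB/day, ~${monthly:.0f}/month in egress")
print("Duplicating that feed for every studio monitor multiplies the bill.")
```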

It evaluated Vizrt’s Viz Vectar Plus and Grass Valley’s AMPP cloud switchers early on and is currently doing a POC with the Ross Graphite CPC system.

A big selling point for Graphite CPC is its easy integration with the Ross OverDrive production automation system, which Sinclair runs in about 60% of its stations in conjunction with traditional Ross production switchers. There are a “handful” of Grass Valley Ignite automation systems spread across the group, Ensign says, while the remaining Sinclair stations aren’t automated.

Sinclair is adding OverDrive to a few more stations in the near term, and making the system work in the cloud is imperative to moving forward with cloud switching. In that vein, Ensign plans to do further testing of the Vizrt switcher, as Ross has integrated OverDrive with the Vectar product since Sinclair’s early testing.

While making OverDrive work with a competitor’s cloud switcher might seem counterintuitive, Kelly says it’s consistent with Ross’s decision eight years ago to make OverDrive work with a variety of production consoles from third-party vendors.

“It’s just part of what we do these days,” Kelly says.

The Operator Experience

So far Sinclair’s testing of cloud switching has all been browser-based, though Ensign says there are ways to connect a traditional hardware panel to the system if manual operation were required. Sinclair hasn’t yet decided whether it would eventually run a soft panel in the cloud or put in a traditional hardware panel.

Ensign has found that sending SRT feeds with a Haivision encoder has given the lowest latency and better “resiliency and dependability” than NDI, but he concedes that Sinclair is still experimenting.

Sinclair hasn’t found latency to be a major challenge in switching live news, since camera feeds coming from the studio are all in the same time space, Ensign says. He adds that latency is likely more of an issue for sports.

In its tests Sinclair has been able to successfully incorporate bonded cellular feeds, using LiveU’s “cloud connect” feature to take both SRT and NDI feeds directly to the cloud instead of bringing them back to the station and re-encoding them, which would be redundant. Sinclair did see some latency differences between the two protocols, but Ensign notes that Sinclair already intentionally delays its LiveU feeds by 1.5 seconds to ensure good picture quality as the feeds transit the cellular networks.

“With a live reporter in the field, we did not see any major latency issues that prohibit us from moving forward,” Ensign says.

What is challenging in the cloud is handling audio channels, which were embedded with video in the SDI world but are a separate entity in IP routing. Audio feeds need to be embedded or mapped to the video streams whether dealing with SRT or NDI, which leads to complexity and additional expense.

“Obviously, you’ve got to get studio microphones and other audio sources up to the cloud as well, so you kind of piggyback those on some of the SRT feeds,” Ensign says. “But now you’ve got to start putting in embedding hardware and other hardware to sort of marry those audio sources into the SRT and get them up to the cloud, where they can be de-embedded and put on various faders in the cloud to manage those. So, it’s been a little bit of a challenge to have to add that additional hardware to embed.”
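What Ensign describes amounts to muxing a separate audio source into the same compressed transport stream as the video before it leaves the station, so the cloud mixer receives the two already married. A hedged FFmpeg-from-Python sketch of that “embedding” step for an SRT feed follows; the inputs, mappings and ingest endpoint are illustrative placeholders.

```python
# Hedged sketch of the audio "embedding" step: mux a separate studio audio source
# into the same transport stream as the video before it goes up over SRT.
# Inputs, mappings and the ingest endpoint are illustrative placeholders.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "udp://127.0.0.1:6000",                         # placeholder: camera video as a local TS
    "-f", "lavfi", "-i", "anullsrc=r=48000:cl=stereo",    # stand-in for a console/mic feed
    "-map", "0:v", "-map", "1:a",                         # marry the separate audio to the video
    "-c:v", "copy", "-c:a", "aac",                        # pass video through, encode audio
    "-f", "mpegts",
    "srt://cloud-switcher.example.com:9002?mode=caller",  # placeholder cloud ingest
]
subprocess.run(cmd, check=True)
```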

Sinclair has experienced the same scenario in NDI, with the need to marry audio to video sources proving cumbersome. It plans to begin testing the “Sienna Cloud for NDI” processing system from U.K. firm Sienna, which Ensign describes as the “Swiss Army knife of terminal gear and processing gear” and which can be used to embed audio in NDI.

“You need some kind of tool that can be a little easier to map different sources and audio and get that up to the cloud,” Ensign says.

In the long run, he would like to see the ability to transport popular digital audio protocols like Audinate’s Dante directly to the cloud.

Bringing 2110 To The Cloud

One vendor pursuing a very different strategy for cloud switching is swXtch.io LLC, a subsidiary of technology company and U.S. stock exchange operator IEX Group. The company wants to bring the full functionality of uncompressed 2110 switching to the public cloud, including features like multicast, precision time protocol (PTP) and hitless merge (the SMPTE ST 2022-7 standard).

swXtch.io has leveraged its expertise in high-speed financial applications to create a software switch, cloudSwXtch, that runs in the cloud but works like a hardware IP switch from Arista or Cisco. It would allow broadcasters to use the cloud to expand their signal capacity by means of a “virtual overlay network” while still maintaining the workflows and processes of their on-premise 2110 hardware.

swXtch.io demonstrated cloudSwXtch in the Evertz booth at IBC last September, showing a “ground-to-cloud” scenario with 2110 signal flows from an Evertz Magnum-OS system going to the Microsoft Azure cloud and being switched there.


Geeter Kyrazis, business lead for swXtch.io, notes that the broadcast industry has made a long technology journey going from SDI to IP, then to virtualization, and now to the cloud. But he says there are still some “low-level components” that are missing before broadcasters can take full advantage of the cloud for production, with the two biggest being a lack of multicast and a lack of PTP.

“Unless you can synchronize all of the signals and unless you can make sure you have distribution using the multicast methods that the applications and systems require, and the standards require, you really can’t move live production to the cloud,” Kyrazis says. “You can do some things, but the benefit of cloud production, of course, is you expand your footprint of the things you can switch to and from. You can cut to things from around the world, as opposed to around a campus or around a broadcast network. But if you want to do that, you have to be much more adherent to the timing that those long distances can impact and the network efficiency that those broad networks can require. And of course, the standard [2110] requires multicast.”

At IBC, swXtch.io didn’t yet have PTP fully integrated or all of its 2110 features, such as frame sync. But it has completed that development work and will be showing those capabilities at NAB 2023 in a demonstration of a global live production workflow with the ability to ingest multiple 2110 sources into the cloud. It also will be highlighting “protocol fanout,” the ability to convert between different IP formats and protocols, such as unicast, multicast or SRT, and freely switch between them.
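Conceptually, protocol fanout is a relay that accepts a stream in one form and re-emits it in others. The Python sketch below illustrates only the simplest case named above, unicast in and multicast out, with placeholder addresses; an actual cloudSwXtch adds 2110 timing, PTP, SRT and hitless merge on top of this basic idea.

```python
# Minimal sketch of the unicast-to-multicast case of "protocol fanout": accept
# media packets on a unicast UDP socket and re-emit them to a multicast group.
# Addresses are placeholders; this only illustrates the relay concept.
import socket

UNICAST_IN = ("0.0.0.0", 5004)       # placeholder unicast ingest
MCAST_OUT = ("239.1.1.1", 5004)      # placeholder multicast fanout group

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(UNICAST_IN)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 4)  # allow a few hops

while True:
    packet, _ = rx.recvfrom(2048)    # receive the unicast stream
    tx.sendto(packet, MCAST_OUT)     # fan it out to every multicast subscriber
```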

Kyrazis says swXtch.io found protocol fanout to be invaluable in creating the IBC demo, which featured multiple vendors including Telestream, Cinefilm and Lumen with different protocol requirements.

“We thought it was kind of a minor thing, just taking in unicast and pushing it out as multicast or fanning it out as unicast or pushing it out as SRT,” Kyrazis says. “But that became a big deal, because it really allowed all of these different vendors to interoperate in that demo, and we put it together in four to six weeks.

“Had it not been for that feature,” he says, “it never would have happened, because each of them would have had to accommodate each other’s stream format types and they wouldn’t have had the engineering resources to do that.”

