TVN TECH

Virtualization Expands, Moving Off-Prem

Vendors say broadcasters are increasingly looking to expand virtualization of their operations across the whole chain, and they’re also seeking to take that virtualization off-premises, either to their own master control hubs or to the public cloud.

Virtualization, or taking functions previously run by dedicated hardware and moving them to software running on commercial off-the-shelf (COTS) hardware, is certainly not a new concept for broadcasters. Networks and major station groups have been pursuing virtualization for years as a way to standardize technology, reduce costs and gain operational efficiency. One example is the 10-year deal Sinclair Broadcast Group signed with Avid in 2016 to virtualize newsroom functions across its stations.

The Sinclair/Avid deal was notable in that Sinclair was looking to bring the same approach it was already using in its IT operations to news production, replacing proprietary editing workstations with virtual machines running on generic computing power. Groups like Hearst Television and Fox Television Stations have made similar efforts, virtualizing many of their news production systems to run on on-premises COTS hardware.

Vendors say the concept of virtualization in television operations has expanded as broadcasters look to implement it throughout the entire broadcast chain, including master control functions like captioning and loudness monitoring. Broadcasters are also now looking to take virtualization off-premises, either to their own master control hubs and data centers or to public cloud platforms like Amazon Web Services, Google Cloud and Microsoft Azure. Vendors are responding with flexible, software-based products that can run on on-premises hardware, in a private data center or on public cloud compute.


The move to virtualization should only increase as more broadcasters roll out ATSC 3.0, or NextGen TV, services, says Alan Young, CTO of IP transmission and production services vendor LTN.

“Because that standard is literally all IP, it is possible to virtualize the whole thing,” Young says. “Not only virtualize but do it remotely, you can put it in the cloud. And that brings enormous possibilities.”


Captioning, monitoring and translation vendor Digital Nirvana is seeing broadcasters move away from vendor-specific hardware to running a virtualized stack on-prem for their monitoring needs, says Digital Nirvana CEO Hiren Hindocha.

“A lot of our customers have moved away from sourcing the hardware from us to just getting a spec from us, saying this is the hardware that we’re looking for, it’s just a commodity hardware that we certify, and then we sell our software to run on that hardware,” Hindocha says.


Digital Nirvana sells a cloud-based product called Trance for closed captioning, translation and transcription. But its loudness monitoring systems, which record audio for CALM Act compliance, still rely on on-premises hardware at the local station. However, Hindocha says the monitoring of those systems can be centralized in a master control hub, even for 100 to 200 stations.
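CALM Act loudness compliance rests on the ITU-R BS.1770 measurement referenced by ATSC A/85, and that measurement is straightforward to run in software. As a rough illustration of the kind of check a virtualized monitoring stack automates, here is a minimal sketch using the open-source pyloudnorm and soundfile Python libraries; it is an illustrative stand-in, not Digital Nirvana’s implementation, and the file name and alert tolerance are assumptions.

```python
# Minimal sketch: measure integrated loudness of a recorded audio clip
# and flag deviation from the ATSC A/85 target of -24 LKFS/LUFS.
# Illustrative only -- not Digital Nirvana's implementation.
# Assumes: pip install soundfile pyloudnorm; "station_feed.wav" is hypothetical.

import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -24.0   # ATSC A/85 target loudness
TOLERANCE = 2.0       # assumed alert threshold, in LU

def check_loudness(path: str) -> None:
    data, rate = sf.read(path)                  # load PCM samples
    meter = pyln.Meter(rate)                    # BS.1770 loudness meter
    loudness = meter.integrated_loudness(data)  # integrated loudness, LUFS
    deviation = loudness - TARGET_LUFS
    status = "OK" if abs(deviation) <= TOLERANCE else "ALERT"
    print(f"{path}: {loudness:.1f} LUFS ({deviation:+.1f} LU) -> {status}")

if __name__ == "__main__":
    check_loudness("station_feed.wav")
```

Monitoring "by exception," as described later in this story, amounts to surfacing only the ALERT cases to a centralized hub rather than streaming every measurement back.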

For its part, supply chain software vendor SDVI saw a big uptick last year in captioning and subtitling work for non-live programming being done in the cloud. One key driver was that SDVI’s large media customers were doing more distribution deals with more platforms, says SDVI Chief Product Officer Simon Eldridge.

“Captioning and subtitling are becoming a step in the media prep phase instead of the distribution phases,” says Eldridge. “With that goes a change from a device that monitors a feed on the way out, to a piece of software that does that function and prepares the media beforehand.”


Systems integrator Diversified has seen an increase in centralized monitoring through several virtualization projects it has done with large station groups in the past two years. The groups moved to a “hub-and-spoke” model, with the goal of removing as much physical equipment as possible from the local stations, or “spokes,” and consolidating functions at a master control hub, says Jason Kornweiss, VP and GM of emerging technology and solutions for Diversified.

Virtualized systems at the hub now remotely handle many functions previously performed by discrete equipment at the local stations. They include closed-captioning insertion; loudness monitoring for CALM Act compliance, which is done by exception; and emergency alerting (though an EAS radio is still required at the station to receive the in-market signals and communicate to the hub so it can insert the EAS messaging tones and alerts). Some hubs even perform encoding for final distribution to both over-the-air transmitters and MVPDs for small-to-mid-market stations.


“The impetus was we’ve got x number of TV stations built over time with disparate equipment, and pieces of gear within the release path are in different locations and in need of standardization,” Kornweiss says. “Like, where do you put your school closing ticker and where do you put your closed captioning encoder? And it varies by market on its way to the transmitter. So, they’re chasing standardization through virtualization of as many parts of the product as they can.”

Virtualization at its most basic level means running different operating systems on a single piece of hardware, allowing several applications to share a single server. But many broadcasters quickly moved beyond that to containers, a lighter-weight alternative to virtual machines that lets multiple isolated applications run on a single operating system. The next step beyond containers is microservices, a software architecture that uses containers to build a distributed application. And microservices are how many broadcast functions are now being provisioned in the cloud.
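As a concrete illustration of that last step, here is a minimal sketch of what a single microservice might look like: a small, single-purpose HTTP service, written here in Python with only the standard library, that would be packaged into a container image and composed with other such services into a distributed application. The endpoints and the caption-job payload are hypothetical, not any vendor’s actual API.

```python
# Minimal sketch of a single-purpose "microservice": one small HTTP service
# that does exactly one job and is composed with others over the network.
# Hypothetical example -- not any vendor's actual API.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CaptionJobHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Health endpoint so a container orchestrator can probe the service.
        if self.path == "/health":
            self._reply(200, {"status": "ok"})
        else:
            self._reply(404, {"error": "not found"})

    def do_POST(self):
        # Accept a (hypothetical) captioning job and acknowledge it.
        if self.path == "/jobs":
            length = int(self.headers.get("Content-Length", 0))
            job = json.loads(self.rfile.read(length) or b"{}")
            self._reply(202, {"accepted": job.get("asset_id", "unknown")})
        else:
            self._reply(404, {"error": "not found"})

    def _reply(self, code, payload):
        body = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CaptionJobHandler).serve_forever()
```

Because the service owns one narrow function behind a network interface, dozens of identical copies can be spun up or torn down on cloud compute as demand changes, which is what makes the model attractive for provisioning broadcast functions.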


Brick Eksten, CEO of broadcast compliance monitoring, reporting and analysis vendor Qligent, began working on virtualizing broadcast applications back in 2009 at Digital Rapids, the company he founded. He later pursued virtualization at Imagine Communications, where he served as CTO and worked with large broadcasters like ABC on their virtualization and cloud initiatives. Eksten sees the public cloud as a logical next step in broadcasters’ virtualization journey, based on an analysis of IT spending across various industries that he performed while at Imagine.

Eksten’s underlying math is an extension of the “3:30:300” rule, a formula first developed by commercial real estate company JLL (Jones Lang LaSalle) to express the orders of magnitude separating a company’s costs on a per-square-foot, per-year basis: $3 for utilities, $30 for rent and $300 for payroll.

“The idea is that the absolute costs will go up or down based on location and industry, but the relative proportions hold true,” Eksten explains.

An analysis of data center costs for broadcasters extends that rule to 3:30:300:3,000, based on an estimated cost of $3,000 per square foot, per year to run a private data center, Eksten says. That is almost twice AWS’ estimated cost of $1,647 per square foot to run one of its data centers. So the economics only really work for big public cloud vendors like AWS and Google Cloud, which can spread their compute across thousands of customers.

Given how lightly many broadcasters utilize their existing infrastructure, the $3,000-per-square-foot number is conservative, Eksten adds. He says the actual number for a smaller broadcaster running its own data center might be closer to $10,000 per square foot.

“The smaller you are, the less likely it is that you’re running your data center as efficiently as you think you are,” Eksten says. “You are not managing it well, meaning you don’t have it staffed 24/7/365, you’re not looking at all this stuff, you don’t have redundancy for your power and cooling, because you just can’t afford it. So, then it’s even more likely that if you’re a small call-letter station that you really should be considering cloud.”
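A quick pass over those figures makes the gap concrete. The short script below simply restates Eksten’s numbers as ratios; the figures come from his examples above and the arithmetic is only illustrative.

```python
# Back-of-the-envelope comparison of Eksten's per-square-foot, per-year
# data center figures (illustrative arithmetic only).

JLL_RULE = {"utilities": 3, "rent": 30, "payroll": 300}  # the 3:30:300 rule
print("JLL 3:30:300 rule ($/sq ft/yr):", JLL_RULE)

broadcaster_dc = 3_000         # Eksten's estimate for a broadcaster-run data center
aws_dc = 1_647                 # his estimated AWS cost per square foot
small_broadcaster_dc = 10_000  # his estimate for a small, inefficient operator

print(f"Broadcaster vs. AWS: {broadcaster_dc / aws_dc:.2f}x")              # ~1.82x
print(f"Small broadcaster vs. AWS: {small_broadcaster_dc / aws_dc:.2f}x")  # ~6.07x
```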

IP transmission vendor Net Insight, which has long provided on-premises contribution hardware used for IP backhauls by major sports broadcasters, is addressing the virtualization trend with several new products. One, called Nimbra Edge, is a cloud-based contribution server that can be configured to do SRT streaming for an internet backhaul one day and reconfigured to perform high-quality JPEG XS transmission over a 10-gigabit link the next.


“You can spin it up in private servers, data centers, or public cloud,” says Net Insight CTO Per Lindgren. “You can have different instances and have ingest points and output points.”
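The appeal of such a cloud contribution server is that changing transport becomes a configuration change rather than a hardware swap. The sketch below imagines what that reconfiguration might look like; the TransportProfile class, its fields and the example bitrates are invented for illustration and are not Nimbra Edge’s actual interface.

```python
# Hypothetical sketch of reconfiguring a cloud contribution node between
# an SRT internet backhaul and JPEG XS over a dedicated 10 Gb/s link.
# Invented for illustration -- not Nimbra Edge's actual interface.

from dataclasses import dataclass

@dataclass
class TransportProfile:
    codec: str         # compression in use ("H.264", "JPEG XS", ...)
    protocol: str      # transport protocol ("SRT", "ST 2110", ...)
    bitrate_mbps: int  # target bitrate (assumed values)
    link: str          # underlying network path

# Day 1: contribution over the open internet with SRT's loss recovery.
srt_backhaul = TransportProfile("H.264", "SRT", 20, "public internet")

# Day 2: near-lossless JPEG XS over a provisioned 10-gigabit circuit.
jpegxs_backhaul = TransportProfile("JPEG XS", "ST 2110", 1_200, "10 GbE link")

def apply_profile(node_name: str, profile: TransportProfile) -> None:
    # A real deployment would call the vendor's API here; this just logs.
    print(f"{node_name}: {profile.protocol}/{profile.codec} "
          f"at {profile.bitrate_mbps} Mb/s over {profile.link}")

apply_profile("contribution-node-1", srt_backhaul)
apply_profile("contribution-node-1", jpegxs_backhaul)
```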

Diversified is a big believer in public cloud technology, particularly as a way for broadcasters to quickly launch channels. The company has been steadily growing its cloud business, including making several new hires with cloud expertise and partnering with some key vendors, and it is working on public cloud projects with a few major media companies. But given recent events that have exposed the cloud’s vulnerability, including multiple outages at leading vendor AWS this past December, it has found some customers are reevaluating how quickly they want to jump in. Instead, many are looking at a mix of cloud compute and on-premises hardware in various “N+1” redundant models.

“I think people are rethinking how much cloud they really want to leverage,” Kornweiss says. “It’s certainly alive and well in production. In the media supply chain, there are a lot of provocative use cases to provide mechanisms in the cloud.”
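In an “N+1” model of the kind Kornweiss describes, every N active paths get one standby, and the standby can live in the cloud while the active paths stay on-premises (or vice versa). The sketch below shows the selection logic in miniature; the path names and health states are invented for the example.

```python
# Rough sketch of "N+1" redundancy across mixed cloud and on-premises
# paths: N active channels plus one standby that takes over on failure.
# Path names and health states are invented for the example.

def pick_paths(paths: dict[str, bool], needed: int) -> list[str]:
    """Return up to `needed` healthy paths, preferring the listed order."""
    healthy = [name for name, ok in paths.items() if ok]
    if len(healthy) < needed:
        raise RuntimeError("insufficient healthy paths -- N+1 margin exhausted")
    return healthy[:needed]

# Two on-prem encoders plus one cloud spare (N=2, +1): the cloud path
# is only pressed into service when an on-prem path reports unhealthy.
paths = {
    "onprem-encoder-a": True,
    "onprem-encoder-b": False,  # simulated failure
    "cloud-encoder-spare": True,
}

print(pick_paths(paths, needed=2))  # -> ['onprem-encoder-a', 'cloud-encoder-spare']
```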

