TVN TECH

With Less Latency, Live From The Cloud Grows

Discovery and ABC both rely on the cloud — one public, one private — to deliver their cable networks. Now broadcasters are looking at how the cloud can benefit not just distribution but also production, whether that means live news or sports coverage or post-production applications like editing.

Long a buzzword in TV technology, the cloud is becoming the word as media companies like Discovery Communications, Disney/ABC, Pac-12 Networks, Fox and PBS put the technology to work in live linear distribution and production and even post-production.

The growing acceptance is evidence that vendors are gradually solving the latency problem (the delays caused by signal processing) that has held back use of the cloud in live applications.

Delivering live sports over OTT from the cloud has also gotten much faster, says Keith Wymbs, chief marketing officer of Amazon’s AWS Elemental.

AWS has been able to prove out an end-to-end latency of five seconds or less, which Wymbs says makes it competitive with standard broadcast latency.

“The primary thing we’ve seen change in the last two years is that Apple’s recommendation for HLS [encoding] has dropped from 10-second chunks, or segment length, which they used from about 2009 to 2017,” he says.

“That creates a round-trip buffer of 20 to 30 seconds. Now the recommendation is two seconds, and some people are dropping it to one [for applications like horse racing]. I think broadcast latency is five to eight seconds, so if you can get it in five seconds from the cloud, that’s equivalent.”

Of that five-second delay, 2.5 seconds is associated with the video player, mainly its buffer, says Wymbs. The rest is associated with ingest, encoding and CDN distribution. Encoding offers some tradeoffs, such as sacrificing image quality for speed, but that yields only about a 10% improvement; the area of greatest potential is the player.
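
The arithmetic behind those numbers is straightforward, as the sketch below illustrates. It is only a back-of-the-envelope model; the assumption that a player holds two to three segments before starting playback is a common rule of thumb, not a figure supplied by Wymbs or AWS.

```python
# Back-of-the-envelope HLS latency model (illustrative only).
# Assumption: a player buffers roughly two to three segments before it
# starts playback, so segment length largely sets the latency floor.

def player_buffer(segment_seconds: float, segments_buffered: int) -> float:
    """Seconds of video the player holds before playback begins."""
    return segment_seconds * segments_buffered

for seg_len in (10.0, 2.0, 1.0):
    for depth in (2, 3):
        print(f"{seg_len:>4.0f}s segments x {depth} buffered -> "
              f"~{player_buffer(seg_len, depth):.0f}s of player delay")
```

Ten-second segments land squarely in the 20-to-30-second range Wymbs describes, while one- and two-second segments bring the player’s share of the delay down to a few seconds.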

There are other ways to improve latency. Today, high-end customers in major metro areas have the option of using AWS Direct Connect, dedicated fiber from the customer facility to the cloud provisioned through a transmission vendor like CenturyLink. That is usually a 100-gigabit connection, which introduces almost no latency on ingress to the cloud.

While live streaming represents the bulk of AWS Elemental’s revenues, the live channel has always been created by the customer first; the cloud hasn’t sat within the actual production chain. But that could be changing soon, says Wymbs, who is seeing interest in new live applications in the cloud, such as switching between camera feeds.

“That goes from the largest events to more ‘long tail,’ a tier below SEC Network and Pac-12, having cloud capabilities there for a one- or two-man camera production,” he says. “Those are the types of things that are very good for the cloud because you don’t know what the usage is going to be, but you know it’s only going to last two or three hours, and the production is relatively simple. We could handle switching in that situation.”

Discovery is moving content aggregation and linear distribution for its non-sports channels on a worldwide basis to the public cloud, using AWS. So far 300 linear TV channels have made the move, representing the U.S. and European footprints for networks such as Discovery, TLC and Animal Planet, and the recently acquired Scripps cable networks are next in line.

Discovery CTO John Honeycutt says the latency of the cloud has grown small enough that the company can run live programming such as auto auctions or Deadliest Catch post-shows through the cloud. It does so by building in a 10-second delay. “You have to manipulate time,” he says.

But using the cloud for Eurosport, the European sports giant that Discovery acquired in 2015, is more challenging, as latency is still an issue for live sports programming. Even a delay of five to seven seconds is unworkable in today’s social-media-dominated world. “You can’t be broadcasting sports and be behind Twitter,” notes Honeycutt.

Disney/ABC Television Group was “very early” with cloud playout, says Brad Wall, SVP of broadcast operations for the group, launching private cloud playout for its cable networks over two years ago.

The cloud playout occurs from data centers in North Carolina and Las Vegas that link to network operations in New York over dedicated fiber paths. “We’ve been very successful on that platform,” says Wall.

Using the cloud for playout has been tougher for the ABC network, which, as a live broadcast network, needs low latency to serve the ABC affiliate community.

A critical consideration is staying on a tight clock with affiliates when inserting special news programming or delivering make-goods for advertisers. Those latency considerations mean that while the network eventually plans to fully shift to cloud-based playout, some key pieces may remain on-premises in New York.

“We can’t compromise ultimately what our goals are,” says Wall. “But we’ve had to pin back, and lose some of our religion, about some things and some requirements that early on we were very religious about, to just let the vendor community and software development cycles play their course.”

One of the most IP-focused programmers today is Pac-12 Networks, which uses at-home production techniques running over a high-speed network to cover 850 games a year.

Pac-12 Networks has been using the AWS cloud and AWS Elemental encoding to feed its OTT and on-demand streams, as well as support its linear international distribution. It also archives its programming long-term on a mix of Amazon S3 and Glacier storage.

Now the sports programmer has made the AWS cloud an even bigger part of its everyday operations.

In the past two weeks, Pac-12 Networks has relaunched its master control infrastructure with Comcast Media Center, which has handled master control and distribution for the sports programmer since its 2012 launch. One major change is that any recorded content first sits in a short-term archive in the AWS cloud, and is then pulled over to Pac-12’s master control for playout by its Evertz Overture automation system.

“That’s a big deal,” says Mark Kramer, VP of engineering and technology, Pac-12 Networks. “We used to have a ton of expensive storage locally. Now it’s elastic.”

Automating the management of the cloud storage is easier than doing it on-premises, says Kramer, and also lets Pac-12 take advantage of other AWS workflows such as machine learning. His hope is to use such tools to create new statistics and new forms of programming for viewers.

“Any recorded content flows through AWS, either from them or to them,” says Kramer. “That’s really part of the biggest change in our approach. We feel like in terms of what we can bring to our fans, it opens up a whole lot of awesome workflows.”
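
Tiering like that is typically handled with automated lifecycle rules rather than manual moves. Below is a minimal sketch using the AWS SDK for Python (boto3) of what such a rule could look like; the bucket name, prefix and 30-day window are hypothetical placeholders, not Pac-12’s actual configuration.

```python
# Hypothetical sketch: automatically tier recorded game content from S3
# to Glacier after 30 days. Bucket, prefix and day count are placeholders,
# not Pac-12's actual settings.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-sports-archive",                  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-game-recordings",
                "Filter": {"Prefix": "recordings/"},  # placeholder prefix
                "Status": "Enabled",
                "Transitions": [
                    # keep fresh content in S3 Standard, then move it to Glacier
                    {"Days": 30, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)
```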

Kramer wants to do a proof-of-concept test of editing in the cloud within the next year. He is also investigating the possibility of highlight clipping in the cloud. He notes that the data-intensive nature of sports programming, with constant score graphics and player numbers and names on uniforms, as well as the dynamically changing crowd noise, makes it a natural fit for machine learning tools.

“There’s so much that exists in our content, I think it’s going to be absolutely unbelievable what could happen in our space,” says Kramer. “If you ask me why I work in sports media as a technologist, this is why. I think this is going to be completely transformative.”
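
As one illustration of the kind of machine-learning tooling Kramer is describing, a managed computer-vision service can already pull on-screen text such as a score bug or a name plate from a single frame. The snippet below uses Amazon Rekognition through boto3 purely as an example; the frame file is a hypothetical grab, and this is not a description of Pac-12’s actual pipeline.

```python
# Illustrative sketch: read on-screen text (score bugs, player name plates)
# from one video frame with Amazon Rekognition. The frame file is a
# hypothetical grab; this is not Pac-12's production workflow.
import boto3

rekognition = boto3.client("rekognition")

with open("frame_0001.jpg", "rb") as frame:       # hypothetical frame grab
    response = rekognition.detect_text(Image={"Bytes": frame.read()})

for detection in response["TextDetections"]:
    if detection["Type"] == "LINE":               # whole lines, not single words
        print(detection["DetectedText"], round(detection["Confidence"], 1))
```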

Latency is still “definitely a consideration” in using the cloud to deliver live sports programming, and Kramer says the context matters. “I think for international it makes more sense, as the expectations can be a little different.”

He doesn’t expect to see the cloud being used to switch between different cameras for live productions anytime soon, but he won’t say it will never happen.

Fox Sports created a cloud-based file-sharing system to help produce highlights for last summer’s FIFA World Cup coverage, using AWS as the cloud platform and file transport technology from IBM unit Aspera.

The latency of delivering files through the cloud has greatly improved in the past few years, says Brad Cheney, Fox Sports VP of field operations and engineering. Where a couple of years ago delivery was “tens of minutes behind real-time,” he says, for the World Cup Fox saw delivery of proxies and files at real time plus 10 seconds.

After a successful experience, Fox rolled out a similar system for its NFL coverage this season and another for its MLB postseason coverage, which it shared with TBS. From a volume standpoint, Fox pushed about seven hours of content per postseason game into the cloud, roughly double the average 3.5-hour length of an MLB game.

Fox has also had good results with the cloud system for the NFL. NFL games run shorter than baseball games, which means a faster influx of files. The files are also often shorter in duration or smaller in size than baseball’s, since baseball coverage leans heavily on super-slow-motion replays from high-frame-rate cameras.

“We’re seeing a near-real-time push of files back to central storage in the cloud,” says Cheney.

The bandwidth that Fox is using to send files hasn’t changed, but the mechanism on both sides that it uses to push files through has improved greatly, he says.

PBS is looking to migrate its media workflows and playout into the cloud, and this fall began evaluating 39 different vendors for cloud-based playout. PBS is vetting traditional broadcast vendors, such as Imagine and Evertz, that make automation software that would allow the network to play out via the public cloud on platforms such as AWS, Azure and Google Cloud.

Peter Wharton of Happy Robotz

The network has been running software from several vendors in parallel to compare their performance, says Peter Wharton, president of Happy Robotz LLC, who has been consulting with PBS on the project. The testing has been revealing; Wharton says he has seen as much as an eightfold difference in cost between two vendors delivering the same playout functionality in the cloud.

“That’s cloud cost, and that’s a huge difference,” says Wharton. “It’s important to look at vendors that are not just doing a lift and shift, but who are really optimizing their solution for the cloud.”

PBS is currently running cloud playout in a full-time shadow mode as part of its evaluations, and hopes to pick a vendor and cut over to cloud operation early next year.

While PBS doesn’t do much live programming, latency is still a consideration.

“From the point of view of latency in particular, even in playout it’s an issue,” says Wharton. “A lot of times, organizations like PBS need to live-switch events in master control, like the Kavanaugh hearings. When you’re doing that, it’s really hard to manage latency. You have large latency going in, and large latency coming out.”

Wharton notes that compression vendors like Zixi and transmission vendors like LTN Global Communications employ various techniques, such as forward error correction, to achieve broadcast-quality transport over the public Internet, but all of those techniques carry inherent latency.
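
To see why those techniques add delay, consider the simplest possible forward error correction scheme: one parity packet computed over a group of packets can rebuild a single lost packet, but the receiver has to wait for the whole group before it can repair anything. The toy sketch below only illustrates that idea; it is not how Zixi, LTN or any other vendor actually implements FEC.

```python
# Toy XOR-parity FEC, for illustration only (not any vendor's implementation).
# One parity packet per group repairs one lost packet, but the receiver must
# buffer the entire group first: that wait is the latency cost.
from functools import reduce

def xor_parity(packets):
    """XOR equal-length packets into a single parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def repair(received, parity):
    """Rebuild at most one missing packet (marked None) from the parity."""
    missing = [i for i, pkt in enumerate(received) if pkt is None]
    if len(missing) > 1:
        raise ValueError("XOR parity can repair only one loss per group")
    if missing:
        present = [pkt for pkt in received if pkt is not None]
        received[missing[0]] = xor_parity(present + [parity])
    return received

packets = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(packets)
arrived = [b"pkt0", None, b"pkt2", b"pkt3"]        # packet 1 lost in transit
print(repair(arrived, parity))                     # recovers b'pkt1'
```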

Wharton says he isn’t sure whether PBS will end up using the public internet to connect to the cloud or rely on a private network like an MPLS link.

Latency to and from the cloud can be managed, and today it ranges anywhere from a couple hundred milliseconds to five seconds, he says.

“But when you have five seconds going to, and five seconds coming from, it gets really hard to manage live events,” says Wharton, who notes that the Video Services Forum (VSF) is working on standards for public Internet transport.

What works best so far for broadcasters, he says, is to have low-latency monitoring of what is happening in the cloud itself, by connecting to a computer in the cloud through a low-latency interface that provides near-real-time feedback on a user console.

“If you’re doing switching and processing in the cloud, at least you get rid of that latency,” says Wharton.

LTN Global Communications, which already provides the connectivity for Sinclair’s cloud playout solution, is helping PBS with its evaluation. The company, which works with a number of major networks to support their OTT services, delivers a latency of 200-250 milliseconds across its global managed network and can hold that latency at a constant level.

Chris Myers, EVP and chief business development officer for LTN, says LTN’s trick is to control what happens in the “middle of the internet,” instead of trying to correct problems at the edges or preemptively solve problems that might not exist.

“Our network is fully managed in the middle,” says Myers. “We have a set of data centers across the world, and we don’t trust any single one of them. Our technology moves video seamlessly around the ISPs, and ends up correcting for problems inherent to the middle of the Internet, things like packet loss and jitter.”

While most of LTN’s broadcast business is in distribution to OTT platforms, Myers is seeing new interest in using the cloud for contribution as well. He points to companies like Pixellot, which is using automated cameras that pump feeds into the AWS cloud to produce OTT coverage of high school sports. LTN did contribution into the cloud to help support OTT coverage of Wimbledon last summer, and just landed another live contribution customer.

“We have a league that is going to be using our network to push live content from a location into Google Cloud, and then once the content is in Google Cloud, it will be transcoded, and then streamed off to a CDN for OTT delivery,” says Myers.
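
As a rough sketch of what that kind of pipeline could look like on the Google side, the snippet below submits a transcoding job with the Google Cloud Transcoder API using the Python client library. The project, location and bucket URIs are placeholders; this is not the unnamed league’s actual setup.

```python
# Hypothetical sketch: hand a live-captured file to the Google Cloud
# Transcoder API before OTT packaging. Project, location and bucket URIs
# are placeholders, not the league's actual configuration.
from google.cloud.video import transcoder_v1
from google.cloud.video.transcoder_v1.services.transcoder_service import (
    TranscoderServiceClient,
)

client = TranscoderServiceClient()
parent = "projects/example-project/locations/us-central1"   # placeholders

job = transcoder_v1.types.Job()
job.input_uri = "gs://example-ingest/game_feed.mp4"          # contribution landing spot
job.output_uri = "gs://example-output/game_feed/"            # renditions for the CDN
job.template_id = "preset/web-hd"                            # built-in ABR preset

response = client.create_job(parent=parent, job=job)
print("Started transcode job:", response.name)
```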

For broadcasters that do a lot of live, in-studio production, migrating to the cloud is a tougher proposition to make work and may not make sense right now, says Wharton.

“You might look at a TV station group that has a lot of stations, and say why not just put all those production control rooms in the cloud, and run them in the cloud,” he muses. “That starts to make sense until Election Night, when they all want to do their one thing. It takes careful analysis.”

Microsoft is seeing a lot of interest in using its Azure cloud to support production applications like editing and captioning, says Scott Bounds, media industry lead for Microsoft. A number of customers are doing proof-of-concept editing through the Azure cloud, using popular software such as Avid or Adobe.

With over 50 regional data centers worldwide connected by Microsoft’s own high-speed fiber, “you’re not totally dependent on the Internet,” says Bounds. For example, customers in Australia today are cutting promos with content stored in a data center in Amsterdam.

“That’s leveraging our horsepower in the cloud, our network and what you’re actually connecting to,” says Bounds. “We’ve delivered 120 milliseconds of latency, which gets you an experience with full-blown [Avid] Media Composer that is totally usable for the end user.”

Microsoft is also seeing interest from news organizations like Al Jazeera and entertainment producers like Endemol Shine in archiving their content in the cloud and using Microsoft’s AI capabilities, such as searching through the metadata for relevant content or automatically creating captioning via speech-to-text. Microsoft even sells a product called Azure Data Box that allows media companies to load up terabytes of old content onto a transportable storage appliance, ship it back to Microsoft and have it loaded into the Azure cloud.
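
As a simple illustration of the speech-to-text piece, the sketch below runs a short archived clip through the Azure Speech SDK for Python. The subscription key, region and audio file are placeholders; this is not a description of Al Jazeera’s or Endemol Shine’s actual captioning workflow.

```python
# Hypothetical sketch: generate caption text from an archived audio clip
# with the Azure Speech SDK. Key, region and file name are placeholders,
# not any broadcaster's actual setup.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY",
                                       region="westus2")
audio_config = speechsdk.audio.AudioConfig(filename="archived_segment.wav")

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)

result = recognizer.recognize_once()                # transcribes one utterance
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(result.text)                              # caption text for the clip
```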

As for doing live production with live switching through the cloud, Bounds isn’t seeing any customers doing that yet. He says latency will remain a problem due to simple laws of physics, though smaller programmers might have more tolerance for latency than big networks broadcasting major-league sports. But he says it could happen, particularly in news production.

“The cloud is evolving so rapidly, we only talk roadmaps,” says Bounds. “We don’t talk five years out.”

 

