
Cloud Migration Needs Enterprise-Wide Effort

Moving on-prem operations to the cloud requires involvement from all parts of the company, not just engineering and IT, said tech leaders at TVNewsCheck’s TV2025 conference last week.

Public cloud technology has advanced to the point where it can reliably support key broadcast workflows, including live production and playout. But shifting on-premise operations to the cloud across a station group or network requires planning and effort from every part of the company, not just engineering and IT, according to broadcast technology leaders speaking at TVNewsCheck’s TV2025 conference at the NAB Show New York last week.

Sinclair Emphasizes Change Management

Sinclair began laying the technical foundation for its local stations to play out their programming from the cloud two years ago, in the wake of a ransomware attack that crippled legacy on-premise hardware across the station group. The company first created a centralized ingest and content distribution system in the AWS cloud — the “cloud media pipeline” — to handle all commercials and syndicated programming. It then evaluated different vendors of cloud playout and automation software before tapping Amagi in April.

Over the next six months, Sinclair set about tackling the myriad details of moving playout from on-premise gear to cloud instances, including significantly upgrading its networking capabilities and creating multiple redundancy plans. The group launched its first market with cloud playout, Raleigh, N.C., last month with CW affiliate WLFL and MNT affiliate WRDC. Birmingham, Ala., is next up, followed shortly by Nashville, as the group plans to take a new market to the cloud every two or three weeks for the next two to three years.


But change management across Sinclair, including educating operators and other staff about the new systems, was just as important as refining the technical architecture, said Sinclair SVP-CTO Mike Kralec.


“It’s not just cloud, and it’s not just technology,” Kralec said. “This is workforce, this is finance, this is communications — this is every piece of the organization coming together to really think about how do we operate the most effective company for ourselves moving forward?”

Reckoning With Resistance To Change

Janet Gardner, president of consulting firm Perspective Media Group, agreed that change management was just as vital in making the switch to cloud as an organization’s software design. She said failures in cloud implementations sometimes come from technology not living up to its promise. But more often they are due to a resistance to change from personnel, particularly those in middle management who may be most threatened by the shift.


“The biggest reasons we see failure is not because of technology, it’s because the organization is not willing to adopt the change,” Gardner said.

She said successful cloud transitions require both “top-down” pressure from upper management, which must emphasize and explain the business and operational drivers for the move, and “bottom-up” champions of the work, including those directly implementing the technical changes.

Building A Foundational Layer At Allen Media Group

Allen Media Group hasn’t experimented with moving broadcast over-the-air workflows at its 25 stations to the cloud yet. But the media conglomerate is nonetheless in the midst of a significant cloud migration, as it is moving OTT streams and other operations from AWS to Google Cloud Platform (GCP) as part of an exclusive enterprise-wide deal the company signed with Google in January 2022.


Shilpi Ganguly, AMG VP, IT and cybersecurity, said the initial plan was to transition all of the company’s businesses, including the station group, cable networks, syndication and Entertainment Studios Motion Pictures, to GCP within a year. But that timeline has since extended to the end of this year, due both to internal training that was needed to work with AMG’s new technology infrastructure and to changes the company made to its foundational software to set it up for the long haul.

“Our team did not have the skillsets that were required to architect in GCP,” Ganguly said. “So, upskill your team, and make sure you have a partnership with your CSP [cloud service provider]. We did, and we used some of that, upskilling within the team, training them up, and that took a little bit.

“We also did not want to do what we’d done in AWS, which was build as we go. We wanted to set up a foundational [layer] that went from nomenclature to load balancing, architecture and make those foundational decisions first, before you start putting in your CI/CD (continuous integration and continuous deployment) pipeline, for instance. We used infrastructure as code so we could scale a lot more effectively; we had not done that in AWS.”
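Ganguly did not detail AMG’s tooling, but the approach she describes, fixing nomenclature and provisioning through code rather than a console, might look something like this minimal sketch using Pulumi’s Python SDK for GCP. The tool choice, naming scheme and resource here are hypothetical illustrations, not AMG’s actual configuration.

```python
# A minimal infrastructure-as-code sketch, assuming Pulumi's Python SDK
# for GCP as the example tool. Names and resources are hypothetical.
import pulumi
import pulumi_gcp as gcp

def resource_name(env: str, service: str, purpose: str) -> str:
    """Fix the org-wide nomenclature first, before anything is provisioned."""
    return f"amg-{env}-{service}-{purpose}"

# A storage bucket defined in code rather than clicked together in a
# console, so the same definition can be replayed across environments
# and scaled without "build as we go" drift.
media_bucket = gcp.storage.Bucket(
    resource_name("prod", "ott", "media"),
    location="US",
)

pulumi.export("bucket_name", media_bucket.name)
```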

Looking ahead to broadcast operations, Ganguly hopes to make that migration “one piece at a time,” moving individual workflows to the cloud instead of building out the whole ecosystem at once.

Partnering With Finance


Marcy Lefkovitz, industry consultant and former VP of technology and workflow strategy for Disney Media and Entertainment Distribution, said it is critical for engineering and IT teams to partner with the finance department in making the cloud shift. In the new world of the cloud, finance teams can no longer be traditional “bean counters,” simply managing capital expenditures and operating expenses and tracking budgets. Instead, they need to develop a nuanced understanding of how the cloud works and its associated costs.

“There needs to be a team that actually understands those bills, and not just this is how much compute costs and this is how much storage costs,” she said. “But operationally, what are you doing with that compute, and what lives on that storage? So, they need to be more operationally aware than they have been in the past, so they can actually be an advocate for you with the cloud partners.”

Lefkovitz acknowledged that the value proposition of the cloud varies for different broadcasters based on the complexities of their individual capital cycles.

“You have to [ask] where do I have pain points and can moving workflows into the cloud alleviate those pain points for where you are right now,” Lefkovitz said. “What’s up for refresh? It’s as simple as, if I’m up for a refresh cycle, am I going to buy more hardware? Or am I going to see if this is already a mature cloud workflow and see if I can move it at this point?”
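That refresh-cycle arithmetic is simple to sketch. The figures below are invented for illustration only; a real comparison would also fold in egress fees, support contracts and staffing.

```python
# Illustrative only: compare an amortized hardware refresh against an
# estimated cloud run rate. All dollar figures are made up.
def annual_hardware_cost(capex: float, depreciation_years: int,
                         annual_maintenance: float) -> float:
    """Capex spread over the depreciation cycle plus yearly maintenance."""
    return capex / depreciation_years + annual_maintenance

def annual_cloud_cost(monthly_compute: float, monthly_storage: float,
                      monthly_egress: float) -> float:
    return 12 * (monthly_compute + monthly_storage + monthly_egress)

hw = annual_hardware_cost(capex=500_000, depreciation_years=7,
                          annual_maintenance=30_000)
cloud = annual_cloud_cost(monthly_compute=5_000, monthly_storage=1_500,
                          monthly_egress=800)
print(f"on-prem refresh: ${hw:,.0f}/yr vs. cloud: ${cloud:,.0f}/yr")
```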

For some broadcasters, she added, real estate is a “very expensive part of their ecosystem,” so the opportunity to consolidate their footprint by moving from on-premise hardware to the cloud for functions like master control and archive can be reason enough.

Gardner agreed that real estate is a “very significant driver” for cloud, particularly with the recent wave of big media mergers.

“We have a lot of replication of playout facilities, of libraries, all of that area,” she said.

The Nuts And Bolts Of Cloud Playout

Real estate wasn’t the big driver for Sinclair; instead, it was the need to replace aging playout servers and automation systems at its stations. Kralec didn’t want to do that by buying new on-premise hardware and locking the stations into another depreciation cycle of five to seven years, with technology that couldn’t adapt to a rapidly changing business.

“This is about flexibility in the face of uncertainty,” he said. “If you want to design a system that’s going to work for you in the future and going to work for your business in terms of pivoting to innovation and other processes, you need to consider the toolsets that are available to us in the cloud.”

The foundational elements of the move were built through the cloud media pipeline, which met the need of getting “content to the stations in the right format at the right time,” Kralec said. Solving those early problems paved the way for the local cloud playout launches occurring now.

“The integration with AWS, that message bus that’s created by the cloud media pipeline about what’s happening in the pipeline, is what Amagi now leverages to find out what content is available off of the traffic integration,” Kralec explained. “So, the traffic integration, the playlist, has become one ecosystem. But not built at once, but built over time to evolve, and to continue to evolve for our company. It is never intended to stay static. It can continue to deliver more value for the business because it can change.”
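Sinclair has not published the schema of those messages, but an event on that bus might look something like the following sketch; every field name here is hypothetical.

```python
# Hypothetical content-availability event of the kind a cloud media
# pipeline's message bus might carry; field names are invented.
import json

event = {
    "event_type": "content_available",
    "station": "WLFL",
    "house_id": "COMM-12345",  # the ID a traffic system's playlist references
    "location": "s3://pipeline-bucket/commercials/COMM-12345.mxf",
    "format": "mxf",
}

# Playout automation subscribing to events like this learns what content
# is ready to air, tying the traffic integration and the playlist together.
print(json.dumps(event, indent=2))
```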

In keeping with that evolution, what Sinclair did to launch cloud playout in Raleigh will not be exactly what it does in Birmingham, or in Nashville, as it continues to improve the software-based system instead of doing a simple “rinse and repeat.” But the company now has a playbook for each local launch, which takes about two months in total, and it has already started conversations with stations in its seventh and eighth markets.

Working with Amagi, Sinclair first does architecture development at each station, followed by systems integration testing and user acceptance testing. That rolls into onsite testing, followed by a training program. The next step is using the cloud for shadow operations of the existing hardware-based playout, which then flips to a “reverse shadow” after the station officially launches from the cloud.
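That playbook amounts to an ordered checklist. Here is a sketch of the phases as Kralec described them; the code structure is illustrative, not Sinclair’s tooling.

```python
# The per-market launch phases, per the article; the enum is simply an
# illustrative way to express the ordered, roughly two-month checklist.
from enum import Enum, auto
from typing import Optional

class LaunchPhase(Enum):
    ARCHITECTURE = auto()
    SYSTEMS_INTEGRATION_TESTING = auto()
    USER_ACCEPTANCE_TESTING = auto()
    ONSITE_TESTING = auto()
    TRAINING = auto()
    SHADOW = auto()          # cloud shadows the on-prem playout
    LAUNCH = auto()
    REVERSE_SHADOW = auto()  # on-prem shadows the now-live cloud playout

def next_phase(current: LaunchPhase) -> Optional[LaunchPhase]:
    """Advance a market to the next phase, or None once it is fully live."""
    phases = list(LaunchPhase)
    i = phases.index(current)
    return phases[i + 1] if i + 1 < len(phases) else None
```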

Resiliency was a key focus for Sinclair, and its AWS architecture uses multiple availability zones, so within a single AWS region there are multiple systems that fail over to each other. Sinclair also has multiregion redundancy, with all of its content synchronized between different AWS regions. Sinclair’s IT group has also developed “performance hubs” with improved network capability to connect its stations to AWS.
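In failover terms, that architecture implies a preference order: another availability zone in the same region first, then the synchronized secondary region. A minimal sketch, assuming hypothetical instance names and a placeholder health check:

```python
# Failover preference sketch: same-region AZ first, then cross-region.
# The topology and health probe below are hypothetical stand-ins.
from typing import Optional

TOPOLOGY = {
    "us-east-1": ["playout-east-az1", "playout-east-az2"],
    "us-west-2": ["playout-west-az1", "playout-west-az2"],
}

def healthy(instance: str) -> bool:
    """Placeholder; a real system would probe a heartbeat endpoint."""
    return instance != "playout-east-az1"  # simulate one failed instance

def pick_playout(primary_region: str = "us-east-1") -> Optional[str]:
    # Try every instance in the primary region, then the other regions,
    # which hold synchronized copies of the same content.
    regions = [primary_region] + [r for r in TOPOLOGY if r != primary_region]
    for region in regions:
        for instance in TOPOLOGY[region]:
            if healthy(instance):
                return instance
    return None

print(pick_playout())  # -> "playout-east-az2"
```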

“The key part of this is still going to be the network,” Kralec said. “It’s always been the network. We still have to get the signal back to the station.”

Another key piece of the redundancy strategy is a “survivability box” that Sinclair developed with Amagi for the case of a catastrophic connectivity failure, such as the dreaded “backhoe fade.” The device is a single server that synchronizes content and could handle playout of stored content for a day or two, until Sinclair can find alternative connectivity.

“It is not intended to be a full automation and playout system, it is just really intended to keep us alive for a period of time,” Kralec said. “Because we knew from our news contribution workflows that LiveU or another type of satellite — it could be low-earth-orbit satellite — we can provision connectivity to our stations in many different ways. And we only need to survive for a brief period of time.”
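Conceptually, the box only has to do two things: keep a local cache of upcoming content in sync, and switch playout to that cache when connectivity drops. A sketch of that logic, with hypothetical paths and a placeholder connectivity check, not Sinclair’s or Amagi’s actual code:

```python
# Survivability-box sketch: mirror upcoming content locally so a single
# server can play out a day or two of programming if the cloud is cut off.
import shutil
from pathlib import Path

LOCAL_CACHE = Path("/var/survivability/content")  # hypothetical mount point

def cloud_reachable() -> bool:
    """Placeholder; a real check would probe the cloud playout endpoint."""
    return True

def sync_upcoming_content(remote_files: list[Path]) -> None:
    """Mirror the next day or two of playlist content to local storage."""
    LOCAL_CACHE.mkdir(parents=True, exist_ok=True)
    for f in remote_files:
        target = LOCAL_CACHE / f.name
        if not target.exists():
            shutil.copy(f, target)

def playout_source() -> str:
    # Normal operation plays from the cloud; on a "backhoe fade" the box
    # falls back to its synchronized local cache.
    return "cloud" if cloud_reachable() else str(LOCAL_CACHE)
```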

Hardware is still used in an everyday capacity at the stations, but it occupies a much smaller footprint and is “much more flexible than anything we could have built even five years ago,” Kralec said. That includes small encode/decode systems, which Sinclair calls “ground to cloud” and “cloud to ground” environments, that send content back and forth between the station and the cloud using the RIST (Reliable Internet Stream Transport) or SRT (Secure Reliable Transport) protocols. There are also some small switchers to allow for local flexibility, such as occasional-use channels to the cloud.
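As one concrete illustration of such a “ground to cloud” link, FFmpeg can push a transport stream over SRT; the host, port and latency value below are hypothetical, and the article does not say what software Sinclair’s encode/decode systems actually run.

```python
# Sketch of an SRT contribution feed pushed from a station to a cloud
# ingest endpoint via FFmpeg. Host, port and latency are hypothetical.
import subprocess

def start_srt_contribution(input_url: str, cloud_host: str,
                           port: int) -> subprocess.Popen:
    cmd = [
        "ffmpeg",
        "-re", "-i", input_url,  # read the station feed in real time
        "-c", "copy",            # pass the stream through without re-encoding
        "-f", "mpegts",
        f"srt://{cloud_host}:{port}?mode=caller&latency=2000000",  # microseconds
    ]
    return subprocess.Popen(cmd)

# Example: push a local transport stream up to a cloud ingest endpoint.
# proc = start_srt_contribution("udp://127.0.0.1:5000", "ingest.example.com", 9000)
```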

The encoding/decoding capability is a necessity to handle network feeds that are still distributed to individual stations via C-band satellite. Kralec’s long-term hope is for the major networks to eventually distribute their programming through terrestrial IP, which would allow Sinclair to take a network feed directly from the cloud. In the interim, Sinclair is still receiving traditional satellite feeds of network programming and then re-encoding them to send them to the cloud.

“Right now, we’re just focused on enabling the existing workflow, which is take it from the station, and then get into the automation and playout system from there,” he said.

Read more coverage of TV2025 here.

