After rolling out a new Philadelphia plant built without an SDI router and using as much IP as possible, the NBCUniversal Owned Television Stations group will create a virtualized architecture for KBLR, its Telemundo station in Las Vegas, in which most of the back-end equipment used to run the station, including news production, will be located at a data center in Dallas. And then there’s the new facility NBCU is building in Boston, which will include six control rooms and six studios and accommodate four separate NBCU businesses. A big driver is increased support for the SMPTE 2110 IP standard from the broadcast vendor community. (Photo: Wendy Moger-Bross)
After building a new state-of-the-art, IP-based facility for its Philadelphia duopoly of WCAU-WWSI, the NBCUniversal Owned Television Stations group is looking to take lessons learned there and apply them to the rest of its 40 stations across 28 markets.
One of the early projects, according to NBCU engineers speaking Monday at TVNewscheck’s NewsTECHForum in New York, will be creating a virtualized architecture for KBLR, the Telemundo station in Las Vegas, in which most of the back-end equipment used to run the station, including news production, will be located at a data center in Dallas.
The plan is to have the new Las Vegas plant on-air next summer, with 90% of it running on equipment that complies with the SMPTE 2110 IP networking standard, says Brad Plant, director of technology and operations for the NBCUniversal Owned Television Stations group.
Plant is also hoping to use as much 2110-compliant equipment as possible in the new facility NBCU is building in Boston, which will include six control rooms and six studios and accommodate four separate NBCU businesses. A big driver is increased support for the 2110 standard from the broadcast vendor community.
“Now we are looking for 2110 across the board,” says Plant, who spoke alongside other technology executives on a panel, “Live News Production Over IP,” moderated by Marcy Lefkovitz, VP, technology and workflow strategy for the Disney ABC Television Group.
That is a big change from three-and-a-half years ago, when Plant began planning the new Philadelphia facility with Tony Plosz, VP, engineering and operations for WCAU and WWSI. The goal was to create a plant without an SDI router and use as much IP as possible. At that point SMPTE 2110 didn’t yet exist. So the Philadelphia facility is built on the earlier SMPTE 2022-6 and 2022-7 standards.
“Brad and I joke a lot that while this is the most advanced station in the group, it’s also borderline-outdated already,” says Plosz. “We’re already thinking about upgrades, the path to 2110. The decisions that we made, and how do we get to that next step, how do we replace some of the things that we put in already.”
The original design started off with about 10%-15% of the plant being native IP, says Plant, and today it’s probably closer to 20%-25%. The Philadelphia facility has dual Cisco Media Fabric architectures, using Grass Valley IP gateways to interface between the 100-gigabit Ethernet Cisco network and traditional SDI operations.
“We could not do full IP everywhere, so there are a lot of little SDI islands with Grass Valley offramps into the Media Fabric,” explains Plosz.
The Philadelphia facility, which began operations in late October, is designed with an open-floor concept with functional groups like production and engineering staff organized into “pods.” While the guts of the IP-centric, heavily automated plant are a big departure from traditional SDI infrastructures, a major goal was to make that “transparent” to staffers.
“All of the touch points are still very much the same,” says Plosz. “They don’t know that behind the scenes there is no SDI router.”
That said, training broadcast engineers in basic networking principles is a key requirement for moving to IP operations, say Plant and Plosz. Vendors are helping with that process; Cisco offers a four-day intensive course for broadcasters, which Plant and other NBCU staffers attended.
Overall, Plant says that major vendors like Cisco, Grass Valley and Evertz have been great in helping smooth the IP transition through events like “public interops” where vendors test compatibility between their gear.
“It’s taking an entire-industry effort, and to see the interoperability from the vendor side is really cool,” says Plant, who worked for Ross Video earlier in his career. “I spent part of my life on the vendor side, and I never saw vendors working so closely together as I have as we move forward through this new uncharted territory.”
Matt Keiler, VP of North American sales for IP transmission vendor TVU Networks, agrees that vendors are working hard to make IP happen.
“There has been more interaction on the vendor side when it comes to partnering,” says Keiler. “Our partners from the station and network side are pushing us to do that more in the vendor community, which I think is a good thing.”
TVU Networks, a longtime supplier of bonded cellular and IP-transmission systems for newsgathering and sports production, sees big upside in broadcasters shifting more of their operations to IP.
“There’s so much when it comes to IP-based production today, in the cloud, that is a tremendous opportunity,” says Keiler. “Some of the things we’re doing with our station partners specifically on remote production — we can do full cloud-based productions now. College football conferences are doing this, small station groups are doing this with their social media groups.”
The NBCU station group is no stranger to running broadcast operations remotely; it already runs a centralized master control operation in Denver and a centralized graphics facility in Dallas. Plant says that Dallas is also a good place for centralizing back-end equipment for the smaller Telemundo stations.
“I’ll be clear,” says Plant. “This is not centralizing newsrooms or centralizing production operations.”
Instead, Plant wants to use low-latency IP networking and virtualization to move most of the equipment for those stations to the data center in Dallas. In Las Vegas, for example, he says the only piece of back-end hardware that should still remain on-premises is a small multiviewer that will be used for robotic camera control; even 100 milliseconds of latency is too much when making pan, tilt and zoom adjustments. But everything else should be able to reside in Dallas.
“The idea is to lift out the technology piece and centralize that,” says Plant. “A lot of these small-market stations don’t have a technology leader, it’s a regional technology leader. And that doesn’t work for us. And there’s maybe only two or three engineers that have to maintain all of the technology, which keeps getting increasingly complex. So, the more we can take away and centralize, the easier it is for that operation to run. For the users, again, the goal is transparency. They should not know that the equipment is living in another state, they should have full immediate access to all of the tools they have today.”
Here is the link to a video of this session: https://www.youtube.com/watch?v=sTxC1x6RVyc