TVN Tech | CBC’s Montreal Greenfield Reveals IP Challenges
CBC/Radio-Canada will soon move into its new Montreal broadcast center, which will run almost entirely on IP technology. The Maison de Radio-Canada (MRC) will house 11 TV and 19 radio studios in a footprint of about 400,000 sq. ft., roughly a third the size of the current facility’s 1.3 million sq. ft.
In an interview with TVNewsCheck’s Jennifer Pallanich, Francois Vaillant, executive director of engineering solutions at CBC/Radio-Canada, talks about the broadcaster’s decision to pursue an IP future rather than an HD/SDI plant and gives an update on the status of the greenfield MRC.
Vaillant says constructing a brand-new facility powered by a technology that is not yet mature has presented a series of challenges, such as interoperability between vendors, but those difficulties should lessen for the future IP technology migrations the broadcaster plans over the next decade. He cautions that other broadcasters considering the switch from HD/SDI to IP should expect things like requests for proposals (RFPs) to take much longer than they have in the past because of the learning curve involved in moving to a new technology, and operational costs to double.
An edited transcript.
What’s the status of your greenfield buildout at Maison Radio-Canada?
We started the project in 2017. We actually had the data center delivered in August 2018, and we slowly started the installation in the data center. The room and the power and cooling were available, but it was still a construction site, so it was challenging.
We are, in terms of investment, 70%-75% complete. Installation is well advanced. The building itself should be delivered by December. We will probably start to move some individuals by February, and will slowly start to move the production shows by March or April. We are planning to complete by the end of 2020. We have a window of another six months in case we need it because of the Olympics in August. The Olympics is big for us, and the same resources are required for the Olympics and the installation.
This is a much smaller space than your current home. Did that factor into your decision to go with an IP-based infrastructure?
In the current building, we have 465 technical equipment cabinets, and we are moving into a smaller place — from three data centers to one — and getting down to 155-160 cabinets. It is two-thirds smaller in terms of the technical footprint. IP increased the density of technology per rack.
The second driver was the fact that IP was emerging in the industry. Were we going to invest money to be the last HD/SDI plant, or were we going to move toward the future? A greenfield project starting from scratch was an opportunity to be more future-proof, and IP is well designed to be future-proof. If we are moving to 4K in two years, in three years, in five years, the base infrastructure remains the same. We just need to increase the pipes to absorb the increasing bandwidth, but basically we are using the exact same infrastructure.
Will the MRC have any legacy components, or is it entirely IP?
I can’t say it is going to be 100% IP based. There are components that don’t exist yet in IP. The control board, the video controller, the switcher don’t exist yet in IP, so we will put gateways around them to convert in and out of IP, but the rest is going to be IP based, including the playout. Monitoring is going to be IP based. Live video flow between control rooms, cameras, monitors, post and maintenance, all the way up to presentation, will be fully SMPTE ST 2110 IP based.
What kind of difficulties is the switch to IP infrastructure posing?
It’s challenging because the technology is not yet 100% mature. We are facing a lot of issues. The first roadblock was the synchronization of those new networks. PTP synchronization is well known in IP industries but not as well known in the broadcast industry, so there is a learning curve there.
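For context: PTP (IEEE 1588, profiled for broadcast use by SMPTE ST 2059-2) keeps every device in the plant locked to a grandmaster clock by exchanging timestamped messages and correcting each local clock’s offset. The sketch below illustrates only the core arithmetic of that exchange, with illustrative numbers; it is not a model of any CBC/Radio-Canada system.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic IEEE 1588 two-step exchange.

    t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes the network path is symmetric in both directions.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2   # how far the slave clock is ahead
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay

# Illustrative scenario: slave clock runs 5 us ahead, one-way delay is 100 us.
t1 = 0.0                    # master time when Sync leaves
t2 = t1 + 100e-6 + 5e-6     # arrival, as read on the (fast) slave clock
t3 = t2 + 50e-6             # slave sends Delay_Req a little later
t4 = (t3 - 5e-6) + 100e-6   # arrival, as read on the master clock

offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)  # ~5e-6, ~100e-6
```

The slave then steers its clock by the computed offset; repeating the exchange several times per second is what holds an entire facility to sub-microsecond alignment.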
The next big roadblock was configuration. The standard for the discovery and registration of new devices on the network is the Networked Media Open Specifications (NMOS) protocol, and it is not yet widely adopted across the industry.
We need to develop a lot of workarounds to synchronize or to recognize a new device. A lot of manual configuration has to be done for our end devices to be recognized by the IP and broadcast controllers. Then we are facing interoperability issues between various vendors. Two or three years from now, that is going to be an old story. But as we speak, it is really challenging.
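For readers unfamiliar with NMOS: the discovery and registration piece Vaillant describes is AMWA IS-04, in which each node announces itself to a central registry over HTTP, and controllers then query that registry instead of being configured by hand. The sketch below shows a simplified version of the resource body a node would POST; the registry address, label and address values are hypothetical, and real IS-04 node resources carry more fields (version tags, clocks, interfaces) than shown here.

```python
import json
import uuid

REGISTRY = "http://registry.example.com"         # hypothetical registry address
REGISTRATION_BASE = "/x-nmos/registration/v1.2"  # IS-04 Registration API base path

def make_node_registration(label, api_href):
    """Build a simplified body to POST to
    {REGISTRY}{REGISTRATION_BASE}/resource, registering a node
    with an IS-04 registry."""
    return {
        "type": "node",
        "data": {
            "id": str(uuid.uuid4()),  # every NMOS resource gets a UUID
            "label": label,
            "href": api_href,         # where the node's own API is reachable
        },
    }

body = make_node_registration("studio-11-camera-1", "http://10.0.0.21/")
print(json.dumps(body, indent=2))
```

When devices don’t speak IS-04, this announcement never happens, which is why each one has to be entered into the broadcast controller manually.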
How has interoperability been a problem?
There are a lot of interoperability issues. We don’t have an integrator, so we are the middlemen between all those vendors.
As an example, we cannot control the video switcher with the broadcast controller because Lawo [the broadcast controller vendor] is working on one protocol and the Grass Valley switcher is working on NMOS. Even between vendors using NMOS, we are still having interoperability issues with control and recognizing devices on the network. It’s really new and still not mature. It’s not easy plug-and-play technology yet.
What, if anything, are you doing in the cloud?
We are adding an entire digital network in the cloud, but beyond that, not much. Comparing on-premises and cloud, we quickly realized that the cloud is more expensive. If you need to carry HD or 4K content natively at the cloud provider, it’s really heavy in terms of bandwidth. It’s really costly.
The next round for us is probably going to be deep archive, but the retrieval time for deep archive in the cloud was too long. We can retrieve from our own deep archive in a couple of minutes. For us, the cloud didn’t really meet the need. For live, specifically, the cloud still has a lot of challenges. It works well for streaming for OTT or digital, but for live it’s another story.
How would you approach this process differently?
Because we have a greenfield, we are touching all the aspects — the training, the learning curve, the lack of interoperability between technologies, the gaps within the standards. Most of those will be solved in a year or two. It is going to be much easier. The challenge we will have is that the other locations won’t be greenfield projects. We will have to make those migrations and retrain those folks while keeping the operation alive.
What advice would you offer other broadcasters considering a greenfield IP facility?
Make sure you provide enough time because everything is much longer to do when you are changing technology like that. For example, the RFP may take four to five times longer. First of all, you don’t know what you don’t know, so it is hard to put exact specifications in the RFP.
Just finding out what your requirements will be is already a big challenge. In HD/SDI, the technology was well known. We knew exactly what we were chasing. In IP, it’s not exactly the same. We have been spending a lot of time just understanding and testing in the lab. We need to learn what works, the limitations, the latency, how you can recall things.
The second thing is that your operating costs will increase by probably around 100% with software, servicing and licensing. The technology doesn’t last as long as traditional broadcast hardware. That will impact our refresh cycle.
Comparing the old technology with the new, I am expecting it is going to be more expensive capital-wise from a total cost of utilization perspective. And the operating costs will at least double. If you look at a 10-year picture, though, you are roughly in the same ballpark. You have less capital investment on day one and an increase in operating costs, but the refresh cycle is still an unknown.
Is Montreal the vanguard for an IP transition across the rest of CBC? Is Toronto, for instance, yet on the road map?
Toronto is next in line to move. One of the big differences is that the other locations are not greenfield projects. Since we are refreshing and updating our technology as we move to IP, it is going to be a much smaller process to get there. Right now we are in the process of mapping the migration across the entire network. I am expecting it is going to take probably 10 years. At some point, we will have gained a lot of expertise and solved a lot of issues.
In the next two or three years, rolling out something new over the IP infrastructure will be that much easier. As we speak, it’s a hell of a ride.