Updating TV’s Progress Toward ATSC 3.0

Rich Chernock, chief science officer of Triveni Digital, is leading the effort to develop TV's next-gen transmission standard, ATSC 3.0. At next week's NAB Show he will provide updates on ATSC 3.0 developments as well as lead an IEEE/BTS discussion on next-generation compression. He talks about where the process is now and what's in store in Las Vegas.
TV engineers in Las Vegas for the 2014 NAB Show next week will have an opportunity to get up to speed on the next DTV transition. Unlike the first, in 2009, which took broadcasting from analog to digital, this one will take them from first-generation digital to next-generation digital.
ATSC 3.0, as the all-new and incompatible system is known, promises to keep broadcasters competitive with wireless phone companies, allowing them to deliver video to viewers on the go and position them for a brighter in-home TV viewing future centered on Ultra-HD, 3D and interactivity.
Rich Chernock, chief science officer of Triveni Digital, is leading the ATSC 3.0 effort at the Advanced Television Systems Committee. At NAB, he will provide updates on ATSC 3.0 developments as well as lead an IEEE/BTS discussion on next-generation compression.
In this interview with TVNewsCheck Technology Editor Phil Kurz, Chernock gets an early start on his outreach efforts.
An edited transcript:
You’ve been shepherding development of the ATSC 3.0 standard since February. What is your impression of the development effort so far, and will the standard be ready for adoption in the first quarter of 2016 as envisioned?
I think there are a lot of really smart people working hard to make this happen. The timetable is still achievable. There is activity at different layers, and each has the right people involved, who are working hard to make ATSC 3.0 a reality. There is a lot of technology and many choices to make.
There’s also some effort to create a core system with additional pieces that may follow a little bit later for the sake of keeping to the schedule. But overall, I think the effort is working out quite well.
Is it likely that ATSC 3.0 modulation will be OFDM-based?
I think that is really likely.
Is there an effort to make it possible to have a software-updatable demodulator and decoder in the TV so broadcasters don’t get locked in to a system that could easily become obsolete as new generations of technology are developed?
Extensibility and growth in the future are the core themes throughout ATSC 3.0 — not only with the physical layer but elsewhere. Addressing the physical layer, I know that at least one proposal had at its core a software-defined radio. There’s a lot of discussion about that, and people are considering it.
At this point there’s been no decision, but it is one of the options.
ATSC 3.0 encompasses transmission of many different types of content, including HD, 4K, 3D and mobile. What sorts of compromises are necessary — if any — to accommodate so many different resolutions and playback devices?
Some of them aren’t all that different. 3D is somewhat orthogonal. Resolution, well, Ultra-HD 4K is a pretty obvious target for a big screen.
At the moment, Ultra-HD 4K isn’t seen as right for a mobile device. But not long ago, people were saying that about HD resolution for handheld devices.
However, technology progresses, and today an HD resolution device is something you can hold in your hand. So the line separating given resolutions from certain devices blurs over time.
How likely is it that there will be higher frame rates than 30fps with ATSC 3.0?
It is very likely. There are lots of things being considered. People are looking into different ways to approach higher frame rates, higher dynamic range and deeper color space.
You headed up development of ATSC 2.0, which leveraged non-real-time (NRT) content delivery to add a feeling of interactivity to TV viewing. Will any of that, or some other approach to interactivity, port over into the ATSC 3.0 standard?
I think a large part of 2.0 is going to port right into 3.0. The push notion, or pushing content to the receiver before use, is part of the core of ATSC 3.0. The group working on the transport of 3.0 is talking about transporting streaming content, such as television, and files, like NRT content.
The service models for ATSC 3.0 are fairly rich. They include conventional TV, streaming TV enhanced with interactivity, push TV and combinations of broadband and broadcast delivery. The goal is to give broadcasters the opportunity to do whatever they can dream up.
Tim Carroll, founder of Linear Acoustic, has frequently made the point that the speaker on a small handheld device like a mobile phone doesn’t have nearly the range of a home theater system. That can present challenges when mixing sound for TV content that will be viewed in the home and on mobile DTV devices. ATSC 3.0 also encompasses mobile and in-home viewing aspects. Is there anything being done in the development of the standard to accommodate the differences? Maybe to make it easier on the audio mixing side?
Mixing is still going to be an art as it is today, but the number of tools to mix for viewing on multiple platforms is going to be growing a bit. New audio technologies are being considered. They include immersive audio and object-based audio.
There’s a lot of attention being paid to the fact that audio will need to be heard in the home, on headphones and on speakers in a mobile device.
Years ago, when Motion JPEG and MPEG were emerging, I remember a conversation with someone who noted that the mathematics behind the compression had existed for a long time and were well understood. What made video compression practical was the emergence of computer processors fast enough to execute the algorithms at the speed of video or faster. Fast forward to today: occasionally an announcement from a university will claim an advance in quantum computing. If quantum computing ever becomes commercially available, will that effectively end any limitation holding back video encoding efficiency?
Quantum computing may come along at some point. There is even debate about whether today’s quantum computers actually are quantum computers. I’m not an expert in that. But there is a real kernel of truth in what you described.
If you look at video compression for TV, you had MPEG-2 and MPEG-4. Now we have HEVC. What has basically happened is that there is a hard limitation: you have to put frames on the screen 30 times per second, or at whatever frame rate you are using. That means whatever compression or decompression you do has to be done in 1/30th of a second.
So when MPEG-2 video came around, you had a certain amount of computing power. That kind of limited the number of tools, tricks and algorithms you could use because you had to do the compression within 1/30th of a second.
Later, Moore’s Law took hold and processors got more powerful. As a result, along came MPEG-4, or AVC, which had the same sort of limitation, but all of a sudden you could do a lot more calculations and a lot more processes in parallel. This allowed the device actually doing the compression to try one approach and then another and then decide which one was better.
All of this in 1/30th of a second, and all of a sudden there was a boost in compression efficiency.
It’s happened again. Moore’s Law keeps on moving, and now we have HEVC, which does more in that thirtieth of a second. Whether or not quantum computing will come along someday and have an impact, I don’t know.
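The real-time budget Chernock describes is simple arithmetic: at a given frame rate, an encoder or decoder gets one frame period to do all of its work, and that window shrinks as frame rates rise. A minimal sketch (the function name and printed format are illustrative, not from the interview):

```python
# Per-frame time budget for real-time video compression.
# Whatever a codec does — motion search, trying multiple coding
# decisions in parallel, etc. — must finish within 1/fps seconds.

def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to process one frame in real time."""
    return 1000.0 / fps

# Each codec generation (MPEG-2 -> AVC -> HEVC) packs more work into
# the same window, relying on faster hardware to stay within budget.
for fps in (30, 60, 120):
    print(f"{fps} fps -> {frame_budget_ms(fps):.2f} ms per frame")
```

Running this shows the budget falling from about 33 ms per frame at 30 fps to about 8 ms at 120 fps, which is why higher frame rates compound the demands on encoder hardware.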
The key thing is that whatever computer you are using has to be cheap enough to put into the millions of TVs in the market.
You are part of the ATSC 3.0 update panel at the 2014 NAB Show. Can you summarize your message for those who will attend?
It’s sort of a peek at what’s going on in ATSC 3.0. The panel is essentially me and the chairs of the specialist groups in TG3 [Technical Group 3]. Each group has a certain domain it works in.
I will be giving an overview of what ATSC 3.0 is, what it offers and what the philosophy is. We want to give the broadcaster an idea of what this is all about, and what they might be able to do with it.
Each of the SG chairs will go into the details about the work going on in areas like the physical layer, the transport layer and the presentation layer.
You also are chairing the IEEE/Broadcast Technology Symposium presentation during the NAB Broadcast Engineering Conference on Future TV Technology. What’s going to happen during this session?
This presentation is focused on video and audio compression. Compression has moved along quite a bit since ATSC 1.0 or even ATSC 2.0. There will be a one-hour presentation on HEVC for video compression. Dr. Yan Ye from InterDigital, who is developing HEVC, will explain what it is, how it differs from MPEG-2 and MPEG-4, and a good bit of what HEVC might mean to the broadcaster.
Although she could do it, she is not going to do a deep dive into the technology, but rather a broad stroke to get people to understand what is going on and why it makes a difference to them.
The other two-thirds of the session is an audio panel moderated by Schuyler Quackenbush, who is well known in the audio field, with representatives from DTS, Dolby and Fraunhofer. They will be talking about their views of next-generation audio.
Mark Aitken, VP for advanced technology at Sinclair, asserted a couple of months ago that TV manufacturers and others who are not broadcasters were steering the development of ATSC 3.0 to the detriment of television broadcasters. How do you respond?
I think that is not an accurate summation of what is going on. A number of broadcasters are involved, and recently an increasing number of vocal broadcasters have been submitting quite a bit of input and are being listened to.
Aitken also has said Sinclair had the desire to go its own way in developing a next-generation TV system. Is that feasible, and if so, how would a competing system impact development of ATSC 3.0?
A totally independent, competing system probably would not be good for everyone. You have fracturing, and every time markets fracture they seem to be harmed. I think Sinclair is going to try to do a prototype system.
I have also heard the notion that if they do a prototype system, they would bring it to ATSC, which would be fine. Anybody who has technologies and can show that they work, of course, will be heard.