Dolby’s Craig Todd Gives The 411 On 3D

Dolby CTO Craig Todd heads the ATSC's 3D planning committee. While broadcasters aren't anxious to deal with another game-changing technology revamp, ATSC may be moving ahead with creating a standard at the request of other members, especially in Korea. One scenario is creating a 3D standard for the current TV system and then developing another as part of the next-gen transmission system now in the works.

The last thing U.S. broadcasters want to do is upgrade their facilities so that they can broadcast 3D TV. Most have yet to muster the resources and time to produce local news in HD.

But 3D is out there, a work in progress, and unlikely to go away. Hollywood sees it as the great attraction for theaters. Home video sees it as another format it can sell, and some cable programmers think it will give them an edge. Sooner or later, broadcasters will have to deal with it.

Recognizing the reality, the Advanced Television Systems Committee (ATSC) last year established a planning committee to investigate 3D and what it will take to produce and deliver it over a conventional 6 MHz digital broadcast channel.

That work was headed by Craig Todd, chief technology officer of Dolby. In this interview with TVNewsCheck, Todd says that despite the lack of interest in the U.S., ATSC may still move ahead with creating a standard for the sake of its Korean members. He also talks about some of the early 3D broadcast systems that could form the basis of a standard and some of the challenges of 3D production and display.

An edited transcript:

What was the thrust of the planning committee effort?

We looked at things you would like to know about the human visual system as you’re contemplating 3D, how the human visual system works in 3D, what the various issues are, things that can make you uncomfortable watching 3D, some areas where some additional research would be useful. One section is what we know about technical methods for delivering 3D over ATSC given the bandwidth constraints. It’s trying to lay out what’s known and what’s not known, but you would like to know.

You’re starting to sound like Donald Rumsfeld now.

Yeah. He came to mind as I phrased it that way.

Is it technically possible to do 3D in a 6 MHz channel with the current digital TV standard?

It looks barely technically feasible. Dolby and the Korean ETRI have developed systems. [Editor’s note: ETRI is the Electronics and Telecommunications Research Institute, a research institute funded by the Korean government.]

How does the Dolby system work?

What Dolby was doing was simply putting the two pictures side by side in an HD frame. You take the left-eye picture and cut its horizontal resolution in half and put that in the left half of a new HDTV frame, and then you take the right-eye picture, likewise squeeze it, and put that in the right half of the same frame. It’s an anamorphic squeeze. It loses horizontal resolution because you have to subsample the image, and then the receiver expands the picture back out, basically up-converting it from 960 pixels horizontally back to 1920. To make up for the loss in horizontal resolution, we were delivering an enhancement layer and a special decoder that would restore the horizontal resolution that had been lost. This is the same method that is currently being used in satellite and cable. It works pretty well.
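The side-by-side packing Todd describes can be sketched in plain Python on toy-sized frames (function names are mine; a real encoder would low-pass filter before subsampling, and a real decoder would use an interpolation filter rather than pixel repetition):

```python
def pack_side_by_side(left, right):
    # Each view is a list of rows of pixel values. Halve the
    # horizontal resolution of each view by keeping every other
    # column (the anamorphic squeeze), then butt the two halves
    # together into one frame-compatible row.
    return [l_row[::2] + r_row[::2] for l_row, r_row in zip(left, right)]

def unpack_side_by_side(frame):
    # Receiver side: split each row at the midpoint and up-convert
    # each half back to full width by repeating pixels (a stand-in
    # for a proper interpolation filter).
    half = len(frame[0]) // 2
    left = [[p for p in row[:half] for _ in (0, 1)] for row in frame]
    right = [[p for p in row[half:] for _ in (0, 1)] for row in frame]
    return left, right

# One 8-pixel-wide row per eye, standing in for 1920-wide HD rows.
packed = pack_side_by_side([[0, 1, 2, 3, 4, 5, 6, 7]],
                           [[10, 11, 12, 13, 14, 15, 16, 17]])
# The packed frame is the same width as either input frame;
# each eye has lost half its horizontal detail.
```

The enhancement layer in Dolby's FCFR system exists precisely to carry the columns this squeeze throws away, so an enhanced decoder can restore full resolution.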

Do you have a name for it?

I think we call it Frame Compatible Full Resolution, FCFR.

Do you know how the Korean systems work?

One requires a common production. What they’re doing is sending the left eye over the HD MPEG-2 service and sending the right eye using H.264 in the remaining 7 megabits. So the receiver simply decodes the MPEG-2 left eye and the H.264 right eye and then, critically, synchronizes the two so the same frame is presented to the two eyes.
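The receiver-side pairing Todd describes, decoding both elementary streams and presenting matching frames to the two eyes, can be sketched like this; the timestamp matching and names are my illustration, not ETRI's actual implementation:

```python
def pair_stereo_frames(left_stream, right_stream, tolerance=1e-3):
    # Each stream is a list of (pts, frame) tuples from its decoder:
    # presentation timestamp in seconds plus the decoded picture.
    # Quantize timestamps to the tolerance so left-eye (MPEG-2) and
    # right-eye (H.264) frames for the same instant get the same key.
    right_by_pts = {round(pts / tolerance): f for pts, f in right_stream}
    pairs = []
    for pts, left_frame in left_stream:
        key = round(pts / tolerance)
        if key in right_by_pts:
            # Same instant decoded from both streams: present together.
            pairs.append((pts, left_frame, right_by_pts[key]))
    return pairs
```

A real receiver would do this continuously against the transport stream's clock references, but the core requirement is the same: frames shown to the two eyes must come from the same capture instant.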

In the other approach, they send the left eye over MPEG-2 in the conventional way and then send the right eye using the ATSC mobile technology. So the 2D terrestrial viewer sees the left eye in HD and the 2D mobile viewer sees the right eye at the lower mobile resolution. Then, if you want to see 3D, you decode both the fixed terrestrial service and the mobile service and present those two pictures to the viewer.

If you’re looking at it on an HD display, you will have HD in the left eye and standard def in the right eye. You would think that would look funny, but it turns out that your brain kind of takes the resolution from the sharper eye. You apparently see a full resolution 3D picture. It’s really very clever.

This all presumes the program has been produced for the 3D audience, and often, to keep that audience comfortable watching 3D, you avoid a lot of production practices that are commonly used in 2D productions. You end up making a “boring” 2D production.

So it can be done, but you have to give up something in the production.

You do, and, in different types of programs, what you give up may or may not be very significant.

Can you give me an example?

In sports, in particular, any fast cutting between cameras where the image may be at a different depth, and even fast pans. Fast action is more difficult to follow in 3D. So the tendency is to leave the camera more stable, zoom back a little bit and let the viewer follow the action naturally in the 3D picture with their eyes as opposed to the camera following the action.

So the experience is like watching from the upper deck at the ballpark.

But, of course, as you zoom back, you kind of lose some of the 3D effect. There’s a lot of art in this and a lot is being learned by ESPN and others who are out there doing events and learning what works, what doesn’t work.

The demos of the systems I saw at the ATSC annual meeting in May looked pretty good.

Yeah, but they’re demos; they’re not proofs. When you do a demo, you make sure your content works whereas in real life, the content could be anything. It could be more challenging than what was demonstrated. You have to be very careful.

Are there any broadcasters in the U.S. actually interested in doing 3D in the current digital channels?

In the context of terrestrial broadcasters, I really haven’t found any. What we heard at the annual [ATSC] meeting was broadcasters are going to follow the market. I always felt this would be the case. It’s very different from HDTV where broadcasters led the world.

Broadcasters are not going to lead the way in 3D. That was written into the ATSC strategic plan several years ago, but if the 3D market develops via other media — cinema, Blu-ray, cable, satellite — and consumers like it and preferentially watch 3D, then that opens up the market for broadcasters to deliver signals into.

Is the ATSC going to move forward with standardization of any of this?

It will be discussed, probably at the next ATSC board meeting, whether to move ahead with standardization or not. We have to look at it not only from the perspective of the U.S., where we haven’t seen a lot of demand from broadcasters for 3D delivery, but also from other countries, particularly Korea, which is very interested in this. The Koreans have a system operating on the air experimentally on a special frequency, and they’re probably going to start sending signals in the middle of the night on real broadcast stations as their next test.

So ATSC may go ahead to accommodate Korea and some other countries?

Right, and then, of course, we would have a standard in the U.S. if we ever wanted to use it. It would be available.

So, what do you think is going to happen?

I would rather not speculate. I would rather just say it’s going to be seriously considered. There was a formal request letter from Korea signed by a number of companies, ATSC members, asking ATSC to do standards. Generally, if a new work item proposal is drafted and signed by, I think, five ATSC members, then that kind of obligates us to move ahead and do the standards work. There would be sufficient support just from Korea to kick off this work.

The ATSC is also looking at a next-generation digital broadcasting system that, I presume, would make 3D a lot easier.

Yeah. So, one tack is you say we’ll just put off 3D and include it as part of a next-gen system. That’s a fundamental decision that has to be made.

Or you could do both, right? I mean, for the sake of the Koreans, you go ahead and create a 3D standard for the current system and then you create a new standard as part of the next-gen system.

Absolutely correct.

Do all these systems require glasses?

As far as we can foresee, we’re talking about large screens with glasses. Large screens viewed by multiple viewers require glasses. We know everybody is working on glasses-free displays, but they still seem five to 10 years out.

The glasses-free displays typically need many views so that as you move your head around you get a natural effect. Typically, there are like nine views or 11 views and we don’t foresee broadcasting nine views. What happens is the display has to somehow synthesize the views from the information you do transmit. There has been talk of sending one view plus depth and the TV synthesizes all the views. That’s challenging. There’s also talk of sending two views plus depth or disparity. But since those displays are not imminent, we don’t exactly know what to transmit right now to make their job easier. So we can’t really foresee doing standards for them right now.

But if we’re talking about personal screens, a smart phone or a tablet, then you might be able to produce a 3D effect, right?

Then the autostereoscopic, two-view display can work. There are a number of those out there. As long as you orient it toward one viewer so it sends one view to one eye and one view to the other, you only need two views.
