Is 720p better than 1080i?

This question comes up over and over again, particularly from people new to HDTV who surf the internet and find many conflicting arguments between the two formats. The true answer is: it depends. It depends on whether you are on the broadcasting end or the viewing end. It depends on the source of the video. It depends on the bandwidth of the medium you are getting your signal from. It depends on the type of display you are watching. It depends on your budget. It depends on a lot of things.

From a strictly technical point of view, setting aside cost, bandwidth and sources, progressive video is preferable to interlaced video. The reason is that the entire frame is constructed at the same time and, more importantly, there are no two fields that have to be reconstructed into a frame before any video editing or processing can be done. Stated another way, the signal does not have to be deinterlaced before it can be processed. 720p video as used by broadcasters has another benefit: 60 frames per second (fps) are sent rather than the 30 frames per second of a 1080i signal. Since more frames per second are being sent, fast-motion video such as sports benefits from the increased temporal resolution. The problem for the viewer is that there is a very limited amount of 60 fps video, primarily sporting events.

Broadcasters recognize the benefit of progressive video, and many are standardizing on progressive cameras, primarily 1080p cameras. Some are 24 fps and others are 60 fps, depending on the event being shot. The 24 fps cameras would be mainly for non-sporting events like Saturday Night Live, David Letterman, etc. Progressive scan cameras allow the outputs to be fed directly into the processing equipment without deinterlacing, eliminating one step in production. Why is it necessary to deinterlace the video? One example: in order to add graphics to a frame, the entire frame needs to be constructed. Any video editing also requires full frames. It is easier to deinterlace the video before processing and then re-interlace the output for transmission. This video processing is at the root of most technical people's objection to interlaced video.
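As a rough sketch of that round trip (Python with NumPy; the weave/split helpers and the frame dimensions are illustrative, not any broadcaster's actual pipeline), here is why graphics insertion forces a deinterlace before compositing:

```python
import numpy as np

def weave(odd_field, even_field):
    """Deinterlace: reconstruct a full frame by interleaving the two fields."""
    frame = np.empty((odd_field.shape[0] * 2, odd_field.shape[1]), dtype=odd_field.dtype)
    frame[0::2] = odd_field   # odd scan lines
    frame[1::2] = even_field  # even scan lines
    return frame

def split_fields(frame):
    """Re-interlace: pull the two fields back out of a full frame."""
    return frame[0::2], frame[1::2]

# Two 540-line fields of a 1080-line interlaced signal (luma only, for brevity).
odd = np.zeros((540, 1920), dtype=np.uint8)
even = np.zeros((540, 1920), dtype=np.uint8)

frame = weave(odd, even)                 # full 1080-line frame
frame[50:100, 50:500] = 255              # composite a graphic onto the whole frame
odd_out, even_out = split_fields(frame)  # re-interlace for transmission
```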

OK, so if progressive video is technically better, why did they allow interlaced video in the DTV spec? Bandwidth is the short answer. It turns out that with the existing TV channel frequency spacing, and using the compression technology that existed at the time of the DTV decision in 1996, there was only sufficient bandwidth for HDTV formats that would not exceed the allowed bandwidth per channel. This ended up allowing three formats that pretty well use up the bandwidth: 1920×1080p/30fps, 1920×1080i/30fps and 1280×720p/60fps. There are other possible HDTV formats, but these are the three at the bandwidth limit. Looking at pixel counts, we have roughly the same quantity of pixels transmitted each second for all three: the two 1920×1080p,i/30fps formats carry 62,208,000 pixels per second and 1280×720p/60fps carries 55,296,000 pixels per second. Notice the 1080 and 720 formats are within 12.5% of each other, so the bandwidth required is about the same. The 720p/60fps format does have a slight advantage in not having to be compressed quite as much as the 1080/30fps formats, so pixelation or blockiness of fast-moving objects can be somewhat less. This can be a significant benefit for televised sporting events, although the blockiness seen on 1080i broadcasts is due more to multicasting, where the bandwidth is divided up to allow one channel to carry multiple programs, a DTV feature that was not available on analog TV.
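The arithmetic behind those numbers is easy to check with a quick back-of-the-envelope calculation in Python:

```python
# Pixels per second for the three formats at the bandwidth limit.
formats = {
    "1920x1080p/30fps": (1920, 1080, 30),
    "1920x1080i/30fps": (1920, 1080, 30),  # 60 fields/s = 30 full frames/s
    "1280x720p/60fps":  (1280, 720, 60),
}

for name, (w, h, fps) in formats.items():
    print(f"{name}: {w * h * fps:,} pixels/second")

# 1920x1080p/30fps: 62,208,000 pixels/second
# 1920x1080i/30fps: 62,208,000 pixels/second
# 1280x720p/60fps:  55,296,000 pixels/second
# Ratio: 62,208,000 / 55,296,000 = 1.125, i.e. within 12.5% of each other.
```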

Of the three HDTV formats at the bandwidth limit, two won out. ABC and FOX among the over-the-air (OTA) networks and ESPN and ESPN2 among the cable/satellite networks chose 720p/60fps, while NBC, CBS, PBS, WB, UPN and the independents, along with the other cable/satellite networks, chose 1080i/30fps. There is that darn technically inferior interlaced format again; why? In a word, or an acronym, CRTs. The CRT-based television used an interlaced scan system from the beginning of TV. Basically, it is much less expensive to provide interlaced video on a CRT than to provide progressive video on a CRT. Additionally, phosphor persistence had evolved to a point where interlaced video was more than acceptable for HDTV use. The choice was either to force CRT-based HDTVs to be even more expensive than they already were, or to allow the interlaced format and spur faster acceptance. Obviously the cost factor won out.

But what of the argument that with 1080-line interlaced video there are only 540 lines of video on the screen at any given time? If you have poked about on the internet much exploring this subject, I'm sure you have seen this claim. In short, it is a false claim. The claim of only 540 lines is based on the fact that the odd lines are scanned on one pass, or field, and then the even lines are scanned. What is forgotten here are a couple of things. First, on a CRT there is the persistence of the phosphors I mentioned before. Persistence is the ability of a phosphor to glow for a time after the electron beam has moved on, rather like the glow of a filament in an incandescent light for a while after the electricity is turned off. This persistence is what keeps the first 540 lines of video lit while the second 540 lines are being painted. Now it is true that the prior scan will not be as bright as the current scan, but our brains average this out, which is why TV works for most humans, and even some dogs. In short, the phosphors provide the deinterlacing on the screen itself.
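To put a rough number on that, here is a toy model of phosphor persistence, assuming a simple exponential decay with a made-up time constant (real phosphor decay curves vary considerably):

```python
import math

TAU = 1 / 60  # assumed persistence time constant in seconds, illustrative only

def brightness(t_since_excited):
    """Relative glow of a phosphor t seconds after the beam has moved on."""
    return math.exp(-t_since_excited / TAU)

# One field takes 1/60 s to paint. While the even field is being drawn,
# the odd lines were excited between 1/60 s and 2/60 s earlier.
print(f"odd-field glow after one field time:  {brightness(1/60):.0%}")  # ~37%
print(f"odd-field glow after two field times: {brightness(2/60):.0%}")  # ~14%
```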

Now move to fixed-pixel displays like plasmas, LCDs, LCoS, DLPs, SEDs, etc. and you have a completely different matter. These displays are progressive in nature, and any interlaced video fed into them will be deinterlaced by combining the two fields into a common frame for display. In the case of 1080i/30fps, a video memory image of 1920×1080 is created, scaled to the resolution of the display, and displayed at the refresh rate of the display. Since most, if not all, fixed-pixel displays refresh 60 times per second, each deinterlaced frame is displayed twice. If the display has a resolution of 1920×1080 pixels, then the full HDTV resolution will be displayed, obviously not just 540 lines of video.
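A minimal sketch of that display path (Python with NumPy; the nearest-neighbor scale_to() helper is just a stand-in for whatever scaler a real display actually uses):

```python
import numpy as np

def scale_to(frame, out_h, out_w):
    """Nearest-neighbor scale to the panel's native resolution (sketch only)."""
    ys = np.arange(out_h) * frame.shape[0] // out_h
    xs = np.arange(out_w) * frame.shape[1] // out_w
    return frame[ys][:, xs]

def display_1080i(fields, panel_h, panel_w, show):
    """fields: (odd, even) pairs arriving 30 times per second;
    show() pushes one frame to the panel, 60 calls per second."""
    for odd, even in fields:
        frame = np.empty((1080, 1920), dtype=odd.dtype)
        frame[0::2], frame[1::2] = odd, even       # weave the two fields
        frame = scale_to(frame, panel_h, panel_w)  # scale to native pixels
        show(frame)  # first 1/60 s refresh
        show(frame)  # same frame again: each 30fps frame shown twice at 60 Hz

# Toy demo: one interlaced frame pair on a 1366x768 panel, counting refreshes.
refreshes = []
odd = np.zeros((540, 1920), dtype=np.uint8)
even = np.zeros((540, 1920), dtype=np.uint8)
display_1080i([(odd, even)], 768, 1366, refreshes.append)
print(len(refreshes))  # 2 panel refreshes for 1 deinterlaced frame
```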

Another issue when discussing the difference to the viewer between 1080i/30fps and 720p/60fps video is the source of the video. If the odd and even lines of the interlaced video come from the same frame, as they do for movie frames and progressive cameras, deinterlacing will reconstruct the progressive frame back to the original. Movies are shot at 24fps, and even when displayed on a 60fps display the effective frame rate is still 24fps, so having a 720p/60fps signal and corresponding display does not help at all. In fact the efficiency is worse, as a lot of the data is redundant. 1080i/30fps matches up with 24fps video with half as many redundant frames. Most prime-time HDTV shows are also 24fps, so the only case where 60fps offers an improvement is when the source is also 60fps, such as sporting events.
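The standard cadence for fitting 24fps film into 60 outputs per second is 3:2 pulldown; a small sketch of the repetition pattern (for 1080i the same cadence is applied to 60 fields per second rather than whole frames, yielding 30 interlaced frames):

```python
def pulldown_32(film_frames):
    """Map 24fps film onto a 60 Hz stream by repeating frames 3-2-3-2."""
    out = []
    for i, frame in enumerate(film_frames):
        out.extend([frame] * (3 if i % 2 == 0 else 2))
    return out

film = list("ABCD")       # four film frames = 1/6 s of 24fps footage
print(pulldown_32(film))  # ['A','A','A','B','B','C','C','C','D','D']
# 10 outputs per 4 inputs: 24fps * 10/4 = 60 per second, 6 of them redundant.
```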

There is also the interlace artifact, where an object moves in the 1/60th of a second between the odd lines being scanned and the even lines being scanned. This was important back in the days of iconoscope cameras, which were interlaced in capture just as the CRT tubes used for display were, because these cameras had the same constraints as CRTs as far as interlaced video is concerned. Modern solid-state CCD cameras use a matrix of pixels to capture the image as a full frame, and the pixels are shifted out of the captured image matrix electronically. It is no longer necessary for the odd and even scans to come from different moments in time, and these naturally progressive cameras are making the classic interlace artifact a thing of the past. Remember, if the two passes are made from a common frame capture, the reconstructed image will end up progressive, even if the transmission is interlaced.
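A tiny demonstration of both cases, using toy 8×8 frames in Python with NumPy (the moving edge and frame size are made up for illustration):

```python
import numpy as np

def frame_with_edge(x):
    """8x8 toy frame: white to the left of column x, black to the right."""
    f = np.zeros((8, 8), dtype=np.uint8)
    f[:, :x] = 1
    return f

# Interlaced capture: odd field at time t, even field 1/60 s later,
# after the edge has moved two columns.
odd  = frame_with_edge(3)[0::2]
even = frame_with_edge(5)[1::2]
woven = np.empty((8, 8), dtype=np.uint8)
woven[0::2], woven[1::2] = odd, even
print(woven)  # alternating rows disagree at the edge: the "combing" artifact

# Progressive capture: both fields taken from the same frame.
same = frame_with_edge(4)
odd2, even2 = same[0::2], same[1::2]
woven2 = np.empty((8, 8), dtype=np.uint8)
woven2[0::2], woven2[1::2] = odd2, even2
print((woven2 == same).all())  # True: weave reconstructs the original exactly
```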

It used to be that this issue of interlaced vs. progressive was very contentious in the video world. Arguments would get pretty heated, with each side presenting its case passionately. In the end both sides won out as far as TV is concerned. In the computer world the progressive side won the day, ironically just in time for CRT monitors to become pretty much obsolete.

Finally there is the issue of resolution. Most arguments for 720p/60fps being better center on the progressive scan and the 60fps, but ignore resolution. The truth is there are more than twice as many pixels in a 1080i/30fps frame as in a 720p/60fps frame. There is the counter-argument that the two formats carry roughly the same quantity of pixels per second, but unless there is a new frame every 1/60th of a second there is no benefit to the extra redundant frames, and as explained before, sporting events are currently the only video where this makes a difference. In my mind the extra resolution of 1080i/30fps is preferred overall, considering all of the source material that is available. To that end, the ultimate display in my opinion is a 1920×1080p/60fps display.
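For the record, the per-frame pixel counts behind that "more than twice" claim:

```python
p1080 = 1920 * 1080   # 2,073,600 pixels per frame
p720  = 1280 * 720    #   921,600 pixels per frame
print(p1080 / p720)   # 2.25: the 1080 frame has 2.25x the pixels of the 720 frame
```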

