Putting human vision models into computer video display

In addition to adjusting dynamically for ambient light, the new processing technology uses multidimensional lookup tables to map image colors to the true colors as seen by the human eye. For the first time, this gives complete control over color.

The evolution of color reproduction

The problem with video display is that it fundamentally differs from how the brain works. The way digital displays work hasn't changed much since the early days of TV. We have three primary colors; white and other additions have been attempts to make the standard look better. The trouble is, the result looks awful when the relatively poor processing capability of the video processor is put up on a big screen.

But no matter what you do to the display, the color information in the image data doesn't correspond to the way your brain expects color to work; you just get a better-looking bad picture. The problem is that the eye is naturally adaptive. That's why you can see a candle ten miles away on a dark night. The way color image processing works today, that candle wouldn't be visible on screen at all.

To achieve a video display that offers the same quality as the human eye, one must start by understanding what science calls memory colors: the familiar colors the brain knows well and judges most critically. Examples include blue sky, skin tones, and other reference colors.

If you can make those colors look right, the eye, when it looks at the display, will perceive color quality in the image that is much closer to what is actually there. To do that, you have to understand how to map the image into different regions of the color space.
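To make that mapping idea concrete, here is a minimal Python sketch of one generic approach, not a description of eeColor's actual algorithm: pixels that fall near a memory-color reference are gently pulled toward it, while the rest of the image is left untouched. The reference values, influence radius, and strength are illustrative assumptions.

    import numpy as np

    # Illustrative memory-color references in RGB (assumed values,
    # not taken from any standard and not eeColor's).
    MEMORY_COLORS = {
        "sky":     np.array([0.35, 0.55, 0.90]),
        "skin":    np.array([0.85, 0.60, 0.50]),
        "foliage": np.array([0.25, 0.55, 0.20]),
    }

    def nudge_memory_colors(image, strength=0.15, radius=0.25):
        """Pull pixels that sit near a memory-color reference toward it.

        image    -- float array of shape (H, W, 3), values in [0, 1]
        strength -- maximum fraction of the gap to close (illustrative)
        radius   -- how far from a reference a pixel can be and still be moved
        """
        out = image.copy()
        for ref in MEMORY_COLORS.values():
            # Distance of every pixel from this reference color.
            dist = np.linalg.norm(image - ref, axis=-1, keepdims=True)
            # Influence fades linearly to zero at the edge of `radius`.
            weight = strength * np.clip(1.0 - dist / radius, 0.0, 1.0)
            out = out + weight * (ref - out)
        return np.clip(out, 0.0, 1.0)

    # Example: a 2x2 test image with one "almost sky" pixel that gets nudged.
    frame = np.array([[[0.30, 0.50, 0.85], [0.10, 0.10, 0.10]],
                      [[0.90, 0.90, 0.90], [0.20, 0.60, 0.25]]])
    print(nudge_memory_colors(frame))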

In current technology, adapting the display depends on adjusting saturation and contrast to compensate. The new Entertainment Experience technology does not depend on those adjustments. Instead, it controls the region of the display's color gamut that produces the color the eye expects to see, while accounting for ambient light.
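One way to picture that, offered only as a rough sketch and not as the eeColor implementation: keep a small family of precomputed lookup tables tuned for different viewing conditions and blend between them as the ambient light sensor reading changes, instead of pushing saturation and contrast sliders. The lux breakpoints and the toy "bright room" table below are illustrative assumptions.

    import numpy as np

    def identity_lut(n=17):
        """Build an n x n x n identity 3D LUT: each grid point maps to itself."""
        g = np.linspace(0.0, 1.0, n)
        r, gg, b = np.meshgrid(g, g, g, indexing="ij")
        return np.stack([r, gg, b], axis=-1)       # shape (n, n, n, 3)

    def lut_for_ambient(lut_dark, lut_bright, ambient_lux,
                        dark_lux=50.0, bright_lux=10_000.0):
        """Blend two viewing-condition LUTs according to a light-sensor reading.

        The lux breakpoints are illustrative assumptions, not published values.
        """
        t = np.clip((ambient_lux - dark_lux) / (bright_lux - dark_lux), 0.0, 1.0)
        return (1.0 - t) * lut_dark + t * lut_bright

    # Example: in a moderately bright room, lean partly on the "bright" table.
    dark_table = identity_lut()
    bright_table = np.clip(identity_lut() ** 0.8, 0.0, 1.0)   # toy stand-in
    active_lut = lut_for_ambient(dark_table, bright_table, ambient_lux=4_000.0)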

Furthermore, the approach scales across screen sizes. It scales up to displays of any size, and it scales down to the smartphone, where the display characteristics can be adjusted so the screen remains clearly visible even in bright sunlight.

Unfortunately, before eeColor, standards had not kept up with the promise of the hardware. The color gamut for HDTV was bounded by what a scanning electron beam and phosphor could create.

Today we have wavelength-tunable lasers that can create a much larger color gamut. It is possible to reach about 85% of the theoretical maximum gamut with current LED and laser technology, yet the standard remains at about 45%.
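Gamut percentages like these are usually computed as area ratios on a chromaticity diagram. As a rough illustration only (the article's 45% and 85% figures may be measured against the full visible gamut or in a different color space), the short Python sketch below compares the triangle areas of the published Rec. 709 HDTV primaries and the laser-class Rec. 2020 primaries using the shoelace formula.

    # Compare color-gamut triangle areas in CIE 1931 xy chromaticity space.
    # Rec. 709 stands in for the HDTV standard and Rec. 2020 for a wide,
    # laser-class gamut; the article's percentages may use a different metric.

    def triangle_area(p1, p2, p3):
        """Shoelace formula for the area of a triangle given (x, y) vertices."""
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

    rec709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]    # HDTV primaries
    rec2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]   # wide-gamut primaries

    a709, a2020 = triangle_area(*rec709), triangle_area(*rec2020)
    print(f"Rec. 709 covers about {a709 / a2020:.0%} of the Rec. 2020 area")
    # Prints roughly 53%, showing how much of the wider gamut HDTV leaves unused.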

We throw away a lot of color information in going from what the camera saw to what you see on the display. eeColor puts back much of what was thrown away by dynamically processing the information that was retained.

The eeColor technology is a software plug-in. Its engine references a set of lookup tables that can be highly specific to one device or completely generic to a class of devices, which gives it the flexibility to be adapted to almost any display hardware.
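For readers curious what "referencing a set of lookup tables" can look like in practice, here is a minimal, generic 3D-LUT application step in Python with trilinear interpolation. It sketches the general technique rather than eeColor's actual engine; the table size and the identity-table sanity check are assumptions for illustration.

    import numpy as np

    def apply_3d_lut(image, lut):
        """Map an RGB image through a 3D lookup table using trilinear interpolation.

        image -- float array of shape (H, W, 3), values in [0, 1]
        lut   -- float array of shape (N, N, N, 3): output color per grid point
        """
        n = lut.shape[0]
        pos = np.clip(image, 0.0, 1.0) * (n - 1)   # continuous grid coordinates
        lo = np.floor(pos).astype(int)
        hi = np.minimum(lo + 1, n - 1)
        frac = pos - lo                            # per-channel interpolation weights

        out = np.zeros_like(image)
        # Accumulate weighted contributions from the 8 surrounding grid points.
        for dr in (0, 1):
            for dg in (0, 1):
                for db in (0, 1):
                    idx_r = hi[..., 0] if dr else lo[..., 0]
                    idx_g = hi[..., 1] if dg else lo[..., 1]
                    idx_b = hi[..., 2] if db else lo[..., 2]
                    w = ((frac[..., 0] if dr else 1 - frac[..., 0])
                         * (frac[..., 1] if dg else 1 - frac[..., 1])
                         * (frac[..., 2] if db else 1 - frac[..., 2]))
                    out += w[..., None] * lut[idx_r, idx_g, idx_b]
        return out

    # Sanity check with an identity table: output should equal the input image.
    n = 17
    g = np.linspace(0.0, 1.0, n)
    identity = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
    frame = np.random.rand(4, 4, 3)
    assert np.allclose(apply_3d_lut(frame, identity), frame, atol=1e-6)

In a real pipeline the table entries would encode the remapping for a particular device and viewing condition, and the same interpolation step would run on every frame.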

The result is simply stunning, and it represents the future of display technology on every device from smartphone screens to Times Square billboards.

  • John Parkinson is Chief Executive Officer of Entertainment Experience LLC.