Undergrad researchers at Cornell release computer program that gives Linux users full-motion videoconferencing

Videoconferencing on a desktop computer is usually a bumpy ride. Even with a good Internet connection, most desktop video displays 15 frames per second or less, jumping and jerking like an old movie that has been cut and spliced a few hundred times.

But on Dec. 7, 2000, Cornell University undergraduate researchers started giving away qVIX, a videoconferencing application they have developed that provides full-motion, 30-frame-per-second video in full color. However, the application is only for computers running the Linux operating system.

The application uses such a narrow bandwidth that the system will run with good quality over a cable modem or DSL (Digital Subscriber Line) Internet connection. These fast connections, which are becoming increasingly popular with home users and some businesses, provide much greater speed than a standard telephone modem.

In tests, the application has run successfully on a 200 MHz Pentium PC, using only about 300 kbps (kilobits per second) of bandwidth -- about the same as other desktop videoconferencing programs use -- but with much better quality. Like many other videoconferencing programs, qVIX uses a standard video format known as Q-NTSC. NTSC is the standard for broadcast video, with a 640-by-480 pixel image. Q-NTSC is one-quarter of that, or 320-by-240 pixels. Users also can set qVIX to handle 640-by-480 video on a higher-bandwidth connection.
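Those figures imply a substantial amount of compression. A back-of-envelope calculation (assuming 24-bit color, which the article does not specify) shows how far 300 kbps is below the raw data rate of uncompressed Q-NTSC video:

```python
# Rough arithmetic only: implied compression ratio for Q-NTSC at 300 kbps.
# The 24-bit color depth is an assumption for illustration.
width, height, fps, bits_per_pixel = 320, 240, 30, 24

raw_bps = width * height * fps * bits_per_pixel
print(raw_bps)  # 55296000 -- about 55 Mbps of raw video

compressed_bps = 300_000  # ~300 kbps, per the tests described above
print(round(raw_bps / compressed_bps))  # 184 -- roughly 184:1 compression
```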

The application is based on a new video compression algorithm called CU30 developed by Toby Berger, the Irwin and Joan Jacobs Professor of Engineering at Cornell. Eventually, CU30 also could lead to better video on web pages and even hand-held videophones. CU30 is one of several developments from Berger's Discover Lab at Cornell, a facility devoted to improving desktop and palm-top video.

A Windows version of qVIX is in the works, but it's not an accident that the first full-scale test has been rolled out for users of the Linux operating system, a somewhat select group of computer-savvy people. "We're targeting the Linux market first because they are the early adopters of technically superior products," says Aron Rosenberg, a Cornell junior in the School of Electrical and Computer Engineering who heads the team that began writing the Linux version of qVIX in the fall of 1999 under Berger's direction. The team also included Cornell junior Andrew Dodd and Ben Luk, who graduated in the spring of 2000 and now works for the Oracle Corp.

Users can download qVIX from SourceForge at http://cu30.sourceforge.net/main.html, a clearinghouse for "open source" software -- programs whose source code is openly available for users to test and improve. While the CU30 algorithm is patented, Cornell Research Foundation, which manages the university's intellectual property, has agreed to license the algorithm as open source under the terms of the GNU General Public License. GNU is the free-software project whose tools and license underpin much of Linux development.

The CU30 algorithm gets more information into less space by throwing away details that computer users don't notice anyway. Experimental psychologists have found there is a threshold below which a change in a stimulus, such as the loudness of a sound or the brightness of a light, goes unnoticed. That threshold grows higher as the intensity of the stimulus increases: A dim light getting brighter is noticed more quickly than a bright light getting brighter. This was codified in the 1800s into an equation called the Weber-Fechner law.
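The threshold idea described above can be sketched in a few lines. The Weber fraction used here is an illustrative value, not a figure from CU30:

```python
# Sketch of Weber's law: the just-noticeable difference (JND) in a
# stimulus grows in proportion to the stimulus intensity.
WEBER_FRACTION = 0.05  # hypothetical: assume a ~5% change is just noticeable

def is_noticeable(intensity: float, new_intensity: float) -> bool:
    """Return True if the change in intensity exceeds the Weber
    threshold and would likely be perceived by a human observer."""
    threshold = WEBER_FRACTION * intensity  # JND scales with intensity
    return abs(new_intensity - intensity) > threshold

# The same +8 step is noticed against a dim background (100) ...
print(is_noticeable(100, 108))  # True: 8 > 0.05 * 100 = 5
# ... but goes unnoticed against a bright one (200).
print(is_noticeable(200, 208))  # False: 8 < 0.05 * 200 = 10
```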

Berger's algorithm applies this law to the brightness of single pixels in a video image. A common method of compressing video is to transmit only information about the pixels that change from one frame to the next. CU30 transmits those changed pixels only when the change is great enough that a human observer will notice it. Berger calls this procedure "bush-hogging," after a farm implement that cuts down low underbrush but leaves tall plants alone.
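The "bush-hogging" step can be illustrated with a minimal sketch of perceptually thresholded differential replenishment. The threshold rule below is an illustrative stand-in for CU30's actual perceptual model, which the article does not detail:

```python
# Transmit only pixels whose change from the previous frame a viewer
# would likely notice; imperceptibly changed pixels are "bush-hogged".
WEBER_FRACTION = 0.05  # hypothetical Weber fraction for illustration

def pixels_to_transmit(prev_frame, curr_frame):
    """Return (index, new_value) pairs for pixels whose change
    exceeds an intensity-dependent perceptual threshold."""
    updates = []
    for i, (old, new) in enumerate(zip(prev_frame, curr_frame)):
        # Threshold grows with pixel brightness (Weber-Fechner law),
        # with a small floor so dark pixels still have a threshold.
        threshold = max(2.0, WEBER_FRACTION * old)
        if abs(new - old) > threshold:
            updates.append((i, new))
    return updates

prev = [100, 200, 50, 30]
curr = [108, 208, 58, 31]
# Pixel 0: |8| > 5 -> sent; pixel 1: |8| < 10 -> skipped;
# pixel 2: |8| > 2.5 -> sent; pixel 3: |1| < 2 -> skipped.
print(pixels_to_transmit(prev, curr))  # [(0, 108), (2, 58)]
```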

What comes out is full-motion video, free of most of the digital artifacts, or annoyances, that plague other videoconferencing applications, like large square blocks or "mosquitoes" that gather around fast-moving parts of the picture. Edges tend to be sharper than in other systems.

"Where we suffer is in the softer part of the picture, where there are slow changes in intensity over a large area, like light on a wall," says Berger. "It doesn't matter much on a wall, but it can show up in close-ups of a face. For videoconferencing this doesn't seem to bother most people. But in TV or a motion picture, I don't think you would like it."

Berger's video-compression algorithm is discussed in the paper "A Software-Only Videocodec Using Pixelwise Conditional Differential Replenishment and Perceptual Enhancements," by former Cornell graduate student Yi-Jen Chiu and Berger, published in IEEE Transactions on Information Theory, April 1999.
