Modeling the Internet from the top down, but keeping sight of small details

Three very different Cornell researchers -- a game theorist, a specialist in small-scale computer networks and an expert on networks both electronic and biological -- have undertaken a collaboration to create computer models of large networks that don't throw out small details.

They plan to try out their new models, built with a $1.5 million, four-year grant from the National Science Foundation (NSF), on three test cases: controlling Internet congestion; synchronizing networks for such applications as environmental monitoring or patient monitoring in hospitals; and creating incentives for fair distribution of resources in peer-to-peer networks. The team anticipates spinoffs applicable to such problems as synchronizing neural networks and stabilizing global financial networks.

Computer models enable scientists and engineers to tinker with systems -- especially to find out what can go wrong -- without causing any damage in the real world. But computer models of complex networks like the Internet or the nationwide power grid don't predict very well.

As in the proverbial "butterfly effect," small events somewhere on the network can propagate and cause major problems. Available computer models treat information traveling around the network as a fluid, missing the small details. At the other extreme, there are very good models for very small systems, starting with just two computers talking to each other. However, making a larger model by hooking a lot of those small models together demands more computing power than anyone has. Simulating as few as 100 nodes would overwhelm even Cornell's supercomputers.
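
To make the contrast concrete, here is a toy Python sketch -- our illustration, with made-up rates, not the researchers' code -- of a single bottleneck link modeled both ways. The fluid view, which tracks only average rates, reports that no queue ever forms; a packet-level view of the same average load sees bursts pile up.

    import random

    # Fluid model: the queue evolves by the difference of average rates.
    # With arrivals at 0.95 packets/tick and service at 1.0 packets/tick,
    # the fluid queue never grows at all.
    ARRIVAL_RATE = 0.95   # mean packets per tick (made-up number)
    SERVICE_RATE = 1.0    # packets per tick the link can drain
    TICKS = 1000

    fluid_q = 0.0
    for _ in range(TICKS):
        fluid_q = max(0.0, fluid_q + ARRIVAL_RATE - SERVICE_RATE)

    # Packet-level model: the same mean load, but arriving in bursts of 5
    # (probability 0.19 per tick, so 0.19 * 5 = 0.95 packets/tick on average).
    random.seed(1)
    packet_q = 0
    peak = 0
    for _ in range(TICKS):
        if random.random() < 0.19:   # a burst of packets arrives
            packet_q += 5
        if packet_q > 0:             # the link serves one packet per tick
            packet_q -= 1
        peak = max(peak, packet_q)

    print(f"fluid queue after {TICKS} ticks: {fluid_q:.1f}")  # prints 0.0
    print(f"packet-level peak queue: {peak} packets")         # transient congestion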

The three Cornell researchers plan to develop large-scale models that recognize the small details, using what they call a "computational-analytic" approach. "Analysis" is something human beings do, writing equations that model what's happening -- in this case, some event on the Internet -- and then applying insights and shortcuts to improve the model. "Unfortunately, human analysis of realistic problems often falls short and simplifies away important details," said principal investigator Eric Friedman, associate professor of operations research and information engineering. So the "computational" part is loading data into the model and running it on a simulation of a real network to see what happens. "Our idea of a computational-analytic approach is to use the computers to extend and complete human analysis to real messy problems, combining the best of both worlds," Friedman said.
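
One hedged reading of that loop, again as a toy sketch rather than the grant's actual method: the human "analytic" step supplies the shape of a delay formula with one unknown constant, and the "computational" step runs a (here, simulated) experiment to pin the constant down by least squares. Every name and number below is assumed for illustration.

    import random

    random.seed(0)

    def simulate_delay(load, trials=2000):
        """Toy stand-in for a detailed network simulation at a given offered
        load; returns a noisy average queueing delay (hypothetical model)."""
        total = 0.0
        for _ in range(trials):
            total += load / (1.0 - load) * random.uniform(0.8, 1.2)
        return total / trials

    # Analytic step (human): theory suggests delay ~ c * load / (1 - load),
    # with the constant c left unknown by the simplified analysis.
    # Computational step (machine): fit c by least squares to simulated data.
    loads = [0.1, 0.3, 0.5, 0.7, 0.9]
    num = den = 0.0
    for x in loads:
        basis = x / (1.0 - x)
        y = simulate_delay(x)
        num += basis * y
        den += basis * basis
    c = num / den

    print(f"fitted constant c = {c:.3f}")  # close to 1.0 for this toy simulator
    # The calibrated formula now predicts loads that were never simulated.
    print(f"predicted delay at load 0.8: {c * 0.8 / (1 - 0.8):.2f}")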

Friedman specializes in game theory and its application to computer networks, looking at ways to make the allocation of bandwidth fair to all users. Co-principal investigators are Steven Strogatz, the Jacob Gould Schurman Professor of Applied Mathematics, and Ao "Kevin" Tang, assistant professor of electrical and computer engineering. Strogatz studies networks in the abstract, including human interactions and biological networks such as the ones that let fireflies flash and crickets chirp in sync.

Tang has developed models of Internet flow-control protocols that are accurate for any number of nodes but are limited by the computing power required as the number of nodes grows. He suggests that if the new approach yields a successful model, it can be run on progressively larger networks until the goal of a truly large-scale model, with tens of thousands of nodes, is reached.
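
The synchronization Strogatz studies is often described with coupled-oscillator equations. The following minimal Kuramoto-style sketch -- a standard textbook model, offered as an illustration with assumed coupling numbers, not as the team's model -- shows a network of oscillators pulling itself into sync:

    import math
    import random

    random.seed(2)
    N = 30                    # oscillators ("fireflies")
    K = 1.5                   # coupling strength (assumed)
    DT = 0.05                 # time step
    phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]
    freqs = [1.0 + random.gauss(0, 0.1) for _ in range(N)]  # natural rates

    def coherence(ph):
        """Order parameter r in [0, 1]; r near 1 means the network is in sync."""
        re = sum(math.cos(p) for p in ph) / len(ph)
        im = sum(math.sin(p) for p in ph) / len(ph)
        return math.hypot(re, im)

    print(f"initial coherence r = {coherence(phases):.2f}")   # low: out of sync
    for _ in range(1000):
        # each oscillator is nudged toward the phases of all the others
        new = []
        for i in range(N):
            pull = sum(math.sin(pj - phases[i]) for pj in phases) / N
            new.append(phases[i] + DT * (freqs[i] + K * pull))
        phases = new
    print(f"final coherence r = {coherence(phases):.2f}")     # near 1: in sync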

"The challenge is to get three people who work in very different areas to see if we can combine three different fields and get them to work together," said Friedman. "I'm going to learn a lot about things I've heard of but don't understand. Our grad students will learn the languages and bridge between us."

The team plans to start with techniques developed for other fields, including "equation-free modeling" from chemistry, which involves running an equation on a few data points in a large array and projecting over the whole array, and "renormalization" from physics, a mathematical way of pulling back so far that small dimensions disappear. That work will eventually lead to their test cases, Friedman said.
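
As a rough sketch of how equation-free modeling can work -- our reading of the idea, with toy dynamics and assumed step sizes, not the team's implementation -- the code below runs a detailed simulator for a short burst, estimates the coarse trend, and then leaps forward without simulating the gap:

    import math

    def fine_step(x, dt=0.01):
        """Stand-in for one step of an expensive detailed simulation; the
        hidden dynamics here are dx/dt = -x, unknown to the coarse code."""
        return x + dt * (-x)

    def projective_step(x, burst=5, dt=0.01, leap=0.2):
        x0 = x
        for _ in range(burst):           # short burst of fine-scale simulation
            x = fine_step(x, dt)
        slope = (x - x0) / (burst * dt)  # coarse trend estimated from the burst
        return x + slope * leap          # leap forward without simulating the gap

    x, t = 1.0, 0.0
    for _ in range(10):
        x = projective_step(x)
        t += 5 * 0.01 + 0.2              # burst time plus the projected leap
    # 50 fine steps stood in for the 250 a full simulation would have taken;
    # the leap trades some accuracy for that saving.
    print(f"t = {t:.2f}: projected x = {x:.4f}, exact x = {math.exp(-t):.4f}")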

The grant is one of the first awarded through the NSF's new Cyber-Enabled Discovery and Innovation program, an initiative to "create revolutionary science and engineering research outcomes made possible by innovations and advances in computational thinking." NSF received 1,671 submissions in the first year of the program and accepted only 2 percent of the proposals.

Media Contact

Blaine Friedlander