How to crowdsource your decision-making (or not)

Whether you are choosing a restaurant or the destination for your next vacation, making decisions about matters of taste can be taxing.

New Cornell research points to more effective ways to make up one's mind – and sheds light on how we can use other people's opinions to make our own decisions. The work may also have implications for how online recommender algorithms are designed and evaluated.

The paper, published May 28 in Nature Human Behaviour, suggests that people who have had many experiences in a particular arena – whether it's restaurants, hotels, movies or music – can benefit from relying mostly on the opinions of similar people (and discounting the opinions of others with different tastes). In contrast, people who haven't had many experiences cannot reliably estimate their similarity to others and are better off picking the mainstream option.
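To make the two strategies concrete, here is a minimal Python sketch. The function names, the choice of Pearson correlation as the similarity measure and the zero floor on the weights are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def crowd_average(peer_ratings):
    """Wisdom of the crowd: predict each option's appeal as the
    mean rating across all peers, ignoring taste similarity."""
    return peer_ratings.mean(axis=0)

def similarity_weighted(peer_ratings, own_ratings, experienced):
    """Weight each peer by how well their ratings agree (Pearson r)
    with one's own ratings on the options already experienced."""
    weights = np.array([
        np.corrcoef(peer[experienced], own_ratings[experienced])[0, 1]
        for peer in peer_ratings
    ])
    weights = np.clip(weights, 0.0, None)   # discount dissimilar peers
    if weights.sum() == 0.0:
        return crowd_average(peer_ratings)  # no similar peers: use crowd
    return weights @ peer_ratings / weights.sum()
```

With only a handful of experienced options, the correlation estimates are noisy, which is exactly why novices are better served by the plain average.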

“Our findings confirm that even in the domain of taste, where people’s likes and dislikes are so different, the wisdom of the crowd is a good way to go for many people,” said lead author Pantelis P. Analytis, a postdoctoral researcher in Cornell’s Department of Information Science.

Analytis co-wrote “Social Learning Strategies for Matters of Taste” with Daniel Barkoczi of Linköping University, Sweden, and Stefan M. Herzog of the Max Planck Institute for Human Development, Berlin.

But how many restaurants (or movies or music albums) should you try before relying on the opinions of others who seemingly share your tastes, rather than the wisdom of the crowd? It all depends on how mainstream (or alternative) a person’s tastes are and how much their peers differ in their similarity to them, Analytis said. “For people who have mainstream tastes, the wisdom of the crowd performs quite well, and there is little to be gained by assigning weights to others. Therefore, only people who have experienced lots of options can do better than using the wisdom of the crowd,” he said. “For people with alternative tastes, in contrast, the wisdom of the crowd might be a bad idea. Rather, they should do the opposite of what the crowd prefers.”
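Read as a decision rule, that advice looks roughly like the sketch below. The mainstreamness proxy (how strongly one's ratings track the crowd average) and both thresholds are hypothetical stand-ins for quantities the paper derives from simulation:

```python
def taste_mainstreamness(own_ratings, peer_ratings, experienced):
    """Hypothetical proxy: correlation between one's own ratings and
    the crowd average, computed on already-experienced options."""
    crowd = peer_ratings[:, experienced].mean(axis=0)
    return np.corrcoef(own_ratings[experienced], crowd)[0, 1]

def choose_strategy(n_experienced, mainstreamness,
                    min_experience=10, cutoff=0.0):
    """Illustrative rule of thumb: novices follow the crowd;
    experienced people weight similar peers; people with clearly
    alternative tastes can even invert the crowd's ranking."""
    if n_experienced < min_experience:
        return "crowd_average"
    if mainstreamness <= cutoff:
        return "inverse_crowd"       # prefer what the crowd ranks low
    return "similarity_weighted"
```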

The researchers investigated the performance of different social learning strategies by running computer simulations with data from Jester, a joke-recommendation engine developed at the University of California, Berkeley, in the late 1990s that has been running online ever since. The interface lets users rate up to 100 jokes on a scale from “not funny” (-10) to “funny” (+10). An early citizen science project, Jester is the only available recommender-system dataset in which many people have evaluated all the options.
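The simulation logic can be paraphrased as follows. This is a simplified sketch of the general approach, not the paper's exact protocol; `strategy` takes the peer ratings, the person's own ratings and a mask of experienced options, as in the earlier sketch:

```python
def evaluate_strategy(ratings, strategy, n_experienced, rng):
    """Simulate each person in a fully rated matrix (as in Jester):
    reveal a random subset of their ratings as 'experience', let the
    strategy score the remaining options from the peers' data, and
    record the person's true rating of the recommended option."""
    n_people, n_options = ratings.shape
    scores = np.empty(n_people)
    for i in range(n_people):
        own = ratings[i]
        peers = np.delete(ratings, i, axis=0)
        experienced = np.zeros(n_options, dtype=bool)
        experienced[rng.choice(n_options, n_experienced,
                               replace=False)] = True
        predictions = strategy(peers, own, experienced)
        predictions[experienced] = -np.inf  # recommend unseen options only
        scores[i] = own[predictions.argmax()]
    return scores   # one achieved rating per simulated person
```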

The findings suggest people could learn their own preferences in much the way recommender-system algorithms assess which options people will like most, shedding light on our own cognition. “We humans have the most powerful computer that ever existed running algorithms all the time in our heads. We’re trying to show what those algorithms might be and when they are expected to thrive,” Barkoczi said. In that respect, the new research builds bridges between the behavioral and social sciences and the recommender systems community. The fields have looked at opinion aggregation using very different terminology, yet the underlying principles are very similar, Barkoczi said. “We’ve put a lot of effort into this work trying to develop concepts that could cross-fertilize those parallel literatures.”

The research also has implications for how online recommender algorithms are designed and evaluated. So far, scientists in the recommender systems community have studied different recommender algorithms at the aggregate level, disregarding how each algorithm performs for each individual in the dataset. This research, in contrast, shows there may be real value in evaluating these strategies at the individual level. “In our work, we show that the performance of the strategies diverges a lot for different individuals. These individual-level differences were systematically uncovered for the first time,” Herzog said.
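Because the simulation sketch above returns one score per person, an individual-level comparison falls out directly. In the hypothetical snippet below, `ratings` stands in for a Jester-like people-by-jokes matrix, and `crowd_average` is wrapped to match the common strategy signature:

```python
rng = np.random.default_rng(0)
ratings = rng.uniform(-10, 10, size=(500, 100))   # placeholder data

def crowd(peers, own, experienced):
    return crowd_average(peers)

by_crowd = evaluate_strategy(ratings, crowd, n_experienced=20, rng=rng)
by_sim = evaluate_strategy(ratings, similarity_weighted,
                           n_experienced=20, rng=rng)

# An aggregate average can hide that different people are best
# served by different strategies:
print("crowd better for", (by_crowd > by_sim).mean(), "of people")
```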

This implies that each individual’s data can be seen as a dataset with distinct properties, nested within an overarching recommender-system dataset. “Movie recommendation systems like the ones used by Netflix could ‘learn’ whether individuals have mainstream or alternative tastes and then select their recommendation algorithms based on that, rather than using the same personalization strategies for everybody,” Herzog said.
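A recommender could apply the same idea on the system side. The routing below is a hypothetical sketch; the cutoff value and the algorithm names are assumptions, not anything Netflix or the paper specifies:

```python
def pick_algorithm(user_ratings, all_ratings, cutoff=0.3):
    """Route a user to a recommendation algorithm according to how
    mainstream their tastes look (correlation with the crowd mean)."""
    crowd_mean = all_ratings.mean(axis=0)
    mainstreamness = np.corrcoef(user_ratings, crowd_mean)[0, 1]
    if mainstreamness >= cutoff:
        return "popularity_ranking"       # mainstream tastes
    return "taste_matched_neighbors"      # alternative tastes
```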

According to an age-old adage, there is no arguing about taste. “This work, in contrast, shows that the best learning strategy for each individual is not subjective,” Analytis said, “but rather is subject to rational argumentation.”
