NSF funds work on flagging bad online behavior
By Tom Fleischman
In classical Greek theater, the chorus served a specific purpose: to explain and comment on the particular moral issue being dramatized on stage.
These days, the “Greek chorus” – in the form of commenters on social media – can sometimes overshadow the “play” itself with an overabundance of zeal, mean-spiritedness and hubris.
“You see a dialogue between two people, and then there are these comments, from the ‘chorus,’” said Drew Margolin, associate professor in the Department of Communication, in the College of Agriculture and Life Sciences (CALS). “And in many ways, in terms of the impact on society, what happened to the chorus matters almost more than what happened to the two people because that’s just two, but the chorus could be thousands.”
Negative interactions are easily found on social media; calling them out and effectively tamping them down is a challenge, but Margolin and members of the multidisciplinary Prosocial Project believe it can be done.
They have received a four-year, $1.19 million grant from the National Science Foundation for their project, “Deterring Objectionable Behavior and Fostering Emergent Norms in Social Media Conversations.” The researchers, in a series of five phases over the grant period, will seek to develop a theoretical model for understanding the emergence and maintenance of norms to deter negative online behavior.
“There’s this intuition, ‘If you see something, say something.’ We all have that intuition,” Margolin said. “But in social media, who are you saying it to? Do they share your view? Are they actually on the other side? Do you look like the bad guy? All these things are really hard to understand.”
The team includes four members of the Prosocial Project: co-leaders Margolin and Natalie Bazarova, M.S. ’05, Ph.D. ’09, associate professor in the Department of Communication (CALS); Vanessa Bohns, associate professor of organizational behavior in the ILR School; and René Kizilcec, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science.
Also on the project is former Cornell Social Media Lab postdoctoral researcher Dominic DiFranzo, now an assistant professor in the Department of Computer Science and Engineering at Lehigh University.
The project poses specific questions: If an individual objects to an “offense” in a certain way, under certain conditions, how will this objection influence future behavior in the community? In particular, will it encourage or discourage a norm suppressing such offensive speech?
The project proceeds in four research phases, each informing the next:
- Real-world observation: The researchers will obtain real-world objections to offensive speech from social media, then map these comments into a theoretical space;
- Individual-level experimentation: Using Truman, a novel simulated social media environment that DiFranzo developed while working with Bazarova and her Social Media Lab, the team will test whether the relationships observed in the first phase causally influence individual behavior;
- Agent-based simulation: The team will use the individual-level mechanisms tested in Truman to build simulations of interactions between objectors and audience members at scale (a toy sketch of this kind of model follows this list); and
- Collective-level experimentation: The team will return to Truman and test whether the collective dynamics of interaction among real people match those produced by their agent-based simulation.
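To make the agent-based simulation phase concrete, here is a minimal, hypothetical sketch of one way such a model could be set up. It is not the project's actual model: the agents, the offend_prob and object_prob parameters, and the norm_shift update rule are all illustrative assumptions, standing in for whatever mechanisms the team validates in Truman.

```python
# A toy agent-based simulation of objections and norm formation.
# All parameters and update rules here are hypothetical illustrations,
# not the Prosocial Project's actual model.
import random

class Agent:
    def __init__(self, offend_prob, object_prob):
        self.offend_prob = offend_prob  # chance of posting an offensive comment
        self.object_prob = object_prob  # chance of objecting to one they witness

def simulate(n_agents=100, n_rounds=50, norm_shift=0.02, seed=42):
    """Each round, agents may offend; randomly drawn witnesses may object;
    every visible objection slightly lowers the whole community's
    propensity to offend (an assumed norm-strengthening effect)."""
    rng = random.Random(seed)
    agents = [Agent(rng.uniform(0.0, 0.3), rng.uniform(0.0, 0.5))
              for _ in range(n_agents)]
    for round_ix in range(n_rounds):
        offenses = [a for a in agents if rng.random() < a.offend_prob]
        objections = 0
        for _ in offenses:
            witness = rng.choice(agents)  # one audience member sees the offense
            if rng.random() < witness.object_prob:
                objections += 1
        # objections visible to the whole "chorus" shift the community norm
        for a in agents:
            a.offend_prob = max(0.0, a.offend_prob - norm_shift * objections / n_agents)
        mean_offend = sum(a.offend_prob for a in agents) / n_agents
        print(f"round {round_ix:2d}: {len(offenses):3d} offenses, "
              f"{objections:3d} objections, mean offend_prob {mean_offend:.3f}")

if __name__ == "__main__":
    simulate()
```

Scaling a model like this up, and checking whether its collective dynamics match those of real people interacting in Truman, is the substance of the third and fourth phases.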
Finally, in the field implementation phase, the researchers will employ evidence-based strategies derived from the project to build scalable online learning modules to train social media users on how to be effective objectors to harmful discourse. This will be done using the Social Media Lab’s Social Media TestDrive public outreach platform.
The design of the project was very much a collaboration, Margolin said, but it follows methods that he’s used often in his research, particularly the use of real-world observations in phase one. He’s excited for the team to collect that data together.
“A lot of my work focuses on this idea that you should observe, and then test, and then see what the broader implications are,” he said. “So rather than making up things that people might say on social media, we start with saying, ‘What are people actually saying? What kinds of things are people objecting to? How are they doing it?’ And oftentimes, there are things that you see in real life that you never would have thought of.”