Facebook announced new rules this week for political advertising in response to misleading disclosure claims on ads and pressure to maintain transparency ahead of the 2020 election. Facebook will now require political advertisers to verify their identities via government databases.
Drew Margolin, professor of communication at Cornell University, studies the way people communicate online and the role of accountability, credibility, and legitimacy within social networks. He says that the new rules will likely slow down illegitimate advertisers, but the effect for users will depend on how Facebook flags political advertisements.
"On the supply side—influencing advertisers—it is a good first step because it at least adds some burdens to political advertisers. We know that scam artists search for the path of least resistance, so sites that have no oversight will actually attract more of this kind of content. Putting some rules in place should slow them down.
"But the influence on users is harder to anticipate. On the one hand, people do respond to information about a message's source—if it comes from a source they don't trust, they are more skeptical. But there is also research showing an 'implied' effect when content isn't flagged under a flagging policy. Basically, people tend to assume that it's ok—that it's been checked and given a seal of approval—when in fact it just hasn't been flagged.
"So it would be important for these companies to create two categories—verified, not verified—and make this salient, in addition to reporting on the sources for the verified cases."