
Instagram bets on AI-human collab to stop bullying

Media Contact

Jeff Tyson

Instagram is unveiling two new tools on Tuesday to crack down on cyberbullying: one will automatically hide comments that look like bullying, and the other will send a warning message to users whose comments are repeatedly flagged as harmful.

Natalie Bazarova

Associate Professor of Communication

Natalie Bazarova, associate professor of communication at Cornell University and director of the Cornell Social Media Lab, studies the psychology of communication on social platforms as well as mental health and well-being online. Bazarova says Instagram’s new solutions are promising, but whether AI and humans can effectively moderate content together in this context is still unknown.

Bazarova says:

“Just as online harassment can take many forms, it requires multi-pronged solutions that include design approaches, social media literacy, and policies curbing cyberbullying. The solutions that Instagram rolled out today are design interventions that issue a warning message to bullies and automatically hide bullying messages from other users (with an option to view and further report them). While certainly promising, these interventions rely on accurate identification of harassment, which can still be problematic for both AI and humans, especially for borderline messages with limited context.

“Another challenge is to strike the right balance between AI and human content moderation: how to use AI without taking agency away from users, while continuing to empower them to be active upstanders on the site. Instagram's solution is to give users the option to view, report, or remove the cover from comments flagged by AI, but it remains to be seen how AI and human moderation can work together effectively. For example, one of our preliminary studies demonstrated that when people know they can rely on AI to flag problematic messages, they become less active as upstanders themselves, because they see upstanding as less of a social norm on the site.

“Finally, one cannot overstate the importance of social media literacy training in efforts against cyberbullying. Our national program, Social Media TestDrive, developed in collaboration with Common Sense, teaches young social media users digital citizenship and online literacy skills through a life-like social media simulation. To date, more than 100,000 users have had the opportunity to learn and practice how to be responsible digital citizens, including how to be upstanders, through realistic digital dilemmas and scenarios they may encounter on social media.”

Cornell University has television, ISDN and dedicated Skype/Google+ Hangout studios available for media interviews.