When you go online to look at funny memes, catch up on the news, and check in with friends and family, you are also likely to get a dose of curated hate posts. They target anyone whom the hard right views as “the other,” including people of color, members of the LGBTQ community, non-Christians, immigrants, and more. Those posts are hard to escape, and every day they influence, radicalize, and recruit more people, deepening the perceived divisiveness in the United States.
Online hate speech and extremism remain significant problems in the United States, with major social media platforms struggling to contain the spread of racist disinformation and violent rhetoric. Despite efforts to moderate harmful content, extremist ideologies continue to thrive online, contributing to real-world violence, political polarization, and societal division. The failure of tech companies to effectively curb hate speech underscores the complexity of balancing free expression with public safety.
The rise of social media has provided extremists with unprecedented access to a global audience. Hate groups and conspiracy theorists exploit platforms like Facebook, X (formerly Twitter), YouTube, and TikTok to spread racist propaganda, misinformation, and incitements to violence. Algorithms designed to maximize engagement often amplify divisive content, pushing users toward radicalization. This digital ecosystem has fueled events such as the 2017 Charlottesville rally, the January 6, 2021, attack on the U.S. Capitol, and violent acts inspired by white supremacist ideologies.
During the COVID-19 pandemic, online hate speech and disinformation reached new levels, with conspiracy theories about the virus, false "cures," and attacks on public health officials spreading rapidly. Social media platforms became breeding grounds for anti-vaccine propaganda, misinformation about masks, and baseless claims that COVID-19 was a hoax or a government plot. Public health leaders such as Dr. Anthony Fauci faced relentless online harassment, fueled by politically motivated disinformation campaigns. The persistence of these false narratives has had long-term consequences, including vaccine hesitancy, decreased trust in medical experts, and continued attacks on public health initiatives. Even today, disinformation undermines efforts to combat other health crises, with anti-vaccine movements expanding their reach and promoting skepticism toward life-saving medical interventions.
While tech companies have implemented moderation policies, their enforcement has been inconsistent and largely ineffective. Hate groups frequently evade bans by using coded language, rebranding under different names, or migrating to less-regulated platforms. Additionally, content moderation efforts often face political pushback, with critics arguing that such measures infringe on free speech. The result is a persistent cycle where harmful content is flagged, removed, and then quickly reintroduced in different forms.
The failure to effectively regulate online extremism has real-world consequences, including increased hate crimes and radicalization. Stronger legislative oversight, improved content moderation strategies, and greater transparency from tech companies are necessary to combat this ongoing issue. Without more decisive action, social media will continue to serve as a breeding ground for hate speech, undermining social cohesion and public safety in the United States.
In her new book, Safe Havens for Hate: The Challenge of Moderating Online Extremism, Tamar Mitts examines why efforts to moderate harmful content on social media fail to stop extremists.
Drawing on a wealth of original data on more than a hundred militant and hate organizations around the world, Mitts shows how differing moderation standards across platforms create safe havens that allow these actors to organize, launch campaigns, and mobilize supporters. She reveals how the structure of the information environment shapes the cross-platform activity of extremist organizations and movements such as the Islamic State, the Proud Boys, the Oath Keepers, and QAnon, and highlights the need to consider the online ecosystem, not just individual platforms, when developing strategies to combat extremism.
Guest:
Tamar Mitts is the author of Safe Havens for Hate: The Challenge of Moderating Online Extremism (Princeton University Press). She is a professor of international and public affairs at Columbia University, where she is a faculty member at the Saltzman Institute of War and Peace Studies, the Institute of Global Politics, and the Data Science Institute.
"The Source" is a live call-in program airing Mondays through Thursdays from 12-1 p.m. Leave a message before the program at (210) 615-8982. During the live show, call 833-877-8255, email thesource@tpr.org.
This interview will be recorded on Tuesday, March 4, 2025.