Since 2016, mainstream news has fixated on technological explanations of political extremism. Article after article foregrounds technology to explain political change and, specifically, the rise of the American right. However, when we focus on technology as a cause of social change, it becomes easy to lose sight of the social world we wanted to explain in the first place. This is especially true of current debates about rightwing extremism in the US.
As violent and hateful politics in the US become more visible, the internet has also taken root as an instrument of popular communication. Given the seismic change introduced by information technologies, it makes sense to ask what role new media play in the spread of alt-right or neo-Nazi thought. The editorial board of The New York Times recently took up this tech-as-catalyst narrative, pointing to social media as an agent of negative social and political change. The board argued that . . .
the fundamental design of social media sometimes exacerbates the problem. It rewards loyalty to one’s own group, providing a dopamine rush of engagement that fuels platforms like Facebook and YouTube, as well as more obscure sites like Gab or Voat. The algorithms that underpin these networks also promote engaging content, in a feedback loop that, link by link, guides new audiences to toxic ideas.
There is certainly some truth to the claim that media have a role in creating group identity around ideologies, but blaming the technology for the appeal of these ideas shifts attention away from more obvious spurs for social change.
Let’s break down the argument. According to the Times, impersonal agents of the network (algorithms) lead a child-like citizenry (audiences) to ideas that are not good for them or the society in which they exist (toxic). By this reasoning, removing the communication tools that link the radically-minded would undermine the spread of rightist ideologies if not rightist movements themselves. The problem is that this tech focus does not adequately account for the social anxiety that these radical beliefs seem to answer.
An analogy to online advertising helps make this point. Innumerable cookies and web trackers follow us online and collect behavioral data to create predictive consumer profiles. Thus, a woman between 16 and 34 who searches for stretch mark lotions has a higher probability of buying, for example, prenatal vitamins. The algorithm serves the woman the “Superbaby” vitamin ad. Though she did not intend to buy the vitamins, the targeted marketing works and she puts the item in her cart. Here we see algorithms at work. But did the algorithm prompt the woman to buy the vitamins, or did the technology facilitate a preexisting need? Like pregnancy, the social problems that allow extremism to make sense to online audiences preexist the technology that serves it up.
This is why I argue we need to correct the technology bias in addressing the relationship between extremism and online communities. We can do so by inverting the implied causality in arguments that blame new media. Did technology lead to the rise of rightwing extremism or did rightwing extremism seek communication tools to link the like-minded into communities? The NY Times editorial board seems to believe Gab and Voat fell from technology heaven fully formed. In reality, Gab’s creator was all-too-human. Social conditions created the technology to meet a political need that preexisted the technology. If we want solutions to American radicalism, we need to pay more attention to social conditions that make extremism a legitimate option rather than the secondary question of how it gets around the internet.
The focus on tech is not without merit, but the questions social researchers take up should ask why such toxic ideologies have appeal. “The algorithm did it” is insufficient and, in fact, undermines finding clear answers. Furthermore, technological explanations for why people hold political beliefs may function as a sort of optimistic fairy tale about the inherent goodness of the United States. Taken to its logical conclusion, the technology argument about extremism asserts that if not for Facebook, Americans would be more tolerant, less anxious about change and more trusting of government. The focus on technology allows us to believe that Americans are only temporarily “off course.” American neo-Nazism is a mistake that better algorithms and artificial intelligence can correct.
This is a predictable mistake when technology dominates our search for answers to social problems. Excessive focus on a technological explanation suggests that Americans are not fundamentally xenophobic, anti-Semitic or tribal. But this may not be true. In reality, the mélange of conspiracy theories (Soros is a hidden political puppetmaster; an Islamic center is an effort to institute Sharia law in the US, etc.) stems from a sense of social powerlessness and a loss of local American communities. Technology only offers the idea. The social context makes far-fetched or conspiratorial explanations of that powerlessness attractive.
Blaming the algorithm can also function as a sort of ignorant optimism. It can become a story about an American population misled by technology run amok. But this explanation of American radicalism too quickly pushes aside obvious explanations for the growing rejection of the status quo. Middle class incomes have stagnated for decades even as US gross domestic product has grown. American wealth, in general, has become concentrated in the hands of not just the 1% but the top 0.1% of citizens. Healthcare in the US is the most costly in the developed world, while for-profit insurance companies avoid taking on sick customers and pay out as little as is legally required. Racial, religious and political tensions undercut the unifying story of American society, what David Brooks called the “American Creed.” Globalization has restructured the national economy, prompting the collapse of entire communities and ways of life, leading to misdirected anger at immigrants and ethnic minorities. The focus on technology can obscure these traditional triggers for extremism and tribalism.
In short, revolutionary dissatisfaction with life in America does not need an algorithm. At best, the technological explanations illustrate how ideologies circulate according to the network logics of the commercial companies that profit from their circulation. At worst, they distract us from actual reasons such ideologies are growing. To understand why these ideas take root in the minds of people, we must focus on the lived experiences of Americans that give such “toxic” ideas traction.
One thought on “The algorithms made me do it.”
The Times editorial notes that “[p]ast decades saw violence by left-wing groups, environmental extremists and black nationalists, but while attacks from those groups have fallen dramatically, violence from the right has risen.” If tech were responsible for the rise in extremist violence, why would it only affect the right?
Perhaps they should think further on two facts included in the editorial: 1) “a longtime online acquaintance said that Mr. Bush’s tweets — which had long been peppered with infrequent casual racism — became more and more vitriolic over the course of the 2016 election.” 2) “… in 2017 alone there was a 57 percent increase in anti-Semitic incidents.”
Obviously, the 2016 election is not the primary cause. The social conditions you note provide a framework of susceptibility to longstanding bigoted ideas that have continued to circulate throughout the evolution of media technologies. But bigotry edges closer to the mainstream in some moments and not others. Media can facilitate this, as with cinema and Birth of a Nation, but I agree that they are much less causal than most would have it. And if we are going to point at media that has facilitated right-wing extremism today, let’s not leave out television coverage of the 2016 election.