I like social media but I wish I didn’t. When I go on BlueSky or Threads, a significant percentage of the posts are from professional provocateurs, people who exclusively repost clips of “political opponents behaving badly.” Most of us are familiar with the ritual: politician X does something performatively outrageous to progressives, such as posing with an AR-15 in inappropriate places. An enterprising “online entrepreneur” re-posts the content on another platform. This causes a scalding lava flow of outrage, delivering mega-units of glee to the original offender (and to the “online entrepreneur,” who probably gets ten new Substack subscribers with each post).
I’m torn about this practice of “comfort bashing.” It certainly doesn’t promote the habits conducive to good public discourse. There’s no perspective taking, warrant formulation or argument folding. If a law professor trained in argumentation graded a social media comment thread, they would give it a D- (grade inflation is a thing after all). Comment threads are filled with red herrings, ad-hominem attacks, and enough straw-man arguments to feed the world’s cattle.
But, on the other hand, good civic argumentation is not the point of social media. In 2012, I wrote about how Facebook promoted affective, emotion-based communication over deliberative, rational discourse. Social media is for building solidarity through venting, and there is so much to vent about. When my mind goes here, I say to myself “stop being such a buzzkill … People need to blow off steam … Stop ruining other people’s fun.” On the rare occasions when I intervene in a “solidarity through snark” comment thread, I’m impolitely told to go do obscene things to myself.
Besides, I’m not above sarcasm and mockery. In fact, I’m a consumer (albeit with some guilt). Like the good aging progressive Gen Xer I am, I occasionally revel in John Oliver’s, Seth Meyers’, and Jon Stewart’s sarcasm and mockery of public figures with whom I disagree, both from a policy perspective and from an aesthetic/ethical standpoint. If that’s the case, who am I to judge others?
My reluctance to gleefully join in on the social media “pile-on” isn’t simply aesthetic. One insight I’ve gleaned from studying social media for two decades is that these platforms reformulate our ground truth, or the way in which we collect knowledge about the world. All data is an abstraction from the real, but social media encourages us to forget that lesson. Whether we are on social media or not, we use cognitive shortcuts to make sense of a complex world. This is why we have stereotypes and biases.
AI models do the same thing. Kate Crawford notes that the data used to train machine learning models is their “ground truth.” The pre-labeled training data from which supervised algorithms learn is provided by flawed humans with flawed epistemologies. Regardless of the type of training data used (images, words, sounds, policing records, geolocation data, etc.), the human-generated model of “truth” given to the AI model is a representation of reality, not an exact copy. The difference is that we, as humans, are (or should be) aware of our epistemological limitations. As far as the AI model knows, however, the abstract “ground truth” of the training data is the absolute truth.
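To make the point concrete, here is a minimal sketch (the posts and labels are hypothetical, and it assumes scikit-learn is available): a supervised classifier never questions its labels. Whatever a fallible human annotator decided becomes, for the model, the absolute truth.

```python
# A minimal sketch with hypothetical data: the classifier inherits the
# annotator's flawed epistemology and treats it as absolute "ground truth."
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Pre-labeled training data supplied by a fallible human annotator.
posts = [
    "great rally downtown today",
    "the government should be abolished lol",
    "protest planned for saturday",
    "I love my cat",
]
labels = ["harmless", "threat", "threat", "harmless"]  # the annotator's judgment, not reality

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(posts, labels)

# The model never questions the labels; it reproduces the annotator's worldview.
print(model.predict(["protest downtown saturday"]))  # plausibly ["threat"]
```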
In my forthcoming book, You Must Become an Algorithmic Problem, I argue that we are taking on machine-like qualities. Our highly productive tech tools are instilling in us habits of certainty in an inconclusive world. Our advantage over machines is that we go about the work of figuring out what is true about the world with the knowledge that our truth is incomplete. As Plato’s cave reminds us, the “ultimate truth” of things is difficult to discern. But at our best, we develop a sense of empathy and vulnerability from the knowledge that all of us are limited in our ability to understand the “true nature” of things.
In the not-too-distant past, this “no one knows for sure” view of the world facilitated the liberal democratic project. If ultimate truths are nuanced and complex, then our best bet is to live in a society where individuals and communities are free to engage in their own personal epistemological projects, with rights guaranteed by the state. But lately, I’ve seen this attitude fray at the seams.
This isn’t exclusive to social media. For most of my teaching career, my first-year students regularly adopted a “who am I to tell others what to do” attitude about the world on practically any ethical challenge of world politics I threw at them. This could be maddeningly frustrating when presenting them with human rights abuses in other countries, but at least it was an exercise in intellectual humility.
But more and more of my students (albeit still a minority) enter my first-year class with an even more frustrating certitude. It is as if they have been soaked in a brine of YouTube videos, Instagram Reels, and TikToks and have emerged ready to impose their ill-considered truths about the world on me and their classmates. This epistemological arrogance would at least be understandable if it were earned in some way through years of assiduous study. I have a grudging respect for certainty whittled into a solid position over decades of craft. But increasingly, many among us arrive at seemingly forged-steel positions without going through the hammering, pressing, and shaping necessary to make such positions truly strong.
How can I blame my students for this? We live in a culture that rejects nuance or any form of intellectual struggle for truth. On social media, nuance often seems to be treated as a capitulation to authoritarianism. As an academic, I do not know what to do with this perspective. Certainly there is space for “speaking truth to power.” But @pwned69 or @bigrickenergy on X isn’t power. It isn’t close to power. If we are moving toward the kind of epistemological certitude necessary to confront real evil in the world, then let’s follow through. Let’s vote, march, picket, run for office, or do anything that engages with formal power. To do otherwise is to indulge our bottomless need to be secure in our worldview, regardless of whether that world is on fire.
We are in a precarious moment where citizens are getting comfortable with a deeply illiberal politics of abstraction. The philosopher Brian Massumi draws a distinction between deterrence, which operates under the presumption that threats can be identified and deterred through either reason or sanction, and pre-emption. Deterrence rests on a deeply liberal premise: individuals can modify their behavior to gain rewards or avoid punishment. It assumes self-governing agents who understand their preferences and can be persuaded to act in desirable ways.
Machine learning algorithms rely on pre-emption. This logic aims to neutralize threats before they emerge. Under pre-emption, the subject isn’t a self-governing agent with the ability to modify their behavior. They aren’t a full human being with a worldview, but an abstracted danger that must be removed. Pre-emption does not require an understanding of the subject’s motivation. Instead, it focuses simply on “removing the threat,” whatever form that might take.
The ability to use algorithmic models to produce abstracted ground truth gives policymakers a powerful tool. A public that is indifferent to the gap between its ground truth and “truth” is likely to accept that ground truth as fact. When individuals are identified as “terrorist sympathizers,” “enemies of the state,” “neo-colonialists,” or “mouth-breathing bigots” based on social media posts, AI trained on social media data also sees these individuals in reductive ways.
But anyone who works with AI models recognizes their limitations. Data collection is incomplete, and AI models still require human labor for labeling data sets. With or without “humans in the loop,” the process can be subjective and prone to error. Many cases are “edge cases,” ones whose classification carries heightened importance. As Google’s Cassie Kozyrkov noted in a classic example, when training a cat classifier, the goal matters. If the model comes upon a tiger, the purpose of the classification determines the right label: for a simple taxonomy, “tiger as a cat” seems appropriate, but if the classifier is being trained for pet adoption, the implications are very different. The point is that behind the apparent objectivity of algorithms lie the subjective, socio-technical choices of humans. Machine truth, while appearing scientifically legitimate, can be used to support political agendas.
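A toy sketch makes Kozyrkov’s point (the features, labels, and animals are hypothetical, and scikit-learn is assumed): the same animals, labeled for two different purposes, produce two models that “see” the same tiger in incompatible ways.

```python
# A minimal sketch with hypothetical features [weight_kg, is_domesticated]:
# the same animals, labeled twice for two different purposes.
from sklearn.tree import DecisionTreeClassifier

animals = [[4, 1], [5, 1], [30, 1], [200, 0]]  # house cat, house cat, beagle, tiger

# Purpose 1: biological taxonomy -- a tiger really is a cat.
taxonomy_labels = ["cat", "cat", "not_cat", "cat"]

# Purpose 2: pet-adoption screening -- the same tiger is emphatically not adoptable.
adoption_labels = ["adoptable", "adoptable", "adoptable", "not_adoptable"]

taxonomy_model = DecisionTreeClassifier().fit(animals, taxonomy_labels)
adoption_model = DecisionTreeClassifier().fit(animals, adoption_labels)

tiger = [[180, 0]]  # a previously unseen tiger
print(taxonomy_model.predict(tiger))  # plausibly ["cat"]
print(adoption_model.predict(tiger))  # plausibly ["not_adoptable"]
```

Neither model is “wrong”; each faithfully reproduces the ground truth its labelers chose, which is exactly why the choice of purpose and label is a political act rather than a neutral technical one.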
Langdon Winner argued in 1980 that technological artifacts (machines, computers, and the like) carry the social and political biases of their creators in their design. The biases and value systems of designers are transmitted into their creations. In the case of our current AI tools, that politics is abstraction. Algorithms do not merely classify people into categories; they shape and reconstruct their identities. When we search for items, recommendation algorithms interpret our aggregated choices as a form of ground truth, and they have the effect of solidifying that “ground truth” in users’ minds. When an algorithm identifies someone as a “gang member” based on human-generated criteria, the model’s “ground truth,” however flawed, becomes a stand-in for reality. Citizens who accept these abstractions come to prefer action upon abstracted ground truth over the pursuit of actual truth.
A politics of abstraction can be very powerful in what Anne Schneider and Helen Ingram dubbed the social construction of target populations. A large part of the politics of public policy is to define target populations either negatively or positively. This process of definition is consequential. It can, for example, help amass public support for policies like deporting undocumented immigrants to harsh foreign prisons without due process. Schneider and Ingram proposed that targets could be viewed as either ethically deviant or ethically advantaged. A public that begins to prefer pre-emption over deterrence becomes less concerned with the due process rights of those targeted as deviant because they are seen less as autonomous subjects and more as “anomalies” that need to be removed from the environment. Public certainty about the criminality or immorality of a group becomes a pretext for unspeakable horrors against them.
The choices made about “which label is applied to what case” also shape our future perceptions of objects and concepts. We can see this in something like predictive policing. You might not be a “suspect” in real life, but if the ground truth points to your potential criminality, institutions may treat you as such, and you might begin to see yourself as a “suspect” and subject to pre-emption. The current administration has been wildly effective at increasing the number of groups in the “deviant” category (universities, trans kids, undocumented immigrants, the media, etc.).
We’re all vulnerable to a “ground truth” of our own, assembled from our social media presence. But when the state substitutes the “ground truth” about a population (a selection of details) for the more nuanced lived experience of that group, the consequences are life and death. The easy availability of tools used to algorithmically classify subjects gives states unprecedented power to define deviant and advantaged targets. The same logic applied to undocumented immigrants or foreign nationals traveling to the U.S. can be applied to U.S. citizens whose loyalty to the state can be constructed as suspect on the basis of a handful of social media posts.
A culture that prioritizes action over truth-seeking is dangerous for all of us. A belief in the bedrock liberal principle of universal human value, the Kantian “Kingdom of Ends,” is what protects us from the politics of “ground truth.” As citizens in liberal democracies, we need to return to the practice of seeing our fellow citizens as “complete people” with diverse, complex, dignified lives. If that goal proves too lofty, we can at least return to a “no one knows anything for sure” approach that leaves individuals alone to pursue their own conceptions of the good.
Image Credit: Franz Marc, “Tiger” (1912) via Wikimedia