A Different Way of Thinking About Cancel Culture

An even more sinister version of this operates retrospectively, through search results. An employer considering a job candidate does a basic Google search, finds an embarrassing controversy from three years ago and quietly moves on to the next candidate. Wokeness has particular economic power right now because corporations, correctly, don’t want to be seen as racist and homophobic, but imagine how social media would have supercharged the censorious dynamics that dominated right after 9/11, when even french fries were suspected of disloyalty.

Tressie McMillan Cottom, the sociologist and cultural critic, made a great point to me about this on a recent podcast. “One of the problems right now is that social shame, which I think in and of itself is enough, usually, to discipline most people, is now tied to economic and political and cultural capital,” she said.

People should be shamed when they say something awful. Social sanctions are an important mechanism for social change, and they should be used. The problem is when that one awful thing someone said comes to define their online identity, and then it defines their future economic and political and personal opportunities. I don’t like the line that no one deserves to be defined by the worst thing they’ve ever done — tell me the body count first — but let’s agree that most of us don’t deserve to be defined by the dumbest thing we’ve ever said, forever, just because Google’s algorithm noticed that that moment got more links than the rest of our life combined.

I think this suggests a few ways to make online discourse better. Twitter should rethink its trending box, and at least consider the role quote-tweets play on the platform. (It would be easy enough to retain them as a function while throttling their virality.) Fox News should stop being, well, Fox News. All of the social media platforms need to think about the way their algorithms juice outrage and supercharge the human tendency to police group boundaries.

For months, when I logged onto Facebook, I saw the posts of a distant acquaintance who had turned into an anti-masker, and whose comment threads had turned into flame wars. This wasn’t someone I was close to, but the algorithm knew that what was being posted was generating a lot of angry reaction among our mutual friends, and it repeatedly tried to get me to react, too. These are design choices that are making society more toxic. Different choices can, and should, be made.

The rest of corporate America — and that includes my own industry — needs to think seriously about how severe a punishment it is to fire people under public conditions. When termination is for private misdeeds or poor performance, it typically stays private. When it is for something the internet is outraged about, it can shatter someone’s economic prospects for years to come. It’s always hard, from the outside, to evaluate any individual case, but I’ve seen a lot of firings that probably should have been suspensions or scoldings.

This also raises the question of our online identities, and the way strange and unexpected moments come to define them. A person’s Google results can shape the rest of that person’s life, both economically and otherwise. And yet people have almost no control over what’s shown in those results, unless they have the money to hire a firm that specializes in rehabilitating online reputations. This isn’t an easy problem to solve, but our lifelong digital identities are too important to be left to the terms and conditions of a single company, or even a few.
