Content/Trigger Warning: Suicide/Suicidal Ideation
Towards the tail end of 2022, an inquest found two social networks (Instagram and Pinterest) partly responsible for the death of one of their users. The coroner ruled that the two sites contributed to her death "in a more than minimal way". (See the links in "Resources" for details.) It's not the self-aware Skynet and its terminators that will kill us; the antisocial media networks are already getting us to do it ourselves, which is far worse.
The defence these companies offered was that they are not responsible for the content on their sites. They claim their sites are mediums, and mediums are neutral: sure, people can post and consume terrible things while they're there, but those people are directly responsible. The medium isn't responsible for the purposes for which it's used, right?
"Technology is neither good nor bad, nor is it neutral."
— Melvin Kranzberg (1917-1995), professor of the history of technology at Georgia Tech (1972-1988); Kranzberg's First Law
Wrong. The mediums aren't neutral. Social networks aren't just "mediums", either; they're the products of companies with agendas. Their mechanics aren't inert or unintentional or inevitable; they're designed for a specific purpose: extending and maintaining engagement. They're technologies, and specifically technologies that have evolved to meet particular psychological and financial ends. Their creators want us to feel something. The catch is that the thing they want us to feel is literally bad for us.
Let's go past the obvious. Yes, they want to addict us. Yes, they are designed to continually grab our attention. Yes, they revolve around "likes" and "notifications", and are designed to make us crave both, as well as get us to "create content" that will generate both, in between listlessly scrolling to find more. There's a reason both drug dealers and software/Web developers refer to people as "users": both trades depend on addiction to the harmful products they're peddling.
Anxiety, Depression and Radicalisation
All of that is true, but it's not really where the problem lies. Why do social networks exacerbate anxiety and depression so much? Why are they all so consistently good at radicalising their users? It goes beyond red notification badges (the colour is a conscious choice; red signals alarm and urgency) and infinitely-scrolling feeds. At the heart of all these mechanics lies a simple but non-obvious truth: social networks are designed to put us in competition with one another, and on some level the game board is our very sense of self.
Whatever the particular game, and no matter how carefully you design and craft it, your "character" will at some point become conflated with you, your very identity (as much by yourself as by others). Are you interesting? Are you beautiful? Are you funny? Do you have enough friends? Are you enjoying your life enough? Are you rich? Are you famous? Are you right? Are your feelings and opinions the correct ones to have? Are the things you enjoy, objectively speaking, the most correct things you can enjoy?
The more invested you get (and these games are expertly designed to make you feel invested), the less it suffices to play completely honestly, "as yourself". There is an increasing pressure to perform, to outdo, to win at all costs. But your "reward" isn't really a reward, for two reasons:
- Any chemical reward your brain produces for "being liked" (a dopamine hit) is extremely fleeting unless it's connected to some genuine, intrinsic benefit, like meaningful connection. Social networks aren't designed to offer you that, and that's not by accident.
- More perniciously, the actual "reward" (in the schema of the game) is that you make other people feel worse, driving them to try and outdo you.
This is the Möbius strip of consequence that defines social networking: You doing "well" is what causes other people to "lose," usually in ways designed to make them feel specific forms of inadequacy. Unsurprisingly, they react (not respond) to those feelings by doubling down, becoming angry, pushing those feelings onto you instead.
It's a game without an actual winner, because nobody is accurately depicting themselves, their circumstances or their situations; they're lying to make other people feel bad, projecting images designed, somewhat consciously but mostly through social-network manipulation, to make other people feel more alienated and alone (the carefully curated highlight reel). This leads to the phenomenon that defines the social-media age: literally billions of people are unified in these feelings of resentful isolation, yet that unity is sundered by the fact that they've all been taught to see each other as the enemy and to believe they truly are alone. If cities are places where people are alone together, antisocial media is that turned up to eleven. If you actually want to connect with people in a meaningful way, you must disconnect from antisocial media and not go back. Of course, going cold turkey takes enormous willpower, the very thing eroded by the instant-gratification-seeking behaviour social media has trained into us.
The algorithms are designed to blindly push people towards whatever generates the most engagement; in other words, towards whatever makes them feel most compelled to commit to something. Perversely, that often means pushing people towards the things that make them feel the worst, because that's what drives them to lash out or overcompensate in some way. Lowest-common-denominator content obviously holds some advantage (hence the cute animals popping up everywhere), but ultimately that's not quite enough. The most successful things (successful to the site, that is) are also aggravating, irritating, enraging or isolating on some level, because that's what sets the feedback loop into motion.
The algorithm doesn't understand what it's pushing, but it has gathered plenty of data on what it pushes and what works. It helps users encourage one another to despair, because the more you despair, the more you need an outlet for venting that despair. Hence the prevalence of communities that revolve around suicidal ideation, or eating disorders, or explaining why slight variations in the shape of a male forehead determine whether women feel a biological need to screw over a man they barely know (if at all). The unifying trait of these communities is that they consist of people encouraging each other to keep going, confirming for one another that their worldview is right while constantly upping the ante, escalating the bleakness and ratcheting up the sense of urgency. These communities proliferate because they do the algorithm's work for it, so the algorithm keeps recommending them to new people (new users, if you will). In simple terms, worse is better (as far as the algorithms are concerned, anyway), because humans engage far more readily with bad news than with good. Newspaper publishers knew this, and capitalised on it, for decades before antisocial media existed.
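To make that mechanic concrete, here's a deliberately crude caricature in code. To be clear, this resembles no real platform's ranking system; the "outrage" score and the engagement model are assumptions invented purely for the sketch. The point it makes is a narrow one: a ranker that optimises engagement per view, sight unseen, converges on whatever content happens to correlate with engagement.

```python
import random

random.seed(42)

# Each post gets an invented "outrage" score. The toy assumption, stated up
# front: engagement correlates with outrage, not with anyone's well-being.
posts = [{"id": i, "outrage": random.random(), "impressions": 0, "engagement": 0.0}
         for i in range(100)]

def simulate_view(post):
    """A user sees the post; engagement tracks outrage, plus some noise."""
    post["impressions"] += 1
    post["engagement"] += post["outrage"] + random.gauss(0, 0.1)

def rank(posts):
    """The ranker never looks at content, only at engagement per impression."""
    return sorted(posts,
                  key=lambda p: p["engagement"] / max(p["impressions"], 1),
                  reverse=True)

# The feedback loop: surface the "best" posts, measure, re-rank, repeat.
for _ in range(2000):
    feed = rank(posts)[:10]
    if random.random() < 0.3:        # occasionally try something unproven
        feed[-1] = random.choice(posts)
    for post in feed:
        simulate_view(post)

top_five = rank(posts)[:5]
pool_avg = sum(p["outrage"] for p in posts) / len(posts)
feed_avg = sum(p["outrage"] for p in top_five) / len(top_five)
print(f"average outrage, whole pool:       {pool_avg:.2f}")
print(f"average outrage, what gets pushed: {feed_avg:.2f}")
```

Run it and the feed's average "outrage" lands far above the pool's, even though not one line of the code inspects content. That's the blindness in action: the ranker doesn't hate anyone; it simply can't see anyone.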
The Paperclips AI
Do you know the popular story about the "paperclip AI", the machine that gets told to make paperclips as optimally as possible and ends up destroying the world by turning all of it into paperclips? I've seen it suggested that publicly-traded corporations are a version of this AI: companies like Chevron can't help setting the world on fire, because they're an algorithm tooled towards maximising profits at the expense of literally anything else. Social media algorithms work the same way: they will destroy communities and human connection, and they will do their damnedest to destroy your soul too, because they don't care about human beings; they care about maximising profits for the companies that made them.
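For what it's worth, the paperclip logic fits in a dozen lines. This is a toy sketch, not a model of any actual company or system; the resource names and the one-for-one conversion rate are invented. The point is simply that an optimiser handed a single metric and no constraints will liquidate everything it can reach to move its number up.

```python
# A toy "paperclip maximiser": one metric, no constraints, no awareness
# that the things being consumed might matter. All names are invented.
world = {"forests": 100, "communities": 100, "attention_spans": 100}
profit = 0

def most_extractable(world):
    """Pick whichever resource is currently most plentiful (i.e. cheapest)."""
    return max(world, key=world.get)

while any(world.values()):
    resource = most_extractable(world)
    world[resource] -= 1  # the cost is externalised; the optimiser never sees it
    profit += 1           # the only number the optimiser is told to care about

print(profit, world)  # 300 {'forests': 0, 'communities': 0, 'attention_spans': 0}
```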
The "optimal user," to these algorithms, is the most horrific image of a person that you can imagine — the kind of person who'd annihilate a part of your mental well-being if you so much as glimpsed them. That person might be a vapid influencer or a political radical or someone who isn't just suicidally depressed but actively addicted, in a deeply disturbing way, to suicidal ideation. Worse, this person could be all three! Make no mistake, however; this being is what the algorithms are designed to produce, regardless of whether or not their creators are smart enough or willing enough to anticipate the end results (or admit that they have). The monstrosity isn't an unintentional byproduct; it's the precise thing these sites want to generate, even if they'd rather limit it just enough to maintain plausible deniability or even a good night's sleep. It's one of the reasons why I mistrust AI.
As the economy crumbles and the social safety net falls away, and as younger and increasingly poorer generations are (incorrectly) taught that the only way out is to hustle, their livelihoods and their futures start to look exactly like the hellscape that is antisocial media. Whether you're driving for Uber, posting on OnlyFans, living in a TikTok influencer house or, hell, trying to invent the next big social network (FSM forbid), you're just trying to be the one person who does well enough to destroy all the others, crossing your fingers and praying that the algorithms don't abruptly change and pull the rug out from under your feet.
Conclusion
Ultimately, all that the algorithms want is engagement. They want you obsessed with other people and obsessed with yourself — and, crucially, they want you to hate both those people and yourself. That's the algorithmic version of perpetual motion. It leads to Bored Apes, murders, suicides and not much else. None of it is remotely social, which is why I refer to such networks as "antisocial media", for lack of a better term.
What perturbs me most is that, even though I know all this, rail against these sites and actively do my best to stay away from them, I cannot resist going back to them; I, too, am addicted. There definitely are some positive attributes, but finding them and continuing to avoid the dangerous ones is like swimming upstream against a strong current. Despite the proliferation of emoticons (mainly an exercise in hyperbole), we're slowly drowning, not waving. We're too caught up in the current to get out of the river. That's just how Zuckerborg and his peers want it.
Comments have been closed, for reasons pretty much expressed right in this post.
Thumbnail image: Photo by Cottonbro on Pexels