I. Am. Outraged.
Why fake news catches fire and spreads so quickly on social media
By Michael Patrick Lynch
When we share media stories online we are sharing information we believe and/or endorse, right? Wrong.
First, studies show that much of the time we do not read what we share. Second, they show that we do share content that gets people riled up.
Research has found that the best predictor of sharing is strong emotions — both emotions like affection (think posts about cute kittens) and emotions like outrage. One study suggests that morally laden emotions are particularly effective: each additional moral-emotional word in a tweet increases its chances of being shared by about 20%.
And plausibly, social media actually tends to increase our feelings of outrage. Acts that would elicit only mild outrage offline elicit much more of it online. This intensification may be due in part to the fact that the social benefits of expressing outrage online, such as increased tribal bonding, still exist and are possibly amplified, while the risks of expressing outrage are lessened — on the internet, it is harder for those you are yelling at to strike back with violence. Moreover, outrage can itself simply feel good.
And since our digital platforms are designed to maximize shares and eyeballs on posts — and outrage does that — it is not surprising that the internet is a great mechanism for producing and encouraging the spread of outrage.
As the neuroscientist Molly Crockett puts it, “If moral outrage is like fire, then social media is like gasoline.”
Put together, these points — what we are doing with our shares and what we are not doing — make it difficult to believe that the primary function of our communicative acts of sharing is really either assertion or endorsement, even though that’s what we typically think we are doing.
We think we are sharing news stories in order to transfer knowledge, but much of the time we aren’t really trying to do that at all — whatever we may consciously think. If we were, we would presumably have read the piece that we’re sharing. But most of us don’t. So what are we doing?
A plausible hypothesis is that the primary function of our practice of sharing content online is to express our emotional attitudes. In particular, when it comes to political news stories, we often share them both to display our outrage — broadcast it — and to induce outrage in others.
As Crockett has noted, expression of attitudes like moral outrage is one way that tribes are built and social norms enforced. Social media is an outrage factory. And paradoxically, it works because most folks aren’t aware, or don’t want to be aware, of this point. But it is just this lack of awareness that trolls and other workers in the fake news industrial complex find so useful.
Purveyors of fake news are keenly aware that when we share, we’re doing something different from what we think we’re doing.
Digital platforms are intentionally designed to convey emotional sentiment — because the designers of those platforms know that such sentiment is what increases reshares and ups the amount of attention a particular post gets. And whatever does that makes money.
I am not saying that we don’t endorse and assert facts on social media. Of course we do — just as some of us read what we share. Moreover, it is plausible to take ourselves to be endorsing or asserting that part of a shared post that we typically do read: the headline. Our communicative acts online can do many things at once.
But if you want to understand what I’m calling the primary function of a kind of communicative act, you need to look at the reason that the act continues to be performed. And in the case of sharing online content, that reason is the expression of emotional attitudes — particularly tribal attitudes.
Why? Because expressions of tribal emotional attitudes like outrage are rewarded by the amount of shares and likes they elicit.
The expressivist account of online communication is also compatible with the fact that we do form beliefs and convictions as a result of sharing attitudes.
Compare “team-building” exercises. These kinds of exercises (like falling back into your colleague’s waiting arms) are not directly aimed at conveying information or changing your mind. They are aimed at building emotional bonds with your coworkers. But if all goes well, that will have a downstream effect on what you believe. In learning to trust your team members, you will come to believe that this is the team you want to be on.
A similar thing happens during the training of military recruits. Many of the exercises that new soldiers are put through are aimed at building trust and self-confidence. But especially in wartime, they are also aimed at making soldiers hate the enemy. This aim, too, has downstream effects: The soldiers come to believe they are fighting on the right side.
Social media is like boot camp for our convictions. It bolsters our confidence, increases trust in our cohort, and makes us loathe the enemy. But in doing so, it also makes us more vulnerable to manipulation and feeds our hardwired penchant for being know-it-alls.
We think we are playing by the rules of rationality — appealing to evidence and data. But in fact, the rules we are playing by are those that govern our self-expressions and social interactions — the rules of the playground, the dating game, and the office watercooler. These rules have more to do with generating and receiving emotional reactions, solidifying tribal membership, and enlarging social status than with what is warranted by the evidence and what isn’t.
This emphasis on emotional reactions is perhaps most obvious on Facebook where the stated goal, after all, is emotional connection. Consider how the platform encourages us to react to posts that we share with one another. It used to be that one could only “like” a post or refrain from liking it. But now Facebook offers the choice of a few different reactions, each corresponding to a basic emotion and represented by easily recognizable emoticons: frowny face, happy face, surprised face, and of course, angry outrage face.
My experience in using these emoticons, which I suspect is widely reflected in others’ use as well, is that they have a deep impact on how you think about the pieces being shared. For one thing, the emoticons that other people in your network choose in reacting to a post can strongly affect how you yourself react. That effect is similar to the effects of social pressure offline.
If everyone in your workplace dislikes something someone said or did, it is difficult not to show a similar reaction. Similarly, if your friends express outrage at a news piece, it can feel awkward not to do so yourself. And independently of that factor, the emoticon you choose can help condition how you comment on the post, if you do comment. If you choose the angry emoticon, for example, it is extremely unlikely that you will then comment by saying that the piece in question really made you think.
Now consider a thought experiment.
Imagine that instead of the emoticons, we had a choice of three buttons that we could use when sharing a news story or other claim to fact: “justified by the evidence,” “not justified by the evidence,” and “need more information.”
How might having these choices — instead of emoticons aimed at the most basic human emotions — condition how we would engage with what we share and what we don’t share?
One thought — no doubt overly hopeful — is that they would make at least some of us more reflective or thoughtful. We might even be less eager to share something we haven’t read, because we would expect people’s reactions to hinge not on their outrage or joy but on the evidence they perceive the piece to communicate. It might encourage some of us to be more skeptical, and humbler, ourselves.
But unless the basic digital economy changed, my hypothesis is that eventually, we would start treating all three buttons emotively. Eventually — as the old expressivists would have predicted — we would start to use the language of evidence to express feelings, not considered opinions.
We could play on the emotions of others to get them to rate as “justified by the evidence” items that nonetheless go unread. And we might engage in spreading fake news and misleading evidence. So, not as much might be gained as we would wish.
Yet even if, in the way of thought experiments, this one is idealized, it highlights a crucial point.
Just changing the surface appearances of our social-media platforms won’t help. As long as we ignore the fact that their underlying economy rewards the expression of strong emotion over reflection, we will continue to deceive ourselves about the real nature of much of our communication on those platforms.
We will continue to contribute, unwittingly or otherwise, to a corrupted information culture. And we will continue to make ourselves vulnerable to information polluters who revel in that corruption and take advantage of our naïveté — all the while complaining that our critics are peddling fake news.
This article was excerpted from Michael Patrick Lynch’s latest book, “Know-It-All Society: Truth and Arrogance in Political Culture,” published in August by Liveright, a division of W. W. Norton & Company. Lynch is a Board of Trustees Distinguished Professor of philosophy and director of UConn Humanities Institute.