The Internet is Making Us Stupid
The Internet is making us stupid because of clickbait. This isn’t a new observation; it’s a reality that’s becoming increasingly hard to ignore as its consequences spread. The Internet is also making us smart because it provides us with easy access to an ocean of information that was hard to come by in the past.
These two facts aren’t contradictory; the Internet makes information available, but most of it is clickbait because that’s where the money is. Can the Internet be managed in such a way as to maximize the halo and minimize the horns? Probably, but that’s not really happening today.
Why New Technologies Always Make Us Stupid
The classic treatment of the “making us stupid” trope was Nicholas Carr’s 2008 Atlantic cover story, Is Google Making Us Stupid? Carr examined a lot of brain research and technology history pointing to a loss of attention span. He argued that the Internet is part of an efficiency-oriented industrial system that makes us value quick takes more than long, deep, thoughtful discourse. And he wrote this article before Twitter had even become a widely-used system.
The focus on Google comes from an evaluation of the Google mission and business model:
The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.
This sort of doom-and-gloom always accompanies new technologies: the printing press was predicted to cause a wide range of social ills as well, and these predictions largely came to pass. But an even greater number of benefits appeared as well, many of them unanticipated.
The Internet’s Stupidity Bias
Carr’s argument is ultimately not very satisfying because it’s a condemnation of all technologies, and it’s immediately obvious to all but the Amish that electricity, treated water, plumbing, transportation, communication, and medicine are good things. But there are some features in the Internet as we know it today that make it less inclined to serve our overall interests than it might.
Google’s search algorithm, for example, places greater weight on the popularity of web pages and their contents than on their wisdom, authenticity, or veracity. Evgeny Morozov explored the implications of this feature in a 2012 article in Slate, Warning: This Site Contains Conspiracy Theories. The article examines three kooky movements built around dubious beliefs: vaccination refusers, climate change deniers, and 9/11 truthers. Each of these movements has created a number of online communities in which members trade links, engage in conversations, and indulge shared delusions.
The net result of these interactions on the web is to cause the links most loved by the communities to rise in Google’s rankings. This endows them with an aura of respectability, thus leading to more such communities and greater commitment to (largely absurd) shared beliefs.
Morozov describes the problem of these echo chambers well enough, but he doesn’t offer much in the way of solutions. He wants Google to curate links:
Thus, whenever users are presented with search results that are likely to send them to sites run by pseudoscientists or conspiracy theorists, Google may simply display a huge red banner asking users to exercise caution and check a previously generated list of authoritative resources before making up their minds.
It’s hard to see this having an effect on conspiracy theorists beyond reinforcing their belief that the established authorities are keeping them from the truth.
Facebook is Harming our Democracy
Google wasn’t happy with Carr’s and Morozov’s criticisms (Googlers see themselves as on the side of the angels), but their claims were mild compared to some of the grief that Facebook is getting these days. Timothy B. Lee’s recent article in Vox was positively withering: Facebook is harming our democracy, and Mark Zuckerberg needs to do something about it.
Lee explores the implications of the twin facts that many people get their news from Facebook and that Facebook organizes our news feeds by popularity. This is effectively what would happen if Google didn’t just give us search results ranked by popularity but told us what to search for:
The result has been a disaster for the public’s understanding of current affairs. Reporters have come under increasing pressure to write “clickbait” articles that pander to readers’ worst impulses. Too-good-to-check stories gain more traction online than stories that are balanced and thoroughly reported. That has worsened the nation’s political polarization and lowered the quality of democratic discourse.
… Facebook makes billions of editorial decisions every day. And often they are bad editorial decisions — steering people to sensational, one-sided, or just plain inaccurate stories. The fact that these decisions are being made by algorithms rather than human editors doesn’t make Facebook any less responsible for the harmful effect on its users and the broader society.
Facebook is the new home for the conspiracy freaks who used to organize around blogs. It’s dead simple to set up a Facebook group, and Facebook constantly suggests starting a group to users who are already members of groups. Morozov’s Big Three conspiracy theories are joined on Facebook by food scammers, alternative medicine frauds, alien astronaut theorists, Nazis, and other members of fringe social and political groups.
Fixing the Clickbait Problem
Lee is obviously concerned about the dynamics behind the fringe elements of the Trump movement, but the most extreme Trump supporters are easily outdone by more virulent fraudster-oriented movements. I won’t name them because I don’t want to become their targets, but they’re not hard to find on Facebook.
Just as Morozov suggests that Google curate search results, Lee would like Facebook to step up to its editorial obligations by curating news feeds and even re-writing headlines. These steps will make Google and Facebook less “neutral” in some sense, but it’s more important that, in so doing, they will make the Internet less polarizing and public discourse less toxic.
Perhaps a more productive approach to the bias of Google and Facebook is to be found on the end-user side. We could develop browser and social media plugins that fact check articles or specific claims. It’s not hard to imagine a tool that would check Snopes for articles on outlandish claims that show up in our inboxes or Facebook feeds. And if Snopes is not your fact-checker of choice, there are other sources, especially for political stories.
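A minimal sketch of how such a plugin might flag dubious claims, using a hypothetical local claim database standing in for a real fact-checking service (Snopes offers no public API, so the claims and verdicts here are purely illustrative):

```python
# Toy fact-check lookup a browser or feed plugin might perform.
# The claim database is a hypothetical stand-in for a real
# fact-checking service such as Snopes or a ClaimReview feed.

DEBUNKED_CLAIMS = {
    "vaccines cause autism": "False: contradicted by large-scale studies",
    "the moon landing was staged": "False: extensively documented",
}

def check_article(text: str) -> list[str]:
    """Return a warning for each known debunked claim found in the text."""
    text_lower = text.lower()
    return [
        f"Warning: '{claim}' — {verdict}"
        for claim, verdict in DEBUNKED_CLAIMS.items()
        if claim in text_lower
    ]
```

A real tool would of course need claim matching far more robust than substring search, but the plumbing — scan the page, query a claim database, surface a warning — is straightforward.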
Google and Facebook probably should add fact-checking as a service prominently available to their users. I suspect the only reason there’s not a fact check button on Google is the potential loss of revenue from people reading kook sites and even enrolling in kook communities. But I can’t prove that.
Newspapers are Infected with Clickbait
The corrosive effects of clickbait aren’t merely caused by or limited to Facebook and Google. In some sense, clickbait appeals to our innate longing for membership in social groups, a desire that can sometimes overwhelm our other priorities.
Sharing a mythical belief held by a minority – the hallmark of conspiracy cults – has more value to many than mere truth. As Farhad Manjoo observed in the New York Times last week in How the Internet Is Loosening Our Grip on the Truth:
A study published last year by researchers at the IMT School for Advanced Studies Lucca, in Italy, found that homogeneous online networks help conspiracy theories persist and grow online.
“This creates an ecosystem in which the truth value of the information doesn’t matter,” said Walter Quattrociocchi, one of the study’s authors. “All that matters is whether the information fits in your narrative.”
Finding information that fits into the group’s narrative makes us better members of the group. Manjoo is careful not to examine his newspaper’s treatment of Internet policy, food science, and technology in general. There are plenty of counter-factual reports in the New York Times, most recently a breathless complaint about genetically engineered seed that managed to turn the key facts on their heads.
Life Without Facts
When we don’t have reliable fact-checkers, we don’t have facts. And when we don’t have facts, public policy becomes nothing more than a contest won by whichever side yells the loudest.
While this observation may be painful to Americans on election day, it’s even more sharply felt by the British in the aftermath of the referendum on that nation’s membership in the European Union.
Regardless of our views of the policy implications of the UK seeking its own way out from under the thumb of the European Union (some of the results may be quite good), there’s little doubt that the voting public was misled on key facts:
At the end of a campaign that dominated the news for months, it was suddenly obvious that the winning side had no plan for how or when the UK would leave the EU – while the deceptive claims that carried the leave campaign to victory suddenly crumbled. At 6.31am on Friday 24 June, just over an hour after the result of the EU referendum had become clear, Ukip leader Nigel Farage conceded that a post-Brexit UK would not in fact have £350m a week spare to spend on the NHS – a key claim of Brexiteers that was even emblazoned on the Vote Leave campaign bus. A few hours later, the Tory MEP Daniel Hannan stated that immigration was not likely to be reduced – another key claim.
So UK voters were in the position of US FCC commissioners, casting votes on an order that was yet to be written. No wonder they’re having second thoughts.
The Fact-Checker Ethic Has Failed
At the dawn of blogging, Ken Layne famously said that bloggers could become the antidotes to bias, laziness, and conformity of the mainstream media by fact-checking:
It’s 2001, and we can Fact Check your ass. And you, like many in the Hate America movement, are no longer able to dress your wretched “reporting” in fiction. We have computers. It is not difficult to Find You Out, dig?
Dialing back the aggressiveness, we still haven’t fulfilled this promise. But it’s more than bloggers can do on their own, especially now that blogging has become indistinguishable from the mainstream media itself.
Long Term Clickbait Solutions
As noted, technology can address at least some of the loss of authority, veracity, and analysis that’s come home to roost in a media establishment chasing the almighty click under the social media umbrella.
We could, for example, create a layer to the Internet that uses AI to validate the claims made in science, policy, and politics. Even if the results are equivocal, it would be nice to know when writers are taking well-defined stands in long-running debates. We might call that “the educated web.”
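A crude sketch of one piece of such a layer: tagging a passage with the long-running debate, if any, in which it takes a stand. The debate names and keyword sets below are illustrative assumptions, not a real taxonomy:

```python
# Toy stance-tagging pass for the hypothetical "educated web" layer.
# The debates and their keyword sets are illustrative only; a real
# system would use trained classifiers, not keyword overlap.

DEBATES = {
    "climate change": {"warming", "emissions", "climate"},
    "vaccine safety": {"vaccine", "immunization", "autism"},
    "gmo safety": {"gmo", "genetically", "seed"},
}

def tag_debates(text: str) -> list[str]:
    """Return the debates whose keywords appear in the text."""
    words = set(text.lower().split())
    return [name for name, keys in DEBATES.items() if words & keys]
```

Even this crude tagging hints at the value of the idea: knowing that an article is taking a side in a well-mapped debate is itself useful context, separate from judging whether its claims are true.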
But that’s only going to go so far: fact-checking, whether done by humans or by algorithms, is only capable of filtering the news that Google and Facebook’s algorithms give us. To address the clickbait problem comprehensively, I suspect (but can’t prove) that the financial basis of the web will need to be adjusted in a serious way.
As long as web sites make more money serving up the news equivalent of fast food, that’s what they’re going to cook. The question for the next few years is how to monetize higher-quality writing, videos, VR experiences, etc. High-quality information will only displace the garbage when it becomes more profitable.