The Great Social Media Freakout

Senator Al Franken got Silicon Valley’s attention by proposing to apply net neutrality regulations to mega-gatekeepers Google, Facebook, Twitter, et al. Writing in The Guardian, the senator correctly observed that “these companies have unprecedented power to guide Americans’ access to information and potentially shape the future of journalism.”

The argument for regulating the gatekeepers is at least as sound as the logic for net neutrality regulations on broadband networks. Any firm that mediates Internet interactions has the power to manipulate our access to information, to persuade us to believe what they want us to believe, and to hide information they don’t want us to see.

The Spider-Man Rule, that power implies responsibility, clearly applies every bit as much to a content platform as to a network platform. Considering the incentives content platforms have to alter our reality by virtue of their advertising-heavy business models, the argument for regulatory scrutiny is strong.

But…Net Neutrality is Just for NETWORKS, Dammit!

Well, that’s what we’ve been told, anyway. The hard core of the net neutrality movement was quick to lash out at the Senator. TechDirt’s Karl Bode dismissed Franken as a kook waving a “magic wand”:

And to be clear, net neutrality is something specific to the uncompetitive telecom industry…You don’t need search or app store neutrality rules because those markets are actually competitive…Users can and should choose to not visit Facebook if they find the company’s ethics troubling. You can use Duck Duck Go if you’re understandably wary about Google’s schnoz up in your business.

[Emphasis in original.] For all his passion, Bode doesn’t understand the issues that Franken and the others on both sides of the Senate Judiciary Committee are raising. Content platforms don’t just get up in our business while we’re on their sites; they essentially wiretap our visits to every website that incorporates their tracking code. TechDirt, for example, tracks visits for Google and Twitter.
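To make the wiretap point concrete, here’s a minimal sketch of what an embedded third-party analytics snippet does on every page load. The endpoint and parameter names below are hypothetical, not Google’s or Twitter’s actual code, but the basic flow (report the page, the referrer, and a cross-site identifier back to the tracker) is the essence of the technique:

```typescript
// Hypothetical third-party tracking beacon of the kind a publisher embeds.
// Real trackers (Google Analytics, Twitter widgets, etc.) differ in detail.
function reportPageView(trackerAccountId: string): void {
  const payload = new URLSearchParams({
    account: trackerAccountId,   // which publisher's account gets credit for the hit
    page: location.href,         // the article the visitor is reading right now
    referrer: document.referrer, // where the visitor came from
  });
  // The browser attaches any cookie the tracker's domain has already set,
  // letting the tracker link this visit to the same person's visits elsewhere.
  navigator.sendBeacon("https://tracker.example.com/collect", payload);
}

// The publisher's page template calls this on every page load.
reportPageView("ACCT-12345");
```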

Because you and I use the content platforms for free, we’re not their customers. We’re the products they sell to their actual customers: advertisers. Advertisers who don’t like Google can’t simply shift their spending to Duck Duck Go to reach potential customers, because almost nobody uses Duck Duck Go.

How Would Net Neutrality Work for Platforms?

Wired’s Nitasha Tiku (former editor-in-chief of gossip blog Valleywag) sneered at the apparent contradictions in a platform-oriented net neutrality regime:

Applying net neutrality rules to Google or Facebook, for example, could make them obligated to distribute content from political extremists and even foreign propaganda under some circumstances. Unfortunately for Silicon Valley, lack of solutions never stopped a congressional hearing.

Tiku is apparently trying to apply the “no blocking” rule from net neutrality without the exception for unlawful content, the network management exception, or the broad discretion networks have to block content that falls outside the advertised scope of their services.

Content-Based Censorship is Consistent with Net Neutrality

As Brent Skorup has pointed out, the FCC’s 2015 Open Internet Order (OIO) explicitly allows ISPs to engage in content-based censorship to meet a business objective:

But Brent Skorup, a research fellow at George Mason University’s Mercatus Center, in a meeting with FCC officials earlier this month, said content-based regulation was just what the agency had attempted to do in the Open Internet order, and does not withstand the strict constitutional scrutiny such speech-based regulation demands.

The key is a 2015 U.S. Supreme Court ruling, Reed vs. Town of Gilbert, having to do with an ordinance that allowed for different treatment of signs — religious, political, directional — based on content.

The court concluded, “A law that is content-based on its face is subject to strict scrutiny regardless of the government’s benign motive,” in this case allowing for discrimination on the basis of family friendliness.

Skorup says the OIO is unlawful because of its content-sensitive standard, but unless the Supreme Court grants Daniel Berninger’s petition for certiorari and hears his First Amendment challenge, it remains the law.

But Tiku is right that the net neutrality framework is very confused with respect to content. Policy makers don’t want fake news on social platforms, but fake news is protected by the First Amendment. Platforms are free to censor it if they wish, but the government can’t order such censorship.

Senator Franken has the same problem with limiting the gatekeeper power of social networks that the OIO has with compelling speech on the part of ISPs. The First Amendment says the government can’t pre-emptively censor speech, as we all know.

The jurisprudence also prevents it from compelling speech – as the “no blocking” rule does – absent a compelling state interest. So the issue with applying net neutrality to platforms is the same as the problem with applying it anywhere.

Franken is Asking the Right Questions

Net neutrality is a poor instrument for addressing the issues Congress has finally decided to confront with content platforms because it’s a very confused and shallow notion in any context. Franken correctly argues that platforms have the power to shape opinion. Even hardline net neutrality fan Jon Brodkin of Ars Technica acknowledges this reality:

Franken wrote that “Everyone is rightfully focused on Russian manipulation of social media,” but that lawmakers should “ask the broader questions. How did big tech come to control so many aspects of our lives? How is it using our personal information to strengthen its reach and its bottom line? Are these companies engaging in anti-competitive behavior that restricts the free flow of information and commerce?”

These are good questions, but they’re also very difficult ones. Facebook and Google have developed powers of persuasion because their business models demand them. The more time we spend on Facebook, the more ads the platform can serve us and the more money it makes. The more searches we do on Google, the more ads the company sells. And this logic applies to the web sites instrumented with Google’s and Facebook’s tracking code as well.
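The arithmetic behind that incentive is straightforward. Here’s a back-of-the-envelope sketch (the rates are illustrative assumptions, not Facebook’s actual figures) of how a few extra minutes per user translates into real money:

```typescript
// Illustrative assumptions only; not Facebook's or Google's actual numbers.
const adsPerMinute = 0.5;      // assume one ad impression every two minutes on site
const revenuePerThousand = 8;  // assumed CPM (dollars per thousand impressions)

function dailyRevenue(users: number, minutesPerUser: number): number {
  const impressions = users * minutesPerUser * adsPerMinute;
  return (impressions / 1000) * revenuePerThousand;
}

// Ten extra minutes per day across a billion users, under these assumptions:
console.log(dailyRevenue(1e9, 50) - dailyRevenue(1e9, 40)); // $40 million more per day
```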

Exploiting Human Vulnerabilities

It all comes down to the devil’s bargain in advertising-based business models. Sean Parker, Facebook’s founding president, explained this in an interview that’s even more important than Franken’s questions.

Speaking to Axios, Parker explained that Facebook was designed to maximize engagement:

  • “The thought process that went into building these applications, Facebook being the first of them, … was all about: ‘How do we consume as much of your time and conscious attention as possible?'”

  • “And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content, and that’s going to get you … more likes and comments.”

  • “It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”

  • “The inventors, creators — it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people — understood this consciously. And we did it anyway.”

Like lab rats pressing buttons to have their brains’ pleasure centers stimulated, Facebook users can develop a kind of addiction to notifications. Rats in brain stimulation reward (BSR) studies will forgo food, water, and sex for the reward, and Facebook users will often prefer engagement in fringe ideological groups to rational sharing and communication.
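Parker’s “social-validation feedback loop” is, in engineering terms, a variable-reward schedule: the payoff for checking the app is unpredictable, which is precisely what keeps the rat pressing the lever. The sketch below is purely illustrative, not anything Facebook has disclosed:

```typescript
// Purely illustrative variable-reward scheduler; not based on any actual platform code.
function maybeNotify(pendingLikes: number): string | null {
  if (pendingLikes === 0) {
    return null; // nothing to report
  }
  // Release the accumulated reactions only some of the time, at random,
  // so the user can't predict which check of the app will be rewarded.
  if (Math.random() < 0.3) {
    return `${pendingLikes} people reacted to your post`;
  }
  return null; // hold the reward back until a later check
}

// Each time the user opens the app, the loop either rewards or withholds.
console.log(maybeNotify(4));
```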

How Do We Fix This Mess?

The good news is that the social media BSR problem may just fix itself. Some research shows that heavy social media users become less happy. In time, they’re bound to realize this and disengage.

Disengagement will lead the social network architects to revise their systems, and this may be productive for everyone. So the negative incentives created by the advertising model are counterbalanced by the incentive to retain and broaden the user base.

Government censorship and compelled speech aren’t likely answers because, for one thing, they’re not permitted by the First Amendment. So users and platform operators will have to resolve the issues on their own. The people who made the current Facebook will just have to keep refining it.

Net neutrality will not fix social media, and it will also not “fix” broadband networks.

Key Takeaways

A key takeaway from the Great Social Media Freakout of 2017 is that content platforms and network platforms are not very different. They both connect people to people and people to information. Each contributes to the success of the other, often in peculiar ways.

The most important realization is that the fundamental claim of the net neutrality movement – that policing networks is the only thing needed to ensure the health of the Internet – is false.

Problems and opportunities arise all over the Internet marketplace, and in most cases the best government response is to highlight them, measure them, raise awareness, and leave it to the technologists to come up with solutions. We always have and we always will.