Net Neutrality is Antitrust for Dummies

The US Internet is now free of the restrictions imposed by the FCC’s 2015 Open Internet Order. Advocates argue that it’s also free of the protections the order provided to consumers, innovators, and established monopolies; this claim remains to be proved.

Reading today’s media coverage, I was struck by how poorly the media – even the writers and outlets who specialize in tech policy – communicate this issue to the public. The coverage says, with great uniformity, that net neutrality is now defunct.

Tony Romm said “Goodbye to net neutrality…”; Klint Finley wrote “The FCC’s Net Neutrality Rules Are Dead…”; Molly Wood recorded a podcast with a pro-neutrality academic titled “Why The End Of Net Neutrality Might Look Good … At First”; and former FCC press secretary Kim Hart explained “Why it matters: Net neutrality’s demise”. A few milder voices homed in on the fact that today’s new regime simply discarded Title II regulation of ISPs – the more substantial change – but they’re not likely to get much attention.

Media Still Struggles to Define Net Neutrality

We’re still seeing the narrative that net neutrality means treating all bits the same. The ACLU says: “Internet content [your ISP] likes – for political or financial reasons – will be delivered at top speeds, while content it disfavors will be slowed or even blocked.” That’s advocacy spin, but Romm essentially said the same thing, declaring that the Title II regulation “required broadband providers such as AT&T, Charter, Comcast and Verizon to treat all Web traffic equally.”

But ISPs can’t really do that, because web traffic is already unequal when they get it. Websites that employ CDNs – either their own or those provided by specialists – have a speed advantage over those that don’t. Those that invest in large numbers of servers per user are faster than those that don’t. And those that design their pages to avoid off-server elements have an advantage over those that rely on external ad servers.
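
To see how much of this inequality exists before an ISP is ever involved, consider a minimal sketch – my illustration, not anything from the coverage above – that measures time-to-first-byte for two hypothetical sites. The URLs are placeholders: a CDN-fronted page typically answers from a nearby edge node, while a single-origin page answers from wherever its one server happens to live.

```python
# Minimal sketch: compare time-to-first-byte (TTFB) for two sites.
# The URLs below are placeholders, not real measurement targets.
import time
import urllib.request

def time_to_first_byte(url: str) -> float:
    """Return seconds from request until the first response byte arrives."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read(1)  # read a single byte, then stop
    return time.perf_counter() - start

for url in ("https://cdn-hosted.example/", "https://single-origin.example/"):
    try:
        print(f"{url}: {time_to_first_byte(url) * 1000:.0f} ms")
    except OSError as err:
        print(f"{url}: failed ({err})")
```

Run against a real CDN-fronted site and a distant single-origin site, the gap is routinely tens to hundreds of milliseconds – inequality the ISP never touches.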

The best that ISPs can do under a neutrality regime is preserve the inequalities between websites that exist independent of ISPs.

A Better Explanation isn’t Hard

The intent of net neutrality is actually pretty simple: it’s an attempt to prevent ISPs from harming innovation by abusing their position in the Internet ecosystem. In other words, it’s a shortcut to antitrust enforcement, or antitrust for dummies. The legitimate goal of net neutrality isn’t equality, it’s fairness.

Net neutrality was deemed important by its creators because antitrust cases don’t always turn out the way they want them to. They believed net neutrality was superior to classical antitrust because deviations from neutrality should, in their view, be easier to detect than monopolist intentions. And they really, really don’t like the common forms of price discrimination that appeal to economists.

But the creators of net neutrality – Tim Wu, his professor Larry Lessig, and the others at Lessig’s Stanford Center for Internet and Society – recognized that it’s difficult to make the Internet serve the needs of innovators without deviating from neutrality. They admitted that suppressing activities that produce negative externalities – spam and malware, for example – is good for most Internet users.

Neutrality is a Finicky Concept

Neutrality is a difficult concept in practice, as Tim Wu admitted in his original net neutrality paper:

Neutrality, as a concept, is finicky, and depends entirely on what set of subjects you choose to be neutral among. A policy that appears neutral in a certain time period, like “all men may vote”, may lose its neutrality in a later time period, when the range of subjects is enlarged.

This problem afflicts the network neutrality embodied in the IP protocols. As the universe of applications has grown, the original conception of IP neutrality has dated: for IP was only neutral among data applications. Internet networks tend to favor, as a class, applications insensitive to latency (delay) or jitter (signal distortion). Consider that it doesn’t matter whether an email arrives now or a few milliseconds later. But it certainly matters for applications that want to carry voice or video. In a universe of applications, that includes both latency-sensitive and insensitive applications, it is difficult to regard the IP suite as truly neutral as among all applications.

So the Internet is non-neutral in design and non-neutral in operation. IP doesn’t care what datalink it runs on, but applications do; ISPs don’t care which websites are hosted on CDNs, but users do; and treating all packets as equals harms most applications.
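
Wu’s point lends itself to a toy model. The sketch below is my own illustration, not anything from his paper: a link transmits one packet per millisecond, a burst of bulk-transfer packets arrives alongside a few voice packets, and we compare each class’s queuing delay under strict FIFO (“neutral”) service versus a simple voice-first priority queue.

```python
# Toy model: a link serves one packet per millisecond. Compare the queuing
# delay seen by latency-sensitive "voice" packets and "bulk" packets when
# the queue is strict FIFO versus when voice is served first.
from dataclasses import dataclass

SERVICE_MS = 1.0  # transmission time per packet on the link

@dataclass
class Packet:
    kind: str         # "voice" (latency-sensitive) or "bulk"
    arrival_ms: float

def average_delay(packets, voice_first: bool):
    """Serve packets one at a time; return mean queuing delay per kind.

    With a single burst arriving at t=0, sorting voice ahead of bulk is
    equivalent to a strict-priority scheduler.
    """
    if voice_first:
        order = sorted(packets, key=lambda p: (p.kind != "voice", p.arrival_ms))
    else:
        order = sorted(packets, key=lambda p: p.arrival_ms)  # pure FIFO
    clock, delays = 0.0, {"voice": [], "bulk": []}
    for p in order:
        clock = max(clock, p.arrival_ms) + SERVICE_MS  # finish transmitting p
        delays[p.kind].append(clock - p.arrival_ms)
    return {k: round(sum(v) / len(v), 1) for k, v in delays.items() if v}

# One burst: 40 bulk packets and 4 voice packets arrive at the same instant.
burst = [Packet("bulk", 0.0)] * 40 + [Packet("voice", 0.0)] * 4
print("FIFO (neutral):", average_delay(burst, voice_first=False))
print("Voice first:   ", average_delay(burst, voice_first=True))
```

Under FIFO, each voice packet waits behind the whole bulk burst (about 42 ms on average), while serving voice first costs the bulk packets only about 4 ms extra. Treating all packets identically is itself a choice, and not an obviously neutral one.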

Right out of the gate, neutrality was riddled with exceptions that made it difficult to operationalize.

Structural and Behavioral Remedies

Before net neutrality, antitrust ideals were operationalized in broadband markets through the structural approach known as unbundling. Telephone networks were opened to competition in the 1980s by splitting the market into wholesale access to wires and retail access to consumers; initial regulations for DSL in the 1990s followed the same model.

This approach was popular in nations that didn’t have the near-universal cable TV coverage enjoyed by the US, Canada, Belgium, and the Netherlands. Cable-have-not nations had no path to competition other than pretending that 3 or 4 retailers selling the same service was a great thing. (Many analysts justified unbundling as a step up the ladder to true network-level competition, however.)

Cable-haves could pursue the path to genuine network competition in a more straightforward way, but our markets were limited to two providers in most cases; alternate approaches such as satellite and wireless weren’t very robust before the current decade and won’t be full substitutes for wired broadband until the 5G rollout hits high gear.

So net neutrality – especially the ban on harmful discrimination and the disclosure requirement – was floated as an alternative behavioral regulation in a market where structural remedies are undesirable. Even though the EU adopted the structural approach initially, it has come to believe that it needs a behavioral component as well. And because most EU markets are stuck with a single, copper-based DSL network, governments there need to subsidize fiber.

The Only Genuine Solution is Competition Enabled by Technology

It’s best to understand net neutrality as a temporary tool, one to be discarded as soon as wireless networks can deliver high-speed consumer broadband at prices the average consumer can afford and with sufficient quality to run the most important and useful applications. We’re very close to that target today (2–5 years out), so regulators are well advised to look ahead to 5G and to resist the urge to limit broadband markets to legacy networks.

While we don’t want to permit rent-seeking, rapacious behavior by ISPs (any more than we want to allow it from the Internet’s edge monopolies), we also don’t want to impair the path to more efficient markets by over-regulating the inefficient one we have today.

The Obama Title II broadband regulations sought to perfect a market that’s inherently limited to two players; the new approach enacted by Chairman Pai’s FCC last December seeks to blow up this limited market by bringing 5G-enabled ISPs into the mix.

Pai’s Risky Bet

Pai’s approach is bold and risky. 5G may fail to meet expectations for price, performance, and coverage right away; there may be wars over 5G patents; some cities will make small-cell deployment difficult; and new applications may not emerge immediately.

But the alternative is technical stagnation and endless debates over which kinds of discrimination are good and which are bad; advocates, journalists, and politicians misleading the public about the dangers we face online; and the Groundhog Day scenario where no issue is ever settled.

Frankly, waiting for 5G strikes me as a better bet. At least it offers some hope.