Training Wheels for the Internet

Tim Wu writes an impassioned, 3,000-word criticism of FCC Chairman Ajit Pai’s Restoring Internet Freedom initiative in Wired. The article provides a sweeping overview of telecom regulation since 1970, concluding that Pai is about to strangle the Internet:

[The] broader issue: the elimination of protections that have been around since 2005 (arguably since 1970) and which have driven billions, if not trillions, of dollars in both investment and development of new markets, like streaming video.

Wu is clearly on board with the “end of the Internet as we know it” crowd. He argues that the Internet is neutral by design; that the FCC protected it from ISP abuse even when it relied on Title I for authority; and that ISPs will wreck the Internet unless the US rolls the regulatory clock back to 1970.

It appears that Tim Wu loves a fanciful, fuzzy, idealized version of the Internet rather than the real system, warts and all.

Co-opting Pai’s Argument

Oddly, Wu tries to steal Chairman Pai’s argument that historical US telecom policy is fundamentally sound. This goes back to the Computer Inquiries that distinguished “basic” telecom transmission services from “enhanced” information services. Wu’s error is his failure to recognize that Internet Service Providers have generally been regarded as providers of enhanced services, subject to light regulation with a few modest exceptions. ISPs weren’t treated as Title II telecom carriers until 2015.

Here’s the way the FCC saw this issue in the 1998 Universal Service Report:

73. We find that Internet access services are appropriately classed as information, rather than telecommunications, services. Internet access providers do not offer a pure transmission path; they combine computer processing, information provision, and other computer-mediated offerings with data transport.

The report discusses the argument for classifying ISPs under Title II, noting that the proponents’ logic would also make email a Title II service, before arriving at its conclusion. The report also traces Title I back to the Computer Inquiries.

So the issue isn’t whether the FCC has distinguished basic from enhanced for a long time, it’s where ISPs should be placed.

Does The Internet Crave Neutrality?

Like his fellow protégé of Lawrence Lessig, Barbara van Schewick, Wu argues that the Internet is neutral by design, thanks to what he terms “the ‘end-to-end’ principle of network design.” He sees this principle in terms of layering:

Among the key features of the internet was its “layered” design, which was agnostic both as to the means used for carrying information and to what the network could be used for. The goal of the internet was to connect any network and support any application—hence, to be a “neutral” network.

This is simply a misstatement of network layering, a way of thinking about network functions in terms of scope.

Here’s the quick version of layering: the physical layer encodes bits on a wire or a radio channel; the data link layer moves information across a single network; the network layer moves information between networks; and the transport layer moves information from device to device. The remaining layers relate to applications and can be collapsed into a single layer.
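To make that concrete, here’s a minimal sketch of encapsulation in Python. The header fields are simplified stand-ins rather than real wire formats; the only point is that each layer wraps whatever the layer above hands it without caring what’s inside:

```python
# Minimal illustration of protocol layering: each layer prepends its own
# header to whatever the layer above hands it, without inspecting it.
# The field names below are simplified stand-ins, not real wire formats.

def transport_wrap(app_data: bytes, src_port: int, dst_port: int) -> bytes:
    # Transport layer: moves data from process to process (ports).
    header = f"TRANSPORT src_port={src_port} dst_port={dst_port}|".encode()
    return header + app_data

def network_wrap(segment: bytes, src_addr: str, dst_addr: str) -> bytes:
    # Network layer: moves data between networks (addresses).
    header = f"NETWORK src={src_addr} dst={dst_addr}|".encode()
    return header + segment

def link_wrap(packet: bytes, src_mac: str, dst_mac: str) -> bytes:
    # Data link layer: moves data across a single network (MAC addresses).
    header = f"LINK src={src_mac} dst={dst_mac}|".encode()
    return header + packet

if __name__ == "__main__":
    app_data = b"GET / HTTP/1.1"                    # application layer
    segment = transport_wrap(app_data, 49152, 80)   # transport layer
    packet = network_wrap(segment, "192.0.2.1", "198.51.100.7")  # network layer
    frame = link_wrap(packet, "aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02")
    print(frame)  # the physical layer would encode these bytes on a wire
```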

[See our video podcast for an expert discussion of this concept.]

End-to-End Doesn’t Dictate Policy

Nothing about this organization has policy implications because it’s fairly obvious that protocol layers are universal abstractions. Every inter-network is organized in this way because these steps are inevitable in an interconnected network of computers. It’s an amateur mistake to read neutrality into a universal structure.

Wu relies on a 1981/84 paper by two MIT graduate students and their advisor, “End-to-End Arguments in System Design,” to extract policy from network organization. The paper is not about networks; it’s a recommendation for developers of applications designed to run on networks or other types of distributed systems.

It says that it’s generally easier to provide new features in a distributed system if you don’t depend on the network, the operating system, or some other element to provide them for you. It also points out that there may be communication errors between the new application and the rest of the system, so it’s not safe to trust other people (or systems) to do your job for you.
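The paper’s canonical example is careful file transfer: the endpoints check integrity themselves rather than trusting the network, or anything in between, to do it for them. Here’s a minimal sketch of that idea in Python; the sender and receiver helpers are hypothetical stand-ins for a real transfer:

```python
# End-to-end integrity check: the sending and receiving applications verify
# the data themselves instead of relying on lower layers to do it for them.
import hashlib

def sender_prepare(payload: bytes) -> tuple[bytes, str]:
    # The sender computes a digest over the exact bytes it intends to deliver.
    return payload, hashlib.sha256(payload).hexdigest()

def receiver_verify(payload: bytes, expected_digest: str) -> bool:
    # The receiver recomputes the digest end to end; corruption introduced
    # in transit or by an intermediate system is caught here.
    return hashlib.sha256(payload).hexdigest() == expected_digest

if __name__ == "__main__":
    data, digest = sender_prepare(b"important file contents")
    # ... data and digest travel over any network, reliable or not ...
    assert receiver_verify(data, digest)             # delivered intact
    assert not receiver_verify(data + b"!", digest)  # corruption detected
```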

Shortcomings of the End-to-End Argument Paper

The paper ignores some important issues: a feature implemented from scratch for one and only one application is more likely to be buggy than a feature shared by many apps; features useful to one app are often duplicated (in whole or in part) in other apps; and features coded into applications (rather than the system core) are prone to wide fluctuations in performance.

The paper even admits that its “arguments” are not hard and fast rules:

Low level mechanisms to support these functions are justified only as performance enhancements… Sometimes an incomplete version of the function provided by the communication system may be useful as a performance enhancement.

It’s hard to read a regulatory policy into this paper without squinting, standing on your head, and chanting Sanskrit hymns.

The Internet Enables Features with Cross-Layer Code

The Internet’s layers are much more porous than Wu and his Stanford colleagues realize. Wu’s professor, Lawrence Lessig, is fond of using the “daydreaming postal worker” analogy to convey his view of layering:

Like a daydreaming postal worker, the network simply moves the data and leaves its interpretation to the applications at either end.

Wu claims the layers are isolated from each other, pointing again to the passage quoted above in which the internet’s “layered” design is “agnostic both as to the means used for carrying information and to what the network could be used for.”

Internet layers are not really all that agnostic. TCP, the transport-layer protocol, includes a feature known as the “pseudo-header” that incorporates knowledge of IP (network layer) constructs. Applications can request optional services from IP through cross-layer communication of Differentiated Services Code Points and Integrated Services Bearer Classes. The former are generally – but not exclusively – used within networks, and the latter cross networks to implement voice over LTE.
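Applications reach this cross-layer path directly through the sockets API. Here’s a rough sketch, assuming a Linux host where the IP_TOS socket option is exposed; the DSCP value 46 (“Expedited Forwarding”) is just an illustrative choice, and whether the network honors the marking is up to the operators along the path:

```python
# An application requesting differentiated treatment from the IP layer by
# setting the DSCP bits on its outgoing packets (Linux; IP_TOS may not be
# available, or honored, on every platform or network).
import socket

EF_DSCP = 46  # "Expedited Forwarding", commonly used for voice traffic

def open_marked_udp_socket(dscp: int = EF_DSCP) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # DSCP occupies the upper six bits of the old TOS byte, hence the shift.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

if __name__ == "__main__":
    sock = open_marked_udp_socket()
    sock.sendto(b"rtp-like payload", ("198.51.100.7", 5004))
```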

Still Not Feeling That Policy Dictate

None of this implies treating all bits, all packets, or all messages the same, whether they come from the same application or from different ones. And none of it either permits or forbids charging for specialized treatment. The Internet is simply a toolkit, and the houses we build with it can be designed any way we want.

Wu has a different vision:

On this foundation—the idea of the “open internet”—was built the founding applications of the internet, now omnipresent, such as the World Wide Web and email, plus later innovations, like streaming video and social networking. All of these inventions depended heavily on the internet’s end-to-end design, which made possible “permissionless” innovation, and an extraordinary and fabled era of change.

Once again, end-to-end is about application design, not about network design. The Internet’s founding applications – email, virtual terminal, and file transfer – all originated on ARPANET before there was an Internet.

The Web came along nearly 20 years after the Internet protocols were designed and ten years after they were deployed. Permissionless innovation is great, but it doesn’t begin and end with the Internet. Can’t I dial any phone number I want without permission? That’s why we have robocalls.

This is About Policy, Not Design

The idea that the Internet dictates a regulatory framework is so dubious that even the people who make it don’t really believe it. If the Internet is radically different from the telephone network, how does it make sense to impose Title II telecom carrier regulation on it? This contradiction undermines the entire case that Wu and his colleagues make for Internet exceptionalism.

That doesn’t mean that the ideals of the net neutrality movement are wrong or without foundation. The strongest part of Wu’s case has always been the behavior of first generation broadband carriers. There was indeed a time when ordinary consumers were forbidden from using VPNs or connecting multiple computers to a cable modem. [Humorously, Wu confuses the AT&T Broadband cable modem service with DSL.]

AT&T Broadband didn’t enforce this ban, however. I was one of their early customers, and I used a NAT and a local area network to provide broadband Internet all over the house, the same setup I used for dial-up. As I wrote the code that set up the cable modem interface for the AT&T/@Home system, I can attest that it was Windows 95-standard TCP/IP. The Terms of Use language Wu cites was toothless, but it was still silly.

The Role for Regulators

In the early days of every new networking technology, providers are tempted to squeeze customers to recoup their investment in the new tech. At that stage, competition is limited and the small user base places a high value on the new technology, so customers are easy to squeeze.

It’s perfectly appropriate, even if a bit risky, for regulators to step in and install some training wheels on the new market lest greedy providers mess it up for everyone. But these interventions, like merger conditions, should be time-limited.

So even if we accept that the argument for broadband regulation was strong at the turn of the century, it doesn’t follow that it’s strong today. Thanks to mobile, we have many paths to the Internet and we’re on the brink of the rollout of a new technology that promises to offer even more competition for residential broadband. Allowing 5G to flourish is much more important than keeping the training wheels on the Internet.

The time has come to take off the training wheels and see what the Internet can do.