What the Heck is “Net Neutrality” Anyhow?

You may have noticed there’s been some talk on the Internet lately about something called “net neutrality”. It’s connected to a recent court decision in which the DC Circuit determined that the FCC once again overstepped its bounds and imposed rules it was never legally entitled to make. This is the second time the court has smacked the FCC around on net neutrality. I don’t generally like to define basic terms of Internet policy, since I believe most of my readers are intelligent enough to know what they mean, but in this instance the FCC’s continued pigheadedness convinces me that a definition is needed.

The issue came up in the American Enterprise Institute’s discussion of tech policy issues on the agenda for 2014, which you can see on C-SPAN’s web site: American Enterprise Institute Scholars Predict 2014 Tech Policy Issues

At root, net neutrality is the fear that we’re getting a raw deal on Internet service. This fear – grounded in the fact that it’s more expensive to build a network that covers a dispersed population than one that serves people living in high-rise buildings – combines with a theory about network design and network quality that’s fundamentally defective. Network neutrality advocates believe that broadband information networks are very, very simple, about like the water system. All it takes to supply a city with water is a well, a pump, and some pipes, so hooking the city up to the Internet should just be a matter of some wires, some switches, and a little bit of electricity. The wires may break from time to time, but when that happens you just patch ’em up and it all works like magic.

It would be great if things were really like that, but they simply aren’t. Broadband is like a water system that pumps fifty percent more water each year to each home for the same price. That would be pretty hard for most water systems to do unless they were massively overbuilt to begin with.
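
To see how punishing that growth rate is, compound it. Here’s a back-of-the-envelope sketch in Python; the fifty percent figure is the growth rate from the analogy above, not a measured constant:

    # How much capacity a network needs after n years, relative to year
    # zero, if demand grows 50% per year.
    GROWTH_RATE = 0.50  # annual traffic growth, per the analogy above

    for years in (5, 10, 20):
        multiple = (1 + GROWTH_RATE) ** years
        print(f"After {years:2d} years: {multiple:,.0f}x the original capacity")

    # After  5 years: 8x the original capacity
    # After 10 years: 58x the original capacity
    # After 20 years: 3,325x the original capacity

No water system is built with a 58x margin, which is the point.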

Most broadband networks were actually built for a different job than the one they’re doing now. The cable network was built to share a TV antenna, the telephone network was built for, you guessed it, old black telephones, and the mobile network was built for cell phones. These original tasks were all quite a bit less demanding than the jobs the networks are doing now. If a phone call is a trickle of water, a pirated movie is a torrent; that’s much, much more water. Not only were broadband networks not overbuilt, they were quite under-built from today’s point of view.

The only broadband technology that was really meant for the Internet from day one is “Passive Optical Network” (PON), the glass fiber stuff that Verizon sells as FiOS (and Google sells as, well, Google Fiber), and even it relies on telephone poles that were built for, you guessed it again, copper telephone wire. FiOS is like a water system that runs a three-foot pipe to each home in order to have capacity for 30 years’ worth of upgrades, and the other stuff is like the current water system plus some engineering magic that makes the water go faster every year by processes most of us don’t want to understand.

So the net neutrality people believe our networks are pathetically slow and overpriced, and that they’ll only get better if their owners are prevented from doing anything to them except making them faster and cheaper. The net neutrality rules are actually aimed at foreclosing all the engineering options that might improve network quality and profitability, except the ones that raise speeds and lower prices. Going back to water: net neutrality advocates don’t want the water to be more pure or better tasting, they just want more water for a lower price, period.

So net neutrality is both a fear and a plan.

The plan happens to be wrong. The idea that networks should emphasize capacity over quality is a classic engineering error that goes back to the 1970s, when the first Local Area Networks (LANs) were designed. In those days, the computer industry was making its first foray into network design; previously, it relied on the telephone company to supply the equipment that hooked computers up to each other, typically over long distances. The phone company would supply us with modems that either worked over regular telephone connections – at low speeds like 300 bits per second – or over dedicated circuits known as “leased lines” that were much faster, like 56,000 bits per second. These devices could cover thousands of miles.

Minicomputer companies started making computers so cheap that a business could have more than one of them in any given office, so these companies invented LANs to connect computers at even higher speeds – millions of bits per second – over distances of less than a mile. One of the great revelations of that era is that distance is the enemy of speed in communication systems. Actually, that revelation was about a thousand years old, but computer people came to appreciate it quite intimately in the 1970s. More on that later.

The minicomputer companies were able to try a number of tradeoffs in the design of their LANs. Datapoint (of San Antonio, Texas) was the first to sell a full-featured LAN, which they called ARCNet. ARCNet was the first LAN I heard about, in 1975, and the one that inspired me to do some LAN inventing of my own a decade later. It was an ingenious little system that used some parts and cables designed by IBM for its 3270 terminal, coupled to a micro-controller programmed by Datapoint’s Gordon Peterson, a Wozniak-like character who was larger than life and immensely talented. ARCNet was capable of providing predictable service with a controlled delay. This made the system good for a wide range of factory-floor applications as well as for less demanding office work. Twenty-five years after I first heard about ARCNet, I programmed an ARCNet application for printing presses, and the technology still worked perfectly well.

Other LANs of the 1970s used all of their logic circuits for speed and didn’t care about variations in delay. The infamous Ethernet system devised at Xerox was like that, but it was redesigned in the mid-’80s to provide both higher speeds and lower delay. The Ethernet redesign was my first bit of network inventing, but most of the work was done by a fellow named Tim Rock who worked for AT&T Information Systems and by Pat Thaler of Hewlett-Packard. The redesign enabled Ethernet to run over fiber optics as well as copper wires at a wide range of speeds; the whole panoply runs from 1 megabit/second over plain old telephone wire all the way up to 400 gigabits/second (a gigabit is a thousand million bits) over fiber optics today.

In the 1970s it was necessary to choose between speed and bounded delay because the chips we had to work with were expensive by today’s standards and not very capable. The most up-to-date Ethernet chips – the ones that run at speeds of 1 gigabit/second and more – can provide both, handling multiple levels of priority. If you have an application that needs to push information at a rate that matches the rotation of a hard drive’s platter, it needs both high capacity and bounded delay. If the information you’re pushing doesn’t reach the disk at the right time, the platter has to make another full revolution before the data can be written, and sometimes this is bad; if your network causes you to miss this rotational window millions of times a day, it’s noticeably bad.
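
To put a number on that penalty, here’s a rough Python sketch; the 7,200 RPM drive speed and the miss count are illustrative assumptions of mine, not figures from the text:

    # Rough cost of missing a hard drive's rotational window: a missed
    # write waits one full extra revolution before it can land.
    RPM = 7200                    # assumed drive speed, illustrative
    revolution_ms = 60_000 / RPM  # one revolution: ~8.33 ms at 7,200 RPM

    misses_per_day = 1_000_000    # "millions of times a day," per the text
    wasted_seconds = misses_per_day * revolution_ms / 1000

    print(f"One missed window costs ~{revolution_ms:.2f} ms")
    print(f"{misses_per_day:,} misses/day wastes ~{wasted_seconds / 3600:.1f} hours")

    # One missed window costs ~8.33 ms
    # 1,000,000 misses/day wastes ~2.3 hours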

A good network has a combination of high capacity and low delay, neither of which is a substitute for the other. An airplane will generally get you where you want to go faster than a car will, because the airplane has higher speed or “more bandwidth” in networking terms. But if the airplane only flies three times a week and you need to arrive on one of the days between flights, you’re better off driving because the car has lower delay, or “latency” in networking terms.
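
The same point can be written as a formula: delivery time is roughly latency plus size divided by bandwidth, and neither term substitutes for the other. Here’s a minimal Python sketch; the two “links” use made-up numbers and don’t describe any real network:

    # Delivery time = latency + size / bandwidth.

    def delivery_seconds(size_bits: float, bandwidth_bps: float, latency_s: float) -> float:
        return latency_s + size_bits / bandwidth_bps

    # "Airplane": huge bandwidth, high latency. "Car": modest bandwidth, low latency.
    plane = dict(bandwidth_bps=1_000_000_000, latency_s=0.500)  # 1 Gbps, 500 ms
    car = dict(bandwidth_bps=50_000_000, latency_s=0.005)       # 50 Mbps, 5 ms

    for name, size in (("small message", 10_000), ("large file", 8_000_000_000)):
        print(f"{name}: plane {delivery_seconds(size, **plane):.3f}s, "
              f"car {delivery_seconds(size, **car):.3f}s")

    # small message: plane 0.500s, car 0.005s   <- latency dominates
    # large file: plane 8.500s, car 160.005s    <- bandwidth dominates

For small, time-sensitive messages the latency term dominates; for bulk transfers the bandwidth term does. A good network keeps both terms small.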

Net neutrality people don’t understand that network quality is distinct from network speed in the same way that speed and delay are distinct for airplanes. So when network operators want to offer services that limit latency as well as services that increase capacity, the advocates think they’re being scammed. They also don’t understand the relationship between speed and distance, or the one between cost and distance, so they compare speeds and prices in the US and Korea and once again think they’re being scammed. In fact, the laws of nature are doing the scamming. Most Koreans live in Seoul – more than 65%, to be precise – and Seoul is the most densely populated city in the OECD.

If US broadband companies invested exactly as much money per connection as Korean companies invest, and if both countries featured the same markup of retail price over investment, would Americans and Koreans pay the same prices for the same broadband speeds? The answer, of course, is “no”: the same investment buys much less capacity when subscribers are spread across greater distances. So why do net neutrality advocates complain about American broadband speeds and prices by comparison with South Korea, Hong Kong, and Stockholm, all more densely populated than any US city? This doesn’t reflect well on them.

So net neutrality is a regulation that claims to improve the quality of American broadband networks by imposing a set of conditions that can only make them worse. It also blames the carriers for limitations caused by the way Americans choose to live. This regulation seeks to force carriers to over-invest in secondary network characteristics while ignoring the primary source of the problems our networks really have: the distances between households, and between households and network services. American firms already invest several times more money per capita than those in the city-states with the fastest and cheapest networks, and this investment leads to higher prices.

America’s position on the international broadband charts is determined by the way we live. Net neutrality ignores this and tries to make things worse. That should not be allowed to happen.

Network neutrality also violates the Internet’s technical standards and its architecture. Consider the text of RFC 2475 from 1998, titled “An Architecture for Differentiated Services”:

This document defines an architecture for implementing scalable service differentiation in the Internet. A “Service” defines some significant characteristics of packet transmission in one direction across a set of one or more paths within a network. These characteristics may be specified in quantitative or statistical terms of throughput, delay, jitter, and/or loss, or may otherwise be specified in terms of some relative priority of access to network resources. Service differentiation is desired to accommodate heterogeneous application requirements and user expectations, and to permit differentiated pricing of Internet service.

Now look at the FCC’s Open Internet rule that the court struck down:

2. No Unreasonable Discrimination

68. Based on our findings that fixed broadband providers have incentives and the ability to discriminate in their handling of network traffic in ways that can harm innovation, investment, competition, end users, and free expression, we adopt the following rule:

A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service. Reasonable network management shall not constitute unreasonable discrimination.

69. The rule strikes an appropriate balance between restricting harmful conduct and permitting beneficial forms of differential treatment. As the rule specifically provides, and as discussed below, discrimination by a broadband provider that constitutes “reasonable network management” is “reasonable” discrimination. We provide further guidance regarding distinguishing reasonable from unreasonable discrimination:

76. For a number of reasons, including those discussed above in Part II.B, a commercial arrangement between a broadband provider and a third party to directly or indirectly favor some traffic over other traffic in the broadband Internet access service connection to a subscriber of the broadband provider (i.e., “pay for priority”) would raise significant cause for concern. First, pay for priority would represent a significant departure from historical and current practice. Since the beginning of the Internet, Internet access providers have typically not charged particular content or application providers fees to reach the providers’ retail service end users or struck pay-for-priority deals, and the record does not contain evidence that U.S. broadband providers currently engage in such arrangements. Second, this departure from longstanding norms could cause great harm to innovation and investment in and on the Internet. As discussed above, pay-for-priority arrangements could raise barriers to entry on the Internet by requiring fees from edge providers, as well as transaction costs arising from the need to reach agreements with one or more broadband providers to access a critical mass of potential end users. Fees imposed on edge providers may be excessive because few edge providers have the ability to bargain for lesser fees, and because no broadband provider internalizes the full costs of reduced innovation and the exit of edge providers from the market. Third, pay-for-priority arrangements may particularly harm non-commercial end users, including individual bloggers, libraries, schools, advocacy organizations, and other speakers, especially those who communicate through video or other content sensitive to network congestion. Even open Internet skeptics acknowledge that pay for priority may disadvantage non-commercial uses of the network, which are typically less able to pay for priority, and for which the Internet is a uniquely important platform. Fourth, broadband providers that sought to offer pay-for-priority services would have an incentive to limit the quality of service provided to non-prioritized traffic. In light of each of these concerns, as a general matter, it is unlikely that pay for priority would satisfy the “no unreasonable discrimination” standard. The practice of a broadband Internet access service provider prioritizing its own content, applications, or services, or those of its affiliates, would raise the same significant concerns and would be subject to the same standards and considerations in evaluating reasonableness as third-party pay-for-priority arrangements.

See the problem? RFC 2475 says: “Service differentiation is desired to accommodate heterogeneous application requirements and user expectations, and to permit differentiated pricing of Internet service.”

But the FCC second-guessed the Internet Engineering Task Force and said “…as a general matter, it is unlikely that pay for priority would satisfy the ‘no unreasonable discrimination’ standard.” The FCC also rewrote history with its claim that “pay for priority would represent a significant departure from historical and current practice” despite the plain language of RFC 2475, the specifications for IP, and a host of other specifications for such things as Integrated Services, which are part of LTE.
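
Differentiated Services isn’t an obscure footnote, either; it’s a field in every IP packet header that ordinary applications can set today. Here’s a minimal sketch using Python’s standard socket module on a Linux host; the Expedited Forwarding code point comes from RFC 3246, and whether any network along the path honors the mark is a matter of policy:

    import socket

    # DiffServ marks packets via the DSCP field, the upper six bits of
    # the old IP TOS byte. Expedited Forwarding (DSCP 46) requests
    # low-delay, low-jitter treatment: exactly the "service
    # differentiation" RFC 2475 describes.
    EF_DSCP = 46
    TOS_VALUE = EF_DSCP << 2  # DSCP occupies bits 7..2 of the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

    # Packets sent from this socket now carry the EF mark. 192.0.2.1 is
    # a documentation-only address standing in for a real endpoint.
    sock.sendto(b"voice-like traffic", ("192.0.2.1", 5004))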

So net neutrality advocates not only ignore the physical laws of the universe and engage in amateur network engineering; the FCC’s Open Internet rules also directly contradicted the design of the Internet architecture.

In a word, net neutrality is hubris, prideful overreach. But it’s motivated by fear, so we mustn’t be too hard on its supporters because they know not what they do.


Comments
  • Missyg

    Net neutrality isn’t dead yet and it’s now more important than ever to get informed on the issue. For a cool refresher on the basics, here’s this short mockumentary about the open internet: http://www.theinternetmustgo.com/

  • Richard Bennett

    Unfortunately, Missyg, that mockumentary is clueless paranoia. Internet service in the US is perfectly fine, and ISPs are way too busy competing against each other for customers to engage in nonsensical behavior.

    • Cameron

      I wouldn’t say they are competing at all. All ISPs have a non-compete clause so they can fix prices and all reap the benefits.

      • Richard Bennett

        Actually no, they don’t. The seven ISPs who serve my neighborhood certainly don’t, and having a non-compete clause would violate the law.
