Defining the Internet

One of the more interesting comments filed with the FCC in its recent Further Inquiry into Two Under-Developed Issues in the Open Internet Proceeding came from a group of illustrious computer industry stalwarts including Apple hardware designer Steve Wozniak, spreadsheet pioneer Bob Frankston, Stupid Network advocate David Isenberg, and former protocol designer David Reed. Their comments are interesting because they come from such a diverse and accomplished group of people, and also because they’re extremely hard to follow (one of the signers told me he almost didn’t sign on because the statement was so unclear). After reading the comments several times, asking the authors for clarification, and comparing them to previous comments by a similar (but larger) group known as “It’s the Internet, Stupid” and to an even older statement by a similar but still larger group called the Dynamic Platform Standards Project (DPSP), I’m comfortable that I understand what they’re trying to say well enough to explain it.

A Passion for Definition

The author of all three statements is Seth P. Johnson, a fellow from New York who describes himself as an “information quality expert” (I think that means he’s a database administrator, but it’s not clear). Johnson jumped into the net neutrality fray in 2008 by writing a proposed law under the name of the DPSP and offering it to Congress. The gist of the thing was to define Internet service in a particular way and then to propose prosecution for any ISP that managed its network or its Internet connections in a way that deviated from the definition. Essentially, Johnson sought authority from the IETF’s Internet Standards, but attempted to reduce the scope of those standards for the purposes of his Act. The proposed Act required that ISPs make their routers “transmit packets to various other routers on a best efforts basis,” for example, which precludes the use of Internet Type of Service, Class of Service, and Quality of Service protocols.

IETF standards include a Type of Service (ToS) option for Internet Protocol (IP) as well as the protocols IntServ, DiffServ, and MPLS that provide mechanisms for network Quality of Service (QoS). QoS is a technique that matches a network’s packet transport capabilities to the expressed needs of particular applications, ensuring that a diverse group of applications works as well as possible on a network of a given, finite capacity. ToS is a similar method that communicates application requirements to one of the networks that carries IP datagrams, such as Ethernet or Wi-Fi. Packet-switched networks, from the ARPANET days to the present, have always included QoS and ToS mechanisms, which have been used in some instances and not in others. You’re more likely to see QoS employed on a wireless network than on a wireline network, and you’re also more likely to see QoS on a local network or at a network edge than in the Internet’s optical core; but the Internet’s optical core is an MPLS network that carries a variety of private network traffic at specified service levels, so there’s quite a bit of QoS engineering there too.
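
To make the mechanism concrete, here is a minimal sketch (in Python, with an illustrative address and port) of how an application can ask for DiffServ treatment by setting the DSCP bits of the IP header through the standard IP_TOS socket option. Whether any router along the path honors the marking is a matter of network policy, and the option isn’t available on every platform.

    # Minimal sketch: an application hints at its service needs by marking its
    # packets with a DiffServ code point. The address, port, and payload are
    # illustrative; whether routers honor the marking depends on network policy.
    import socket

    EF = 0x2E       # Expedited Forwarding code point (RFC 3246), for low-latency traffic
    DSCP_SHIFT = 2  # DSCP occupies the upper six bits of the former ToS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << DSCP_SHIFT)
    sock.sendto(b"voice frame", ("192.0.2.10", 5004))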

The purpose of defining the Internet as a QoS-free, “Best-Efforts” network was to prevent network operators from making deals with content providers that would significantly privilege some sources of content over others. This approach originated right after Bill Smith, the former CTO of BellSouth, speculated that ISPs might increase revenues by offering exceptional performance to select application providers for a fee. While the service that Smith proposed has a long history in Internet standards (RFC 2475, approved in 1998, discusses “service differentiation to accommodate heterogeneous application requirements”), it’s not part of the conventional understanding of the way the Internet works.

Defining One Obscurity in Terms of Another

“Best-efforts” (BE) is a term of art in engineering, so defining the Internet in this way simply shifts the discussion from one obscurity to another. BE has at least three different meanings to engineers, and another one to policy experts. In the broadest sense, a BE network is defined not by what it does as much as by what it doesn’t do: a BE network makes no guarantee that any given unit of information (“packet” or “frame”) transmitted across the network will arrive successfully. IP doesn’t provide a delivery guarantee, so the TCP code running in network endpoints such as the computer on your desk or the mobile phone in your hand has to take care of checking for lost packets and retransmitting when necessary. BE networks are appealing because they’re cheap to build, easy to maintain, and very flexible. Not all applications need every packet to arrive successfully; a Skype packet that doesn’t arrive within 200 milliseconds can simply be dropped, for example. BE networks permit that sort of decision to be made by the application. So one meaning of BE is “a network controlled by its endpoints.”
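
A small sketch makes the “controlled by its endpoints” point concrete: over a best-efforts transport, it’s the receiving application, not the network, that decides a late packet is worthless. The framing, timestamp format, and playout routine below are all hypothetical.

    # Sketch of endpoint control over a best-efforts transport: the application,
    # not the network, discards a packet that misses its 200 ms playout deadline.
    # The 8-byte timestamp framing and the play() routine are hypothetical.
    import socket, struct, time

    LATE_MS = 200

    def play(chunk):
        pass  # stand-in for a real audio playout routine

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5004))

    while True:
        packet, _ = sock.recvfrom(2048)
        sent_ms, = struct.unpack_from("!Q", packet)   # sender's timestamp in ms
        audio = packet[8:]
        if time.time() * 1000 - sent_ms > LATE_MS:
            continue                                  # too late to be useful; drop it
        play(audio)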

Another meaning of BE comes from the QoS literature, where it is typically one of many service options in a QoS system. In the Internet’s DiffServ standard and most other QoS systems, BE is the default or standard treatment of all packets, the one the network router employs unless told otherwise.
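
In code, the “default treatment” sense of BE looks something like the classifier below; the queue names are illustrative rather than drawn from any real router, but the logic, in which anything unmarked or unrecognized falls into the default class, is the DiffServ arrangement.

    # Sketch of best efforts as the default per-hop behavior: packets with a DSCP
    # the router hasn't been configured to recognize get the standard treatment.
    # Queue names are illustrative only.
    PHB_QUEUES = {
        0x2E: "expedited",    # EF: low-latency queue
        0x0A: "assured",      # AF11: one of the assured-forwarding queues
        0x00: "best_effort",  # default per-hop behavior
    }

    def classify(dscp: int) -> str:
        # Anything the router has not been told about gets the default treatment.
        return PHB_QUEUES.get(dscp, "best_effort")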

Yet another definition comes from the IEEE 802 standards, in which BE is the sixth of seven levels of service for Ethernet, better than Background and worse than all others; or the third of four levels for Wi-Fi, again better than Background. When policy people talk about BE, they tend to use it in the second of these senses, as “the standard treatment,” with the additional assumption that such treatment will be pretty darn good most of the time, suitable for a fairly wide range of applications under most network conditions.
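
The Wi-Fi case is easy to show concretely; the list below follows the four Wi-Fi Multimedia (WMM) access categories, ordered from best to worst treatment, with Best Effort third of four.

    # Wi-Fi Multimedia (WMM) access categories, from best to worst treatment.
    # "Best Effort" is third of four, ahead of only Background.
    WMM_ACCESS_CATEGORIES = [
        "Voice",        # AC_VO: highest priority
        "Video",        # AC_VI
        "Best Effort",  # AC_BE: the default for unmarked traffic
        "Background",   # AC_BK: lowest priority
    ]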

Johnson’s FCC filing insists that the Internet, properly defined, must be a best-efforts-only system; all other QoS levels should be considered “managed services” rather than “Internet” and therefore subject to different regulatory treatment. The filing touts a number of social benefits that would flow from a BE-only Internet, such as “openness, free expression, competition, innovation and private investment,” but it doesn’t explain the connection.

Constraining Applications

One of the implications of this view is that both network operators and application developers must adapt to generic treatment: developers must refrain from relying on differentiated services, and operators from offering them for sale as part of an Internet service.

Unfortunately, the advocates of this viewpoint don’t tell us why they believe that the Internet must refrain from offering packet transport and delivery services that are either better or worse than generic best-efforts, or why such services would harm “openness, free expression, competition, innovation and private investment” if they were provided end-to-end across the Internet as a whole, or where the authority comes from to support this definition. We’re supposed to simply trust them that this is the right way to do things, relying on their group authority as people who have been associated with the Internet in various capacities for a long time. This isn’t engineering as much as religion.

There is nothing in the Internet design specifications (Internet RFCs) to suggest that providers of Internet services must confine themselves to BE only, and there is nothing in the architecture of the Internet to suggest that all packets must be treated the same. These issues have been covered time and again, and the FCC knows by now exactly where to look in the RFCs for the evidence that this view of the Internet is faulty. The Internet is not a packet delivery system; it’s a virtual network that only works because of the underlying physical networks that transport and deliver packets. This virtual network defines an interface between applications of various types and networks of various types, and as is the case with all abstract interfaces, it may provide lowest-common-denominator services, highest-common-factor services, or anything in between, all according to the needs of the people and organizations who pay for it, use it, and operate it. As Doc Searls said many years back, “nobody owns the Internet, anyone can use it, and anyone can improve it.” The capacity for constant improvement is the magic of the Internet.

Myth of the General Purpose Network

If we insist that the Internet must offer applications only one service option, we doom application developers to innovate within narrow confines. A generic Internet is effectively optimized for file-transfer-oriented applications such as web browsing, email, and media streaming; it’s fundamentally hostile to real-time applications such as immersive video conferencing, telepresence, and gaming. Some of the best minds in the Internet engineering community have labored for the past 20 years to devise systems that would allow real-time and file transfer applications to co-exist happily on a common infrastructure, and these efforts are perfectly consistent with the nature of the Internet properly understood.

The central myth underlying the view of Johnson and his co-signers is the “general purpose network” formulation. This terminology is part of telecom law, where it refers to networks that can support a variety of uses. When adapted to engineering, it becomes part of an argument to the effect that best efforts is the “most general purpose” method of supporting diverse applications and therefore the “best way to run a network.” I think it’s wrong to frame the challenges and opportunities of network and internetwork engineering in this way. I’d rather that people think of the Internet as a “multi-purpose network” that can offer diverse packet transport services suitable for diverse applications. We want network operators to build networks that serve all applications appropriately at a price that ordinary people can afford to pay. We don’t want consumers to pay higher prices for inefficient networks, and we don’t want to confine application innovation to the narrow bounds of legacy systems.

Segregated Systems are Harmful

Systems that allow applications to express their requirements to the network, and allow the network to give applications differentiated treatment and feedback about current conditions, are apparently the best way to serve diverse applications; that’s the general concept of Internet QoS. This has been the thinking of network and internetwork engineers since the 1970s, and the capability to build such systems is embedded in the Internet architecture. The technical people at the FCC who are reading the comments in this inquiry know this.
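
A toy scheduler shows the shape of the idea; real routers use far more sophisticated machinery (weighted fair queuing, token buckets, admission control), so treat this only as an illustration of marked traffic receiving differentiated treatment.

    # Toy sketch of differentiated treatment inside a network node: packets marked
    # for the real-time class are dequeued ahead of bulk traffic, which still
    # drains whenever the real-time queue is empty. Class names are illustrative.
    from collections import deque

    queues = {"realtime": deque(), "bulk": deque()}

    def enqueue(packet, marking):
        # The application's marking (a DSCP, say) expresses its requirement.
        queues["realtime" if marking == "realtime" else "bulk"].append(packet)

    def dequeue():
        # The network's side of the bargain: serve latency-sensitive traffic first.
        for cls in ("realtime", "bulk"):
            if queues[cls]:
                return queues[cls].popleft()
        return None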

These arguments seem to endorse a disturbing trend that the so-called “public interest” advocates are now advancing, to the effect that advanced network services must be segregated from generic Internet service on separate (but equal?) physical or logical facilities. This is not good, because it robs us of the benefits of converged networks. Rather than dividing a coax or fiber into two frequencies and using one for IPTV and the other for Generic Internetting, it’s better to build a fat pipe that provides IPTV and Generic Internetting access to the same pool of bandwidth. The notion of sharing a common pool of bandwidth among multiple users and applications was the thing that started us down the road of packet switching in the first place, and it’s very important to continue developing that notion; packet switching is the Internet’s enabler. Segregated facilities are undesirable.
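
A back-of-the-envelope comparison shows what’s lost (the numbers are made up for the example): when the pipe is split in advance, capacity one service isn’t using is simply wasted, while a converged pipe hands it to whatever needs it at that moment.

    # Made-up numbers, purely to illustrate the cost of segregated facilities.
    PIPE_MBPS = 100
    iptv_demand = 30                              # Mbps the IPTV service needs right now

    # Segregated: the pipe is split in advance, so idle IPTV capacity is wasted.
    segregated_internet = PIPE_MBPS / 2           # 50 Mbps available, no matter what

    # Converged: whatever IPTV isn't using is available to everything else.
    converged_internet = PIPE_MBPS - iptv_demand  # 70 Mbps available at this moment

    print(segregated_internet, converged_internet)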

Integrating Applications and Networks

What we need in the Internet space is a different kind of vertical integration than the kind that was traditional in the single-application networks of the past. QoS, along with modular network and internetwork design, permits applications and end users, in effect, to assemble the network they need at the moment they run an application, with the level of service they need at a price they can afford. We get there by allowing applications to explicitly state their requirements to the internetwork, and by allowing the internetwork to respond with its capabilities. Application choice meets the needs of innovators better than a rigid “one size fits all” formulation does.
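
The Internet standards already contain real mechanisms for parts of this exchange (DiffServ markings, RSVP reservations); the sketch below is a purely hypothetical interface, not any real protocol or API, meant only to show the shape of a requirements-and-capabilities negotiation.

    # Purely hypothetical sketch of the exchange described above: the application
    # states what it needs, and the internetwork answers with what it can commit
    # to at what price. No real protocol or API is implied.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Requirement:
        max_latency_ms: int
        min_bandwidth_kbps: int

    @dataclass
    class Offer:
        latency_ms: int
        bandwidth_kbps: int
        price_cents_per_gb: int

    def choose_service(req: Requirement, offers: List[Offer]) -> Optional[Offer]:
        # Pick the cheapest offer that meets the application's stated needs,
        # or fall back to plain best efforts (None) if nothing does.
        feasible = [o for o in offers
                    if o.latency_ms <= req.max_latency_ms
                    and o.bandwidth_kbps >= req.min_bandwidth_kbps]
        return min(feasible, key=lambda o: o.price_cents_per_gb) if feasible else None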

The Internet is, by design, a platform for both generic and differentiated services. That’s its true legacy and its promise. We don’t need to wander down historical blind alleys of myth and prejudice when we have the opportunity to build this platform out to the next level. As more Internet use shifts to mobile networks, it will become more critical than ever to offer reasonable specialization to applications in a standards-compliant manner. The Internet of the Future will be multipurpose, not generic.

This article is cross-posted from the Innovation Policy blog and The Progressive Fix.