Van Schewick’s View of Net Neutrality and Quality of Service

Network Quality of Service (QoS) is the technical issue that lurks behind the policy issue of net neutrality. While there are many subtle variations of net neutrality, the concept as a whole comes from the idea that the Internet should have a “soft middle” that exerts little or no influence over the behavior of users and applications, and a very robust set of services at the edge that handle all the problems of congestion, charging, and innovation. This distinction is the point of the “end-to-end arguments,” which encourage designers of distributed systems to build vague and general systems in which all application-specific features are added as late in the design as possible. If elections were structured according to the end-to-end arguments, there would be no primaries; voters would simply select from a list of 50 candidates in the general election (perhaps using some form of multiple-choice voting).

The earliest papers on net neutrality, most notably Tim Wu’s “Network Neutrality, Broadband Discrimination” and Atkinson and Weiser’s “A Third Way on Net Neutrality,” carefully distinguished the Internet’s behavior with respect to Quality of Service from the behavior of the individual networks that compose it. This distinction is necessary because networks and The Internet have very different feature sets, goals, and histories with respect to application requirements. While networks – be they DSL, DOCSIS cable modems, Wi-Fi, Ethernet, or LTE – incorporate features that explicitly offer choices to applications, The Internet has never quite managed to deliver these choices to applications itself. This isn’t for want of trying: The Internet has flirted with features such as Class of Service, Type of Service, Integrated Services, and Differentiated Services, all attempts at filling the gap, but none of them has been adopted on a large scale. Offering Quality of Service across The Internet remains an unsolved research problem.
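The Type of Service mechanism mentioned here survives in the modern IP header as the DiffServ field, and operating systems do expose it to applications. As an illustrative sketch (mine, not anything from the papers discussed), a program on Linux can mark its packets for Expedited Forwarding, the code point conventionally associated with telephony – but whether any network along the path honors the mark is entirely up to its operators, which is precisely the adoption gap:

```python
import socket

# Sketch: marking a UDP socket's packets with a DiffServ code point.
# EF (Expedited Forwarding) is the marking conventionally used for voice.
# Setting it is easy; getting networks to act on it is the hard part.
DSCP_EF = 46                # Expedited Forwarding code point
TOS_VALUE = DSCP_EF << 2    # DSCP occupies the upper six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
# Packets sent on this socket now carry the EF marking in their IP headers.
sock.close()
```

Any endpoint can set this byte today; the missing piece is an Internet-wide agreement on what, if anything, the networks in the middle should do with it.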

Networks provide Quality of Service because applications don’t all have the same requirements. Some need to transfer a large amount of data in one direction at low cost, like Netflix; some need to transfer a small amount of data in two directions with low delay, like telephony; and some, like the Web, fall in the middle. Every data packet that traverses a network or an internet has some probability of experiencing loss or delay, and the better these probabilities are managed, the cheaper it is to build and operate networks that provide a wide range of applications with the service they need. The three trade-offs – cost, loss, and delay – are always in tension in any real network system, and there’s no silver bullet that optimizes them all at the same time. When I testified at the FCC’s first hearing on the Comcast/BitTorrent dispute, I addressed this problem, and nothing I said was news to anyone in the network engineering business.
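The tension can be made concrete with a toy simulation (my sketch, not drawn from any of the papers discussed): a single link carrying delay-sensitive “voice” packets and throughput-oriented “bulk” packets, served either first-come-first-served or with strict priority for voice. Priority cuts voice delay sharply while adding some delay to bulk transfers – a reallocation of delay among applications, not a free lunch:

```python
from collections import deque

def simulate(arrivals, service_time=1.0, prioritize_voice=True):
    """Serve packets on one link; each packet takes service_time to transmit.

    arrivals: list of (arrival_time, kind) with kind 'voice' or 'bulk'.
    Returns mean queueing delay per kind, under strict voice priority
    or plain first-come-first-served when prioritize_voice is False.
    """
    voice, bulk = deque(), deque()
    delays = {"voice": [], "bulk": []}
    events, i, clock = sorted(arrivals), 0, 0.0
    while i < len(events) or voice or bulk:
        while i < len(events) and events[i][0] <= clock:   # admit arrivals
            (voice if events[i][1] == "voice" else bulk).append(events[i])
            i += 1
        if not voice and not bulk:
            clock = events[i][0]        # link idle: jump to next arrival
            continue
        if prioritize_voice:
            t, kind = (voice or bulk).popleft()            # voice always first
        elif voice and (not bulk or voice[0][0] <= bulk[0][0]):
            t, kind = voice.popleft()                      # FCFS: earliest arrival
        else:
            t, kind = bulk.popleft()
        delays[kind].append(clock - t)  # time spent waiting in queue
        clock += service_time           # transmit the packet
    return {k: sum(v) / len(v) for k, v in delays.items() if v}

# A congested link: one bulk packet per time unit plus a voice packet every four.
arrivals = [(t, "bulk") for t in range(20)] + \
           [(t + 0.5, "voice") for t in range(0, 20, 4)]
```

Comparing `simulate(arrivals)` against `simulate(arrivals, prioritize_voice=False)` shows mean voice delay dropping under priority while mean bulk delay rises – the same total waiting time, shifted toward the application that can tolerate it.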

The problem with net neutrality as defined by the hard-core “all packets are equal” crowd is that it would prevent this research problem from ever being resolved, which would not be a good thing for the app developers of the future. Barbara Van Schewick (BVS), the director of the Stanford Center for Internet and Society, appreciates this and has been trying for some time to develop a net neutrality framework that permits certain forms of QoS. Her most recent effort is a very long white paper, “Network Neutrality and Quality of Service: What a Non-Discrimination Rule Should Look Like,” that’s summarized in this blog post. Barbara’s prose style is somewhat hard to penetrate, but her main idea is that user-controlled QoS – paid for exclusively by the user, never charged to the application developer, and usable in any way the user wishes – should be permissible as long as the network provides good baseline QoS, and that all other forms of QoS should not be:

The rule allows network providers to offer certain (though not all) forms of Quality of Service. In particular, it allows network providers to offer different classes of service if they meet the following conditions:

(1)  the different classes of service are offered equally to all applications and classes of applications;
(2) the user is able to choose whether and when to use which class of service;
(3) the network provider is allowed to charge only its own Internet service customers for the use of the different classes of service.

Provided:

…the rules should require the regulatory agency in charge of enforcing the network neutrality rules to monitor the quality of the baseline service and set minimum quality standards, if the quality of the baseline service drops below appropriate levels.

Hence, the rule is similar to the Atkinson and Weiser rule with respect to the baseline requirement and the free usage of the feature, but is more restrictive with respect to who pays. Van Schewick is in thrall to the notion that innovation in network applications comes mainly (or at least significantly) from cash-strapped small-scale inventors, hence her preference for the “ISP customer pays” idea. Innovation research doesn’t support this idea, and neither does common user experience on the Internet. The most likely users of a QoS feature today would be large data streamers such as YouTube and Netflix, for whom it would lower costs, and telephony apps such as Vonage and Skype, for which it would improve call quality. None of these are dorm-room innovators, and neither was Google in its startup days. (In fact, the founders of Google were paid by the National Science Foundation to work at one of the nation’s largest and most significant research institutions, Stanford University, on the very subject that became the fundamental feature of their company.)

Like many Internet buffs, BVS is very suspicious of networking companies:

Network providers are not beneficial stewards of the Internet platform. Neither the interests of the network provider and users nor the interests of the network provider and the public are aligned. Network providers’ interests often differ from users’ interests, and even if they do not, network providers do not know what exactly users want. Network providers’ private interests and the public interests with respect to the evolution of the Internet diverge as well: For a variety of reasons, network providers capture only a small part of the social value resulting from an open Internet. For example, they only capture some of the social benefits associated with application innovation or of the social benefits resulting from improved democratic discourse. Moreover, most of the gains they are able to capture are uncertain and will be realized in the future, which leads network providers to discount them even more. Thus, when network providers decide whether to discriminate among applications or classes of applications, the immediate private benefits of discriminating (i.e. the higher profits resulting from exclusionary conduct or from discriminatory network management) will often be higher than network providers’ hyperbolically discounted share of the private benefits of refraining from discriminatory conduct.

This is the sort of claim that needs empirical support, which is hard to supply because it’s so broad and sweeping. BVS’ book “Internet Architecture and Innovation” attempts the proof, but you’ll have to decide for yourself whether it’s convincing. One thing we can say is that this idea is very prevalent among the Old Guard in the IETF, the organization that oversees The Internet’s formal standards process. People such as Scott Bradner express it as a fact of life, but I think it’s a bit paranoid. Network providers are certainly motivated to reduce churn and to win new customers – especially from the third of Americans who don’t have wired broadband at home today – so their interests and their users’ are actually quite closely aligned. The suspicious view is nonetheless a sentiment that’s out there.

It seems to me that ardent proponents of QoS will find a great deal with which to quibble in “Network Neutrality and Quality of Service: What a Non-Discrimination Rule Should Look Like” but so will the hard core who want to impose an “all packets are equal” rule on the Internet. In that sense, BVS is splitting the difference as Wu, Atkinson, and Weiser have done. Her line is more restrictive than A & W, but more permissive than Wu.

The paper is worth reading because it does cover all the major issues, even though there are some technical shortcomings that I’ll address later.