Assessing Minimum Quality of Service

One of the models for harmonizing net neutrality principles with innovation imperatives allows broadband ISPs to offer customized delivery services for real-time and non-time-critical applications, provided that the basic service level available to “Edge Providers” meets a baseline quality level. The first nation to adopt this approach is Singapore. (Note: I do occasional consulting with Singapore as a member of the Infocomm Development Authority’s advisory panel. These views are mine alone.)

Singapore’s Broadband Internet regulatory framework consists of the three prongs laid out in paragraph nine of the June 2011 Net Neutrality decision:

  • Competitive market;
  • Transparent terms of service for consumers;
  • Quality of Service requirements on network availability and latency for fixed-line Internet broadband services.

Together, these prongs enable providers to actively manage their networks and to offer specialized and niche services for sale, provided all three requirements are satisfied.

Thus, Singapore has adopted a strategy that the European Union is likely to adopt shortly, per a proposal by Latvia. The Singapore plan is also consistent with the approach suggested by the U.S. FCC in its May 2014 Notice of Proposed Rulemaking (NPRM) “In the Matter of Protecting and Promoting the Open Internet.” The U.S. did not proceed to adopt a similar plan because of corporate influence, populist pressure, and political considerations; instead it reverted to a slightly relaxed version of the telephone network regulatory framework, Title II of the Communications Act, with a pre-emptive ban on “paid prioritization” services that would enable real-time applications to succeed over the Internet.

The Singapore approach is also consistent with A Third Way on Net Neutrality, an academic paper published by Atkinson and Weiser in 2006. The definition and measurement of Quality of Service (QoS) is a pivotal element of this approach, and one that has always been fraught with difficulty.

Quality of Service vs. Quality of Experience

While QoS is well understood in engineering, much is lost in its translation to policy because the net neutrality debate has been framed as a conflict between Internet Service Providers and Internet “Edge Providers,” the firms that offer content, services, and applications that use the Internet as a platform.

Both of these interests are clearly essential to the success of the Internet as a marketplace, a platform for innovation, and a force for improved quality of life for citizens. But framing the issue in such stark, binary terms tends to place too much emphasis on commercial transactions and too little on the dynamics of network management.

The issues that affect QoS and the broader notion of “Quality of Experience” are dictated by the mix of applications running within the consumer’s device or network, the unique demands each application imposes, and the many ways that applications interact with each other.

For example, the first net neutrality matter adjudicated by the FCC was a conflict between two applications, the Vonage Voice over IP telephone service and the Vuze implementation of BitTorrent, a peer-to-peer file-sharing program used mainly for copyright violation. An ISP, Comcast, was found to have impaired the operation of BitTorrent by taking management actions that reduced the upstream bandwidth it could consume. (Comcast didn’t actually block BitTorrent; it slowed it down.)

According to Comcast, this action was taken to enable the Vonage VoIP service to run successfully, an imperative for Comcast because it offered a competitive telephone service that was shielded from BitTorrent by virtue of running in a dedicated frequency on the Comcast cable network. Comcast never explained this motivation to the FCC, although the company’s engineers did discuss it in open meetings with other engineers.

Application Interaction

The issue with Vonage and BitTorrent arose because BitTorrent was originally designed to use all available bandwidth – it has since been changed – and VoIP applications always require low delay and low packet loss. Loss and delay are two of the three main elements of QoS measurement; the other element is bandwidth. VoIP is a low-bandwidth application, while BitTorrent is a high-bandwidth application that is relatively insensitive to loss and delay. VoIP is also a one-to-one application, operating between a pair of users, while BitTorrent is a “swarm” application in which each user’s computer downloads from a number of other computers while also uploading to a number of other computers at the same time.

Comcast ultimately settled the issue by implementing a network management system that responds to congestion by lowering the network access priority of heavy users until the congestion period ends, thereby reducing the bandwidth allocated to them. This is not an ideal solution, and Vonage still does not work as well on the Comcast network as Comcast’s own voice product does.
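
To make the idea of usage-based deprioritization concrete, here is a minimal sketch of how such a congestion response could work. The utilization and usage thresholds, class labels, and data structures are illustrative assumptions, not the actual parameters of Comcast’s system.

```python
# Sketch of usage-based deprioritization during congestion (illustrative only).
# The 80% and 70% thresholds and the priority labels are assumptions, not the
# parameters of any real ISP's congestion management system.

from dataclasses import dataclass

CONGESTION_THRESHOLD = 0.80   # link considered congested above 80% utilization
HEAVY_USER_SHARE = 0.70       # "heavy" means above 70% of the provisioned rate

@dataclass
class Subscriber:
    name: str
    provisioned_mbps: float
    recent_usage_mbps: float   # average over a recent measurement window
    priority: str = "normal"

def manage_congestion(link_utilization: float, subscribers: list[Subscriber]) -> None:
    """Lower the access priority of heavy users while the link is congested,
    and restore it when the congestion period ends."""
    congested = link_utilization > CONGESTION_THRESHOLD
    for sub in subscribers:
        heavy = sub.recent_usage_mbps > HEAVY_USER_SHARE * sub.provisioned_mbps
        sub.priority = "best-effort" if (congested and heavy) else "normal"

if __name__ == "__main__":
    subs = [Subscriber("light", 100, 5), Subscriber("heavy", 100, 85)]
    manage_congestion(link_utilization=0.9, subscribers=subs)
    for s in subs:
        print(s.name, s.priority)   # light -> normal, heavy -> best-effort
```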

A similar issue arises, though less severely, when VoIP is used at the same time as video streaming. Video streaming services such as Netflix and YouTube are buffered services that transmit packets in clumps or bursts to ensure that data is always available to the video player running on consumer equipment. Examining these streams with a network analyzer shows that the video service will send a few hundred maximum-size packets in a burst, and then remain silent for a second or so before commencing the next burst. This is a unique traffic signature.

VoIP, on the other hand, sends a short packet (200 bytes or so) every 50 milliseconds for immediate rendering to sound. While the video streaming packets go into a buffer for playback in a few seconds, VoIP packets are processed immediately with no more than 100 milliseconds of delay.
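
Some back-of-the-envelope arithmetic makes the contrast between the two signatures concrete. The packet sizes and intervals below are the approximate figures from the text plus an assumed 300-packet burst for video, not measured values.

```python
# Rough bandwidth arithmetic for the two traffic signatures described above.
# Figures are approximate values from the text, not measurements.

def stream_bandwidth_kbps(packet_bytes: int, packets_per_second: float) -> float:
    """Average offered load in kilobits per second."""
    return packet_bytes * 8 * packets_per_second / 1000

# VoIP: a ~200-byte packet every 50 ms (20 packets per second).
voip_kbps = stream_bandwidth_kbps(200, 1000 / 50)

# Video streaming: bursts of a few hundred maximum-size (1500-byte) packets,
# roughly once per second; assume 300 packets per burst for illustration.
video_kbps = stream_bandwidth_kbps(1500, 300)

print(f"VoIP:  {voip_kbps:.0f} kbps, delay-sensitive, sent at a steady cadence")
print(f"Video: {video_kbps / 1000:.1f} Mbps average, sent in bursts into a playback buffer")
```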

Because video streaming causes delay for VoIP and VoIP is not tolerant of delay by virtue of the design of the human ear and brain, it is reasonable for the ISP to give VoIP packets priority over video streaming packets. In fact, doing so improves the Quality of Experience (QoE) for the VoIP user and has absolutely no effect on the video streamer’s QoE, because it simply decreases the time video packets wait in the playback buffer before they’re rendered.

What constitutes ‘reasonable’ network or traffic management practices?

Hence, taking into consideration the unique properties of these two applications, their traffic signatures, and the interaction of their traffic patterns with each other leads to a “reasonable network management” principle:

  • It is reasonable to adjust the time sequence of packet delivery between two competing streams between the ingress to the ISP network and the ingress to the consumer network when doing so improves the QoE for one application without impairing it for the other.
  • It is also reasonable to make the same adjustment between the egress from the consumer network and the egress of the ISP network under the same conditions.
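
As an illustration of the kind of scheduling this principle permits, here is a minimal sketch in which delay-sensitive VoIP packets are dequeued ahead of buffered video packets whenever both are waiting. The class names and queue structure are illustrative, not a description of any ISP’s actual equipment; the point is that delivery is reordered without anything being dropped or blocked.

```python
# Minimal two-class strict-priority scheduler (illustrative sketch).
# Reordering delivery between the two queues changes when packets leave the
# link, but no packets are dropped or blocked.

from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    app: str      # "voip" or "video"
    seq: int

class PriorityLink:
    def __init__(self) -> None:
        self.realtime = deque()   # delay-sensitive traffic (VoIP)
        self.bulk = deque()       # buffered traffic (video streaming)

    def enqueue(self, pkt: Packet) -> None:
        (self.realtime if pkt.app == "voip" else self.bulk).append(pkt)

    def dequeue(self) -> Optional[Packet]:
        # Serve the real-time queue first; bulk packets wait only while
        # real-time packets are present, and nothing is dropped.
        if self.realtime:
            return self.realtime.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None

if __name__ == "__main__":
    link = PriorityLink()
    for i in range(3):
        link.enqueue(Packet("video", i))
    link.enqueue(Packet("voip", 0))
    # The VoIP packet exits first even though it arrived last.
    while (pkt := link.dequeue()) is not None:
        print(pkt.app, pkt.seq)
```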

Internet users employ a very small number of application classes in terms of traffic signatures; the meaningful set of choices is limited to:

  1. Web browsing;
  2. Real-time gaming;
  3. Real-time communication by audio and/or video;
  4. Video streaming;
  5. Fast file transfers;
  6. Slow file transfers such as patch downloads and remote backups;
  7. Miscellaneous other applications such as Twitter and email with loose performance requirements.

Consequently, we can create QoS standards by measuring a synthetic mix of applications on a SamKnows Whitebox. In fact, SamKnows already does this in a limited way. A QoS guideline can be created by assessing the impact on the delay, loss, and bandwidth usage of the applications in the synthetic mix, and this mix can be weighted by popularity, desirability, or some other criterion.
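
One way such a guideline could be expressed is sketched below: measure delay, loss, and throughput for each synthetic application class against per-class targets, then combine the per-class results with popularity (or other) weights. The target numbers and weights are placeholders chosen for illustration, not proposed regulatory values or the SamKnows methodology.

```python
# Sketch of a weighted, application-class QoS score. Targets and weights are
# placeholders for illustration, not proposed regulatory values.

from dataclasses import dataclass

@dataclass
class ClassTarget:
    max_delay_ms: float
    max_loss_pct: float
    min_mbps: float

@dataclass
class Measurement:
    delay_ms: float
    loss_pct: float
    mbps: float

TARGETS = {
    "voip":      ClassTarget(max_delay_ms=100, max_loss_pct=1.0, min_mbps=0.1),
    "streaming": ClassTarget(max_delay_ms=5000, max_loss_pct=2.0, min_mbps=5.0),
    "web":       ClassTarget(max_delay_ms=500, max_loss_pct=2.0, min_mbps=2.0),
}

WEIGHTS = {"voip": 0.3, "streaming": 0.4, "web": 0.3}   # e.g. weighted by popularity

def class_score(m: Measurement, t: ClassTarget) -> float:
    """1.0 if all three targets are met, partial credit otherwise."""
    checks = [m.delay_ms <= t.max_delay_ms,
              m.loss_pct <= t.max_loss_pct,
              m.mbps >= t.min_mbps]
    return sum(checks) / len(checks)

def qos_index(measurements: dict) -> float:
    return sum(WEIGHTS[app] * class_score(m, TARGETS[app])
               for app, m in measurements.items())

if __name__ == "__main__":
    observed = {
        "voip":      Measurement(delay_ms=80, loss_pct=0.2, mbps=0.1),
        "streaming": Measurement(delay_ms=900, loss_pct=0.5, mbps=12.0),
        "web":       Measurement(delay_ms=350, loss_pct=0.3, mbps=8.0),
    }
    print(f"Weighted QoS index: {qos_index(observed):.2f}")   # 1.00 here
```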

This form of measurement is much more sophisticated than any of the approaches currently used by national regulators.

Defining and Measuring Quality of Service

Each class of application has its own QoS requirement, so the “one size fits all” approach that uses ICMP “pings” to measure packet latency across various points in the Internet doesn’t provide much meaningful information. The user experience is colored by subjective factors:

  • How fast do web pages load?
  • How clear are my VoIP calls?
  • Do I lag behind others in multi-player combat games?
  • Can I watch video programs in high definition without buffering or disconnections?
  • Can I back up my local data while I sleep?
  • Are my patches and virus definitions up to date?

The user experience is difficult to measure objectively because of variations in web page size and complexity, speed differences among end user computers, performance variations between web and video servers, and load within each computer and each home network.

Hence, it’s best to measure between dedicated testing appliances, such as the Whitebox, and Test Servers, using synthetic application streams that represent typical traffic signatures. The specification of these streams is an item for deeper analysis.
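
As a sketch of what one such synthetic stream could look like, the code below sends small packets at a VoIP-like cadence from a testing appliance to a test server and records loss and round-trip delay. It assumes a UDP echo service is running at a hypothetical TEST_SERVER address; the cadence and packet size mirror the VoIP figures above, and none of this represents the actual SamKnows test design.

```python
# Sketch: measure loss and round-trip delay with a VoIP-like synthetic stream.
# Assumes a UDP echo service is listening at TEST_SERVER; that address and the
# packet cadence are illustrative assumptions, not the SamKnows methodology.

import socket
import time

TEST_SERVER = ("test-server.example.net", 7)   # hypothetical UDP echo endpoint
PACKET_BYTES = 200
INTERVAL_S = 0.05       # one packet every 50 ms, as in the VoIP example above
COUNT = 100

def run_probe() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)
    rtts, lost = [], 0
    payload_pad = b"x" * (PACKET_BYTES - 8)

    for seq in range(COUNT):
        sent_at = time.monotonic()
        sock.sendto(seq.to_bytes(8, "big") + payload_pad, TEST_SERVER)
        try:
            data, _ = sock.recvfrom(2048)
            if int.from_bytes(data[:8], "big") == seq:
                rtts.append((time.monotonic() - sent_at) * 1000)
        except socket.timeout:
            lost += 1
        time.sleep(INTERVAL_S)

    if rtts:
        print(f"loss: {lost / COUNT:.1%}, mean RTT: {sum(rtts) / len(rtts):.1f} ms")
    else:
        print("no replies received")

if __name__ == "__main__":
    run_probe()
```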

Interconnection Deals Between ISPs and Edge Services

Policy makers are more focused on providing incentives for excellent services than on ensuring that services exceed some baseline; it’s more important to achieve excellence than to avoid failure.

The shortcoming of synthetic testing between Whiteboxes and Test Servers is its failure to assess real-world experiences. Surfing the web, playing particular games, or streaming from particular services varies in QoE because interconnection agreements between ISPs and Edge Services play such a strong role in performance. Perceived performance also depends on server performance as well as network performance.

This shortcoming may be partially remedied by placing Test Servers inside Edge Service networks, but this approach suffers from accountability problems: is the path fast because someone is paying a fee, or is it slow because the Edge Service does not have enough compute power, transit bandwidth, or port capacity?

As the disputes between Netflix and large ISPs in the U.S. illustrate, both parties to such disputes tend to blame the other. One way to illuminate these problems, if not resolve them, is to revert to the ping method, using tools such as the Archipelago measurement infrastructure employed by CAIDA to measure Internet performance worldwide.
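
Where ICMP access is unavailable, a rough stand-in for ping-style measurement can be sketched by timing TCP connections to a few endpoints, as below. The host list is illustrative, and TCP connect time is only a proxy for round-trip time, not a substitute for a purpose-built platform like CAIDA’s Archipelago.

```python
# Rough stand-in for ping-style path measurement: time TCP connections to a
# few endpoints. Hosts are illustrative; connect time is only a proxy for
# ICMP round-trip time, not a replacement for tools like CAIDA's Archipelago.

import socket
import time

PROBE_TARGETS = [("www.example.com", 443), ("www.example.org", 443)]  # illustrative

def connect_time_ms(host: str, port: int, timeout: float = 2.0) -> float:
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

if __name__ == "__main__":
    for host, port in PROBE_TARGETS:
        try:
            print(f"{host}:{port}  {connect_time_ms(host, port):.1f} ms")
        except OSError as exc:
            print(f"{host}:{port}  unreachable ({exc})")
```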

Conclusions

A minimum QoS level is difficult, if not impossible, to define in a way that is meaningful for all applications and all networks; it’s like measuring dining in terms of calories and price instead of taste, nutrition, and value. We can easily set standards for minimum QoS in terms of latency, packet loss, and bandwidth, but doing so doesn’t accomplish much. Similarly, we can define a usability standard for applications based on elapsed time to load web pages or video quality, but this is a negative threshold rather than a standard of excellence.

Because we experience the Internet through applications, it’s better to focus on the performance of individual applications, types and mixtures of popular applications, and user control over Internet performance.

The net result of effective competition is the ability of users to obtain the services they want at a price they’re willing to pay. Each ISP need not be all things to all people if it can carve out a sustainable niche where it can excel. Consequently, it’s most productive to allow diversity among ISPs while expecting transparency and accountability from them. This can be accomplished with an application-centered system of performance measurement and reporting.

While the U.S. retreats to 20th-century telephone regulations, other nations pursue excellence by recognizing application needs and taking the necessary technical and regulatory steps to ensure these needs are met for the widest possible pool of new applications.