Measuring Internet Performance
In my last big article, I explored the technology that makes the Internet different from the telephone network: packet-switching. This time I want to explore one of the major implications of packet-switching: statistical behavior. The short version of this piece is that the telephone network oriented a generation of regulators and policy geeks toward certain expectations about network behavior that are no longer valid, and the tension between the old view of networks and the new view is the source of a lot of conflict.
Measuring the performance of non-deterministic, packet-switched networks such as those comprising the Internet is a much more difficult challenge than is generally appreciated, but it’s necessary – or viewed as necessary – for a host of consumer-oriented policies. As currently operated, the Internet provides no performance guarantees, relying on a “best-effort” system of packet transfer across facilities shared by a large number of users – some 500 million systems are attached to the Internet presently – operating under wildly different loading scenarios. Many advocates argue that this “best-effort” system represents an ideal state of affairs, and are offended by the notion that network operators might supplement basic service with a more deterministic, for-fee system with bounded performance guarantees. Bob Frankston, for example, believes that the Internet represents a “paradigm shift” in network construction because it radically separates transport from applications. How radical this shift is from a historical perspective is debatable, as the wheel pretty much accomplished the same thing. But I digress.
At a high level, the Internet as a whole exhibits marked variations in performance on every temporal scale – diurnal, weekly, and seasonal – and the systems that provide World-Wide Web and streaming audio/video services are subject to radical fluctuations in load on sub-second intervals. It’s a non-deterministic system by design, providing at best an “equal” opportunity to contend for bandwidth, but not a guarantee of equal outcomes. The networks that comprise the Internet may provide such guarantees – nothing about the design of the Internet precludes this – but you can’t assume they will. And some of our recent policy debates have made a goal of outlawing deterministic practices.
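To make the statistical point concrete, here is a minimal sketch – plain Python, not any official measurement methodology – of what even the simplest measurement looks like on a best-effort network: timing the same TCP handshake thirty times yields a distribution, not a single number. The host, port, sample count, and pacing below are arbitrary placeholders.

```python
# A minimal sketch: repeatedly time TCP connection setup to a host and
# summarize the spread. The target, sample count, and pacing are arbitrary.
import socket
import statistics
import time

HOST, PORT = "example.com", 443   # hypothetical measurement target
SAMPLES = 30

def connect_time_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Time a TCP handshake to (host, port) in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

samples = []
for _ in range(SAMPLES):
    try:
        samples.append(connect_time_ms(HOST, PORT))
    except OSError:
        pass                      # timeouts and failures are data too
    time.sleep(1.0)               # space the samples out in time

if len(samples) >= 2:
    p95 = statistics.quantiles(samples, n=20)[-1]   # rough 95th percentile
    print(f"n={len(samples)}  min={min(samples):.1f} ms  "
          f"median={statistics.median(samples):.1f} ms  p95={p95:.1f} ms")
```

Run at different times of day, the same script will generally report different numbers – which is exactly the point.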
Mobile broadband networks add a further layer of non-deterministic behavior to the Internet’s intrinsically statistical nature. Mobile radio networks are inherently non-deterministic for three reasons:
- They share radio resources such as frequencies and digital codes among users;
- The communications medium they employ – air – is more susceptible to noise, interference, and signal fade than is wire; and
- Users roam about the network in ways that the network operator must attempt to predict but can’t control.
Consequently, variations in transient conditions are more radical on mobile networks than on wireline networks. Internet service over mobile radio networks is therefore multiply non-deterministic, so the basic premise of network performance measurement – that past performance predicts future results – is questionable. Internet service over mobile broadband is highly variable with respect to location, so measurements taken on one part of a network will not even predict performance on other parts of the same network at the very same time. So in addition to being frequent, mobile performance sampling needs to be fair and geographically representative.
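And here is a rough sketch, again in Python and again with made-up names and data, of what “geographically representative” implies on the analysis side: each sample carries a coarse location tag and a timestamp, and results are summarized per location and per hour rather than being rolled up into a single network-wide figure.

```python
# A minimal sketch of the aggregation side, assuming each measurement has
# already been tagged with a coarse location and a timestamp. Grouping by
# (location, hour) makes it obvious when one part of a network behaves very
# differently from another at the same moment. All names and data are invented.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class Sample:
    location: str        # coarse geographic tag, e.g. a cell sector or city
    taken_at: datetime   # when the measurement was made
    rtt_ms: float        # measured round-trip time in milliseconds

def summarize(samples: list[Sample]) -> dict[tuple[str, int], float]:
    """Median RTT per (location, hour-of-day) bucket."""
    buckets: dict[tuple[str, int], list[float]] = defaultdict(list)
    for s in samples:
        buckets[(s.location, s.taken_at.hour)].append(s.rtt_ms)
    return {key: median(vals) for key, vals in buckets.items()}

# Hypothetical data: the same network, the same hour, two locations.
data = [
    Sample("downtown", datetime(2012, 5, 1, 18, 0), 45.0),
    Sample("downtown", datetime(2012, 5, 1, 18, 5), 180.0),
    Sample("suburb",   datetime(2012, 5, 1, 18, 2), 38.0),
    Sample("suburb",   datetime(2012, 5, 1, 18, 7), 41.0),
]
for (loc, hour), med in sorted(summarize(data).items()):
    print(f"{loc:>8}  {hour:02d}:00  median RTT {med:.0f} ms")
```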
So how do we measure the performance of the Internet, mobile or wireline, in such a way as to provide users with meaningful insight?
We’ll follow up with an answer – or more than one – next time.