Surveying Broadband Quality
One of the annual rituals at the FCC is the Broadband Progress Report, a.k.a. the 706 report, a statutory obligation Congress imposed on the agency in the 1996 Telecom Act. We’re in the midst of the 706 process now that the FCC has released a Notice of Inquiry with a comment deadline of Sept. 6th.
What the 706 Report Requires
The statute’s description of the inquiry is terse:
(b) Inquiry: The Commission shall, within 30 months after the date of enactment of this Act, and regularly thereafter, initiate a notice of inquiry concerning the availability of advanced telecommunications capability to all Americans (including, in particular, elementary and secondary schools and classrooms) and shall complete the inquiry within 180 days after its initiation. In the inquiry, the Commission shall determine whether advanced telecommunications capability is being deployed to all Americans in a reasonable and timely fashion. If the Commission’s determination is negative, it shall take immediate action to accelerate deployment of such capability by removing barriers to infrastructure investment and by promoting competition in the telecommunications market.
The FCC has delivered 11 of these reports, and each one is very different from the others in form and content. While you would think that the FCC would have a template to follow, each report reads as if it were the first: old terms are redefined, new terms are introduced, measurements are proposed, and conclusions are drawn on the basis of a record that’s always shifting in terms of detail, reliability, and coherence. This year’s Twelfth Broadband Progress Notice of Inquiry is the most scattershot and confused edition I’ve seen.
The remit consists of two parts: First, Congress wants to know how well the broadband providers are doing in the never-ending quest to provide citizens with world-class communication networks; and second, it wants the FCC to “remove barriers to infrastructure investment” and to “promote competition” if it finds we’re lagging behind.
Defining the Terms
In a rational universe, the first 706 report would have defined the terms “advanced telecommunications capability” and “reasonable and timely deployment to all Americans” in a way that wouldn’t require constant maintenance. It might have done this by defining the capability in terms of the most commonly used network applications in the world. While the list of these applications would surely change from year to year, the methodology for identifying them wouldn’t. It would include the applications that get the most play as well as new ones that appear to be gaining eyeballs.
That defines “advanced capability”, so it’s step one. Step two is to identify the network characteristics required to make these applications work, and this is strictly empirical. If the FCC finds, for example, that today’s applications include Netflix, the web, Skype, and Pokémon GO, it would simply identify the requirements of these applications in measurable network performance terms such as bandwidth, latency, and error rate. (The current inquiry starts to do this, but crashes on the requirements for Netflix because it doesn’t know how many simultaneous streams to add up. The answer to that is the household average, for reasons that will become clear as we go along.) So we would be looking for networks that can meet the needs of the leading applications in all of these dimensions.
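To make the exercise concrete, here’s a minimal sketch in Python of the kind of check I have in mind. The application list and the requirement figures are illustrative assumptions, not measured values; the real numbers would come out of the survey itself.

APP_REQUIREMENTS = {
    # app name: (downstream Mbps, latency ceiling in ms, tolerable packet loss rate)
    # All figures below are placeholders for illustration only.
    "hd_video_stream": (5.0, 1000, 0.01),
    "web_browsing":    (2.0,  300, 0.02),
    "video_call":      (1.5,  150, 0.01),
    "online_game":     (1.0,  100, 0.005),
}

def evaluate_network(down_mbps, latency_ms, loss_rate, apps=APP_REQUIREMENTS):
    """Return which leading applications this network can and cannot support."""
    supported, unsupported = [], []
    for name, (need_mbps, max_latency, max_loss) in apps.items():
        ok = (down_mbps >= need_mbps
              and latency_ms <= max_latency
              and loss_rate <= max_loss)
        (supported if ok else unsupported).append(name)
    return supported, unsupported

# Example: a 10 Mbps connection with 40 ms latency and 0.1% packet loss
print(evaluate_network(10.0, 40, 0.001))

The point of the sketch is that the benchmark falls out of the application list rather than the other way around: swap in next year’s leading applications and their measured needs, and the same check still works.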
That brings us to “reasonable and timely.” If the networks we have today meet the needs of the applications we have today, we’re just about done, but there’s a catch. We can’t run applications that our networks can’t support, so we have to give extra credit to emerging applications that work on a few networks but not on others, as long as they’re either widely used or rapidly growing where they can be run. And that means two things: we want the best networks in the US to be capable of running the world’s best applications, and we also want rural Americans to have access to the applications running on America’s best networks, wherever they may be.
Where the FCC Goes Wrong
Generally, Democratic FCCs have gone wrong by defining advanced networks in terms of arbitrary speeds needed by future applications that may or may not ever exist. For some time that meant 4 Mbps down and 1 Mbps up, but last year Chairman Wheeler multiplied the download speed by 6.25 and the upload speed by 3, yielding a new benchmark of 25/3. To the extent that the FCC has ever attempted to justify this figure, it points to a recommendation by Netflix for 25 Mbps connections. Although Netflix also says HD video will stream quite well at 5 Mbps, it gets to 25 by “arguing that the benchmark should accommodate future demand, including emergence of new content and services” (footnote 165).
While I certainly believe that future networks should serve future applications, I fail to see how the FCC, or Netflix, or anyone else can say what those applications are and what kind of performance they need. If the FCC doesn’t know either and has simply chosen to multiply current needs by five in order to feel safe, it should say so. Netflix’s oft-cited (by Chairman Wheeler, typically without attribution) line that “a 25 Mbps connection is fast becoming ‘table stakes’ in 21st century communications” (footnote 205) is nothing more than one party’s opinion, and a rather self-serving one at that.
It comes down to the claims that 4K TV is rapidly becoming the standard, which it isn’t; and that 4K needs 25 Mbps, which it doesn’t because of compression. In reality, the difference between Netflix 4K and HD videos is barely perceptible. The more likely reason Netflix recommends a 25 Mbps connection for a 4 Mbps stream is that high capacity enables its servers to catch up when they fall behind in refilling the user’s video buffer, thus saving the company money on video servers. Users perceive this as a margin of safety without realizing who’s to blame when their picture freezes. While this happens because of network congestion in some instances, more often than not it’s server overload. So the FCC allowed itself to be snookered by one of its pet companies.
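A back-of-the-envelope calculation shows why that headroom matters more to Netflix’s servers than to the viewer’s picture. The stream rate and buffer deficit below are assumptions of mine, not Netflix’s actual figures; the arithmetic is the point: spare capacity above the stream rate determines how quickly a lagging server can refill the buffer.

STREAM_MBPS = 5.0        # assumed playback rate of the video stream
DEFICIT_SECONDS = 30.0   # assumed seconds of video the server has fallen behind

def catch_up_seconds(link_mbps, stream_mbps=STREAM_MBPS, deficit_s=DEFICIT_SECONDS):
    """Time to refill the buffer deficit while playback keeps draining it."""
    spare = link_mbps - stream_mbps   # capacity left over after sustaining playback
    if spare <= 0:
        return float("inf")           # the buffer never recovers
    return deficit_s * stream_mbps / spare

for link in (6, 10, 25):
    print(f"{link:>2} Mbps link: {catch_up_seconds(link):6.1f} s to recover 30 s of video")

Under these assumptions a 25 Mbps link recovers a 30-second deficit in about 7.5 seconds, while a 6 Mbps link takes 150 seconds: the benefit accrues to the server that fell behind, not to the quality of the stream itself.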
How to Get it Right
But the video server requirement also depends on the number of concurrent streams users want. It seems to me that this is an empirical question that can best be resolved by surveying usage patterns. The benchmark speed should be sufficient to handle the typical number of streams to the typical household, with a modest safety factor, best set at one additional stream. This isn’t complicated: if the typical family streams two programs at 5 Mbps each this year, then it follows that the typical broadband connection needs to support three streams this year. If consumption goes up next year, then next year’s benchmark speed should go up too.
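The rule is simple enough to write down. The inputs here (two concurrent streams at 5 Mbps) are hypothetical survey results standing in for real data, and the one-stream margin is my proposal, not anything the FCC has adopted.

def benchmark_mbps(typical_streams, per_stream_mbps, safety_streams=1):
    """Benchmark speed = (typical concurrent streams + safety margin) * per-stream rate."""
    return (typical_streams + safety_streams) * per_stream_mbps

# If the typical household streams two 5 Mbps programs at once this year:
print(benchmark_mbps(2, 5.0))   # -> 15.0 Mbps benchmark for this year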
We can go through a similar exercise for the other top 5 or top 10 applications, bearing in mind the dimensions of latency and packet loss. But we can’t do that until we have market-by-market measurements of these things, as well as good international data from relevant nations. Without the data, this NOI and the following report are little more than handwaving.
The Research Gap
The FCC’s Broadband Progress Report is a wasteful exercise in reinventing the wheel each year because it lacks a coherent methodology and because it ignores the scholarly research on applications and broadband quality. I’m perhaps painfully aware of the pitfalls in evaluating broadband quality because I’ve written two research papers that address the problem on the basis of open national and international data sets. The first of these, The Whole Picture: Where America’s Broadband Networks Really Stand, compared the US to the OECD nations, and the second, G7 Broadband Dynamics, compared the US to the more directly relevant G7 nations. These papers start with application requirements, map them onto current networks, and conclude with an examination of the network construction and upgrade projects underway in the relevant regions. The FCC would do well to contract the Broadband Progress Report out to a qualified researcher. Others who have done this sort of thing include Christopher Yoo, Roslyn Layton & Michael Horney, Stuart Brotman, and Bruno Lanvin.