OTI’s United States of Broadband Map Is Fake News

The New America Foundation’s Open Technology Institute (OTI) has a history of playing fast and loose with the facts: its discontinued Cost of Connectivity reports systematically created the false impression that US broadband speeds are lower, and prices higher, than they actually are. But the errors those reports made were somewhat excusable because direct comparisons of global speeds and prices are hard to make without significant expertise and an awful lot of work.

OTI’s latest attempt to sell the idea that the US is failing to provide citizens with high-quality broadband takes a different approach than the discredited work of the past. A new report, The United States of Broadband Map, assigns low-quality M-Lab speed test data to census tracts, zip codes, and counties in an attempt to create a similarly misleading impression.

Not only is the speed test data not reproducible, the mapping exercise is inept below the county level: entire zip codes and census tracts are missing, and many of those that are present contain no test data at all. And the data that is reported is wildly inaccurate. The only impression a reasonable person can draw from this exercise is that OTI is no better at mapping than it is at measuring something as basic as broadband speed.

Good Data Tells an Entirely Different Story

We know how broadband speeds stack up across the country and the world because we have the Speedtest Global Index, a monthly snapshot of fixed and mobile connection speeds. We also have annual reports from Speedtest owner Ookla that summarize major findings and we have the FCC’s Measuring Broadband America data drawn from controlled test systems.

The latest Ookla snapshot has the US in 7th place in wired broadband speeds, trailing only South Korea and five nations whose populations wouldn’t even make them major counties in the US. Among the five largest nations, the US is at the top of the heap. Size matters in global broadband because rural areas are harder to serve than the high-rise apartments that house most people in places like Singapore, Hong Kong, and Monaco.

The M-Lab data is notoriously shaky, especially at high speeds, and high speeds are now the norm: most people in the US can subscribe to gigabit broadband, thanks to increasingly common fiber-to-the-home (FTTH) networks, the DOCSIS 3.1 cable broadband standard that scales up to 10 gigabits per second, and nascent 5G rollouts that record speeds up to one and a half gigabits per second. A test that falls apart at high speeds will systematically understate what these networks deliver.

Inept Mapping Can Make Data Lie

The OTI report is shamelessly misleading, comparing the FCC’s coverage data to speed tests to support the bizarre claim that people who subscribe to lower-speed plans are victims of false advertising. The report seeks to refute a claim that no one has ever made: OTI assumes that everyone in a census tract subscribes to the highest tier on offer, but we know the bell curves of consumer choice don’t work that way in any market.

Even worse, the M-Lab dataset is too sparse to support any conclusions at the level of the FCC’s census tract deployment data. I tried to validate the OTI map by looking for data from my office location in Lakewood, Colorado, only to find my zip code and census tract missing entirely from the map.

This is peculiar because I live in the primary zip code for Lakewood, Colorado’s fifth most populous city. Some of Lakewood’s zip codes are present, as are some census tracts, but not mine. OTI claims:

You can compare the datasets at the census tract, county, zip code, State House, and State Senate levels by zooming in and out. You can challenge FCC data, determine trends, and identify problem areas.

But this is blatantly false. The closest I can get is the State House district level, where the average download speed is alleged to be 16.60 Mbps. This is an area served by CenturyLink at speeds from 40 to 80 Mbps and by Comcast at 100 to 1,000 Mbps. Many of the nearby census tracts simply show [No results].

M-Lab Test Results Not Reproducible

In the interest of science, I upgraded my broadband plan from 400 Mbps to 1 Gigabit so I could compare M-Lab to Speedtest under stress. I’ll note that this was utterly painless: one evening I checked out my ISP’s upgrade plans online, and then ordered a faster modem from Amazon that will let me scale up to 2 Gbps should I desire.

When the modem arrived the next day, I installed it, activated it online, and called the ISP to make the upgrade. Within an hour of opening the Amazon box, I had gigabit service with four times as much upstream capacity as before, HD landline service, 20 more TV channels, and a lower monthly bill to boot.

As one does, I immediately ran Speedtest, recording 990 Mbps down and 41.4 up over the in-office 1/10 gigabit Ethernet. I was disappointed to learn that my Wi-Fi network peaks at 621 down and 42 up. I thought I’d built a faster Wi-Fi than that, alas.

M-Lab said my speed was a mere 201.5 down and 35.7 up. Speedtest gave me a believable ping time of 12 ms to a CenturyLink server, outside my ISP’s network, but M-Lab claimed my ping time was an utterly unbelievable 4 ms to one of their Denver servers, also outside my ISP’s network. The ping command on my computer measures the M-Lab server at 10 ms.
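
Anyone can sanity-check a reported ping time without either test platform. Here’s a minimal sketch that estimates round-trip time by timing TCP handshakes; the hostname is a hypothetical stand-in for whichever test server you want to check:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Estimate round-trip time by timing TCP handshakes.

    A TCP connect() costs one round trip, so the minimum over
    several samples approximates the network RTT.
    """
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return min(timings)

# Hypothetical hostname for illustration; substitute a real test server.
print(f"RTT: {tcp_rtt_ms('ndt-server.example.net'):.1f} ms")
```

If a speed test reports a latency well below what a handshake or the ping command measures to the same server, the test’s latency figure deserves no trust.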

Speed Varies with Distance…and Other Factors

Speedtest allows you to choose from dozens of servers, some quite far away. Changing servers produced comparable results in all but one case (download in Mbps, followed by ping time):

Comcast in Denver: 939, 9 ms
Vivint in Salt Lake City: 939, 19 ms
Vistabeam in Gering, NE: 918, 17 ms
Spectrum in Grand Junction CO (other side of the Rockies): 940, 17 ms
Suddenlink in Amarillo, TX: 840, 49 ms
UTOPIA in Salt Lake City: 933, 23 ms
and the outlier, United Telephone Association in Dodge City, KS: 62, 44 ms (a day later its speed was 261).

M-Lab chooses a server for you, but not always the same one. Its numbers vary an awful lot; after the initial test I got these from the Measurement Lab web site (same format):

Run 2: 233, 4 ms
Run 3: 653, 4 ms
Run 4: 339, 4 ms

I also ran M-Lab tests from its NDT web site and got these:

Run 5: 176, 12 ms
Run 6: 434, 16 ms
Run 7: 277, 14 ms
Run 8: 696, 20 ms

The ping times show how far away the server is. In this latter test, the server farthest away (20 ms) records the highest speed, 696 Mbps. When all other factors are equal, a nearby server should be faster than a distant one, because a single TCP connection’s throughput falls as round-trip time rises. When the farthest server is the fastest, that’s an indication that the nearby one is overloaded.
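
The physics behind that statement is simple: a single TCP connection can move at most one window of unacknowledged data per round trip, so its throughput is capped at window size divided by RTT. A quick sketch of the arithmetic, assuming a hypothetical 8 MB receive window (the post doesn’t report actual window sizes):

```python
def tcp_throughput_cap_mbps(window_bytes: int, rtt_ms: float) -> float:
    """A single TCP connection carries at most one window per round
    trip, so its throughput is capped at window / RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# With an assumed 8 MB window, the cap falls as the ping time grows:
for rtt_ms in (9, 17, 23, 49):
    cap = tcp_throughput_cap_mbps(8 * 2**20, rtt_ms)
    print(f"{rtt_ms:>2} ms RTT -> {cap:,.0f} Mbps cap")
```

The cap only binds when nothing else is the bottleneck, which is exactly why a distant server beating a nearby one points to an overloaded nearby server.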

Test results should be consistent when performed one right after another, but we don’t see that. We can expect variation of as much as 5-10% between tests, but when your numbers vary by 400%, your test data is so dubious that you can make it confess to anything.
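
That 400% figure can be checked against the runs reported above. A short sketch using Python’s statistics module; the numbers are the download results listed in this post:

```python
from statistics import mean, stdev

# Speedtest downloads (Mbps) to six different servers, excluding
# the Dodge City outlier; M-Lab downloads across eight runs.
speedtest = [939, 939, 918, 940, 840, 933]
mlab = [201.5, 233, 653, 339, 176, 434, 277, 696]

for name, runs in (("Speedtest", speedtest), ("M-Lab", mlab)):
    spread = (max(runs) - min(runs)) / min(runs) * 100
    cv = stdev(runs) / mean(runs) * 100
    print(f"{name}: max/min spread {spread:.0f}%, "
          f"coefficient of variation {cv:.0f}%")
```

Speedtest’s worst-to-best spread across six servers is about 12%; M-Lab’s fastest run is roughly four times its slowest.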

Where M-Lab Goes Wrong

M-Lab is aware that its tests produce results that can’t be duplicated in any other test methodology in the world. As we explained in our last piece on M-Lab, there are four reasons for the difference between its tests and those of Speedtest by Ookla:

  1. M-Lab’s NDT can’t fill the pipe because it uses a single TCP connection instead of the multiple connections used by browsers and Speedtest (see the sketch after this list).
  2. M-Lab test servers are underpowered.
  3. M-Lab’s geolocation method is blind to network topology and therefore produces inefficient, dogleg paths.
  4. NDT uses a system known as “bottleneck bandwidth and round-trip propagation time” (BBR) to evaluate network capacity instead of probing for maximum capacity. Hence, M-Lab measures a prediction about a network rather than the network itself.
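
To see why point 1 matters, consider how browsers and Speedtest behave: they open several connections so that no single TCP stream’s window growth or loss recovery limits the total. A rough sketch of the technique; the URL is hypothetical, so point it at any large test file you control:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical test file; any large HTTP-served file will do.
URL = "https://speed.example.com/100MB.bin"

def fetch(url: str) -> int:
    """Download the whole body and return the byte count."""
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

def measure_mbps(streams: int) -> float:
    """Aggregate download rate across N parallel connections."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=streams) as pool:
        total_bytes = sum(pool.map(fetch, [URL] * streams))
    return total_bytes * 8 / (time.perf_counter() - start) / 1e6

for n in (1, 4, 8):
    print(f"{n} connection(s): {measure_mbps(n):.0f} Mbps")
```

On a fast pipe, the multi-connection runs typically land far closer to the provisioned rate than the single connection does; that gap is the difference between Speedtest’s methodology and NDT’s.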

And there’s one more issue with M-Lab: its latency measurements are ridiculous. It takes 8-9 ms to get from my office to a Comcast server on the same network, so any test against a more distant, off-net server must show a latency of at least 9 ms to be credible.

Speedtest over Ethernet is faster than it is over Wi-Fi because Wi-Fi is a bottleneck (in my case, not necessarily for everyone) that prevents my computer from reaching wire speed. The difference between Wi-Fi and Ethernet is consistent because this bottleneck is a feature of the 2×2 MIMO Wi-Fi hardware I’m using.
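
That 621 Mbps ceiling is about what 2×2 hardware should deliver. A back-of-envelope sketch, assuming an 802.11ac link at 80 MHz and MCS 9 (the post doesn’t name the radio, so those parameters are assumptions):

```python
# Back-of-envelope 802.11ac PHY rate:
#   subcarriers * bits/symbol * coding rate * streams / symbol time.
# Assumes an 80 MHz channel at MCS 9 (256-QAM, rate 5/6) with a
# short guard interval; the post doesn't specify the radio.
data_subcarriers = 234      # 80 MHz VHT channel
bits_per_symbol = 8         # 256-QAM
coding_rate = 5 / 6
spatial_streams = 2         # 2x2 MIMO
symbol_time_us = 3.6        # OFDM symbol with short guard interval

phy_mbps = (data_subcarriers * bits_per_symbol * coding_rate
            * spatial_streams / symbol_time_us)
print(f"PHY rate: {phy_mbps:.0f} Mbps")                       # ~867 Mbps
print(f"At ~70% MAC efficiency: {phy_mbps * 0.7:.0f} Mbps")   # ~607 Mbps
```

After protocol overhead, roughly 600 Mbps of usable throughput is the expected ceiling for such a link, which lines up with the 621 Mbps measurement.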

Analyzing the Test Data

While my Wi-Fi signal only has to travel 12 feet over the air, the number of MIMO antennas in use limits the Wi-Fi network’s capacity at any distance. The fact that Speedtest consistently tops 900 Mbps to remote servers while M-Lab never even reaches 700 tells me the M-Lab servers are underpowered for measuring gigabit networks.

The variation in measured speed by M-Lab suggests that its BBR algorithm over-reacts to congestion by reducing its speed well below network capacity. BBR was designed to do this so that enterprise users can share constrained broadband pipes with each other politely, but it’s inappropriate to use this algorithm in a data collection system intended to guide public policy.

M-Lab claims the average speed in my legislative district is less than 17 Mbps, but this is impossible. CenturyLink’s lowest advertised speed is 40 Mbps in this district and Comcast’s is 100 Mbps. Comcast’s share of this market is probably 75% or more, so the honest average would be at least 85 Mbps even if everyone subscribed to the lowest speed tier.
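
For anyone who wants to check the arithmetic, that 85 Mbps figure is a market-share-weighted average of the lowest advertised tiers, using the post’s own 75/25 share estimate:

```python
# Worst-case district average: everyone on the lowest tier, weighted
# by market share. The 75/25 split is the post's estimate.
tiers = {"Comcast": (0.75, 100), "CenturyLink": (0.25, 40)}  # (share, Mbps)
floor_mbps = sum(share * mbps for share, mbps in tiers.values())
print(f"Floor: {floor_mbps:.0f} Mbps")  # 0.75*100 + 0.25*40 = 85
```

That is an honest district average five times higher than M-Lab’s 17 Mbps figure, before counting anyone on a faster tier.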

OTI’s Principal False Claims

This OTI report contains more falsehoods than any previous OTI report, and probably more than any policy report I’ve read in the 15 years I’ve been concerned with broadband policy. Here are the big whoppers:

  1. The underlying assumption that FCC 477 data should be consistent with average download speeds is ridiculous. Every ISP offers multiple speed tiers, but 477 only asks for the highest.
  2. OTI claims that the FCC makes no effort to verify ISP advertising claims of speeds “up to” a certain level against measured speeds (“However, the FCC does not collect any data about…whether consumers actually receive those advertised speeds.”) But the FCC has been running its Measuring Broadband America program since 2010 to do just that. [Note: I introduced then-Chairman Genachowski when he presented the first report from this program at a Best Buy store in DC in 2011.] M-Lab and New America have participated in this program from the beginning, so they obviously know it takes place.
  3. The M-Lab system contains a number of peculiar design choices that add up to an inaccurate system that produces results that vary by as much as 400%. Contrary to OTI’s claims that M-Lab is more accurate than Speedtest because M-Lab servers are off-net while Speedtest’s are on-net, M-Lab servers generate wildly different measurements than off-net Speedtest servers.
  4. OTI’s claim that its map produces actionable data at the “census tract, county, zip code, State House, and State Senate levels” is blatantly false. Its census tract and zip code dataset is incomplete and its other datasets are wildly inaccurate.
  5. The M-Lab program places servers in off-net locations that don’t correspond to any web destination that any user would ever visit. It’s reasonable to test on-net because congestion can often be detected by on-net testing. It’s also reasonable to test end-to-end broadband speeds between consumer premises and peering centers, exchanges, and CDNs where Internet content is actually stored. But it makes no sense to run a test between a consumer’s location and a server located in a place where no actual content can be found, as M-Lab does. The test servers it uses to measure my connection are in Denver, a city that has no peering centers and no public Internet exchanges. What exactly is the point of this if not to mislead?

How We Got Here

Google announced the project that came to be called M-Lab at a policy conference in San Jose on May 12, 2008. I remember Rick Whitt’s announcement vividly because I was one of the other speakers on his panel.

At the time, Google was trying to buy Yahoo’s search business, a transaction that raised a lot of eyebrows. A couple of weeks later, I wrote an op-ed for the San Francisco Chronicle on the issues this transaction raised and the tactics Google was using to make it seem palatable.

In short, Google sought to protect itself from regulatory scrutiny by pretending to be one of the Internet’s good guys. It wanted to encourage regulators to focus on the presumably more dangerous ISPs by creating the false narrative that broadband sucks in the US.

With regulators focused on ISPs, edge providers with dubious business models would be able to loot private data and buy potential competitors with abandon. This strategy took us to where we are today, with calls for Big Tech breakups and record fines from the FTC.

A Meme that Serves Many Interests

The DoJ blocked Google’s acquisition attempt, but the “inferior broadband” meme stuck around. Alternative facts such as this one serve a variety of policy agendas.

It’s no secret that a number of policy interests would like to see the government take a larger role in broadband networks, something that’s often compared to rural electrification and the Interstate Highway System. Susan Crawford’s Fiber: The Coming Tech Revolution―and Why America Might Miss It is the latest incarnation of this idea in policy fiction.

The Commerce Department’s desire to take over America’s 5G network design and construction is an updated version of Crawford’s idea. Both Crawford and the Commerce Department’s Earl Comstock employ alternative facts to support their desired policy.

Grounding Policy in Reality

These views are driven by subjective preferences, bad economics, ignorance of network engineering, animosity toward incumbents, and a naive idealism about government’s ability to design perfect markets for imperfect goods and services.

The plain fact is that the US would not be the world leader in Internet-based services if all the networks within our borders were substandard and inferior. The domestic network represents the capabilities that our providers of Internet-based services assume.

There would be no Amazon, Google, Facebook, Netflix, Cloudflare, Airbnb, Uber, Twitter, Waze, Instagram, Slack, Snap, or Travelocity if our networks were not up to the task of supporting them. We wouldn’t dominate the markets for personal computer and mobile operating systems if we didn’t have up-to-snuff networks.

Rural Broadband Needs Work

Of course, there are situations where our networks can and should be much better. The US has a unique problem among developed nations when it comes to serving rural populations. We have more rural people living farther apart than most large industrialized nations do.

Addressing this problem requires different policies in the countryside than we have in the cities, for broadband no less than for electricity, water, wastewater, education, and housing. The rural development problem necessarily involves the transfer of tax money from the cities to the countryside.

But we need good data that’s targeted at the problems we intend to address, not made-up policy fictions that appear designed to force more change in the cities than in the small towns and hamlets. This OTI report and others like it, such as Broadband Now’s The State of Broadband in America (another bizarre mapping effort), aren’t helping.

Solutions That Work

Speedtest by Ookla and the FCC’s Measuring Broadband America project give us quality-controlled platforms that can supply all the reliable data we need on broadband performance. The 477 report obviously fails in large, sparsely populated census tracts, but it’s unclear that its shortcomings are all that significant.

We’ve made great progress by reallocating much of the rural broadband budget to reverse auctions that award subsidy dollars to players willing to compete for support and customers. We have advanced fixed wireless technologies that can deliver high-speed, low-latency services at a fraction of the cost of rewiring. And we’re developing LEO satellites that hold a great deal of promise.

US broadband is nothing to sneer at, as all of us who have taken the time to study it in depth are happy to say. Alternative fact reports targeted at naive journalists have the potential to do serious harm, so I would encourage anyone who finds OTI’s United States of Broadband remotely credible to dig a little deeper before firing off clickbait headlines. You might be a victim of fake news.