Reacting to the Broadband Speed Data
Over on the Innovation Policy Blog I take a look at the FCC’s broadband speeds report and some of the reactions to it:
The results are surprising to some because they contradict a widely circulated myth that America’s residential broadband users were not getting what they paid for. The FCC’s previous study, based on comScore data, claimed that Americans received only half the peak download speeds they expected, a story that fit the desired narrative of some public interest professionals perfectly. The old report was flawed on several grounds – too few measurement servers, for one – but mainly because it didn’t know which service tiers the measured users were actually on and tried to guess them from the observed speeds:
The trade-off made in applying this methodology is that subscribed speed tiers are inferred from observed speeds, rather than known directly (from, say, subscribers’ bills). For example, some machines in the data were tested more than 100 times: if any one speed read was more than 10% above the actual subscribed tier, the machine would be wrongly identified as subscribing to a higher speed tier. Alternately, if the maximum measured speed was substantially lower than the actual subscribed tier, that machine could be wrongly identified as subscribing to a lower speed tier. Both could bias the advertised tier upward or downward.
It’s fairly obvious that you can’t estimate advertised speed from observed speed without bias, and this method penalized ISPs whose actual performance exceeded the advertised “up to” rates; the new study found that four of America’s largest ISPs (Verizon, Comcast, Time Warner Cable, and Cox) are in this group, giving users more than they paid for. The methodology was used because it was expedient: the National Broadband Plan needed the data on a short timeline and couldn’t worry too much about accuracy.
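The bias can be seen in a small simulation. This is only an illustrative sketch – the tier values and the peak-speed inference rule are assumptions for demonstration, not the actual methodology of either FCC study – but it shows how a user on a 10 Mbps tier whose ISP over-delivers gets misfiled into a higher tier, and then appears to under-deliver:

```python
import random

random.seed(0)

# Hypothetical service tiers in Mbps -- illustrative values, not the FCC's.
TIERS = [5, 10, 12, 20]

def infer_tier(measurements):
    """Guess the subscribed tier from observed speeds alone: assume the
    user is on the highest advertised tier the peak measurement reaches."""
    peak = max(measurements)
    eligible = [t for t in TIERS if t <= peak]
    return eligible[-1] if eligible else TIERS[0]

# A user on the 10 Mbps tier whose ISP over-delivers: throughput runs
# 90%-130% of the advertised rate across 100 test runs.
subscribed = 10
samples = [subscribed * random.uniform(0.9, 1.3) for _ in range(100)]
mean_speed = sum(samples) / len(samples)

inferred = infer_tier(samples)

# Against the true tier the ISP over-delivers; against the inferred
# (higher) tier the same ISP appears to under-deliver.
print(f"inferred tier: {inferred} Mbps")
print(f"delivered vs. subscribed: {mean_speed / subscribed:.0%}")
print(f"delivered vs. inferred:   {mean_speed / inferred:.0%}")
```

A single lucky measurement above 12 Mbps is enough to bump the inferred tier from 10 to 12, so an ISP delivering roughly 110% of what the customer bought is scored as delivering about 90% of what the method thinks the customer bought.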
Check it out.
[Cross-posted at High Tech Forum]