Tom Wheeler’s Tangled Web Recycles an Old Story

Former FCC Chairman Tom Wheeler shared his vision of a new web in an article on the Brookings Institution web site this week. Wheeler says the Internet is set to transcend the status quo, which he calls a “find-and-display protocol that organize[s] the internet’s morass of incompatible databases”. In Wheeler’s new Internet, we’ll see “the orchestration of raw intelligence to produce something new.”

Wheeler’s new Internet is essentially George Gilder’s Telecosm, a system in which bandwidth is unlimited, there is no network switching, and every piece of information anyone places on the network goes everywhere, privacy be damned. Gilder’s vision proved naïve and unworkable – switching isn’t going away and bandwidth is finite – so Wheeler uses the term “Web 3.0” to lose the baggage.

Incidentally, “find-and-display protocol” is a Wheeler neologism.

Wheeler’s Web vs. Reality

Wheeler seems to accept the misconception that IoT devices will flood the Internet with sensitive information. Something like this happens when devices are hacked, but it’s not normal.

Like the traditional web, smart devices only share information with appropriate partners. While we can instruct a temperature sensor to report exceptionally high or low readings to an authorized monitor, it's not the norm for my sensor to report to your controller.
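
To make the point concrete, here's a minimal sketch of that reporting pattern. The class names, thresholds, and the "greenhouse-1" identifier are all hypothetical; the point is simply that the sensor holds a reference to one authorized monitor and speaks only when a reading crosses its configured limits.

```python
# Minimal sketch of threshold-based IoT reporting (hypothetical names, not a
# real product's API): a sensor reports only to its one authorized monitor,
# and only when a reading crosses its configured limits.

class Monitor:
    def receive(self, sensor_id: str, reading: float) -> None:
        print(f"Alert from {sensor_id}: {reading:.1f} degrees")

class TemperatureSensor:
    def __init__(self, sensor_id: str, monitor: Monitor,
                 low: float = 10.0, high: float = 30.0):
        self.sensor_id = sensor_id
        self.monitor = monitor          # the only party this sensor reports to
        self.low, self.high = low, high

    def sample(self, reading: float) -> None:
        # Normal readings stay local; nothing is flooded onto the network.
        if reading < self.low or reading > self.high:
            self.monitor.receive(self.sensor_id, reading)

sensor = TemperatureSensor("greenhouse-1", Monitor())
sensor.sample(22.0)   # in range: no traffic generated
sensor.sample(41.5)   # out of range: reported to the authorized monitor only
```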

So the IoT doesn’t imply “flooding the network with a tsunami of data” from everything to everywhere. But it does mean that more devices will be connected and more data will be transported.

What Wheeler Gets Right

While Wheeler doesn’t express himself particularly well, his vision isn’t completely wrong. The new Internet isn’t simply a bunch of gizmos, it’s services that arise to make sense of the information gizmos can generate. Many of the new devices will be “driverless” in the sense that they’re not going to be directed by humans in real time.

And we should expect that more data will traverse the Internet year-by-year, as it always has. Sensors create new information by their nature: any time the condition of a sensor changes, information is created. If some of this information is reported to artificial intelligences, old services become richer and new ones are enabled.

The success of an internet that includes real-time information and AIs (as well as consumers viewing content) depends on networks delivering data in a timely manner with an acceptable level of reliability and cost. Wheeler attributes this to conformance with established conventions of openness, and that hope falls short.

Progress Is Only as Fast as the Slowest Parts

The Gilder model of networking abolishes norms of network architecture by assuming something that’s unlikely to exist: networks of infinite capacity. He believed that network speed increases at three times the rate of semiconductor speed because it did for a certain period in the late ’90s.

This was because optical networks were in their infancy in those days, while microprocessors were fairly mature. Ultimately, network technology can’t outpace semiconductors in general because optical networks are made out of semiconductors and light.

Over the short term, some chips do get faster than others: microprocessors have outpaced memories, giving rise to complex caching schemes that work around the inevitable bottlenecks. And we've seen rapid advances in specialty optical transceivers, outpacing processors. But whole systems aren't much faster than their slowest parts.

Gilder claimed future networks would be “dumb” because bandwidth would be infinite. That’s more aspiration than realism.

What Wheeler Gets Wrong

Wheeler insists that his vision only depends on applications “being free of interference from those who run the networks that take us to and from the internet.” This is a plug for his claim that the Open Internet Order of 2015 didn’t regulate the Internet per se, just the access networks.

This is a defective view of the Internet and especially of the Internet of Things. The IoT is the sensors together with the monitors they communicate with. We can't divide the networks that form the Internet into access networks, backbone networks, and edge service networks without losing the Internet's essential unity. It's all of these things working together, and more.

And it’s not enough for networks to “refrain from interfering” when data delivery requires action on the network’s part. Networks need to actively deliver correct information to all the right places, not just indiscriminately re-transmit everything they see to the entire web. And they need to deliver this data on time.

Indiscriminate retransmission assumes bandwidth that doesn't exist (and probably never will without quantum magic).

Freeing Networks to Improve

While bandwidth will never be infinite and free as Gilder and Wheeler wish, there is a way forward. We can develop networks that provide sufficient bandwidth, latency, reliability, and economy to make a much broader range of applications practical.

This is what network engineering has done throughout the broadband era. Consumer bandwidth was frozen by regulation at 4 kHz per connection for decades under the Communications Act of 1934.

But the advent of broadband technology – and the loosening of constraints on network service models provided by the 1996 Telecommunications Act – allowed capacity to increase as fast as technology and investment could run.

This is why broadband network capacity has improved by an annual average of 25% for the past twenty years, from about 1 Mbps in 1996 to close to 100 Mbps today.
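
The arithmetic behind that claim is ordinary compounding. A quick check, assuming a 1 Mbps starting point in 1996 and 25% annual growth:

```python
# Quick check of the compounding claim: 1 Mbps in 1996 growing 25% per year.
rate, years = 0.25, 20
capacity = 1.0 * (1 + rate) ** years          # Mbps after 20 years
print(f"After {years} years: {capacity:.0f} Mbps")   # ~87 Mbps, i.e. close to 100 Mbps
```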

Substituting Intelligence for Infinite Bandwidth

Applications are activities tailored to particular needs, and as the needs vary so do the network requirements of applications. It's not enough for networks to "refrain from interfering"; they need to give each application the treatment it desires within a budget of finite capacity and reasonable cost.

Networks do this by recognizing application needs and allocating resources in an optimal manner. This practice – made controversial by open Internet regulations – is the tradition in resource management systems in every sphere.
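
As a rough illustration of what "recognizing application needs" can mean, here is a sketch of weighted max-min allocation over a finite link. The traffic classes, demands, and weights are assumptions invented for the example, not a description of any real network's policy.

```python
# Illustrative sketch of weighted allocation of a finite link among traffic
# classes with different needs (weighted max-min fairness). The classes,
# demands, and weights are assumptions chosen for the example.

def weighted_fair_share(capacity, demands, weights):
    """Split capacity in proportion to weights, never exceeding a class's
    demand; leftover capacity is redistributed to still-unsatisfied classes."""
    alloc = {c: 0.0 for c in demands}
    unsatisfied = set(demands)
    while unsatisfied:
        remaining = capacity - sum(alloc.values())
        if remaining <= 1e-9:
            break
        total_w = sum(weights[c] for c in unsatisfied)
        done = set()
        for c in unsatisfied:
            grant = min(remaining * weights[c] / total_w, demands[c] - alloc[c])
            alloc[c] += grant
            if alloc[c] >= demands[c] - 1e-9:
                done.add(c)
        if not done:
            break   # no class hit its cap, so the remaining capacity is now spread
        unsatisfied -= done
    return alloc

# Latency-sensitive classes get what they need; bulk traffic gets the rest.
demands = {"video_call": 5, "streaming": 25, "bulk_download": 100}   # Mbps
weights = {"video_call": 4, "streaming": 2, "bulk_download": 1}
print(weighted_fair_share(50, demands, weights))
# -> video_call ~5, streaming ~25, bulk_download ~20 (Mbps)
```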

As I explained in my post on Time-Sensitive Networks, resource management is not a zero-sum game. It complements bandwidth growth by providing performance improvements using a second tool.

The IEEE 802.11n standard for Wi-Fi pursued a two-pronged approach: it enabled 4X higher speeds than 802.11g by doubling channel width – more bandwidth – and by improving transmission efficiency through frame aggregation.
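
A simplified model shows how the two levers multiply. The 54 Mbps figure is 802.11g's top PHY rate; the doubling and the efficiency numbers below are back-of-the-envelope assumptions for illustration, not the standard's own figures.

```python
# Simplified, illustrative model of how the two 802.11n levers multiply.
# The efficiency figures and the "double the channel, double the rate"
# approximation are assumptions for the example, not spec values.

def effective_throughput(phy_rate_mbps: float, efficiency: float) -> float:
    """Goodput = raw PHY rate x fraction of air time carrying payload."""
    return phy_rate_mbps * efficiency

legacy           = effective_throughput(54.0, efficiency=0.45)       # 802.11g-era goodput
wider_channel    = effective_throughput(54.0 * 2, efficiency=0.45)   # double the channel
plus_aggregation = effective_throughput(54.0 * 2, efficiency=0.80)   # plus frame aggregation

print(f"legacy ~{legacy:.0f} Mbps, wider channel ~{wider_channel:.0f} Mbps, "
      f"with aggregation ~{plus_aggregation:.0f} Mbps")
# The combined gain lands in the neighborhood of 4X over the legacy figure.
```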

Back to the Basics of Networking

This method of improving performance by increasing raw capacity and efficiency in lockstep has served us well in Wi-Fi for 15 years. The same model holds in 3GPP technology for licensed networks.

And the same model will hold for 5G networks. Capacity will improve by making networks denser and by exploiting more radio spectrum. But the real win for applications comes from opening a dialog between networks and apps.

Instead of being required to guess what applications need, 5G networks will be told. And instead of applications having to guess what the networks can supply, they also will be told. This is all explained in our podcast with Peter Rysavy on 5G application support.
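
The shape of that dialog is easy to sketch. The interface below is hypothetical (the class and field names are invented for illustration, not the 3GPP API): the application states its requirements, and the network answers with what it can actually commit to.

```python
# Hypothetical sketch (not the actual 3GPP interface) of the app/network
# dialog: the app declares what it needs, the network replies with what it
# can commit to, and the app adapts if the request can't be met in full.

from dataclasses import dataclass

@dataclass
class ServiceRequest:
    max_latency_ms: float
    min_bandwidth_mbps: float
    reliability: float              # e.g. 0.999 = three nines

@dataclass
class ServiceOffer:
    granted: bool
    latency_ms: float
    bandwidth_mbps: float

class NetworkSlice:
    """Stand-in for a network-side admission function."""
    def __init__(self, latency_floor_ms: float, capacity_mbps: float):
        self.latency_floor_ms = latency_floor_ms
        self.capacity_mbps = capacity_mbps

    def negotiate(self, req: ServiceRequest) -> ServiceOffer:
        ok = (req.max_latency_ms >= self.latency_floor_ms and
              req.min_bandwidth_mbps <= self.capacity_mbps)
        return ServiceOffer(ok, self.latency_floor_ms,
                            min(req.min_bandwidth_mbps, self.capacity_mbps))

slice_ = NetworkSlice(latency_floor_ms=10.0, capacity_mbps=50.0)
offer = slice_.negotiate(ServiceRequest(max_latency_ms=20.0,
                                        min_bandwidth_mbps=25.0,
                                        reliability=0.999))
print(offer)   # the app learns what the network can actually supply
```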

Rather than trafficking in ancient speculations about the future of networking, would-be visionaries would be better served by developing an understanding of networking technology. That’s the real driver of innovation.

Grounding Visions of the Future in Reality

None of us actually knows what the Internet will look like in 20 years: we don’t know what apps we’ll be running, what threats we’ll be facing, or how regulatory concepts will evolve. Hence, it’s better to create systems and structures that promote experimentation and progress in many directions than to impose personal visions.

Wheeler regards “openness” as the sole virtue of networking. He’s more concerned with limiting risk than with promoting progress. This is the road to disaster.

Openness is fine except when it isn’t: an Internet that’s every bit as friendly to unlawful behavior as to life-saving technologies should not be our goal. We want more from the Internet than merely mitigating some of the problems it creates.

Internet generations of the past have all tended to concentrate on a single application. The Web 1.0 generation was actually built around the needs of the Web as an application. What Wheeler calls Web 2.0 is actually the abandonment of the web as the dominant application in favor of video streaming.

The Question for Regulators

We’ve never really had a network that did a fantastic job of adapting itself to a diverse portfolio of applications. We understand how to do this now because network technology has given us the tools.

So the question for regulators like Mr. Wheeler is whether they’re going to allow us to use the tools we’ve developed – bandwidth reservation, software-defined networks, and network slicing – to support an invigorated innovation economy.
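
Bandwidth reservation, for example, reduces to a simple admission-control rule: accept a new reservation only if the link can still honor everything it has already committed. The sketch below is illustrative, with invented names; it is not any particular standard's mechanism.

```python
# Illustrative admission control for bandwidth reservation (a sketch, not any
# particular standard's mechanism): grant a reservation only if the link can
# still honor everything it has already committed to.

class ReservableLink:
    def __init__(self, capacity_mbps: float):
        self.capacity_mbps = capacity_mbps
        self.reserved = {}   # flow id -> reserved Mbps

    def reserve(self, flow_id: str, mbps: float) -> bool:
        committed = sum(self.reserved.values())
        if committed + mbps > self.capacity_mbps:
            return False                  # refuse rather than over-promise
        self.reserved[flow_id] = mbps
        return True

    def release(self, flow_id: str) -> None:
        self.reserved.pop(flow_id, None)

link = ReservableLink(capacity_mbps=100.0)
print(link.reserve("telemedicine", 40.0))   # True: within capacity
print(link.reserve("backup-sync", 80.0))    # False: would over-commit the link
```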

Rather than insisting on archaic frameworks like Title II and over-valued single issue fixations like “openness” to provide a map of the future, I’d much rather see a new regime where we can question assumptions, explore new paths, and put all options on the table. But this idea is very threatening to the old guard.

The Internet’s dominant firms – Google, Facebook, Amazon, Netflix, and Microsoft – are doing well with the Internet we have. A new and improved Internet threatens their positions, hence they fight the future and insist on preserving the status quo. Tom Wheeler cleverly protects this interest by arguing for a “new” Internet that’s really no different from the one we have.