Rewiring America Smartly
The new BITAG report, 2020 Pandemic Network Performance, is a good reality check for broadband policy. Policy makers are prone to overreaction, so it’s good to calibrate current proposals against observable reality.
As we’ve noted, proposals are afoot to raise the standard for subsidy-eligible broadband from 25 Mbps down/3 Mbps up to levels all but impossible to achieve without replacing all of today’s perfectly functional networks with fiber optic cabling of some sort and some very power-hungry electronics. The new requirements would be 100/100 or 1,000/1,000 Mbps.
The move away from networks that provide greater downstream than upstream is motivated by the belief that networks of the future will need to be symmetrical because a single symmetrical application – videoconferencing – emerged during the pandemic. This belief is not consistent with what the BITAG investigation found about the pandemic traffic mix.
Changes in the Traffic Mix
The BITAG report (of which I was a co-author) found that the pandemic had two major impacts on the overall traffic mix, one on volume and the other on distribution. The Lumen (formerly CenturyLink) experience was typical:
Daily broadband peak traffic for some video streaming services increased by 30%. Daily broadband subscriber combined upload and download volumes, in gigabytes, increased by about 40% when stay at home orders began, but have since leveled off. The ratio of broadband downstream to upstream monthly gigabytes decreased from 16:1 to under 14:1 as the use of teleconferencing increased.
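Those two figures pin down how fast each direction actually grew. Here's a quick back-of-the-envelope calculation, assuming (illustratively; the report doesn't state this) that the 40% volume growth and the ratio shift describe the same subscriber base:

```python
# Back-of-the-envelope: what Lumen's figures imply about per-direction
# growth. Assumes the ~40% combined-volume increase and the 16:1 -> 14:1
# ratio shift apply to the same subscribers (an illustrative assumption).

before_total = 100.0               # arbitrary pre-pandemic volume units
after_total = before_total * 1.4   # ~40% combined growth

def split(total, down_to_up):
    """Split a combined volume into (down, up) for a given down:up ratio."""
    up = total / (down_to_up + 1)
    return total - up, up

down0, up0 = split(before_total, 16)   # 16:1 before
down1, up1 = split(after_total, 14)    # 14:1 after

print(f"downstream growth: {down1 / down0 - 1:.0%}")   # ~39%
print(f"upstream growth:   {up1 / up0 - 1:.0%}")       # ~59%
```

Upstream grew faster in percentage terms, but the absolute growth was still overwhelmingly downstream, which is the pattern the rest of the report documents.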
At the September 2020 Internet Architecture Board (IAB) COVID-19 Impacts Workshop, researchers from the Interconnection Measurement Project put traffic growth in perspective:
Much of the traffic growth is attributable to video streaming and video conferencing. Video conferencing grew from 1% of overall traffic to 4%, while video streaming’s share declined from 67% to 63% even as its absolute volume continued to grow.
While both video streaming and conferencing grew, video streaming still accounts for most Internet traffic.
Video Streaming is more Traffic-Intensive than Conferencing
Video conferencing as a percentage of user minutes of engagement grew faster than any other application:
In response to the crisis, Zoom introduced a limited free hosting service, adding hundreds of thousands of paying users over the course of the year. In June 2020, Zoom said that daily meeting participants had grown from 10 million in December 2019 to 300 million.
BITAG found that traffic growth from conferencing didn’t directly follow this pace because it generates a lower traffic volume than streaming:
Video streaming typically requires 3 to 5 Mbps for high-definition video, and video conferencing typically requires anywhere from 500 Kbps to 2 Mbps for both upstream and downstream. Knowing this, we can infer that upstream growth is due to the increased usage of video conferencing and the downstream growth is primarily driven by video streaming.
Video streaming mainly consists of movies and sports shot with 4K cameras while Zoom resolution is limited to 720p, barely HD. Live action is also less compressible than conferencing; the more change that occurs in a video scene, the less opportunity there is to compress.
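To see how those per-stream rates play out in practice, here's a minimal sketch of a household's daily down:up ratio. The per-stream rates come from the quote above; the daily hours are my assumptions for illustration, not BITAG figures:

```python
# Household down:up traffic ratio under the per-stream rates quoted above
# (streaming at 3-5 Mbps downstream; conferencing at 0.5-2 Mbps each way).
# The daily hours are illustrative assumptions, not BITAG figures.

STREAM_MBPS = 4.0   # HD streaming, downstream only
CONF_MBPS = 1.5     # conferencing, each direction

def down_up_ratio(stream_hours, conf_hours):
    down = stream_hours * STREAM_MBPS + conf_hours * CONF_MBPS
    up = conf_hours * CONF_MBPS
    return down / up

print(f"3h streaming, 15min conferencing: {down_up_ratio(3.0, 0.25):.0f}:1")  # 33:1
print(f"4h streaming, 2h conferencing:    {down_up_ratio(4.0, 2.0):.0f}:1")   # 6:1
```

Even a household that conferences for two full hours a day stays heavily downstream-dominated.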
What Does This Mean for Future-Proofing?
Since the 1990s, advocates for rewiring America with fiber optic cables have insisted their approach allows us to meet all future needs by simply upgrading relatively inexpensive bits of electronics known as transmitter/receivers (transceivers) without touching the wires. The future-proofing claim assumes that data networks are like imaginary power grids that occasionally need transformer upgrades but never need higher capacity cables or smarter switches.
This view of the power grid is an oversimplification because no part of the grid is forever. As demand for electricity grows thanks to the replacement of fossil fuel-powered cars with electric ones, we will certainly need to increase the grid’s overall capacity.
This means more wires, smarter load balancers, and better management: “The Electric Power Research Institute (EPRI) [pegs] the cost to move the US to a smarter national grid with better protection against blackout events at somewhere between $338bn and $476bn,” dwarfing the president’s $100B broadband check.
Configuring Fiber Networks
As demand for broadband grows, it often becomes necessary to replace or supplement existing cables – both copper and fiber – with new ones. The standard architecture for residential fiber to the home, xPON, shares fiber optic cables among groups of users just as the cable TV plant does and just as wireless networks share radio frequency spectrum.
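For concreteness, here's what that sharing looks like with standard GPON line rates (ITU-T G.984: 2.488 Gbps down, 1.244 Gbps up). The 32-way split is my assumption; 64-way splits are also common:

```python
# Per-subscriber xPON capacity at full load, using standard GPON line
# rates (ITU-T G.984) and an assumed 32-way split.

GPON_DOWN_MBPS = 2488
GPON_UP_MBPS = 1244     # note: GPON itself is 2:1 asymmetric by design
SPLIT = 32

print(f"downstream per subscriber: {GPON_DOWN_MBPS / SPLIT:.0f} Mbps")  # ~78
print(f"upstream per subscriber:   {GPON_UP_MBPS / SPLIT:.0f} Mbps")    # ~39
```

Gigabit tiers sold over GPON work because subscribers rarely peak at once, exactly as they do on cable; note too that GPON itself is asymmetric by design.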
There is a limit to the utility of xPON in scenarios where multiple services (wireless backhaul, institutional connections, and residential users) all need to be supported. While it’s true that installing a brand new cable plant in an area with no previous network is expensive, adding or replacing wires in an existing network is comparable, in cost and effort, to adding or replacing electronics.
Fiber optics aren’t a good end-to-end solution for networking. Devices that can directly connect to optical fiber are very rare, largely because optical cables are hard to terminate and splice in the field. For ease of installation and maintenance as well as functionality, we prefer hybrid networks of fiber and either copper or wireless.
Networking at the Edge
The first thing that data center operators do with each generation of optical technology is make a limited-distance copper substitute such as IEEE 802.3 10GBASE-T Ethernet, a 10 Gbps system, or its 25 and 40 Gbps cousins. A residence with fiber to the home (FTTH) service is all copper and wireless inside.
Fiber is an important element of mobile networks, but cell towers don’t organize their backhaul cables the way that FTTH services do. Mobile base stations generally use Ethernet topology, where each device has its own cable to a switch port.
If data consumption continues to grow, as it probably will, at some point xPON networks will need to be reconfigured to Ethernet topology as well. So there’s really nothing “future-proof” about today’s FTTH networks except for their licenses to access poles, conduits, and rights of way.
Networks of the Future Will be Wireless
- Seventy percent of all Internet traffic begins or ends on a cellular or Wi-Fi network.
- The public switched telephone network (PSTN) will be shut down in the next five to ten years.
- First responders are dispatched over LTE networks.
- Most consumers with Internet problems during the pandemic were on older versions of Wi-Fi.
- Seniors who fall and can’t get up can call ambulances via cell phones or watches but rarely by wire.
- New residential broadband competitors use 5G or Low Earth Orbit satellite networks.
- Wi-Fi 6E provides multi-gigabit per second service.
- Precision agriculture tractors and sensors are wireless.
- Transportation is wireless; indeed, the first application for wireless data was railway telegraphy.
- People gain more from ubiquitous coverage than from ultra high speed broadband to a single point.
- NASA’s Ingenuity Mars helicopter is controlled from Earth through wireless links.
Which of these observations suggests that the networks of the future will be all wired? All I see in them is more wireless access, enabled by fiber and wireless backhaul.
Networks of the Future Will be Asymmetric
Despite the rise of video conferencing to insane levels during our quarantines, Internet traffic is still dominated by downstream over upstream by a 14:1 ratio. If consumers start massively connecting video cameras to cloud services, this could change to 10:1 or even 5:1 over the coming decades.
Nest video is the second largest source of upstream traffic in the Nokia Deepfield survey. But HD cameras only generate 3–5 Mbps of traffic, most of which can be processed and stored on a local server without perturbing the Internet.
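Some quick arithmetic shows why local processing matters so much here. The rates are the ones just cited; continuous cloud upload is the worst case, since most cameras only upload motion events:

```python
# Daily upstream volume of an always-on HD camera streamed to the cloud,
# at the 3-5 Mbps rates cited above. Continuous upload is a worst case;
# motion-triggered cameras upload far less.

SECONDS_PER_DAY = 24 * 3600

for mbps in (3.0, 5.0):
    gb_per_day = mbps * SECONDS_PER_DAY / 8 / 1000   # megabits -> gigabytes
    print(f"{mbps:.0f} Mbps continuous upload = {gb_per_day:.0f} GB/day")
# 3 Mbps -> ~32 GB/day; 5 Mbps -> ~54 GB/day
```

Tens of gigabytes per day per camera is exactly the sort of load that gets handled by on-premises storage rather than hauled across the Internet.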
Since computer designer Gordon Bell’s presentation Internet 1, 2, & 3: Past, Present, and Future at InternetWorld in 1995, it has become fashionable for visionaries to tout the benefits of symmetrical networks. Most of these come down to the prediction that broadband will erase the distinction between producers and consumers of digital content.
We’ve never seen that because it’s a lot harder to create content than to consume it. Most of us will never create as much content as our security cameras do for us.
[Incidentally, Bell’s idea of a future-proof network was 25 Mbps; I wonder how silly today’s network forecasts will look in 25 years.]
All Networks are Symmetrical
Let’s look at some of the paradoxes of networking. Advocacy for networks with the same upload and download capacity presumes at least four things, some doable and others not:
- Symmetrical network transceivers;
- Bi-directional media;
- Symmetrical usage decisions on the part of users;
- Symmetrical bandwidth allocation decisions on the part of operators.
When we look at networks alone, without considering applications, symmetrical network elements make perfect sense. It’s nice to have symmetrical Ethernet chips and bi-directional cable or wireless channels because one computer’s upstream is another’s downstream.
Video servers – whether Netflix or my animal cams – generate 20 times more upstream than downstream traffic. Consumers see all of that traffic as downstream, so it all balances out.
Even when we’re broadcasting, every message received was transmitted by someone. So broadband networks are necessarily symmetrical viewed holistically. This is what points one and two are about.
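A toy model makes the holistic point concrete: per-device traffic is wildly lopsided, yet network-wide transmit always equals network-wide receive, because every byte received was sent by someone. The device names and volumes below are made up:

```python
from collections import defaultdict

# Toy model: per-device asymmetry, network-wide symmetry. Flows are
# (sender, receiver, megabytes); all numbers are invented for illustration.
flows = [("netflix", "tv", 900), ("laptop", "zoom", 60), ("zoom", "laptop", 60)]

sent, received = defaultdict(int), defaultdict(int)
for src, dst, mb in flows:
    sent[src] += mb
    received[dst] += mb

for dev in sorted(set(sent) | set(received)):
    print(f"{dev:8s} tx={sent[dev]:4d} MB   rx={received[dev]:4d} MB")
print(f"totals:  tx={sum(sent.values())} MB   rx={sum(received.values())} MB")
```

The totals match by construction; the per-device rows do not. That asymmetry is the subject of the next section.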
No Networks are Symmetrical
Viewed from the access interface – the on ramps and off ramps – symmetry is as rare as hens’ teeth. That’s because each part of a network plays a role; the transmitter and receiver of every message are located on different devices.
Like it or not, Netflix does more transmitting than receiving and television sets do more receiving than transmitting. So the network segment attached to the Netflix server needs to support a lot of transmitting and the one attached to the TV set has to handle more receiving.
This being the case, it makes sense for network service providers to allocate communication resources the way they’re actually used. This is what points three and four are about.
Symmetry Saves Money
The consumer’s choice of applications determines just how asymmetric the traffic stream is on any given segment of the network. Having observed this fact since the dawn of network time, operators allocate resources asymmetrically; this is called “giving customers what they want.”
Aside from old-fashioned half-duplex telephone modems (such as the Bell 201-C) and basic Wi-Fi, networks allow attached devices to transmit and receive at the same time; this is called “full-duplex operation.” Full-duplex requires the network to separate transmit and receive information at each interface.
The total capacity of a gigabit Ethernet is actually two gigabits per second. Ethernet chips and cables allocate equal resources to the upstream and downstream sides because this approach increases the manufacturing volume of the parts, reducing end user cost. Because I pay to build my own Ethernet but not to use it, this approach works fine for me.
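A sketch of that arithmetic, treating the link as two independent one-way channels as described above (the 10 GB transfer size is arbitrary):

```python
# Full duplex modeled as two independent 1 Gbps channels: simultaneous
# transfers in opposite directions don't contend with each other.

LINK_GBPS = 1.0

def transfer_seconds(gigabytes, gbps=LINK_GBPS):
    return gigabytes * 8 / gbps   # gigabytes -> gigabits, then divide by rate

up = transfer_seconds(10)     # 80 s on the transmit channel
down = transfer_seconds(10)   # 80 s on the receive channel, concurrently
print(f"half-duplex total: {up + down:.0f} s; full-duplex total: {max(up, down):.0f} s")
```

Both directions exist in full; whether they’re used equally is up to the applications.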
Symmetry Wastes Money
When I need to move my data to and from Europe via undersea cables, I pay an ISP to handle it for me. The ISP allows me to share strands of fiber and frequencies of light with others, greatly reducing my cost over owning my own cables.
The Internet’s core technology – packet switching – is a clever way of sharing network communication bandwidth with others in a flexible and dynamic way. As soon as any message leaves any Internet device, it engages in packet switching.
Packet switching mediates access to both transmit and receive channels, but it doesn’t require them to be equal in capacity. So don’t let anyone tell you the Internet is meant to be symmetrical in any sense but the most holistic one, where each transmitter is joined to a receiver.
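A tiny simulation shows why shared, asymmetric capacity is such a bargain. The user count, peak rate, and duty cycle below are invented for illustration:

```python
import random

# Statistical multiplexing in miniature: bursty senders sharing one link
# need far less capacity than the sum of their peak rates. All parameters
# are illustrative.

random.seed(1)
USERS, PEAK_MBPS, DUTY_CYCLE, SLOTS = 100, 25.0, 0.05, 10_000

worst = 0.0
for _ in range(SLOTS):
    active = sum(random.random() < DUTY_CYCLE for _ in range(USERS))
    worst = max(worst, active * PEAK_MBPS)

print(f"sum of peak rates:   {USERS * PEAK_MBPS:.0f} Mbps")   # 2500 Mbps
print(f"observed worst case: {worst:.0f} Mbps")               # far lower
```

The shared link can be provisioned for the observed peak rather than the theoretical one, and nothing requires the two directions to be provisioned equally.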
Asymmetric Network Design
Most full-duplex networks separate their upstream and downstream data by frequency division. Cellular networks generally use frequency division duplexing (FDD), where upstream might be at 800 MHz and downstream at 1800 MHz (others use time division duplexing, a form of half-duplex).
Fiber optic cables use two strands, one each for transmit and receive, and then aggregate them at switching points onto light frequencies (lambdas or colors) within a common cable. Traditional DOCSIS cable modem systems assign distinct RF frequencies for upstream and downstream, and then police the upstream for fairness.
Their downstream is always several times more capacious than the upstream, of course. If you’ve read this far, you know why. Making RF networks fully symmetrical means moving several channels from the downstream side – where they’re in use – to the upstream side, where they will sit idle. This makes the network slower and more expensive in most use cases.
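Some rough channel arithmetic makes the cost concrete. The per-channel rates below are ballpark DOCSIS 3.0 figures, and the channel counts describe a hypothetical plant, not any specific operator's:

```python
# Ballpark DOCSIS 3.0-style arithmetic: ~38 Mbps per downstream QAM-256
# channel, ~27 Mbps per upstream ATDMA channel. Counts are hypothetical.

DOWN_CH, UP_CH = 24, 4
DOWN_RATE, UP_RATE = 38.0, 27.0

down, up = DOWN_CH * DOWN_RATE, UP_CH * UP_RATE
print(f"asymmetric plant:  {down:.0f}/{up:.0f} Mbps ({down / up:.1f}:1)")  # 912/108

# Chasing symmetry by re-assigning downstream spectrum (generously
# assuming the full channel rate survives the move):
moved = 10
down2 = (DOWN_CH - moved) * DOWN_RATE
up2 = up + moved * DOWN_RATE
print(f"'symmetric' plant: {down2:.0f}/{up2:.0f} Mbps")                    # 532/488
```

The rebalanced plant gives up roughly 380 Mbps of downstream that customers were actually using in order to create upstream headroom that mostly sits idle.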
Everything We’ve Learned About Networks is Wrong
When policy makers tell network operators to drop everything and build nothing but symmetrical networks that signal at 100 or 1,000 Mbps, they display ignorance. When they do so without offering a coherent reason, they’re also showing arrogance.
When they insist that taxpayers pay for these spruce gooses, they’re failing to allocate our money responsibly. In effect, they’re robbing us even while their intentions are no doubt noble.
If the prediction that the 14:1 traffic mix of today will shift to 1:1 comes to pass, it certainly will not do so overnight. It would take at least a decade (more like two) for new applications to be developed, popularized, and used. That means operators will have at least a decade to upgrade networks to fit this potential usage scenario.
Everything We’ve Learned About Networks is Right
There certainly are scenarios where having more upstream capacity would be useful. Uploading videos and doing off-site backups are useful, but these use cases are relatively rare, accounting for no more than a few minutes a day for most people.
If anything, they show a need for a modest revision of the current standard, from 25 down/3 up to 25 down and 5 or even 8 Mbps up. YouTube can’t digest uploads at 100 or 1,000 Mbps speeds in any case.
I have to wonder why Congress is Jonesing to withhold funding for 25/3 networks at a time when the most pressing issues in broadband policy are connecting rural people (those who lack access to any sort of broadband apart from old-school satellites and cell phones) and helping the urban poor pay for access to available services.
Getting the Ball Rolling
Assuming a finite bucket of money can be spent responsibly over the next four years, it’s abundantly clear that it needs to be prioritized. Massive government spending on over-specified fiber networks has a checkered history both in the US and in the rest of the world; Australia is a good example. No broadband plan survives the administration that created it.
We need some incentives to put the rural build on the right track. One way to do that is to emphasize rapid construction schedules over extravagant speeds by paying for rapid deployment in greenfield areas. This may well mean we’ll have more terrestrial wireless and low Earth orbit satellite deployments than symmetrical gigabit fiber networks.
I think the people with no service today would be fine with that, but we can always ask them. In any event, I think there’s way too much fiber bigotry in these plans. We’re a wireless-first nation where fiber has long since been relegated to a supporting role.
Barriers to Future-Friendly Policy
The symmetrical broadband boondoggle is simply one of many mistakes in a long series of Internet policy failures driven by public interest groups and advocates (PIGs?…let’s go with PIAs) that couldn’t be bothered to do their homework before telling Congress what to do. While these people mean well, their prescriptions too often lack substance.
Net neutrality was supposed to protect Internet users and entrepreneurs from rent-seeking ISPs. In reality, its principal effect was to divert the gaze of policy makers from abusive practices on the part of the FAANG monopolists toward theoretical abuses by ISPs that never came to pass.
PIAs have also run campaigns to create the mistaken impression that US broadband is overpriced by cherry-picking data. When it became clear that wireless was beating fiber, they even shifted to manufacturing baseless health concerns about 5G.
Empirically Unsound Foundations
If Congress wants to define the broadband network speeds and characteristics of subsidy-eligible networks and services, it needs to be at least as rigorous with the facts as the FCC is. Congress generally relies on advocates to tell it what the facts are, however.
While the BITAG report should guide Congress toward an understanding of what the COVID-19 stress test shows about the soundness of the US approach to broadband regulations, PIAs are offering alternative facts. Public Knowledge’s Harold Feld has managed to dig up a report by a European consultant that purports to show massive slowdowns in US broadband that escaped detection in all the reports BITAG analyzed.
Rather than looking at measured upload and download speeds, the consultant (Paul Foley) looks at ping (latency) times in 97 cities in Europe and 67 cities in North America on April 3rd. The report claims US latency figures were better than those in the EU27 but not as good as the ones in the EFTA (Iceland, Norway, Liechtenstein, and Switzerland) and Canada.
So What?
Aside from the fact that nobody cares how the US compares to the EFTA, ping times don’t measure network congestion by themselves. Ping responses come from network clients and servers, not simply from switches and routers. They tell us how heavily loaded the combination of networks and services is.
We can expect ping times to increase as lockdowns push more data onto the network; Sweden’s didn’t degrade, for instance, because the country largely declined to lock down. Feld cites the third of four reports by Foley, but the fourth correlates ping times to resource upgrades by cloud services, noting as well that the BEREC directive reducing the resolution of video streaming in Europe was in effect for the entire duration of the study.
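For what it's worth, anyone can watch server load leak into "latency" numbers. Here's a minimal sketch that times TCP connection setup using only the Python standard library; example.com is a placeholder, so substitute any reachable host:

```python
import socket, statistics, time

# TCP connect time includes the far end's stack and load, not just the
# network path -- one reason ping-style figures conflate the two.
# example.com is a placeholder host.

def connect_ms(host, port=443, samples=5):
    times = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass
        times.append((time.perf_counter() - t0) * 1000)
    return times

rtts = connect_ms("example.com")
print(f"median connect time: {statistics.median(rtts):.1f} ms")
```

A heavily loaded server inflates these numbers even when the path itself is uncongested, which is exactly the confound at issue.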
Sound broadband policy depends on sound data, but it also needs fair and responsible interpretation. The task of determining the eligibility of broadband networks for federal subsidies is best left to technical agencies, such as the FCC, capable of separating fact from fiction.