What happens if everybody uses Meerkat or Periscope at the same time?
I’ve been hearing a lot about personalized video streaming services, like Meerkat and Periscope, that will let you instantly live-stream your life. This sounds pretty cool, as I have long wanted to have company at breakfast, and these apps would allow me to essentially share my breakfast experience with everyone in the world. But I’m worried about the Internet. I’m worried that if everybody tried to stream their breakfast at the same time, something would break. Please explain, Mr. Engineer. I hope you can assuage my fears.
Worried in Washington
What happens if everybody tries to stream their breakfast at the same time? Sadly, for the time being, everyone would be eating breakfast alone.
First, let me congratulate you for having heard about Meerkat, because it seems to have had the shortest shelf life of any over-touted new app. Its founders raised $14M from venture funds on the very day Twitter announced Periscope, and if anyone ever had buyer’s remorse it was those investors. Periscope is a jazzy little app that allows people to share real-time video streams with their Twitter followers, and it’s exciting because it joined the top 30 list of iPhone apps a mere three days after it was unveiled.
But let’s answer your clearly sincere question with another question: What would happen if everyone emptied their bathtub at the same time? Or if everyone headed for the same freeway on-ramp at the same time? Or went to Costco, or the barber shop, or P. F. Chang’s? Right, it would be a disaster because all networks have limited capacity.
An article by Alexis Madrigal on Fusion says Periscope consumes 20 megabytes every three minutes, which works out to just shy of one megabit per second. The typical Wi-Fi access point serves two users, but it’s not uncommon to reach 20 users or so in a small office or business setting like a coffee shop. So Periscope will do fine on Wi-Fi as long as the wired backhaul offers 20 Mbps or so upstream, which is doable on a business-class connection but not common on residential-grade service. Comcast, for example, provides 10 Mbps upstream for residential accounts but 20 Mbps for business-class service. VDSL services are more symmetrical than cable, but it’s hard to know how fast many of them are, since broadband tests emphasize download speeds. Across all VDSL plans, upload tends to be 25-50% of download speed. That’s a lot more symmetrical than cable, which tends to download about ten times faster than it uploads. Some VDSL plans do reach 20 Mbps for uploads, but that’s right at the leading edge.
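If you want to check that arithmetic yourself, it's a one-liner. This sketch assumes the reported 20 megabytes per three minutes, treats a megabyte as 10^6 bytes, and then scales up to the 20-user coffee shop:

```python
# Back-of-envelope check of the Periscope bitrate figure.
# Assumes 20 MB per 3 minutes, with "MB" taken as 10^6 bytes.

def bitrate_mbps(megabytes: float, seconds: float) -> float:
    """Average bitrate in megabits per second."""
    return megabytes * 8 / seconds

stream = bitrate_mbps(20, 3 * 60)
print(f"Per-stream upload: {stream:.2f} Mbps")        # ~0.89 Mbps

# 20 simultaneous streamers in a coffee shop:
print(f"20 users need: {20 * stream:.1f} Mbps of upstream backhaul")
```

Twenty streams at roughly 0.9 Mbps each is about 18 Mbps, which is why a 20 Mbps upstream pipe is the break-even point.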
In the realm of fiber to the home or building, fully symmetrical plans (the same upload and download speeds) are available: even Frontier, a very small carrier, says it will soon offer symmetrical service all the way up to 150 Mbps. But mobile networks are going to be severely challenged by widespread use of Periscope. The typical macro-cell (that’s a big tower) is engineered to serve 1,000 potential users with a mix of roughly four times as much download as upload data. Cells are commonly divided into six sectors, each of which can manage up to 10 Mbps of upload data at the same time, depending on how close the handset is to the tower. But 1,000 users each demanding a megabit, against six sectors with 10 Mbps apiece, means universal Periscope would leave the mobile network overloaded by a factor of 15 to 20. That’s not going to be a happy fit.
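The overload factor falls straight out of the numbers in that paragraph; here is the same arithmetic as a sketch, using only figures stated above (1,000 users, 1 Mbps per stream, six sectors, 10 Mbps of uplink per sector):

```python
# Macro-cell overload arithmetic from the text above.
users = 1000
stream_mbps = 1.0              # one Periscope stream per user
sectors = 6
uplink_per_sector_mbps = 10

demand = users * stream_mbps                  # 1,000 Mbps wanted
capacity = sectors * uplink_per_sector_mbps   # 60 Mbps available
overload = demand / capacity
print(f"Demand {demand:.0f} Mbps vs capacity {capacity:.0f} Mbps")
print(f"Overload factor: ~{overload:.0f}x")   # ~17x, i.e. 15 to 20 times
```

One thousand megabits wanted against sixty available is a factor of about 17, which is where the 15-to-20-times figure comes from.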
So it’s a good thing the scenario you imagine is, ahem, highly unlikely. Within the next few years, however, network capacity will (under optimistic assumptions) increase enough to make your scenario work. All that’s needed is an upgrade from LTE to LTE Advanced, splitting sectors 12 ways instead of six, small cells in heavily used areas, and twice as much spectrum. But by then we’ll be using a 4K version of Periscope’s successor that wants 10 Mbps per user, so there’s that to consider.
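You can sketch how that upgrade path closes the roughly 17x gap. Be warned that the per-upgrade multipliers below are illustrative assumptions of mine, not figures from the text; only the starting 60 Mbps of uplink and the 1,000 Mbps target come from the paragraphs above:

```python
# Hypothetical sketch of how the listed upgrades might compound.
# The individual multipliers are assumptions for illustration only.
capacity = 60.0   # Mbps of uplink today (6 sectors x 10 Mbps)
upgrades = {
    "12-way sector split": 2.0,   # 6 -> 12 sectors
    "twice the spectrum":  2.0,   # stated in the text
    "LTE -> LTE Advanced": 2.0,   # assumed efficiency gain
    "small-cell offload":  2.5,   # assumed load shifted off the macro-cell
}
for name, factor in upgrades.items():
    capacity *= factor
    print(f"{name:>22}: {capacity:.0f} Mbps")
# Ends around 1,200 Mbps, enough for 1,000 one-megabit streams --
# until the 4K successor app asks for 10 Mbps each.
```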
But the key insight is that no network is designed to work as well under crazy overload as it does under normal conditions; we strive for networks to degrade gracefully as they become overloaded, and we design them to work well within sensible projections of typical load. We don’t design them for crazy conditions because doing so would raise costs, and prices, to a level nobody would want to pay. Every network is, in some sense, a statistical system.
I hope that answers your question.