The Trouble with End-to-End
The Internet is supposed to be an end-to-end network. This means it places control over the entirety of the network in the hands of its users, in contrast to the centralized telephone, cable TV, and telegraph networks.
End-to-end not only gives users more power to control their communication, it also invests them with more responsibility for protecting the integrity of the overall system. The traditional telephone network was (mostly) free of junk calls, and there was no such thing as spam on the telegraph network. Abusive communications could be blocked by the network intermediary, and they also cost money.
The Internet is free of intermediaries by design, but that doesn’t mean it lacks them in practice. The Internet assigns no special intermediary role, so firms that want to mediate communication between users have to build systems that play the role from a standard Internet endpoint. So the Internet is only free of mediation in the abstract. In reality, Twitter, Facebook, and the Google properties mediate.
Accepting the Responsibility
Being an intermediary is a great job if you can get it; there’s plenty of opportunity to make money as an indispensable part of the communications that flow between the billions of people who use the Internet. But there are also a lot of annoying legal, moral, and political obligations that come with the role.
In the pre-Internet era the opportunities for profit and the obligations to the public were held in tension by regulators. While network operators complained about specific regulations, they didn’t quibble over the basic setup.
The Internet has, at least in the US, always had a different deal. Regulation focuses on the players on the networking side of the Internet, while we’ve traditionally allowed the platform companies that use the Internet to do pretty much anything they want. Section 230 of the Communications Decency Act says platforms may censor content if they wish, but it doesn’t force them to do much of anything.
This was a deliberate choice on the part of Congress, evident in the 1996 Telecom Act and in the DMCA. It probably helped US firms to dominate the global web, which we Americans like. But it has also allowed them to dominate a large portion of American life, which we don’t always like.
Piracy, Porn, Poison, and Prostitution
This end-to-end network has transformed itself into a mediated network with a twist: whereas the old mediators were regulated, the new ones aren’t. The ISPs are more heavily regulated than the mediators, but even they aren’t as heavily regulated as their ancestors.
The new mediators have strongly resisted the responsibilities that businesses of their type have traditionally shouldered. They banded together to defeat the Stop Online Piracy Act and the PROTECT IP Act by claiming the bills would end free speech and kill the Internet. They lobbied to have ISPs regulated under the Title II rules written for intermediary networks, and they fought anti-porn laws such as the CDA, COPA, and CIPA in court, defeating most of them.
Earlier this month, the EPA settled a case with Amazon over 4,000 violations of FIFRA, the law governing pesticide use in the US. Since 2013, Amazon had been selling imported pesticides that are unlawful in the US, but for some reason the Obama-era regulators didn’t think it was important. The EPA fine, $1,215,700, wasn’t massive, but getting the unlawful poisons off the market was a win.
And now Congress is poised to pass a bill called FOSTA that cracks down on sex trafficking. This action is not without controversy, but it’s not clear whether the mediators are more worried about the bill’s actual provisions or about the fact that they’re no longer immune from regulation. Let’s hope it’s the former.
Internet Acceleration Moves to the Edge
The emergence of dominant mediators at the edge of the Internet apparently came as a surprise to many. We’ve heard endless paeans to the Internet’s magical end-to-end architecture claiming that consolidation is impossible as long as the carriers are kept in check. But the reality is that a fully end-to-end architecture is sub-optimal for performance, security, and innovation. Christopher Yoo made this point at the recent Silicon Flatirons conference, Regulating Computing and Code.
Yoo pointed out that a fully distributed, end-to-end network becomes slower as it grows larger (slide 7) because path lengths get longer. If the Internet were a flat, mesh network, each packet would be forwarded thousands – if not millions – of times as it traveled from source to destination. In fact, most paths are 10 – 20 hops long.
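The scaling intuition behind Yoo’s point can be checked with a back-of-the-envelope model. The sketch below is my own illustration, not anything from his slides: it treats a flat mesh as a simple two-dimensional grid, in which the average shortest path between two random nodes grows with the square root of the node count, and contrasts that with the 10 – 20 hops typical of real Internet paths.

```python
import math

def grid_mesh_avg_hops(n_nodes: int) -> float:
    """Rough average hop count between two random routers in a flat
    2D grid mesh of n_nodes nodes. For an s-by-s grid, the expected
    Manhattan distance between two uniformly chosen nodes is about
    2s/3, so path length grows with the square root of network size."""
    side = math.sqrt(n_nodes)
    return 2.0 * side / 3.0

# Hypothetical network sizes, for illustration only.
for n in (10_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} nodes -> ~{grid_mesh_avg_hops(n):,.0f} hops in a flat mesh")

# Real Internet paths, by contrast, run about 10-20 hops because
# traffic is aggregated through a hierarchy of exchange points.
```

Under this toy model a billion-node flat mesh averages tens of thousands of hops per packet, which is why real networks aggregate traffic instead.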
That’s why the Internet is organized around switching centers called Internet Exchanges. Each major service in the US – both content and ISP – connects to a dozen or so IXes and exchanges traffic through Ethernet switches.
Some of the IX-based services are Content Delivery Networks that provide their customers with fast lanes to ISPs. America’s five largest companies (Apple, Google, Facebook, Microsoft, and Amazon) have their own CDNs and therefore have their own fast lanes.
These CDNs are “edge services” in the terminology of the Wheeler FCC, but in reality they’re deeply embedded in the core of the Internet, the IXes. CDNs can actually accelerate the Internet to a greater degree than ISPs can because they contain the content, the processing, and much of the networking.
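One rough way to observe the effect of that embedding is to compare connection latency to a site served from a nearby CDN edge with latency to a distant, self-hosted origin. The Python sketch below is only an illustration; the hostnames are placeholders, not real measurement targets, and handshake time is just a crude proxy for network distance.

```python
import socket
import time

def tcp_connect_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time the TCP handshake to a host. A nearby CDN edge typically
    completes it in a few milliseconds; a distant origin server takes
    considerably longer."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.monotonic() - start) * 1000.0

# Placeholder hostnames; substitute a CDN-served site and a distant,
# self-hosted origin to compare for yourself.
for host in ("www.example.com", "example.org"):
    try:
        print(f"{host}: TCP handshake completed in {tcp_connect_ms(host):.1f} ms")
    except OSError as exc:
        print(f"{host}: connection failed ({exc})")
```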
Banning Paid Prioritization is a Phony Baloney Move
Yoo’s presentation prompted Judge Stephen Williams to mention his dissent in the DC Circuit case over the Wheeler regulations. He said: “the 2015 Open Internet Order’s ban on paid prioritization was a complete phony. The heavy hitters – the people who use CDNs – have all kinds of alternative methods for getting speed, and they will pay for it and get it. The people who are affected by the ban are very small players who are hurt by it.”
So why all the clamoring for a phony ban that only hurts small players? I don’t think it’s deliberate on the part of the public interest advocates who seek it. Rather, it shows that Public Knowledge, Free Press, EFF et al. simply don’t understand the Internet well enough to appreciate the impact of the regulations they seek.
This is a harsh assessment, but not as harsh as the alternative claim that they’re simply in the pockets of the Silicon Valley interests that fund them. The argument they present is that paid prioritization discourages ISPs from investing in capacity, which they mistakenly believe is a way of achieving the regular delivery of information that network management seeks.
The truth is that ISPs can’t deliver information faster than they receive it, and they don’t receive web data anywhere close to their capacity to relay it. But the end-to-end model appears to blind public interest advocates to what’s really going on inside the Internet.
End-to-End Really Doesn’t Matter
The Internet isn’t an end-to-end network in any meaningful sense. If it were, the streaming videos we watch would come directly from their creators instead of from YouTube or Vimeo. Creators upload to these intermediate services, and consumers download from them. So pretending the underlying paradigm is end-to-end is delusional because it’s end-to-end-to-end-to-end…ad whatever.
When the paradigm is wrong, regulations that seek to enforce it, like the paid prioritization ban at the heart of net neutrality, are counter-productive and unenforceable. We can’t force the Internet to become something other than what it is by imposing fantasy regulations on it.
The disconnect between the way the Internet really is and the way neutrality advocates wish it were came into stark relief today: while some Congressmen were outside the Capitol giving speeches on the importance of net neutrality, those inside the building voted to make significant, harmful changes to Section 230, the real protector of Internet speech. And they didn’t even notice.