Berners-Lee, Jacobson, and the FCC

Rumor has it that FCC Chairman Julius Genachowski will circulate an order on Wednesday to put net neutrality on the agenda for the December meeting. If this happens, and it may not, the net neutrality framework will reportedly rely on a new theory of the FCC’s Title I jurisdiction, possibly Kevin Werbach’s interpretation of Section 706 of the Communications Act. Section 706 says:

The Commission and each State commission with regulatory jurisdiction over telecommunications services shall encourage the deployment on a reasonable and timely basis of advanced telecommunications capability to all Americans (including, in particular, elementary and secondary schools and classrooms) by utilizing, in a manner consistent with the public interest, convenience, and necessity, price cap regulation, regulatory forbearance, measures that promote competition in the local telecommunications market, or other regulating methods that remove barriers to infrastructure investment.

It may seem like a bit of a stretch to use a section of the law that orders the FCC to remove barriers to network infrastructure investment to impose restrictions on network operator business practices, but stranger things have happened, and there is a nexus between network regulation and enabling innovation in the app space. Werbach’s paper points out that the Internet doesn’t precisely fit into the regulatory framework of the Communications Act in any case.

The Act was written in an era in which networks were simpler and more self-contained than they are today. The telephone network was wholly contained within telco-owned facilities that ended at the network termination box outside the house and was managed entirely by the telco itself. Other than putting a massive voltage spike on the telco’s wire, there’s nothing the PSTN user can do that will affect others on the network. So telco-era regulations drew a sharp line between the network and the information services that ran across it, putting the network under Title II of the Act and the services under Title I. This nice, neat system worked reasonably well for 70 years.

Now along comes the Internet, and the bright line isn’t so bright anymore. The Internet is an end-to-end system in which a vital function, congestion control, runs on the user’s system instead of being contained within the network itself. The Internet wasn’t designed to work this way: in the beginning, it was supposed to manage its own congestion state, but that part of the system was flawed, so a fellow named Van Jacobson devised a three-line code modification to TCP to correct it. Without Jacobson’s patch, people who accessed the Internet from a campus Ethernet caused their access routers and large parts of the network to lock up simply by transferring files during network prime time.
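Jacobson’s fix introduced what became TCP’s congestion-avoidance behavior: the sender probes for spare capacity gradually and backs off sharply when it infers a loss. The following is a minimal sketch of that additive-increase/multiplicative-decrease idea, not Jacobson’s actual patch; the window sizes and loss pattern are made up for illustration, and real TCP stacks also implement slow start, retransmission timers, and fast retransmit.

```python
# A minimal sketch of the additive-increase/multiplicative-decrease (AIMD)
# idea behind TCP congestion avoidance. Illustrative only; not Jacobson's
# actual patch, and the numbers below are invented.

def update_cwnd(cwnd: float, mss: float, loss_detected: bool) -> float:
    """Return the new congestion window (bytes) after one round trip.

    cwnd          -- current congestion window, in bytes
    mss           -- maximum segment size, in bytes
    loss_detected -- True if the sender inferred a lost packet this round trip
    """
    if loss_detected:
        # Multiplicative decrease: back off sharply so the shared
        # bottleneck can drain its queue.
        return max(mss, cwnd / 2)
    # Additive increase: probe for spare capacity by roughly one
    # segment per round trip.
    return cwnd + mss


# Toy usage: the sender grows its window until a loss, then halves it.
cwnd = 1460.0  # start at one segment (illustrative MSS of 1460 bytes)
for rtt, lost in enumerate([False, False, False, True, False, False]):
    cwnd = update_cwnd(cwnd, 1460.0, lost)
    print(f"RTT {rtt}: cwnd = {cwnd:.0f} bytes")
```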

So post-Jacobson, the Internet became a system that spanned both sides of the Title I/Title II divide, and in fact couldn’t function properly unless it did. And it wasn’t just congestion management that caused the Internet to spill over the regulatory boundary: by the time it became a public network, the Internet included functions like the Domain Name System (DNS), the directory service that converts names like “itif.org” into IP addresses, which is likewise part application and part infrastructure. In fact, many of the Internet’s basic network functions, such as routing, are implemented through the Internet itself.
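For readers who want to see the name-to-address conversion in action, here’s a minimal sketch using Python’s standard-library resolver. The addresses it prints will vary by time and vantage point; “itif.org” is simply the example name from the paragraph above.

```python
# A minimal sketch of the name-to-address lookup DNS performs, using
# Python's standard-library resolver. Results vary by time and network;
# "itif.org" is just the example name from the text above.

import socket

def resolve(hostname: str) -> list[str]:
    """Return the IP addresses the system resolver reports for a hostname."""
    results = socket.getaddrinfo(hostname, None)
    # Each result is (family, type, proto, canonname, sockaddr);
    # the address itself is the first element of sockaddr.
    return sorted({entry[4][0] for entry in results})

if __name__ == "__main__":
    print(resolve("itif.org"))
```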

Computer scientists call this property of a system “recursion,” and it was an unexpected feature of computer networks. In the early days of network engineering, network architects believed they could extend the telephone network’s notion of functional separation into the new world, but that notion didn’t fully pan out. In the 1970s, they invented layered models of network architecture that didn’t correctly predict how features and functions would be deployed in fully-developed networks.

The Internet is built on a platform of one basic function, the transfer of data in small units called packets that carry embedded addresses, but it uses that basic function to provide more advanced functions such as routing, name lookup, and congestion control, which the user sees as inherent in the network. So what’s basic and what’s added on depends on where you sit in the network. The Internet uses some of its parts to produce other parts, and these other parts make user applications possible; it’s a network of networks not just in the sense of “networks connected to networks,” but also in the sense of “networks within networks.” In many ways, the Internet isn’t really a network in the traditional sense; it’s a collection of communication platforms that enable other communication platforms to be built for various specialized purposes.
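To make the recursion concrete: the only service the basic platform provides is delivery of addressed packets, and everything else, including the DNS lookup sketched earlier, travels inside those packets. A toy illustration follows; the field names and addresses are chosen for readability, not taken from any protocol specification.

```python
# A toy illustration of the "one basic function" described above: a packet
# is a small unit of data with embedded source and destination addresses.
# Higher-level functions (routing updates, DNS queries, TCP segments) are
# themselves carried as packet payloads, which is the recursion the text
# describes. Field names and addresses are illustrative, not from any RFC.

from dataclasses import dataclass

@dataclass
class Packet:
    source: str       # e.g. "192.0.2.10"
    destination: str  # e.g. "198.51.100.7"
    payload: bytes    # whatever the higher layer wants carried

# A DNS query is "just" another packet; the network forwards it by
# destination address without knowing it is part of the Internet's own
# name-lookup machinery.
dns_query = Packet(
    source="192.0.2.10",
    destination="198.51.100.7",
    payload=b"query: A record for itif.org",
)
print(dns_query)
```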

Therefore, the Internet doesn’t fit under Title I and it doesn’t fit under Title II; in order to function, it has to straddle the boundary. If you’re a user, this is a great thing. You share a network with others, but you’re free to use whatever network capacity nobody else is using at the moment, which enables you to download files extremely fast during most of the day; while the network slows down during the busy hour, it doesn’t lock up as it used to. The advent of broadband Internet edge networks created demand for fatter pipes and more speed, which required more or less continual investment by network operators. This was an economic challenge.

Operators met the economic challenge by bundling additional services, such as telephony by cable operators and TV by telcos, into their broadband networks, blurring yet another regulatory boundary. As over-the-top service providers such as Skype, Netflix, and the iTunes store came along, so did the perception that network operators, who now relied on application-level services to finance their networks, had a conflict of interest between their own services and those from third parties. And that perceived conflict gave rise to the net neutrality controversy, which reached a fever pitch in 2006 and hasn’t been resolved to anyone’s satisfaction anywhere in the world, despite the dearth of concrete offenses.

The idea that network operators are out to get providers of over-the-top services and content just got a big boost from web creator Sir Tim Berners-Lee. In a feature article in Scientific American, Berners-Lee claims:

Cable television companies that sell Internet connectivity are considering whether to limit their Internet users to downloading only the company’s mix of entertainment.

He doesn’t provide any evidence or support for this (shall we say) highly speculative observation; he simply passes it on as if it were common knowledge. A similar sentiment was just offered by an anonymous FCC spokesman to The Hill’s Sara Jerome:

There are some cable and phone companies out there that want to decide which apps you should get on your phone, which Internet sites you should look at, and what online videos you can download.

Once again, the claim is made without evidence. We can only conclude that the lack of uncorrected problems of this sort since net neutrality became a cause has forced its supporters to turn away from the search for actual evidence to the realm of the imaginary; we’re now dealing with thought crimes.

There certainly have been some recent issues regarding the inability of users to access desired video: because of a dispute over retransmission fees, Cablevision customers weren’t able to see the typically hapless San Francisco Giants break their lifetime World Series curse, and in an unrelated matter, people with Google TV boxes can’t access Hulu. In these cases, however, it’s not the network operators restricting access to content but the content owners and licensees themselves, so these incidents don’t support Berners-Lee’s and the FCC’s speculative exercises. Perhaps we’ll need to consult a psychic Tarot card reader to get to the bottom of what the cable and telco leaders are thinking; even the TSA’s body scanners can’t read minds.

How do the network providers defend themselves against these witch-trial notions that they harbor evil intentions? If they deny the desire to shoot themselves in the foot by blocking user access to content and services, their accusers will simply claim, “If you’re not planning to do these things, what’s the harm in our imposing a regulation forbidding you from doing them, just to be safe?” Aside from the “guilty until proven innocent” angle of this approach, there’s actually plenty wrong with enacting regulations without any evidence: they’re inevitably going to be too broad, and will therefore prevent useful new services from reaching the market, not to mention spurring endless rounds of complaint, investigation, and litigation. The alternative is to stand pat until there’s actual, objective, measurable evidence of harm that we can all see without consulting a psychic friend. If the cable and telephone companies did have such evil intentions, it shouldn’t be hard to see them play out in the real world. Once a line is crossed, it will be relatively easy to correct course.

If we assume that the FCC is following Berners-Lee’s lead, a not-unreasonable guess at this point (ask me again if there’s a net neutrality order on Wednesday), we should look into the nature of Sir Tim’s complaint. His essay criticizes a number of trends taking place on the Internet today, not just speculations about the phone and cable companies; he’s upset about Google, the iTunes store, Facebook, and the rise of app stores for handheld devices. All in all, the essay is downright grumpy about the new ways of using the Internet that have arisen in the past couple of years; it reads like an old man haranguing the neighborhood children to get off his lawn along with their new toys.

Berners-Lee senses that things are happening on the Internet which undermine the primacy of the Web, and this perception is fundamentally correct. The same point has been made by others recently in a series of “Death of the Web” articles in Wired magazine and elsewhere. The Web isn’t just a networking platform; it’s also a user interface that doesn’t function properly without a full-sized display and a mouse, so it was inevitable that it would be replaced, or at least heavily supplemented, by a user interface better geared to tablets like the iPad and handheld devices. That’s largely what handheld apps are about.

But the Web has a larger problem than its clunky user interface: its basic model of the relationship between users and content is deeply flawed. Van Jacobson observes that the Web is a victim of the Internet’s design:

TCP/IP comes from a world of a few, fixed PCs used by lots of users processing a relatively small quantity of data. As such, TCP/IP connects one endpoint to another using a stable, known IP address.

This is a “conversational” model borrowed from the phone system, where the endpoints are trusted and known. According to Jacobson, the problem is that people on the net aren’t having “conversations” — despite what the Web 2.0 crowd say. Ninety-nine per cent of traffic is for named chunks of data — or content. People are downloading web pages or emails.

TCP/IP was not built to know what content people want, just to set up the conversation between the endpoints and to secure those connections. That’s a problem because people can — and do — flock to the same servers to watch exactly the same video or get the same piece of information, and proceed to overload sections of the network and take sites down.

Jacobson proposes a new platform for content, to be built on top of TCP/IP, but in a way that’s very different from the Web’s conversational model. It’s worth noting that Bob Kahn, the Internet’s principal project manager in the very early days, has been working toward a similar end for some years now. Jacobson and Kahn want to change the way we identify and locate information. They observe that the Web lacks a true Uniform Resource Identifier (URI) that would enable a user to say: “get me a copy of Game 3 of the 2010 World Series from any convenient location.” Instead, it has a Uniform Resource Locator (URL) that requires you to tell the Web where the content you want can be found. This is why search and content stores are so big: they enable users to deal with content by name rather than by address.
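The difference between the two models is easier to see side by side. The sketch below is hypothetical; it is not Jacobson’s content-centric networking protocol or Kahn’s Handle System, and the cache and content names are invented. The point is simply that a locator forces the client to name a host, while an identifier lets any node holding a copy answer.

```python
# A hypothetical sketch contrasting the two models discussed in the text.
# It is not Jacobson's CCN protocol or Kahn's Handle System; the cache
# contents, names, and URLs below are made up.

# Today's Web: a locator tells you *where* to go for the bytes.
def fetch_by_location(url: str) -> bytes:
    # The client must contact the specific host named in the URL, even
    # if an identical copy sits on a server next door.
    host = url.split("/")[2]
    print(f"connecting to {host} because the URL says so")
    return b"...bytes from that one host..."

# A content-centric model: an identifier names *what* you want, and any
# node holding a copy may answer.
NEARBY_CACHES = {
    "video/2010-world-series/game-3": b"...bytes from wherever is convenient...",
}

def fetch_by_name(content_name: str) -> bytes:
    if content_name in NEARBY_CACHES:
        print(f"serving '{content_name}' from a nearby copy")
        return NEARBY_CACHES[content_name]
    print(f"forwarding interest in '{content_name}' toward a publisher")
    return b"...bytes fetched once, cacheable everywhere..."

fetch_by_location("http://example.com/game3.mp4")
fetch_by_name("video/2010-world-series/game-3")
```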

Berners-Lee doesn’t appear to get this; his Sci-Am essay asserts that the Web URL is really a URI. This may seem like an inside-baseball discussion among computer scientists, but it has major implications for the information-dissemination function Jacobson and Kahn have in mind. We won’t get beyond the locator to the identifier until we at least acknowledge that the Web lacks an actual content identifier. An identifier is something like a Social Security number, while a locator is more like a street address. They’re similar at a superficial level, but significantly different in effect.
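One way to see the distinction in practice: an identifier can be derived from the content itself, so every faithful copy carries the same name wherever it lives, while a locator changes whenever a copy moves. A small illustration follows, using a hash as a stand-in for a content identifier; the URLs are hypothetical.

```python
# A small illustration of identifier vs. locator. The hash is a stand-in
# for a true content identifier: it is derived from the bytes themselves,
# so every faithful copy gets the same name no matter where it is stored.
# The URLs are hypothetical.

import hashlib

content = b"Game 3 of the 2010 World Series (stand-in bytes)"

# Identifier: the same for every copy, anywhere.
identifier = hashlib.sha256(content).hexdigest()

# Locators: one per place a copy happens to live; they change when the
# copy moves, even though the content does not.
locators = [
    "http://cdn-east.example.net/ws2010/game3.mp4",
    "http://mirror.example.org/sports/game3.mp4",
]

print("identifier:", identifier)
for url in locators:
    print("locator:   ", url)
```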

The best way to understand Berners-Lee’s Sci-Am essay is to see it as a defensive response to the work of Jacobson, Kahn, and the others who seek to improve the Web; the FCC has apparently mistaken it for justification for an ongoing tussle with network operators and the incoming Republican majority in the House. There’s a serious misunderstanding afoot here, clearly. Computer scientists are arguing with each other, as they are wont to do, and regulators (the UK’s network regulator appears to be modifying his views on net neutrality in response to Berners-Lee’s concerns as well) are taking sides. Suffice it to say, a regulatory incursion into a computer science dispute isn’t going to end well.

This would be a good time to disengage, take a deep breath, and spend the Thanksgiving holidays doing anything but trying to figure out new justifications for regulating a system that appears to be developing just the way that it should. Can we all resolve to do that and see how the Internet looks a week from now? This is not the time to be solidifying regulations with the aim of preventing change in the way the Internet is used and operated. Technologies that succeed in the long run are those that are open to improvement, and there’s too much fear of improvement in the current movement to lock the Internet into legacy status as an unchangeable system. Let’s at least identify a concrete problem that the regulators need to solve.

(cross-posted from the Innovation Policy Blog.)

Edited Nov. 25 for clarity.