Networks on Demand: The Promise of Software-Defined Networking
This is an exciting time for the Internet. You already know some of the reasons for this: Internet use is shifting from desk-potato systems and applications to mobile ones; mobile apps are exploding; and new users around the world are beginning to use the Internet, people who've never used any form of electronic communication before. The Internet of Things is taking off, as everything from heating and air conditioning systems to toothbrushes acquires a network connection.
Underneath the explosion of users, applications, and devices, we're also witnessing a comparable explosion of network technology. US Internet service providers are installing fiber optic cable even faster than they did during the infamous "fiber bubble" of the late '90s and early '00s, which left us with a "fiber glut" and a slew of bankruptcies because the demand didn't exist (yet) for that radical increase in network capacity. Other nations are installing LTE mobile networks, following the US's lead and increasing the ability of app developers to tap new potential. Wi-Fi is transitioning from 100 Mbps to the gigabit (1,000 Mbps) range, unlocking the power of gigabit home broadband connections.
These trends are dramatic and easy to see, but another development is underway that could have an equal if not greater impact on the future of networks and applications: networking on demand. The technical terms for what I'm calling "Networks on Demand" are "software-defined networking" (SDN) and the even more obscure "network function virtualization."
This simply means that the core technical feature of the Internet, the TCP/IP protocol, is becoming less of a limitation than it has been. While some maintain this is long overdue – the Internet’s core protocol hasn’t changed substantially since the 1970s – progress has been slow in this area because the Internet is a victim of its own success.
While it's relatively easy to upgrade a laptop or make a network connection run faster, it's extremely difficult to make substantial changes to software features that have to interconnect billions of devices. When the Internet encountered a fatal flaw in TCP/IP in the mid '80s (congestion collapse), it was fixed in the most conservative way possible because engineers considered the Internet already too large to accept major revisions to its software base. There were some 10,000 users in the mid '80s, a paltry fraction of the more than two billion we have today.
So how is it that engineers can improve TCP/IP today when they couldn't in 1985? It really comes down to cleverness. Networks on Demand doesn't actually change TCP/IP so much as it bypasses the parts of it that impair the ability of applications and actual networks to communicate with each other. In technical terms, IP is a virtual network. This means it presents a network model to applications that simplifies the diversity of the physical ("real" as opposed to "virtual") networks that carry Internet Protocol (IP) information across the Internet. As Fred Baker and David Meyer put it in RFC 6272, "Internet Protocols for the Smart Grid":
The Internet layer provides a uniform network abstraction that hides the differences between various network technologies. This is the layer that allows diverse networks such as Ethernet, 802.15.4, etc. to be combined into a uniform IP network.
From the beginning, IP was meant to run on diverse networks spanning a wide range of capabilities – everything from wired to land-based wireless to satellite – with very different capacity and timing characteristics. While IP accomplished this task, it didn't do so efficiently: hiding the differences between networks also means hiding the key features of common technologies such as Ethernet, and hiding those details limits the Internet's ability to improve. This is one of the problems Networks on Demand must solve.
In order to complete the design of TCP/IP and begin building networks, its designers took a number of shortcuts that led to problems later on, such as congestion collapse and a relative lack of security and reliability. Hiding network details is one such shortcut; Networks on Demand, Software-Defined Networks, and the emerging WebRTC project each seek to expose those details when doing so is beneficial to applications.
Applications interact with TCP/IP through an application program interface (API), a software library that opens up a set of network controls to the application. The traditional APIs are Winsock on Windows and Sockets on derivatives of Unix such as Mac OS X and Linux. So if your network is richer than the TCP/IP "virtual network" and you want to offer applications controls that TCP/IP can't communicate, you can bypass the bottleneck by enriching the API.
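The narrowness of that traditional interface is easy to see in code. Here's a minimal sketch using Python's standard `socket` module, a descendant of Berkeley Sockets: the application can choose an address family, a transport type, and a few per-socket options, but it has no vocabulary at all for asking the network for bandwidth, delay, or loss guarantees.

```python
import socket

def open_stream() -> socket.socket:
    """Open a TCP socket the traditional Sockets way.

    The only knobs available are the address family, the transport
    type, and a handful of per-socket options; there is no way to
    ask the network itself for bandwidth or latency guarantees.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # The richest "control" on offer is a per-socket option,
    # e.g. disabling Nagle's algorithm to reduce latency:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock
```

Everything the network might want to know about the application's needs is squeezed through those few calls; anything richer has to come from a richer API.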
That's how Networks on Demand works: a new API called "OpenFlow" gives applications a way to tell network services how much bandwidth they want, where they want it, and how long they want it; they can also specify more detailed performance characteristics, such as packet loss and delay, and in principle they can negotiate prices. The OpenFlow API conveys application wishes to network service providers, who can then make resources available (or not, if they don't have any).
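To make the request-and-grant idea concrete, here's a toy model in Python. This is a hypothetical sketch, not the actual OpenFlow API: the class names, fields, and the simple capacity check are all invented for illustration, and a real provider would consult topology, policy, and pricing rather than a single number.

```python
from dataclasses import dataclass

@dataclass
class FlowRequest:
    """What an application might ask of the network."""
    bandwidth_mbps: float  # how much bandwidth
    duration_s: int        # how long it's wanted
    max_delay_ms: float    # performance characteristics
    max_loss_pct: float    # the application can tolerate

class Provider:
    """A network service provider with a pool of shareable capacity."""

    def __init__(self, capacity_mbps: float) -> None:
        self.capacity_mbps = capacity_mbps

    def request(self, req: FlowRequest) -> bool:
        """Grant the flow if enough capacity remains; refuse otherwise."""
        if req.bandwidth_mbps <= self.capacity_mbps:
            self.capacity_mbps -= req.bandwidth_mbps
            return True
        return False
```

In this sketch, a provider with a 1,000 Mbps pool would grant a 400 Mbps request and then refuse a 700 Mbps one, since only 600 Mbps remains in the pool.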
Networks on Demand provides some of the same benefits as cloud computing: instead of owning their own equipment, users of cloud services like Amazon's AWS lease the services they want when they want them. This gives users access to extra capacity for peak loads without the expense of paying for that capacity when they don't actually need it.
It's like an old-fashioned farm co-op that spreads the cost of harvesters across a group of users. For Networks on Demand to work, the service provider simply needs to pool shareable resources and dole them out as they're needed.
One of the more interesting wrinkles of Networks on Demand is that we don't actually know how many ways it may be used. Commercial users don't tend to experiment much with their daily workflows, so they're not going to push the envelope; academic users tend to be resource-limited, so they're unable to experiment even though they're willing.
One way to resolve this chicken-and-egg dilemma is to make SDNs available to selected universities — this is how the Internet got started in the first place. One company that’s pursuing this approach is AT&T, in the form of an SDN Network Design Challenge that will enable winning researchers to have access to a real virtual network. The AT&T approach is Java-centric, fitting because the current SDN wave is based on insights embedded in Sun’s (now Oracle’s) Java programming language. I expect others to follow this line.
The overall interplay between commercial and academic computing is interesting: The Internet was designed for research, so it has shortcomings in the commercial setting. But one of the key pathways toward overcoming these shortfalls is to return to the academy for inspiration; it will be interesting to see what comes of it.
Is software-defined networking the future of communications? Sound off in the forum!