There are no original stone tablets on which these specifications were handed down from the mountain: it is an entirely man-made networking world. Why is TCP/IP dominant today? Try a free government-funded ISP, a gratis networking stack that didn’t need to earn its keep, a totalitarian approach to networking where everything has to be on IP, ongoing wars between IBM and telcos that hobbled better-engineered rivals, and a whole bunch of political manoeuvring (read the article and pay special attention to the words ‘or had other motives’). Don’t take the victors’ PR at face value!
TCP/IP is not derived from deep foundational principles in the same way that computation is anchored in the work of Gödel, Turing, Church and von Neumann. Indeed, Internet Protocol is “Bandwidth Division Multiplexing”! It’s just the new TDM, but with the time/space coin flipped over and a mirror set of issues. Rather than great flow isolation but weak multiplexing gain, we instead get weak flow isolation and great multiplexing gain. What we want is both to be great! (This is indeed possible – see my previous newsletter “Network of Probabilities”.)
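The time/space trade-off described above can be made concrete with a toy model. The numbers, names and slot sizes below are purely illustrative assumptions of mine, not drawn from any standard:

```python
# Toy model of the multiplexing trade-off: two flows share a link
# of capacity 2 units per tick, with bursty per-tick demand.
demands = [(2, 0), (0, 2), (2, 2), (1, 0)]  # (flow A, flow B) demand per tick

# TDM: each flow owns a fixed 1-unit slot -> perfect flow isolation,
# but idle slots are wasted (weak multiplexing gain).
tdm_carried = sum(min(a, 1) + min(b, 1) for a, b in demands)

# Statistical (packet) multiplexing: flows share the whole link ->
# full utilisation (great multiplexing gain), but one flow's burst
# can crowd out the other (weak flow isolation).
stat_carried = sum(min(a + b, 2) for a, b in demands)
```

Running the toy model, statistical multiplexing carries more of the offered load than TDM over the same link, which is exactly why it won; the cost, invisible in this throughput count, is that flow A's bursts now delay flow B.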
We can now see how the flat Internet model is fundamentally constrained and flawed. The existence of a working alternative, one that allows us to peer over the networking horizon, proves that there are other ways of seeing the world. However, rather than being round, the networking world is recursive.
The Internet is not an inter-net
Our networking world is a product of trial and error. Unfortunately, there are a lot more errors than we would like. In the process of its birth, the Internet lost a layer, and ceased to be an inter-net. There are no inter-network gateways that hide the implementation of one network from the next. The most basic level of separation and abstraction is missing: the Internet is not an inter-net, but a concatenated network of networks.
As Day notes, that makes the Internet more like DOS than Windows. Sure it’s a ‘success’ – and so was DOS. That doesn’t mean it’s the last word or the end of the technology journey. You can see a summary of these arguments in Day’s presentation “How in the Heck Do You Lose a Layer!?” [PDF], or in the paper “Is the Internet an unfinished demo?” [PDF].
What lies behind the Internet is an unconscious belief that networks deliver packets between computers. This is obvious, right? The problem is, it sees networking as a mechanistic activity, and fails to capture its true nature as a form of distributed computing that is all about moving information between computing processes, not network interfaces.
You can see this play out in the way IP only partially delivers data, as it addresses network interfaces, not the true destination application process. This absence of a separating layer between networks is the outcome of a basic mis-categorisation of what networks are. It ends up resulting in complex hacks at every stage to fill in the gaps and compensate for these errors.
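The difference between addressing an interface and addressing a process can be sketched in a few lines. Everything here (the dictionary shapes, the function names) is a hypothetical illustration of mine, not any real stack's API:

```python
# TCP/IP model: routing terminates at a network interface address,
# and a separate transport-layer port number is needed to complete
# delivery to the actual application process.
def deliver_ip(packet, interfaces):
    iface = interfaces[packet["dst_ip"]]          # routing stops here
    return iface["ports"].get(packet["dst_port"])  # extra demux step

# Process-addressing model: the network names the destination
# application directly; which interface it is currently attached to
# is an internal detail resolved by the layer, not by the sender.
def deliver_process(message, directory):
    return directory[message["dst_app"]]  # name -> process, wherever it runs
```

In the second model, a process that moves or is multi-homed simply updates the directory mapping; nothing the sender holds becomes stale, which is the point the paragraph above is making.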
Increasingly, the work of the IETF and similar bodies is to create new hacks to deal with the side-effects of problems from the old ones! This presumes that the hacks work: packet fragmentation in IP has never worked, and new hacks (CoDel is the latest and greatest) introduce unknown and unforeseen new hazards and failure modes.
Key features of RINA
RINA takes the polar opposite of the ‘rough consensus (and groupthink) and working code (with unknown failure modes)’ approach. RINA is a return to the fundamentals of networking architecture, based on strong invariant design principles, and a rigorous and scientific approach to cause and effect.
The core insight behind RINA is the observation of a simple recurring pattern in all of distributed computing. “Communicate this for me from here to over there” is a ‘what’, which is then followed by a bunch of common functions that are the ‘how’. Those ‘how’ functions relate to dividing the data stream up into datagrams and reassembling them. That can be done in any way the lower layer sees fit, subject to the contract it has with the upper layer.
Network nodes at a shared layer can collaborate on the ‘how’ as part of a ‘team’ structure (the Distributed IPC Facility, or DIF) which provides services to the ‘what’ (the Distributed Application Facility, or DAF). It’s all unfamiliar and confusing, until you see its simplicity and beauty. There is just a single layer that recurses over and over, at different scopes, as we share distributed state by copying information. The very thing that is missing from the Internet – inter-network gateways – is actually the defining characteristic of how scalable distributed computing should work!
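The "single layer that recurses at different scopes" idea can be sketched as follows. This is a minimal illustration under my own assumptions (the class name, the fragmentation policy, the loopback delivery); real RINA layers carry far more machinery:

```python
# Sketch of a single recursive layer: every layer offers the same
# service ('carry this data for me') and implements it over a lower
# layer of smaller scope, down to the physical medium.
class Layer:
    def __init__(self, name, lower=None, mtu=4):
        self.name = name
        self.lower = lower  # next layer down, or None for the wire
        self.mtu = mtu      # this layer's private fragment size (a 'how')
        self.log = []       # record of fragmentation decisions

    def send(self, data):
        # The 'what' is identical at every rank: deliver `data`.
        # The 'how' (here, fragmentation) is private to this layer.
        fragments = [data[i:i + self.mtu] for i in range(0, len(data), self.mtu)]
        self.log.append((self.name, len(fragments)))
        if self.lower:
            for frag in fragments:
                self.lower.send(frag)  # recurse: same interface, smaller scope
        return data  # loopback delivery, for illustration only

# Three scopes, one mechanism: each layer fragments however it likes.
wire = Layer("backbone", mtu=2)
campus = Layer("campus", lower=wire, mtu=3)
app = Layer("app", lower=campus, mtu=8)
```

Note that `app` never learns how `campus` or `wire` chop up its data; swapping in a different lower-layer policy requires no change above, which is the abstraction the Internet's missing layer was supposed to provide.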
Benefits of RINA over TCP/IP
A lot of the benefits of RINA can be explained from this one slide presented by John Day.
John Day points at the ‘why’
Non-security mechanisms: Internet – 89; RINA – 15
Security mechanisms: Internet – 28; RINA – 7
The simplicity speaks for itself.
The benefits of the RINA approach are numerous:
- Scalability. The recursive structure scales indefinitely. No more routing table size explosion.
- Security. Each layer is a securable container, and most of your firewalls, session border controllers and intrusion systems disappear. No more port scanning, much less scope for mischief.
- Performance. The overheads of routing are far lower, and the algorithms can be implemented simply in silicon.
- Manageability. You can swap out protocols and mechanisms at lower layers without upper layers knowing or caring. Reconfigure your data centre whilst it is running!
- Flexibility. You can implement any and all QoS mechanisms within the architecture, not just ‘best effort’, and (if the mechanism supports it) create a composable trading system for allocating resources according to any policy you see fit.
- Reliability. Multi-homing goes from being complex to trivial. Reliability is ordinary, not outrageously hard.
- Mobility. No more complexity to address mobility as a special case: it falls straight out of the architecture. You can shred a lot of your 3GPP standards, too.
- Cost. No more hacks-upon-hacks. This is the minimal ‘necessary and sufficient’ amount of functionality needed.
Perhaps the hardest thing for RINA will be to fully escape the failed “network alchemy” approach of TCP/IP and fully adopt “network science”. That means using algebra to model the entirety of the success and failure modes of the system. This deeply contrasts with the current approach of “think of an algorithm, try it, tinker around a bit more, think of a theory to justify it, run it in a simulator, and extrapolate those specific results to be a general truth in the world”.
If you cannot spot the logical leap, then consider these sentences: “The sky in Arizona is blue, so the sky is always blue everywhere.” – “This TCP/IP algorithm works now and here, so this TCP/IP algorithm works always and everywhere.” Much networking research is riddled with such basic methodological flaws.
Some controversial conclusions
There are some immediate – and undoubtedly controversial – consequences of this work on RINA.
The first is that IPv6 is a waste of time and money. It is the wrong answer to the wrong question. It fails to tackle the fundamental problems of Internet Protocol: addressing the wrong thing (interfaces, not applications); tightly coupling the whole system; confusing naming and addressing; perpetuating hacks like DNS and Mobile IP to paper over the gaps; and a host of other sins condemning us to networking purgatory. Indeed, IPv6 will create a whole new slew of performance, security and implementation problems we have yet to fully experience. The absence of user benefit explains the slow take-up.
The long-term future of the Internet, without a “scientific networking revolution”, is a gradual increase in complexity and cost, and a gradual decrease in performance and security. There’s no sudden cliff and collapse. Whilst everyone admires the elaborate baroque architecture, the foundations are missing, and the cost of pinning the edifice upright keeps rising. The current Internet is a digital Venice: fabulous, fancy and flawed as a long-term solution to the needs of a modern civilisation.
Welcome to Networking Science
This was perhaps the most profound professional event I have ever attended. I was fortunate enough to be asked to present an abbreviated version of my “Lean Networking” presentation. In the audience was John Day, my colleague Neil Davies, as well as Louis Pouzin – the inventor of the datagram (aka packet). It very much felt like presenting to Feynman, Bohr and Oppenheimer – an experience I shall not forget in a hurry!
Indeed, those conversations reminded me of this Wikipedia quote: “Feynman was sought out by physicist Niels Bohr for one-on-one discussions. He later discovered the reason: most of the other physicists were too in awe of Bohr to argue with him. Feynman had no such inhibitions, vigorously pointing out anything he considered to be flawed in Bohr’s thinking. Feynman said he felt as much respect for Bohr as anyone else, but once anyone got him talking about physics, he would become so focused he forgot about social niceties.”
Feynman and Bohr — aka Day and Davies — in matching attire
The design principles behind RINA [PDF], plus the 3 basic laws of ΔQ, are effectively the foundational concepts of an emerging Networking Science. RINA is the necessary and sufficient instantiation of those design principles into an architecture. Likewise, Contention Management (CM) is the necessary and sufficient means of implementing control over allocation of ΔQ. RINA requires letting go of a “flat (Earth)” model of networks; CM requires letting go of a “work” model of networks.
Both failed paradigms are unconscious anthropomorphic models of packet systems – “beads on a string” – that treat packets as if they were physical objects, and misapply the mathematics of the physical world to a virtual one. Letting go of both sets of false beliefs – at the same time – is something very few people have contemplated thus far.
Some distributed computing nuclear arms technology is about to enter a conventional networking war. My personal plan is to open a high-end arms dealership for exotic means of slaying the competition. You can be sure that only the finest quality of intellectual weaponry will be on offer.
[Editor’s note: This article was originally published as a Future of Communications newsletter. You can subscribe at www.martingeddes.com.]