The Network of Probabilities

At eComm I interviewed Neil Davies, founder of Predictable Network Solutions, on stage. (Disclosure: they are a consulting client, and we are working together to commercialise their technology.) The transcript of the interview is up on the eComm blog, titled The Internet is Not a Pipe and Bandwidth is Bad.

Neil’s achievement is a breakthrough in the use of applied mathematics to describe the behaviour of statistically multiplexed networks. The consequences are potentially widespread across the telecoms industry. The problem is that the mental models we use — of pipes, flow, bandwidth — do not match the reality of statistical multiplexing. This mismatch drives us into endless small fixes that deeply sub-optimise our overall use of the available capacity.

Historically we have built the “Network of Promises” (with a hat tip to Bob Frankston for the naming inspiration). Technologies like circuit-switching, ATM and IMS perform capacity reservation, admission control, and session management. Together they provide complete predictability and control — at the price of an “all or nothing” approach. Once the network is fully reserved, that’s it; and if someone reserves capacity and doesn’t use it, tough luck. The result is a costly and inflexible network.
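
To make the “all or nothing” trade-off concrete, here is a minimal sketch of a reservation-based link (my own illustration, with made-up names and numbers, not any particular protocol): a session is either admitted with its full demand reserved, or refused outright, and reserved-but-idle capacity helps nobody else.

```python
# Toy sketch of the "Network of Promises": admission control on a single link.
# All names and numbers are illustrative only.

class ReservedLink:
    """A link whose capacity is handed out in full-demand reservations."""

    def __init__(self, capacity_mbps: float):
        self.capacity = capacity_mbps
        self.reserved = 0.0

    def admit(self, demand_mbps: float) -> bool:
        """Admit a session only if its entire demand can be reserved."""
        if self.reserved + demand_mbps > self.capacity:
            return False                  # all or nothing: no partial service
        self.reserved += demand_mbps
        return True


link = ReservedLink(capacity_mbps=100)
print(link.admit(60))   # True  -- 60 Mbit/s is now reserved
print(link.admit(60))   # False -- refused, even if the first session sits idle
```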

In contrast, the Internet is a generative “Network of Possibilities”. The application and user discover what is possible. Capacity and quality vary constantly, and we adapt to the discovered “network weather”. Skype may work, or it may not; video may be high definition, low definition, or unusable depending on what else is going on. We can tip the scales in favour of some applications using QoS, but that comes at a cost. When we prioritise some packets, we shrink the overall value-carrying capacity of the transmission system; the more time-sensitive the traffic, the greater the shrinkage when we prioritise it. The downside of this approach is that the only real answer to poor network quality is more capacity. That may work for core networks, but it becomes unaffordable for access networks.
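
To see why prioritisation shrinks what is left for everyone else, here is a toy discrete-time simulation of strict priority queueing (my own sketch, not Neil’s mathematics): the link serves one packet per tick, priority packets always go first, and the queueing delay suffered by the remaining best-effort traffic grows rapidly as the priority share of the load rises.

```python
# Toy strict-priority queue: one link, two traffic classes, one packet served
# per tick. Illustrative only; loads and seed are arbitrary.

import random
from collections import deque

def simulate(priority_load: float, best_effort_load: float, ticks: int = 100_000):
    """Return the mean queueing delay (in ticks) seen by best-effort packets."""
    random.seed(1)                      # fixed seed so runs are comparable
    pq, bq = deque(), deque()           # queues hold packet arrival times
    be_delays = []
    for t in range(ticks):
        if random.random() < priority_load:
            pq.append(t)                # a priority packet arrives
        if random.random() < best_effort_load:
            bq.append(t)                # a best-effort packet arrives
        if pq:
            pq.popleft()                # priority traffic always goes first
        elif bq:
            be_delays.append(t - bq.popleft())
    return sum(be_delays) / max(len(be_delays), 1)

for p in (0.1, 0.4, 0.7):
    print(f"priority load {p:.1f}: mean best-effort delay "
          f"{simulate(p, best_effort_load=0.25):.1f} ticks")
```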

What Neil has discovered is that we have been modelling our networks at the wrong logical layer, and have fundamentally misunderstood the control theory around how data is managed. Instead of managing packets, we need to manage something two logical layers higher: flows of packets over time. With the right mathematical “lens” to see the time-based effects, a new and much simpler way of building and managing networks emerges.

This “Network of Probabilities” works with networks that have statistically stable properties over short periods (milliseconds to seconds) — as most do. His technology can reduce the network to two control points, entry and exit in each direction. (More complex topologies, e.g. with CDNs, can also be managed, but with added complexity in the signalling and the maths.) Packets are re-ordered and dropped in a new way, such that they (virtually) never “self-contend” on their onward journey, and all subsequent buffering can be eliminated.
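
For a rough intuition of what conditioning at the entry point can look like, here is a generic pacing-and-dropping sketch of my own (not PNSol’s actual algorithm): a burst of packets is re-ordered by urgency, paced out at the rate the path can sustain, and any packet that would miss its delay budget is shed at the edge instead of being queued downstream.

```python
# Generic edge-conditioning sketch (illustrative only, not the PNSol mechanism).
# A burst of packets arriving together is re-ordered by urgency, paced at the
# path's rate, and anything that would miss its budget is dropped at the edge.

def condition(burst, path_rate_pps):
    """burst: list of (priority, delay_budget_ms) for packets arriving at t=0;
    a lower priority value means more urgent. Returns (schedule, dropped)."""
    interval_ms = 1000.0 / path_rate_pps      # spacing the path can sustain
    schedule, dropped, t = [], [], 0.0
    for prio, budget_ms in sorted(burst):     # re-order: most urgent first
        if t > budget_ms:
            dropped.append((prio, budget_ms)) # would miss its budget: shed here
        else:
            schedule.append((t, prio))        # departs paced, so it never
            t += interval_ms                  # contends with later packets
    return schedule, dropped

# A 10-packet burst hitting a path that carries 1000 packets per second:
# six urgent packets with a 4 ms budget, four tolerant ones with a 50 ms budget.
burst = [(1, 4)] * 6 + [(2, 50)] * 4
sent, shed = condition(burst, path_rate_pps=1000)
print(len(sent), "sent,", len(shed), "dropped at the edge")   # 9 sent, 1 dropped
```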

Any link or network segment that can saturate can now be managed in a new way. The “pie” of quality attenuation (loss and delay budget) is kept constant, but can be allocated in a fine-grained way to different flows over the network. There is still a longer-term adaptation of applications to the sensed network conditions; there is no magic to overcome the fundamental (and variable) capacity of the network.
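
As a back-of-the-envelope illustration of the “pie” (my own framing, with made-up numbers, not the actual scheduling mathematics): suppose a path offers a fixed quality-attenuation budget of 40 ms delay and 0.5% loss end to end. That budget can be sliced unevenly across traffic classes, with the tight slices going to the flows that need them, while the size of the pie itself never changes.

```python
# Toy allocation of a fixed quality-attenuation budget across traffic classes.
# All names and numbers are illustrative; this is not PNSol's actual model.

path_budget = {"delay_ms": 40.0, "loss_pct": 0.5}   # the whole "pie" for the path

shares = {              # fraction of the pie granted to each traffic class
    "voice":     0.10,  # a tight slice: little delay and loss allowed
    "video":     0.30,
    "bulk_data": 0.60,  # the tolerant class absorbs most of the attenuation
}

assert abs(sum(shares.values()) - 1.0) < 1e-9        # the pie stays constant

for flow, share in shares.items():
    print(f"{flow:9s}: up to {share * path_budget['delay_ms']:.1f} ms delay, "
          f"{share * path_budget['loss_pct']:.2f}% loss")
```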

The bottom line? We can load up networks to 100% of capacity, mix multiple classes of traffic together, and also add in scavenger traffic (with no cost impact on the rest of the network).

It’s like the Philosopher’s Stone of telecoms, with one difference: it exists.

[Editor’s note: Martin Geddes is founder of Martin Geddes Consulting Ltd, and this article is cross-posted with permission from his blog. The original article is here.]
Comments
  • Steve Crowley

    “We can load up networks to 100% of capacity, mix multiple classes of traffic together, and also add in scavenger traffic (with no cost impact on the rest of the network).”

    That sounds similar to what is done in cellular networks today, as part of resource allocation and scheduling. On the other hand, reading the transcript of the interview, the inventor says he was able to “more than double the effective capacity of one of the U.K.’s mobile network’s data” just by some mathematical tweaking, before even applying his invention. I wouldn’t have thought there was that much slack in today’s radio access networks that could be so readily addressed. I hope it works as described. It would be interesting to see more on this.
