Spectrum Reality, Part 2

In my last post, I promised to follow up on the 700 MHz band's unique properties, so here you go. The background fact about spectrum is that opportunistic access schemes, such as Wi-Fi's CSMA/CA listen-before-talk system and Bluetooth's frequency hopping, only work well at short distances, while the 700 MHz band excels at pushing information over long distances. In this context, "short distances" are measured in dozens or hundreds of feet, and "long distances" are measured in kilometers or miles.

This is the case because radio networks share common frequencies, or channels. A TV broadcaster transmits a signal from a single antenna, and millions of TV sets receive it (most ignore it, but that's another issue). Transmitters on a shared frequency interfere with each other, but with only one transmitter in the system there are no collisions to worry about. TV broadcast is one transmitter and many receivers, so it works best when that one transmitter can blanket a large area with a common signal.

Data networks operate in a very different way because they feature two-way communication and multiple transmitters. Multiple transmitters thrive on frequencies that travel limited distances, for two reasons:

  1. Interference from collisions (multiple stations transmitting at the same time) decreases as the number of potential transmitters visible to a given receiver declines. More transmitters means more collisions, in other words, and each collision is a waste of capacity. This problem is mitigated in several different ways, and each of them is most effective when the pool of potential transmitters is small. Limiting the travel of each transmission limits collisions; there's a rough numerical sketch of this effect after the list.
  2. The overhead of detecting and correcting collisions is a function of the distance a packet travels before a collision can be detected. In the old Ethernet system devised at Xerox PARC, collisions were detected within the first 64 bytes of each packet. Transmitters could see that their packets had collided, so they truncated packets that weren't going to make it. This was a very low-overhead system in which collisions didn't impose a significant toll on the pool of available bandwidth. Wi-Fi is very different because transmitters aren't aware of collisions as they happen, only after an entire packet has been transmitted and an interval has elapsed for the receiver to acknowledge it. No acknowledgement means a collision probably happened. The waiting interval for the acknowledgement is sized to the latency of the network over its largest extent, so it's a function of the speed of light (roughly a thousand feet per microsecond) over the distance the network is expected to cover. Wi-Fi networks can't span more than about 5,000 feet without bending the rules and reducing capacity: they require an acknowledgement within 10 microseconds of the end of a packet, and within that window both the data packet and the acknowledgement have to cover the whole distance, and the packet has to be checked for correctness. The greater the separation of the most distant stations, the more likely they are to collide as well, because they have to wait longer to see each other's transmissions, an additional form of system overhead. The timing arithmetic is sketched below, after the list.
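
To put a rough number on the first point, here's a quick back-of-the-envelope sketch. It's my own toy model, not anything from the Wi-Fi spec: assume each station a receiver can hear decides to transmit in a given contention slot with some small, fixed probability, and watch how fast the odds of a collision climb as the pool of visible transmitters grows.

```python
# Toy contention model (an illustrative sketch, not 802.11 behavior):
# each of n visible stations transmits in a given slot with probability p,
# independently. A given transmission collides if any *other* station
# transmits in the same slot. Shrinking the radio range shrinks n.

def collision_probability(n_stations: int, p_transmit: float) -> float:
    """Chance that at least one of the other n-1 stations transmits too."""
    return 1.0 - (1.0 - p_transmit) ** (n_stations - 1)

if __name__ == "__main__":
    for n in (2, 5, 10, 25, 50):
        risk = collision_probability(n, p_transmit=0.05)
        print(f"{n:3d} visible stations -> {risk:.1%} chance of a collision")
```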
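
The second point is really just arithmetic, and here's a sketch of it using the round figures above (a thousand feet per microsecond and a 10 microsecond acknowledgement window) rather than exact 802.11 timing constants. Once the network's extent reaches about 5,000 feet, the round trip alone eats the entire window.

```python
# Back-of-the-envelope timing budget using the post's round numbers
# (not spec-exact 802.11 constants): radio waves cover roughly 1,000 feet
# per microsecond, and an acknowledgement is expected within about
# 10 microseconds of the end of a packet. The data packet and the ACK each
# have to cross the network's full extent, so propagation delay alone grows
# linearly with distance.

PROPAGATION_FT_PER_US = 1_000.0   # approximate speed of light in feet per microsecond
ACK_WINDOW_US = 10.0              # the post's acknowledgement deadline

def round_trip_propagation_us(extent_feet: float) -> float:
    """Time for the packet and its acknowledgement to each cross the extent."""
    return 2.0 * extent_feet / PROPAGATION_FT_PER_US

if __name__ == "__main__":
    for extent in (300, 1_000, 5_000, 10_000):
        rt = round_trip_propagation_us(extent)
        slack = ACK_WINDOW_US - rt
        print(f"{extent:6,} ft extent: {rt:5.1f} us round trip, "
              f"{slack:+5.1f} us left for checking the packet and sending the ACK")
```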

So the two reasons are the probability of a collision happening and the overhead of the collision avoidance and detection systems. Cellular systems don't have big collision problems because they rely on scheduling to allocate air time in a way that prevents collisions, so they can break Wi-Fi's 5,000-foot limit without losing efficiency. This makes them better able to use the propagation benefits of the 700 MHz band as opposed to the shorter-range 2.4 GHz band that most Wi-Fi systems use.
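
To see why scheduling changes the picture, here's one more toy comparison, my own sketch rather than a description of any particular cellular standard. When a scheduler hands out air time so that exactly one station owns each slot, collisions simply can't happen, no matter how far apart the stations are; with contention access, overlap is left to chance.

```python
# Toy comparison of contention access versus scheduled access
# (an illustrative sketch, not a model of any real cellular standard).

import random

def contention_collisions(n_stations: int, n_slots: int, p: float = 0.2) -> int:
    """Count slots where two or more stations happened to transmit at once."""
    collisions = 0
    for _ in range(n_slots):
        transmitters = sum(1 for _ in range(n_stations) if random.random() < p)
        if transmitters >= 2:
            collisions += 1
    return collisions

def scheduled_collisions(n_stations: int, n_slots: int) -> int:
    """Round-robin grants: slot s belongs only to station s % n_stations,
    so at most one station transmits per slot and collisions never occur."""
    collisions = 0
    for s in range(n_slots):
        transmitters = [s % n_stations]  # only the slot's owner transmits
        if len(transmitters) >= 2:
            collisions += 1
    return collisions

if __name__ == "__main__":
    random.seed(7)
    print("contention:", contention_collisions(10, 1_000), "collided slots out of 1,000")
    print("scheduled: ", scheduled_collisions(10, 1_000), "collided slots out of 1,000")
```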

But these things are not completely black and white, and I'll explain why in the next installment.