CTIA’s International Case for More Spectrum

In a recent blog post, CTIA compares some measures of the U.S. wireless industry to those in nine other countries. The purpose is two-fold: to show that the U.S. is a leader in number of subscribers, lowest cost per voice minute, and spectrum efficiency, and to argue the need for getting more mobile broadband spectrum into the “pipeline.” These goals are somewhat at odds, and I don’t get the spectrum-efficiency argument, as I’ll explain, but within the constraints of a blog post I think CTIA makes the case that the U.S. is a clear leader in some areas, and that the prospects for more mobile spectrum in the U.S. are fuzzier than they should be today.

Looking at the chart, we see the U.S. has the most subscribers of the countries chosen for comparison (China and India each have about three times as many subscribers as the U.S.). Then we see that the U.S. average revenue per voice minute, which can be considered a proxy for subscriber cost, is 4 cents, the lowest of all countries compared. Very impressive. I’ve seen criticism that this figure may be lower than the actual cost per minute because of a possible assumption that the consumer uses all, and no more than, the plan minutes each month. Even with further adjustments I’d expect the comparison to be favorable. With the growing role of mobile data services, data costs would be useful to see as well; such a comparison is conspicuous by its absence.
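To put that criticism in concrete terms, here is a quick sketch with entirely made-up plan numbers (the $40 bill, 1,000 plan minutes, and 500 used minutes are hypothetical, not from CTIA’s data):

```python
# Hypothetical illustration of the plan-minute criticism (numbers are made up).
monthly_bill = 40.00   # dollars per month for a hypothetical plan
plan_minutes = 1000    # minutes included in the plan
used_minutes = 500     # minutes the subscriber actually uses

# If you assume every plan minute is consumed, cost per minute looks low:
cost_if_all_minutes_used = monthly_bill / plan_minutes   # $0.04 per minute
# If only half the minutes are used, the effective cost doubles:
effective_cost = monthly_bill / used_minutes             # $0.08 per minute

print(f"Assumed: ${cost_if_all_minutes_used:.2f}/min, effective: ${effective_cost:.2f}/min")
```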

CTIA then positions the U.S. as a leader in efficient use of spectrum, by a factor of two-to-one or more, using “Subscribers Served per MHz of Spectrum Allocated.” Here, they lose me. Cellular spectrum is reused; we are not partitioning the spectrum allocations so that each subscriber gets a unique fraction. There are many ways to measure spectrum efficiency, and new ones can be created, but I don’t see how this one qualifies.
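For a rough sense of why the metric bothers me, here is a sketch with entirely hypothetical numbers for spectrum holdings, cell count, and spectral efficiency:

```python
# Hypothetical numbers, only to show why nationwide subscribers-per-MHz
# says little about how efficiently the air interface uses spectrum.
spectrum_mhz = 100      # spectrum one operator holds (hypothetical)
cells = 50_000          # cell sites, each reusing the same spectrum
spectral_eff = 1.5      # bits per second per Hz per cell (hypothetical average)

# Capacity scales with the number of cells times spectrum, because the
# same MHz are reused in every cell:
capacity_bps = cells * spectrum_mhz * 1e6 * spectral_eff

subscribers = 100_000_000
subs_per_mhz = subscribers / spectrum_mhz   # the CTIA-style metric

# Doubling the cell count doubles capacity but leaves subs_per_mhz
# unchanged; the metric can't see the reuse that did the work.
print(f"{subs_per_mhz:,.0f} subscribers per MHz; {capacity_bps/1e9:,.0f} Gbps total")
```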

We’re back on track with the last two lines showing spectrum allocations and new spectrum in the pipeline.  I’m concerned that they are based in part on “regulatory and company websites and press reports.”  As we have seen, some press reports are more reliable than others. I’d like to see the specific reference for each figure.

I find it noteworthy that Japan and South Korea, nations with very progressive wireless industries, make do with less spectrum than the U.S. It may be no coincidence that South Korea has one of the smallest spectrum allocations, one of the smallest amounts of spectrum in the pipeline, and operators that are aggressively deploying Wi-Fi offloading and six-sector base-station antennas, which nominally double spectrum capacity compared to the more common three-sector antennas.

The 50 MHz pipeline figure apparently comes from the National Broadband Plan. There’s been a lot of spectrum policy activity since then, but we agree there’s little in the pipeline now. In my next post I’ll try to take a snapshot of where we stand with the usual spectrum candidates, and suggest one or two others. Later I plan to review where we are with offloading, tiered rate plans, and other measures that can reduce, but not eliminate, the need for new mobile broadband spectrum.

[cross-posted from Steve Crowley’s blog]

Comments
  • Richard Bennett

    AT&T says the iPhone caused an 8000% increase in demand for mobile capacity. That’s a lot more than doubling the capacity of each tower can handle (not that going from 3 to 6 sectors really doubles capacity without more spectrum, mind you).

  • Steve Crowley

    I’d like someone to explain to me how the 8000% figure is determined. Is it supposed to be a rate of increase in air-interface throughput, a sum of fractional increases throughout the network, or something else? Did Wi-Fi traffic get lumped in somehow? As it stands, the figure has no meaning to me.

    I don’t think we need more spectrum to do six sectors. Everything is on one frequency now, cell-to-cell and sector-to-sector, within a given section of a system. If one is able to, say, double the spectrum, one can do both and quadruple capacity just with those two techniques.

    Ideally, anyway. In practice, sectorization is less than ideal, whether going from three to six sectors or from none to three. The move in 3GPP toward self-organizing networks takes some of the pain out of optimization, and that optimization pain is one reason six sectors never really took off despite having been around a while. Truck rolls are death.

  • Richard Bennett

    The 8000% number represents the increase in total AT&T RAN traffic from 2007 to 2010. For sectorization to increase capacity, it requires non-overlapping frequencies (or codes in CDMA). As I understand it, AT&T deploys a unique 10 MHz channel in each sector.

  • Steve Crowley

    Individual channels in the same sector would be made orthogonal through coding. Adjacent sectors have to rely on the carrier-to-interference ratio. There’s a hit at the cell or sector edge due to the lower ratio there, so capacity goes down for users at the edge. These and other practical limits are why one never sees, say, 12-sector base stations. With LTE and LTE-Advanced, OFDM subcarriers from different sectors can be allocated in frequency and time to reduce overlap and improve edge performance, bringing back the concept of frequency reuse, but practical limits will remain.
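    One way to picture that subcarrier coordination is fractional frequency reuse. The toy allocation below is a simplification of the idea, not a 3GPP algorithm; the sector names and subcarrier counts are made up for illustration.

    ```python
    # Toy fractional-frequency-reuse allocation (a simplification, not a 3GPP spec).
    # Cell-center users in every sector may use the full subcarrier set; cell-edge
    # users in each sector get a disjoint third, so edge transmissions in
    # neighboring sectors don't overlap in frequency.
    NUM_SUBCARRIERS = 48
    all_subcarriers = list(range(NUM_SUBCARRIERS))
    edge_bands = {
        "sector_A": all_subcarriers[0:16],
        "sector_B": all_subcarriers[16:32],
        "sector_C": all_subcarriers[32:48],
    }

    def allocate(sector, at_cell_edge):
        """Return the subcarriers a user in this sector may be scheduled on."""
        return edge_bands[sector] if at_cell_edge else all_subcarriers

    # Edge users in adjacent sectors land on disjoint subcarriers, improving
    # their carrier-to-interference ratio at the cost of less bandwidth there.
    assert set(allocate("sector_A", True)).isdisjoint(allocate("sector_B", True))
    ```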

  • Richard Bennett

    The coding systems that allow for non-interfering multiple use reduce efficiency, so they aren’t that much of a boon.

    I think the bottom-line issue is that additional spectrum is a far cheaper path to increased capacity than more sectors or more base stations. Sector splitting and increased base station density both introduce increased interference unless the spectrum is there to avoid overlaps.

  • Steve Crowley

    I am not aware of a significant efficiency loss with the use of codes as a multiple access technique. The technique has been used successfully in the mobile marketplace starting with Qualcomm’s IS-95 system in 1993.

    Regarding the second point, when the alternatives are more sectors or more base stations, I would say spectrum may be the cheaper path to capacity improvement. In other cases, more sectors or base stations may be cheaper, especially small cells and femtocells. Qualcomm estimates that a reasonable deployment of femtocells in one macrocell can increase capacity 10 times. With about 500 MHz of spectrum allocated for mobile in the U.S. now, we could allocate the entire 300-3000 MHz band to mobile broadband and not reach that level of improvement. (This ignores other solutions to the capacity crunch such as offloading to Wi-Fi, improved radio technology, and more rational rate plans that better connect the user’s costs to the resources used.)
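    The arithmetic behind that comparison, sketched with the round numbers above:

    ```python
    # Sketch of the arithmetic in the paragraph above.
    current_mobile_mhz = 500        # rough current U.S. mobile allocation
    band_300_3000_mhz = 3000 - 300  # width of the entire 300-3000 MHz range

    spectrum_multiplier = band_300_3000_mhz / current_mobile_mhz  # about 5.4x
    femtocell_multiplier = 10       # Qualcomm's estimated femtocell gain cited above

    # Even handing the whole 300-3000 MHz band to mobile broadband yields a
    # smaller capacity multiplier than a dense femtocell deployment.
    print(f"Spectrum route: ~{spectrum_multiplier:.1f}x vs femtocells: ~{femtocell_multiplier}x")
    ```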

    Another advantage of more sectors and more cells is that the user’s device requires less power on the uplink, due to higher sector antenna gain or reduced distance to the cell. This increases battery life. It also reduces user exposure to RF energy, which may be important to those who give consumers advice on how to reduce RF exposure, such as CTIA.

  • Richard Bennett

    How well do you understand coding theory, Steve? I’d say not very well judging from your comment. It’s like this: You have a frequency that can support a data rate of X. CDMA divides X over a number of simultaneous streams of X/N, where N is the subdivision of the total code space allocated to each stream, or the number of simultaneous streams the coding system can handle. It’s not any different in principle from TDMA or FDMA in terms of efficiency. There’s no such thing as a free lunch.
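    To make the X/N arithmetic concrete, here is a toy Walsh-code sketch (an illustration of the principle only, not a model of any deployed system):

    ```python
    import numpy as np

    # Toy Walsh-code spreading example: four orthogonal length-4 codes share one
    # channel, so each stream gets chip_rate / 4 and the total never exceeds
    # what the channel supports.
    walsh4 = np.array([[1,  1,  1,  1],
                       [1, -1,  1, -1],
                       [1,  1, -1, -1],
                       [1, -1, -1,  1]])

    bits = np.array([+1, -1, -1, +1])                           # one symbol per user
    channel = sum(b * code for b, code in zip(bits, walsh4))    # superposed chips

    # Despreading: correlate with each user's code and normalize by code length.
    recovered = np.sign(walsh4 @ channel / 4)
    assert np.array_equal(recovered, bits)   # each user gets its own bit back

    chip_rate = 1.2288e6                     # IS-95 chip rate, chips per second
    per_user_rate = chip_rate / len(walsh4)  # rate available to each stream
    print(f"Each of {len(walsh4)} users gets {per_user_rate/1e3:.0f} ksym/s of the shared channel")
    ```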

    The logic of femtocells is the same as the logic of Wi-Fi offload: You pass the costs of the base station off to the end user, taking advantage of his wireline bandwidth, his real estate, and his power. That makes for some impressive-looking numbers because it’s mainly an accounting trick.

    The fact remains that for each sector or each base station added to a cell, the relative efficiency of the system declines because you have more edges and higher roaming overhead.

    The cost comparison between spectrum and base stations differs by an order of magnitude, and not in the direction you think.

  • Steve Crowley

    I’ll take your four points in order . . .

    First, you incorrectly imply that in my previous comment I said there was an efficiency loss with coding when I actually said “I am not aware of a significant efficiency loss with the use of codes.”

    Second, I agree except for the accounting-trick part. The capacity gains within a macrocell from femtocells and other complementary access systems are real. The point is to increase capacity in a given area using a heterogeneous network instead of the previous homogeneous networks of 3G. We want wireless capacity increases beyond the factor of, say, 2 that we would get by doubling spectrum for mobile broadband. The industry can’t serve all our wireless needs with 3G and 4G only. See ITU-R Recommendation M.1645, Figure 5 (http://bit.ly/q4vMqQ). They had the right concept in 2003.

    I disagree with the third point. All evidence I see points to an efficiency increase in these cases. If your argument were to hold, we could increase capacity by converting all three-sector base-station antennas to nondirectional whip antennas, as that would reduce “edges” and “overhead.” One can search the literature and find support for the concept of increased capacity from increased sectorization; here’s one example: http://bit.ly/npa0HJ. I have not seen any example that says increased sectorization reduces system capacity. SK Telecom would not now be deploying 500 six-sector antenna systems in Korea if they resulted in a capacity loss compared to three-sector antennas.

    On your last point, I’ll disagree and reiterate what I said earlier: “I would say spectrum may be the cheaper path to capacity improvement. In other cases, more sectors or base stations may be cheaper, especially small cells and femtocells.”

  • Darrin Mylet

    Agree with Steve. It is an infrastructure problem, not a spectrum problem.
    We have three tools available to meet this demand growth using the existing macro network topology:

    1. Increased spectral efficiency by 2015 = 3.4x (2011 Ofcom report)
    2. Increased spectrum available by 2015 = 2x (UK spectrum auction plans)
    3. Introducing intelligent small cells in the same spectrum = 56x (3GPP)

    Source: Ubiquisys

  • Filling the Spectrum Pipeline « High Tech Forum

    […] my last post I looked at how the U.S. is behind some other countries in having new mobile broadband spectrum in […]

  • Richard Bennett

    That’s an interesting set of claims, Darrin. Would you like to expand your argument into a blog post? I’ll gladly publish it.

    Also, the 56x claim for small cells isn’t well sourced. Ubiquisys claims it comes from 3GPP, but they don’t cite a specific document name or provide a link. A search of 3GPP.org doesn’t turn it up, so if you can fill in the gaps, it would be helpful.

  • Steve Crowley

    I’ll take a crack at this. First, the Ofcom report on 4G capacity gains is worth a look, if nothing else for the figure on the cover: http://bit.ly/j8uGyU. It’s an excellent report, but one of the authors has told me that any numbers in it should be used with caution for any other country, because the report is tailored to the U.K. market.

    I don’t know where the 56x number comes from, so I’m curious myself. I have a feeling it may be related to system-level simulations on femtocell capacity improvements done by Qualcomm and presented in 3GPP in this contribution: http://bit.ly/ngDEqo (see Figure 9). Put simply, the blue curve is the average user-equipment downlink throughput in an HSPA cell without femtocells, assuming 34 users in the cell. The red curve is the same except that 24 of those users are indoors on femtocells. Eyeballing it, I’d say it’s a gain of roughly 50.

    Less obvious from these results is that the gains are not only to indoor femtocell users but also to outdoor macrocell users, because a disproportionate amount of system resources is no longer being used to serve indoor users who are under worse propagation conditions.

  • Richard Bennett

    So the claim is that reducing the demand for tower capacity by 70% increases the performance of the remaining 30% by 5000%?

    That’s a bit counter-intuitive.

  • Steve Crowley

    Put that way, I would suggest the intuition on the results is as follows: Within the limitations of the simulation model, when 70% of the users in a macrocell are moved off the tower and onto femtocells, their average throughput increases by more than 5000% because 1) each user has a “cell” to themselves and no longer has to share downlink capacity with 33 other users, and 2) the signal quality a few feet away from the femtocell is so high that they get peak rates a lot of the time instead of infrequently, as in a macrocell.

    The other 30% still in the macrocell will see a performance improvement because they are no longer competing with the femtocell users. Their improvement will be less than 5000% as they are still sharing one downlink, and their signal quality is not so good due to the distance from the tower.

    Combining the average improvements of both, one gets roughly a 5000% improvement.
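    A toy version of that blending, with made-up throughput numbers chosen only to show how the weighted average works (not values from the Qualcomm simulation):

    ```python
    # Made-up numbers to show how the blended gain works; not the values from
    # the Qualcomm/3GPP simulation referenced above.
    baseline_per_user = 0.3   # Mbps per user with 34 users sharing one macrocell

    femto_users, macro_users = 24, 10
    femto_per_user = 20.0     # Mbps each on a personal femtocell (hypothetical)
    macro_per_user = 1.0      # Mbps each for the 10 left on the macrocell (hypothetical)

    before = 34 * baseline_per_user
    after = femto_users * femto_per_user + macro_users * macro_per_user
    print(f"Aggregate gain: {after / before:.0f}x")   # roughly 48x with these inputs
    ```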

    • Richard Bennett

      A recent test of LTE networks in the U.S. by Laptop.com found that the fastest one downloads at 11 Mbps under real-world conditions, so 50x that rate is 550 Mbps. Short haul, that’s still going to take ~160 MHz per active user. The new, higher-rate 802.11 PHY is in that general territory, so for nomadic applications that’s a possibility someday, given the roughly 700 MHz already allocated for Wi-Fi in the 4 or 5 bands it’s got.
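      Here’s the back-of-the-envelope behind the ~160 MHz figure, assuming a spectral efficiency of roughly 3.4 bps/Hz delivered to one user (my assumption for the sketch, not a measured value):

      ```python
      # Back-of-the-envelope for the ~160 MHz figure; the 3.4 bps/Hz spectral
      # efficiency is an assumption for the sketch, not a measured value.
      measured_lte_mbps = 11    # fastest real-world LTE download in the Laptop.com test
      small_cell_gain = 50      # the femtocell gain discussed above
      target_mbps = measured_lte_mbps * small_cell_gain    # 550 Mbps

      spectral_efficiency = 3.4                            # bps/Hz to one user (assumed)
      bandwidth_mhz = target_mbps / spectral_efficiency    # roughly 160 MHz per active user
      print(f"{target_mbps} Mbps at {spectral_efficiency} bps/Hz needs ~{bandwidth_mhz:.0f} MHz")
      ```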

      We’ve still got to figure out how to supply 100 Mbps bursts to mobile users, however, and we need to determine whether the self-organizing claims that Ubiquisys makes are reasonable at scale.

Comments are closed.