Traffic Management and the Common Framework

We’ve been writing a lot about Internet traffic management here, and lo and behold the common framework for Internet regulation unveiled today by Google and Verizon deals with the subject…

August 9, 2010

Virtual Cocktail Party

I’m quoted a couple of times in Newsweek’s article “How Fast Will Your Internet Be in 2020?” Google’s initiative would offer a speed to everyone—one gigabit per second—that the FCC…

August 6, 2010

Battling Visions of Traffic Management

Jay Daley, an Internet address registrar in New Zealand, offers an interesting “net-head view” of Internet traffic management: Traffic management by definition is about protocols and pipes, about balancing services…

August 6, 2010

Senate Privacy Hearing

Check my post on Internet Evolution about the Senate’s hearing on Internet privacy and the control of personal information. For the US federal government, privacy rights on the Internet —…

August 4, 2010

Linux Kernel Improvements

Code bloat is prominent in the dark side of software that’s successful over the long term; every application that we use becomes more feature-rich and in some sense larger and…

August 2, 2010

Paul Vixie on DNS Blacklisting

Paul Vixie is well known in the Internet community as a principal implementor of BIND, the most widely used DNS server software. He makes a very important announcement today on CircleID about a new tool he’s developed to…

July 30, 2010

Measuring Internet Performance

In my last big article, I explored the technology that makes the Internet different from the telephone network, packet-switching. This time I want to explore one of the major implications of packet-switching, statistical behavior. The short version of this piece is that the telephone network oriented a generation of regulators and policy geeks toward certain expectations about network behavior that are no longer valid, and the tension between the old view of networks and the new view is the source of a lot of conflict.

Measuring the performance of non-deterministic, packet-switched networks such as those comprising the Internet is a much more difficult challenge than is generally appreciated, but it’s necessary – or viewed as necessary – for a host of consumer-oriented policies. As currently operated, the Internet provides no performance guarantees, relying on a “best-effort” system of packet transfer across facilities shared by a large number of users – some 500 million systems are attached to the Internet presently – operating under wildly different loading scenarios. Many advocates argue that this “best-effort” system represents an ideal state of affairs, and are offended by the notion that network operators might supplement basic service with a more deterministic, for-fee system with bounded performance guarantees. Bob Frankston, for example, believes that the Internet represents a “paradigm shift” in network construction because it radically separates transport from applications. How radical this shift is from a historical perspective is debatable, as the wheel pretty much accomplished the same thing. But I digress.
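To see why best-effort performance is statistical rather than guaranteed, it helps to play with a toy model. The sketch below is my own illustration, not anything from the article: it simulates a single shared link as a simple M/M/1 queue (Poisson packet arrivals, exponentially distributed transmission times) and reports per-packet delay percentiles at two offered loads. The names `simulate_queue` and `percentile` are hypothetical helpers invented for this example.

```python
import random

def simulate_queue(load, n_packets=50_000, seed=1):
    """Model a shared best-effort link as an M/M/1 queue.

    load: offered load as a fraction of link capacity (0 < load < 1).
    Returns sorted per-packet delays, in units of the mean
    transmission (service) time.
    """
    rng = random.Random(seed)
    t = 0.0        # arrival clock
    free_at = 0.0  # time the link next becomes idle
    delays = []
    for _ in range(n_packets):
        t += rng.expovariate(load)      # Poisson arrivals at rate `load`
        start = max(t, free_at)         # wait while the link is busy
        free_at = start + rng.expovariate(1.0)  # mean service time = 1
        delays.append(free_at - t)      # queueing delay + transmission
    delays.sort()
    return delays

def percentile(sorted_vals, p):
    """Read the p-th percentile off a sorted sample."""
    return sorted_vals[int(p * (len(sorted_vals) - 1))]

if __name__ == "__main__":
    for load in (0.5, 0.9):
        d = simulate_queue(load)
        print(f"load={load}: median delay={percentile(d, 0.50):.1f}, "
              f"99th percentile={percentile(d, 0.99):.1f}")
```

Running it shows the crux of the measurement problem: raising the load from 50% to 90% of capacity doesn't shift delay by a fixed amount, it stretches the whole distribution, with the tail percentiles blowing up far faster than the median. A single "speed" number tells you very little about a network whose behavior is a moving probability distribution.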

July 28, 2010

Web Sites Vulnerable to Phishing Attacks

A new report from security firm Dasient concludes that the majority of websites run third-party JavaScript somewhere, which could be putting them at risk. See the…

July 26, 2010

The Apple Antenna Story

What happens when real antenna engineers measure the performance of the iPhone 4’s antenna under different grip conditions? They find that it really needs a bumper: We can make several…

July 22, 2010

FCC’s Deeply Flawed Broadband Report

In my day job, I’m a policy researcher at ITIF working on Internet issues, so the FCC Broadband Deployment Report naturally caught my eye. See the analysis: The bottom line…

July 22, 2010