Tuesday, August 31, 2010

Why the FCC can’t protect Internet Users

A response to a letter to the editor by Michael J. Copps
In today’s Washington Post, Michael J. Copps, a member of the Federal Communications Commission, lays out his argument for why the FCC does not, but should, have jurisdiction over “protecting” the internet.
Below is my respectful response:

Mr. Copps,

In your letter in the Post this morning you state, “Now is the time to put broadband back under Title II, where it belongs -- and under which many smaller companies continue to offer Internet access to the public.”  You further state that the Verizon-Google plan that the Post endorses, “creates a two-tiered Internet at the expense of the open Internet we now have…”

The internet, as it exists right now, already includes a number of offerings that give some providers an advantage that others may find difficult to replicate.  I will use only one example.  Google (and others such as Hulu, Netflix, etc.) have begun to deploy massive replication servers throughout the internet in order to serve their content from locations physically closer to the user.  This creates an advantage for them because it means their content flows through fewer routers on its way to the end user.

To many, this is a good thing, because it means that the user experience delivered by those content providers who can afford to replicate their content on a massive scale will be better.  It also means that the little guy you are trying to protect will have a tougher time competing against those who can afford this type of replication.  This has nothing to do with the way that traffic gets delivered; it simply takes advantage of the laws of physics and the fact that the fewer routers a stream of data must flow through to reach the user, the more likely it is to be delivered in a manner that is acceptable to the end user.
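To make the point concrete, here is a rough back-of-the-envelope sketch in Python.  The per-hop delay and loss figures are purely hypothetical numbers chosen for illustration, not measurements of any real network, but they show how delay and packet loss compound with every router a stream must cross:

```python
# Hypothetical per-hop figures, for illustration only.
PER_HOP_DELAY_MS = 5.0   # assumed average delay added by each router
PER_HOP_LOSS = 0.001     # assumed chance a packet is dropped at each router

def end_to_end(hops):
    """Return (total delay in ms, probability a packet survives all hops)."""
    delay = hops * PER_HOP_DELAY_MS
    survival = (1 - PER_HOP_LOSS) ** hops
    return delay, survival

# A replicated server near the user vs. a distant origin server.
print(end_to_end(4))    # few hops: lower delay, higher delivery probability
print(end_to_end(20))   # many hops: higher delay, more cumulative loss
```

Whatever the real numbers are, the shape of the result is the same: the provider who can place a server four hops from the user beats the one stuck twenty hops away, before any traffic-management policy enters the picture.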

Google (and others) are doing this because they can afford it.  Smaller content providers, however, may be squeezed out in the process: if they cannot afford the same level of replication, their content will have to travel a much longer route and “bang up” against a higher number of routers, any one of which could be a potential “choke” point.  What is fair about that?

What if an internet service provider were to offer a service that would allow a content provider to affordably deliver its traffic with a higher priority?  This would give smaller content providers another means of competing with the big content providers without having to purchase server space in every geographic region.
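The mechanism I have in mind is ordinary priority queuing, which routers already support.  Here is a minimal sketch in Python; the packet names and priority values are invented for the example, and a real router's scheduler is far more involved, but the behavior is the idea: a packet tagged with a higher priority is forwarded first even if it arrived later.

```python
import heapq

def forward_order(packets):
    """packets: list of (priority, arrival_seq, source) tuples,
    where a lower priority number means 'send sooner'.
    Returns the sources in the order the router would forward them."""
    heap = list(packets)
    heapq.heapify(heap)
    return [src for _, _, src in (heapq.heappop(heap) for _ in range(len(heap)))]

queue = [
    (1, 0, "big-cdn-video"),          # best-effort, already near the user
    (0, 1, "small-provider-video"),   # hypothetical paid expedited tier
    (1, 2, "big-cdn-video"),
]
print(forward_order(queue))
# The small provider's packet is sent first despite arriving second.
```

The small provider pays for a tag instead of paying for server racks in every city, which is exactly the cheaper lever the current policy debate would take off the table.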

Your present position on so-called net neutrality would deny smaller content providers that option.  They would be forced to compete with the only weapon presently available to them: content replication.

Here is my point.  The present system is already unfair and tilted toward the existing content providers in a way that has nothing to do with so-called net neutrality.  Traffic on the internet, such as video, is growing at a geometric pace.  This traffic is very sensitive to delays and packet loss.  Your proposed policy leaves only one way to manage this dilemma, a way which the large content providers are already exploiting.  Denying internet service providers a different means of managing the traffic may jeopardize the very thing you are trying to protect, because giving smaller content providers a way to expedite their traffic across the internet could be a less expensive and more powerful way of leveling the playing field with the large content providers.

Please reconsider your position.  As it stands right now, your present stand protects the content providers already in place and denies smaller, less well-financed providers a more cost-effective mechanism with which they might compete.

Let Freedom Ring
