How we broke the Internet

August 14, 2004

I would say it started around 1998. There were probably forces in play long before that, but somewhere around the 1998 timeframe is when things really started to take effect. This is when things like Layer-5 switches, network-based caching technologies, and various optimizations began to be deployed around the net.

This was all done in the name of making things better, especially to improve web browsing performance, the dominant interactive application of the time. But in our haste to make things better, we broke the Internet. We destroyed the very fabric that permitted the application we were trying to optimize to come into being in the first place.

Let’s say it’s 1990 and we decide to start optimizing the network for one of the most popular applications of the day, say FTP. What happens when HTTP comes along? Perhaps we would never have seen the web happen, because the network was so optimized for one application that it no longer supported other applications very well.

Things got worse after 1998. We continued to build a network around the applications people used at the time. More and more, we built a web and email network, and less and less a general-purpose, stupid, but efficient, packet-delivery network. The general-purpose stupid-network nature of the Internet is what allowed new applications, ones nobody had thought about when they built the network, to arise without requiring any network changes to support them. That’s the beauty of a good stupid general-purpose network.

So, whether intentionally or not, we did things to entrench a web and email network, and thereby prohibit or restrict future “not web and email” innovations. A few examples of web-and-email-centric design decisions:

  • The asymmetric nature of broadband connections. We assume a user sends a “little” request and gets back a “big” answer. And we all say it’s okay, because that’s what we have been doing. But it restricts forever what we can do on that pipe. It’s a self-fulfilling, self-reinforcing decision.

  • NAT routers and firewalls assume connections are TCP and are initiated outbound from the user to a server. Gee, that’s how email and the web work. Oh, and because that’s what routers support, that’s how all new applications must be implemented. Again, a self-reinforcing decision that restricts new innovations to that framework.
  • The belief that latency can be handled with large TCP windows. Who cares if latency is high? It won’t hurt web surfing and email too much if we use a large TCP window size. Gee, but it kills applications that don’t work like web and email. Notice a theme here?
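The TCP-window point is really about the bandwidth-delay product: a sender needs roughly one BDP’s worth of data in flight to keep a pipe full, so high latency just means a bigger window for bulk transfers like the web. A rough sketch (the link rate and round-trip time below are illustrative assumptions, not figures from this post):

```python
# Sketch: why high latency pushes TCP toward large windows.
# Bandwidth-delay product (BDP) = link rate * round-trip time; a TCP sender
# needs about a BDP of unacknowledged data in flight to keep the link busy.

def bdp_bytes(link_bits_per_sec: float, rtt_sec: float) -> int:
    """Bytes of in-flight data needed to fill the link."""
    return int(link_bits_per_sec * rtt_sec / 8)

# An assumed 6 Mbit/s link with 200 ms of round-trip latency:
print(bdp_bytes(6_000_000, 0.200))  # 150000 bytes -- fine for a bulk web page

# A real-time stream cannot make this trade: a voice packet retransmitted
# after a 200 ms round trip arrives too late to be played out at all.
```

A big window papers over latency for file-like transfers, which is exactly why it does nothing for the applications that don’t look like the web or email.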

I could go on. The point is that we took the general-purpose stupid network that was the Internet in 1990 and turned it into the smart, optimized web, email, and DNS network of 2004. The network of 1990 permitted new applications that nobody built the network specifically for, including all those innovations of the nineties that we enjoy today. The Internet of 2004 places tremendous limits on the nature of the innovations it can support going forward, and most people don’t even realize it. They will say I’m crazy (well, they may be right about that, but that’s a post for another day). There is probably no going back now. IPv6 could be the saviour, but because of the self-reinforcing nature of the existing IPv4 web, email, and DNS network, people don’t see a need for it. They don’t realize the Internet is broken. They are FTP users in 1990, unable to see that the web of the future cannot happen on today’s restricted version of the Internet.

3 Responses to How we broke the Internet

  1. September 4, 2004 at 12:46 pm

    Can you be specific? What innovations are precluded by the optimizations you mention?

  2. September 5, 2004 at 1:54 pm

    Well Scott, the whole point is that we cannot know in advance what innovations the future holds, and that’s why we want to keep the network as general-purpose and fair as possible.

    That said, some examples of the limits of asymmetrical links are easy. 64 Kbit/s (or less in some cases) upstream and 6 Mbit/s downstream doesn’t really work if we want to use peer-to-peer applications. That tiny upstream pipe gets filled very easily. Take something like video conferencing or audio conferencing. These are symmetric and synchronous applications. We don’t want the *smart* network caching these real-time packets and attempting to *fix* them or *retransmit* them. By the time a real-time packet gets retransmitted, it is old and of no use to the real-time stream.

    As computing horsepower grows and gets cheaper, it makes sense to push more things to the edge. It makes sense for something like distributed audio conferencing, for instance, where there is no central server and the end nodes each do their own mixing. However, we cannot do that, because the massively asymmetric last-mile pipes forbid it.
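The arithmetic behind that claim is easy to sketch. With no central mixer, each participant sends a separate stream to each of the other N−1 peers, so the upstream requirement grows linearly with group size (the 32 kbit/s codec rate and 64 kbit/s upstream cap below are illustrative assumptions):

```python
# Sketch: upstream load for serverless, fully-meshed audio conferencing.
# Each node uploads one copy of its stream to every other participant.

def upstream_kbps(peers: int, stream_kbps: float) -> float:
    """Upstream bitrate one node needs to feed every other peer directly."""
    return (peers - 1) * stream_kbps

UPSTREAM_CAP_KBPS = 64  # the tiny upstream of the asymmetric link in question

for n in (2, 3, 5):
    need = upstream_kbps(n, 32)  # assume a 32 kbit/s voice codec per stream
    print(f"{n} peers: {need} kbit/s up, fits={need <= UPSTREAM_CAP_KBPS}")
# 2 peers fit, 3 peers sit exactly at the cap, 5 peers cannot work at all.
```

Downstream has plenty of room for the same streams; it is purely the asymmetric upstream that rules the design out.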

    As for NAT (home routers): individual nodes are no longer individually addressable. Always-on applications/devices (such as VoIP phones) are a nightmare to make work with NAT. Beyond phones, what about when we want to address the sprinkler system, the refrigerator, or the garage doors (see GarageBot on this blog http://www.toyz.org/mrblog/archives/00000133.html)?
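A toy model makes the addressability problem concrete (this is an illustrative simulation of typical NAT behavior, not how any particular router is implemented): an outbound packet installs a mapping, and an unsolicited inbound packet matches no mapping and is simply dropped.

```python
# Toy model of why NAT only admits traffic for connections initiated from
# the inside: outbound traffic creates state; unsolicited inbound does not.

class ToyNat:
    def __init__(self) -> None:
        # (inside_host, remote_host) pairs seen going outbound
        self.mappings: set[tuple[str, str]] = set()

    def outbound(self, inside_host: str, remote_host: str) -> None:
        """An inside host sends a packet out, installing a mapping."""
        self.mappings.add((inside_host, remote_host))

    def inbound(self, remote_host: str, inside_host: str) -> bool:
        """Return True if the packet is forwarded inside, False if dropped."""
        return (inside_host, remote_host) in self.mappings

nat = ToyNat()
nat.outbound("pc", "webserver")          # browser opens a connection out
print(nat.inbound("webserver", "pc"))    # True: reply matches the mapping
print(nat.inbound("voip-caller", "pc"))  # False: unsolicited call is dropped
```

Web and email only ever need the first pattern, which is exactly why the second one, the always-on reachable device, became the hard case.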

  3. November 30, 2004 at 4:01 pm

    I just don’t see the problem being as bad as you do. The web was a success in part because latency is built into the paradigm. The experience is similar no matter how long it takes a page to load. Compare this with X Windows. There is a good reason I am typing this in a web browser and not an X app.

    Unfortunately, latency is something we still need to deal with, as it is so unpredictable.

    As for VoIP: sure, it doesn’t work behind NAT, so put it in front of your NAT. Problem solved. As for controlling devices behind the NAT, that can mostly be solved with a proxy.

