I would say it started around 1998. There were probably forces in play long before that, but somewhere around the 1998 timeframe is when they really took hold. This is when things like Layer-5 switches, network-based caching technologies, and various other optimizations began to be deployed around the net.
This was all done in the name of making things better, especially to improve the performance of web browsing, the dominant interactive application of the time. But in our haste to make things better, we broke the Internet. We destroyed the very fabric that permitted the application we were trying to optimize to come into being in the first place.
Let’s say it’s 1990 and we had decided to start optimizing the network for one of the then most popular applications, say FTP. What happens when HTTP comes along? Well, perhaps we would never have seen the web happen, because the network was so optimized for one application that it no longer supported other applications very well.
Things got worse after 1998. We continued to build a network around the applications people used at the time. More and more, we built a web and email network, and less and less a general-purpose, stupid, but efficient, packet-delivery network. The general-purpose, stupid-network nature of the Internet is what allowed new applications to arise that nobody had thought about when the network was built, and yet no network changes were required to support them. That’s the beauty of a good stupid general-purpose network.
So, whether intentional or not, we did things to enforce a web and email network, and thereby prohibit or restrict future “not web and email” innovations. A few examples of web-and-email-centric design decisions, each illustrated with a short sketch after this list, include:
- The asymmetric nature of broadband connections. We assume a user sends a “little” request and gets back a “big” answer. And we all say it’s okay, because that’s what we have been doing. But it restricts forever what we can do on that pipe. It’s a self-fulfilling, self-reinforcing decision.
- That NAT routers and firewalls assume connections are TCP and are initiated outbound, from the user to a server. Gee, that’s how email and the web work. Oh, and because that’s what routers support, that’s how all new applications must be implemented. Again, a self-reinforcing decision that restricts new innovations to that framework.
- That latency can be handled with large TCP windows. Who cares if latency is high? It won’t hurt web surfing and email too much if we use a large TCP window size. Gee, but it kills applications that don’t work like web and email. Notice a theme here?
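To put rough numbers on the first point, here’s a quick sketch. The line speeds are illustrative ADSL-era figures I’ve picked for the example, not from any specific provider:

```python
# A hypothetical asymmetric broadband line: 8 Mbit/s down, 1 Mbit/s up.
down_bps, up_bps = 8e6, 1e6

file_bits = 100 * 8e6          # a 100 MB file, in bits

print(file_bits / down_bps)    # ~100 seconds to receive it
print(file_bits / up_bps)      # ~800 seconds to serve that same file to someone else
```

Consuming content is cheap; being a server, a peer, or a publisher on the same pipe is eight times more expensive, by design.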
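To make the NAT point concrete, here is a minimal sketch, in Python, of the one connection shape a typical NAT router or firewall permits without special configuration: an outbound TCP connection from the inside host to a server, a small request out, a response back. The contrasting `serve` function shows the shape that silently fails behind NAT.

```python
import socket

# The shape NAT allows: outbound TCP from the user to a server.
def fetch(host: str, port: int = 80) -> bytes:
    with socket.create_connection((host, port), timeout=5) as s:
        # "Little" request out; the NAT creates a mapping for this flow
        # when it sees our outbound SYN, so the reply is let back in.
        s.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode()
                  + b"\r\nConnection: close\r\n\r\n")
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

# The shape NAT breaks: listening for inbound connections. An outside
# peer's SYN arrives at the NAT with no mapping and is dropped, so
# accept() blocks forever unless a port forward is configured.
def serve(port: int = 9000) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", port))
    srv.listen()
    conn, addr = srv.accept()   # never returns behind a typical NAT

print(fetch("example.com")[:64])
```

So any new application, whatever it actually needs, ends up contorted into the `fetch` shape: client connects out, server answers.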
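The window-size claim in the last bullet is just the bandwidth-delay product: to keep a pipe of bandwidth B full across a path with round-trip time RTT, the sender must keep B × RTT bytes in flight, which a large TCP window allows. That rescues bulk transfers like web pages and email, but an application whose unit of work is one round trip pays the full RTT every time, window or no window. A back-of-the-envelope sketch:

```python
def window_needed_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    # Bandwidth-delay product: bytes that must be in flight to fill the pipe.
    return bandwidth_bps / 8 * rtt_s

# Bulk transfer: 10 Mbit/s at 100 ms RTT needs only a 125 KB window,
# so a big window really does hide the latency for web and email.
print(window_needed_bytes(10e6, 0.100))   # 125000.0 bytes

# Interactive application: each exchange must wait for the previous
# answer, so it pays the full RTT per exchange regardless of window size.
rtt_s, exchanges = 0.100, 50              # e.g. a chatty protocol or a game
print(exchanges * rtt_s)                  # 5.0 seconds of pure waiting
```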
I could go on. The point is that we took the general-purpose stupid network that was the Internet in 1990 and turned it into the smart and optimized web, email, and DNS network of 2004. The network of 1990 permitted new applications that nobody built the network specifically for, including all those innovations we saw through the nineties and enjoy today. The Internet of 2004 places tremendous limits on the nature of innovations that can be supported going forward, and most people don’t even realize it. They will say I’m crazy (well, they may be right about that, but that’s a post for another day). There is probably no going back now. IPv6 could be the saviour, but as a result of the self-reinforcing nature of the existing IPv4 web, email, and DNS network, people don’t see a need for it. They don’t realize the Internet is broken. They are FTP users in 1990, not seeing that the web of the future cannot happen on today’s restricted version of the Internet.