Now is a great time for the web. HTML5 is coming (the latest draft was released on 20th March 2012), LTE is coming or already live, fiber is promised to reach every home within 10 years, and tablets and phones can finally deliver a high Quality of Experience to users.
One of the bottlenecks is HTTP. It’s a great, flexible protocol. It’s also a flawed one. One of its major problems is that requests are serialized – you cannot ask for the next object on a page until the previous one has finished downloading. This is the classic head-of-line blocking problem: if a large object (say, a big media file) is being downloaded, it blocks everything behind it and the page loads slowly. To circumvent that, browsers open multiple TCP connections to web servers. This is a decent workaround, but it puts more stress on the network – the more TCP connections there are, the more state devices such as firewalls and proxies have to track. Opening more connections also lowers the average packet size (handshake packets carry no payload), and that added overhead is not desirable.
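The trade-off is easy to simulate. The sketch below uses made-up transfer times (they stand in for one large media file followed by small assets, not real measurements): one connection takes roughly the sum of all downloads, while one connection per object takes roughly as long as the largest single object.

```python
import threading
import time

# Hypothetical per-object transfer times in seconds:
# one large media file, then several small assets.
transfer_times = [0.5, 0.05, 0.05, 0.05]

def serialized(times):
    """One connection: each request waits for the previous download."""
    start = time.monotonic()
    for t in times:
        time.sleep(t)  # stand-in for downloading one object
    return time.monotonic() - start

def parallel(times):
    """One connection per object, the browsers' workaround."""
    start = time.monotonic()
    threads = [threading.Thread(target=time.sleep, args=(t,)) for t in times]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return time.monotonic() - start
```

Here `serialized` finishes in about 0.65 s (the sum) and `parallel` in about 0.5 s (the maximum) – which is exactly why browsers accept the extra connection overhead.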
Google introduced SPDY as a solution. It works over a single TCP connection but allows parallel HTTP requests, among other things. Now Microsoft is pitching its own input for HTTP 2.0, called “HTTP Speed+Mobility” (a more reasonable name), in a draft submitted to the IETF httpbis working group. Unfortunately, the document is kind of disappointing. It describes a lot of intentions, but not a lot of concrete proposals. I think we can all agree that congestion control should stick to layer 4, and that any new version of HTTP should be backward compatible.
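SPDY’s core idea – multiplexing – can be sketched in a few lines: each response is chopped into frames tagged with a stream ID, the frames are interleaved on one connection, and the receiver regroups them by stream. This way a small response no longer waits behind a large one. This is a toy illustration of the concept, not the SPDY wire format; the frame size, function names, and round-robin scheduling are all simplifications of my own.

```python
from collections import defaultdict
from itertools import zip_longest

FRAME_SIZE = 4  # hypothetical tiny frame payload, for illustration only

def to_frames(stream_id, payload):
    """Split one response into (stream_id, chunk) frames."""
    return [(stream_id, payload[i:i + FRAME_SIZE])
            for i in range(0, len(payload), FRAME_SIZE)]

def interleave(*framed_streams):
    """Round-robin frames from all streams onto one 'connection'."""
    wire = []
    for group in zip_longest(*framed_streams):
        wire.extend(frame for frame in group if frame is not None)
    return wire

def reassemble(wire):
    """Receiver regroups frames by stream ID."""
    responses = defaultdict(str)
    for stream_id, chunk in wire:
        responses[stream_id] += chunk
    return dict(responses)
```

With a 12-byte “large” response on stream 1 and a 2-byte “small” one on stream 3, the small stream’s only frame goes out right after the large stream’s first frame instead of after its last – that is the head-of-line blocking fix in miniature.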