I saw this blog entry linked on Digg (it currently has over 2000 diggs), and felt that I should respond to it.
The author claims that poor latency is causing problems with TCP congestion control algorithms. Basically, this entire article is based on a flawed understanding of how TCP works.
TCP has built-in congestion control algorithms that attempt to determine the amount of available bandwidth between two hosts on a network, and set the rate at which to transmit data accordingly. If you transmit data faster than the link can handle, you end up with lost packets, whereas if you transmit too slowly, you aren't using the full capacity of your network, so it's important to find the optimum point. These algorithms aren't based on latency: latency can affect them in some ways, but their ability to find the available bandwidth is, in general, not limited by latency.
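To make that probing behaviour concrete, here is a toy sketch of additive-increase/multiplicative-decrease (AIMD), the idea behind classic TCP congestion control. The link capacity and the number of rounds are invented purely for illustration:

```python
# Toy AIMD sketch: probe upward until loss, then back off.
# Capacity and units are made up for illustration only.

capacity = 20          # packets per round trip the link can carry
window = 1             # current send window, in packets

history = []
for rtt in range(60):
    if window > capacity:      # loss: we overshot the available bandwidth
        window = window // 2   # back off multiplicatively
    else:
        window += 1            # probe upward additively
    history.append(window)

# The window oscillates around the link's capacity; the sawtooth
# tracks available bandwidth, not the round-trip time itself.
print(history[-6:])
```

The point of the sketch is that the steady-state window hovers around the link capacity, whatever the round-trip time happens to be.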
The author uses the analogy of passing scoops of sand over a wall to explain his point. Unfortunately, it's a false analogy. A better one would be trucks driving between cities. Imagine you have two warehouses, one in Southampton and one in Manchester. You want to move goods from Southampton to Manchester, so you load them onto a truck; the truck drives to Manchester, unloads, and drives back again.
Now suppose you move the Manchester depot to Edinburgh instead, so the trucks have to drive a lot further. If you only have one truck, doubling the latency halves the transfer rate. The point to realise, however, is that with TCP there is more than one truck. The author says, "As distance increases, the TCP window shrinks". This is the exact opposite of what happens in TCP. In the trucks analogy, if you increase the distance between depots, the logical response is to add more trucks to sustain the same throughput, and that is exactly what TCP does: the window size is the number of trucks on the road, and as latency increases, TCP increases the window size.
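The trucks arithmetic can be made concrete with the bandwidth-delay product: the amount of data that must be in flight to keep a link fully busy. A minimal sketch, with illustrative figures that are not from the original article:

```python
# Sketch: the bandwidth-delay product is the TCP window size needed
# to keep a link fully utilised (figures are illustrative).

def required_window_bytes(bandwidth_bps, rtt_seconds):
    """Bytes that must be 'in flight' to saturate the link."""
    return bandwidth_bps * rtt_seconds / 8

# An 8 Mbit/s line with a 30 ms round trip (a nearby server):
near = required_window_bytes(8_000_000, 0.030)   # 30,000 bytes
# The same line with a 120 ms round trip (a distant server):
far = required_window_bytes(8_000_000, 0.120)    # 120,000 bytes

print(f"{near:,.0f} bytes vs {far:,.0f} bytes")
```

Quadrupling the round-trip time quadruples the window needed, which is the "more trucks" response described above; the achievable throughput stays the same.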
That said, there are flaws in the existing congestion control algorithms. For example, there is a known problem on very high bandwidth connections where the congestion window does not scale up quickly enough, so a single connection struggles to fill the pipe. However, this only affects very fast networks, on the order of 10 gigabits per second or more. It isn't something that will affect users on a home DSL line.
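A rough back-of-the-envelope calculation shows why this problem only bites on very fast links. Classic congestion avoidance grows the window by roughly one segment per round trip; the numbers below (100 ms round trip, 1460-byte segments) are assumptions for illustration:

```python
# Sketch: time for classic congestion avoidance (one MSS per RTT)
# to ramp up to the window a link needs. Illustrative assumptions:
RTT = 0.100        # seconds (e.g. a long-haul path)
MSS = 1460         # bytes per segment

def ramp_up_seconds(bandwidth_bps):
    target_window = bandwidth_bps * RTT / 8      # bandwidth-delay product
    segments = target_window / MSS               # growth steps needed
    return segments * RTT                        # one extra MSS per RTT

print(f"8 Mbit/s DSL: {ramp_up_seconds(8_000_000):.1f} s")
print(f"10 Gbit/s:    {ramp_up_seconds(10**10) / 3600:.1f} hours")
```

On the DSL line the ramp-up takes seconds, while on a 10 Gbit/s path it would take hours under these assumptions, which is why window growth is a research problem for fast networks but invisible to home users.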
Finally, yes, latency is important for certain applications. Gaming and video conferencing are two examples where latency matters enormously, because users experience the round-trip delay directly. Arguably, the popularity of Web 2.0 applications, where browsers make frequent small requests to web servers, also gives latency increased importance. However, when speaking about download speeds, latency is largely irrelevant. Here, bandwidth is all that matters.