Over the years networks have become faster and more reliable. Ten years ago a Skype call between two continents was often unreliable and of questionable quality. A little over a year ago I was able to Skype over 4G from Malaysia to my home country, the Netherlands, and the quality was pretty good. Fast and reliable internet is more or less the norm nowadays. With this in mind I would like to touch upon a phenomenon that resulted from faster and more reliable networks: an apparent move from TCP-based applications to UDP-based applications.
For years we have grown up knowing that most network communication consisted of two protocols: one connection-oriented (TCP), meant for reliable communication, and one connectionless (UDP), meant for speed but less reliable. Traditionally, applications that need reliability, such as database applications, file transfer applications and the web, use TCP; applications that need low latency, such as voice and video, use UDP.
In the days when bandwidth was still very expensive and the internet wasn't as reliable as it is now, connection-oriented protocols made a lot of sense. The three-way handshake and the ability to negotiate the window size between client and server built a reliable path over which data could be exchanged with a guarantee. The congestion avoidance algorithms used in TCP also ensured that bandwidth was shared more fairly with other applications.
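To make the difference concrete, here is a minimal sketch in Python, using only the standard socket library over loopback. The TCP client cannot exchange any data until `connect()` has completed the three-way handshake, while the UDP client fires off a datagram with no setup at all. The echo server and port choices here are illustrative assumptions, not part of any real deployment.

```python
import socket
import threading

def tcp_roundtrip(payload: bytes) -> bytes:
    """Echo a payload over TCP; the handshake happens before any data."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def serve():
        conn, _ = server.accept()       # three-way handshake completes here
        conn.sendall(conn.recv(1024))   # echo the data back
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))  # SYN, SYN-ACK, ACK: one full RTT
    client.sendall(payload)
    reply = client.recv(1024)
    client.close()
    t.join()
    server.close()
    return reply

def udp_roundtrip(payload: bytes) -> bytes:
    """Send a payload over UDP; no handshake, the first packet is data."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    port = server.getsockname()[1]
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(payload, ("127.0.0.1", port))  # no setup at all
    data, _ = server.recvfrom(1024)
    client.close()
    server.close()
    return data
```

Over loopback the handshake cost is invisible, but on a 100 ms intercontinental path that single `connect()` call alone costs 100 ms before the first byte of application data can be sent.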
Now that bandwidth and reliability are no longer the limiting factors in network operations, the safeguards built into TCP are starting to become the limiting factor themselves: the three-way handshake, slow start and head-of-line blocking are slowing down client-server traffic.
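Head-of-line blocking is easy to show with a toy model. A TCP receiver may only deliver bytes to the application in order, so when one segment is lost, every later segment sits in the receive buffer until the retransmission arrives, even if those segments arrived long ago. The segment numbers and arrival order below are purely illustrative.

```python
def inorder_delivery(arrival_order):
    """Toy model of a TCP receiver: segments can arrive out of order,
    but are only handed to the application once all earlier segments
    are present. Returns {segment: step at which it was delivered}."""
    buffered = set()
    next_needed = 0
    delivered_at = {}
    for step, seg in enumerate(arrival_order):
        buffered.add(seg)
        # deliver the longest contiguous run we now have
        while next_needed in buffered:
            delivered_at[next_needed] = step
            next_needed += 1
    return delivered_at

# Segment 1 is lost and only arrives (retransmitted) at step 3;
# segments 2 and 3 arrived early yet are stuck behind it.
print(inorder_delivery([0, 2, 3, 1]))
```

One lost segment delays the delivery of everything behind it. QUIC avoids this across streams: a loss on one stream does not stall the others.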
To overcome the limitations that came to light with the increased bandwidth and reliability, specially suited congestion avoidance algorithms were created for TCP; for high-bandwidth networks one such algorithm is HighSpeed TCP. To fully benefit from these enhancements, both sides have to support the congestion avoidance algorithm in the OS kernel. Because this is often not the case, WAN optimizers such as those from Cisco and Riverbed still have a purpose. They encapsulate existing traffic between the two WAN appliances in the path to take advantage of the improved congestion avoidance.
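To see why the classic algorithm struggles on fast links, here is a toy model of TCP's congestion window: exponential growth during slow start, additive increase afterwards, and multiplicative decrease on loss. This is a deliberately simplified sketch in the spirit of Reno, not an implementation of HighSpeed TCP or any real kernel algorithm; the window is counted in MSS units per round trip.

```python
def cwnd_over_time(rounds, ssthresh, loss_at=None):
    """Toy model of a TCP congestion window, in MSS units per RTT.
    - below ssthresh: slow start (double each RTT)
    - at or above ssthresh: congestion avoidance (add 1 each RTT)
    - on loss: multiplicative decrease (halve, resume linear growth)
    """
    cwnd = 1
    history = []
    for r in range(rounds):
        history.append(cwnd)
        if loss_at is not None and r == loss_at:
            ssthresh = max(cwnd // 2, 1)  # multiplicative decrease
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start
        else:
            cwnd += 1                     # congestion avoidance
    return history

print(cwnd_over_time(6, 8))
print(cwnd_over_time(8, 8, loss_at=4))
```

The linear +1-per-RTT phase is the problem on modern networks: after a single loss on a path that could carry thousands of segments per RTT, the window takes thousands of round trips to recover, which is exactly the gap algorithms like HighSpeed TCP were designed to close.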
Another, perhaps better, approach is to re-engineer the applications; a good example of this is the QUIC protocol. QUIC is in development and has been submitted for standardization at the IETF. It is built on top of UDP and optimized for HTTP/2 semantics. Because it uses UDP, it is much quicker at setting up a client-server connection.
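Some rough arithmetic shows where the setup time goes. Before any application data flows, classic HTTPS pays one round trip for the TCP handshake plus two more for a full TLS 1.2 handshake, while QUIC combines the transport and crypto handshakes into a single round trip (and can even resume a known server with zero round trips). The 100 ms RTT below is just an illustrative number for an intercontinental path.

```python
def time_to_first_byte(rtt_ms, handshake_rtts):
    """Rough time spent on handshakes before application data can flow."""
    return rtt_ms * handshake_rtts

rtt = 100  # illustrative intercontinental round-trip time, in ms

tcp_tls12   = time_to_first_byte(rtt, 3)  # 1 RTT TCP + 2 RTT TLS 1.2
quic_fresh  = time_to_first_byte(rtt, 1)  # transport + crypto combined
quic_resume = time_to_first_byte(rtt, 0)  # 0-RTT resumption to a known server

print(tcp_tls12, quic_fresh, quic_resume)
```

On such a path that is 300 ms of pure handshake latency for TCP with TLS 1.2 versus 100 ms (or nothing, on resumption) for QUIC, before a single byte of the page has been transferred.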
According to the IETF draft, QUIC provides "multiplexing and flow control equivalent to HTTP/2, security equivalent to TLS, and connection semantics, reliability, and congestion control equivalent to TCP".
Who knows, in the future we might see not only latency- and jitter-sensitive applications using UDP, but also client-server applications such as web, database and Citrix traffic.