Packet loss is a fact of life on the wide area network (WAN). In many cases, what separates top-rated service providers from second-tier competitors is the quality of their service: in particular, the ability to minimize the number of data packets lost across the WAN. Still, in the current economic climate, many enterprises are looking to cut operating costs out of IT budgets, and the organization's WAN links seem an easy target. In many cases, second-tier WAN service providers promise higher bandwidth at a lower price point.
This was precisely the situation for Robert Klages, Director of IT for Monotype Imaging. He noted, “Like many organizations over the last couple of years, we were looking for ways to cut costs. To that end, we switched from a top-tier networking vendor to someone that offered high bandwidth at a lower cost.”
However, despite the increased available bandwidth, performance on the company’s remote backup jobs and database synchronizations suffered after the change in WAN vendors. “We didn’t know as much as we should have about packet loss prior to changing WAN vendors,” continues Klages, “and found out the hard way that while a 1% packet loss claim may not seem like a lot on paper, it can certainly have a big impact on performance.”
While lost or corrupted packets are handled automatically by the TCP/IP protocols, the impact of dropped packets can still be significant and cumulative. When a network connection starts experiencing problems, missing or corrupted data packets must be retransmitted, slowing down the transfer itself. Likewise, if enough errors creep into the connection, TCP interprets the loss as congestion and throttles itself back by shrinking its congestion window, reducing the amount of data in flight and requiring additional acknowledgements between sender and receiver before more data can be sent. So while the connection becomes more reliable, overall bandwidth utilization drops.
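The scale of this effect can be sketched with the well-known Mathis approximation for steady-state TCP throughput, which says a single flow's rate is roughly proportional to MSS / (RTT × √p), where p is the loss rate. The figures below (MSS, RTT, loss rates) are illustrative assumptions, not measurements from Monotype's network:

```python
from math import sqrt

def tcp_throughput_mbps(mss_bytes, rtt_seconds, loss_rate):
    """Approximate single-flow TCP throughput via the Mathis model:
    rate ~= (MSS / RTT) * (C / sqrt(p)), with C = sqrt(3/2)."""
    c = sqrt(3.0 / 2.0)
    bytes_per_second = (mss_bytes / rtt_seconds) * (c / sqrt(loss_rate))
    return bytes_per_second * 8 / 1_000_000  # convert to Mbit/s

# Assumed 1460-byte MSS and 50 ms round-trip time on a WAN link:
clean = tcp_throughput_mbps(1460, 0.05, 0.0001)  # 0.01% loss: ~28.6 Mbit/s
lossy = tcp_throughput_mbps(1460, 0.05, 0.01)    # 1% loss:    ~2.9 Mbit/s
```

Under this model, going from 0.01% to 1% loss (a 100× increase) cuts a single flow's throughput by a factor of 10, regardless of how much raw bandwidth the link offers. That is why a "1% packet loss" claim that looks small on paper can dominate real transfer performance.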
Fortunately, packet loss is not a new problem, and a number of tools are available to gauge the quality of a WAN connection. Using common TCP/IP utilities such as ping (included with Windows, Linux, and macOS), WAN engineers can spot trouble on their connections. Although aimed primarily at home broadband users, websites such as PingTest.net and the FCC's Broadband Quality Test offer a quick way to check the quality of Internet links to outside servers.
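For spot checks across many sites, ping's summary line can be scraped rather than read by hand. The helper below is a minimal sketch (the function name and sample strings are illustrative) that pulls the loss percentage out of both the Unix-style and Windows-style summary formats:

```python
import re

def parse_ping_loss(ping_output):
    """Return the packet-loss percentage from a ping summary, or None.
    Matches both the Unix form '1% packet loss' and the
    Windows form '(1% loss)'."""
    match = re.search(r'\(?(\d+(?:\.\d+)?)% (?:packet )?loss\)?', ping_output)
    return float(match.group(1)) if match else None

# Sample summary lines in each format:
unix_summary = "100 packets transmitted, 99 received, 1% packet loss, time 9912ms"
windows_summary = "Packets: Sent = 100, Received = 99, Lost = 1 (1% loss),"
```

Run against periodic pings to each remote site, a script like this makes it easy to log loss over time and catch a degrading link before users complain.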
For Monotype Imaging, moving to new WAN optimization appliances curbed the packet loss issue. The company chose Silver Peak appliances that not only perform typical WAN acceleration functions, such as compression and caching, but also apply protocol optimizations. Protocol optimization identifies specific issues in application and transfer protocols, such as those used by Microsoft Exchange, Windows CIFS, and TCP/IP, and applies targeted fixes when those protocols are in use. For TCP/IP, the appliances use error correction and loss-mitigation techniques that repair lost packets in transit, so the protocol never has to invoke its own throttling mechanisms to stabilize the connection. The result is optimized WAN links that support not only existing processes, such as remote backups, but also additional applications.
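Silver Peak does not publish its algorithms, but the core idea behind this kind of forward error correction can be illustrated with simple XOR parity: send one extra parity packet per group, and any single lost packet in that group can be rebuilt at the far end without a retransmission. This is a simplified sketch of the concept, not the vendor's actual scheme, and the function names are hypothetical:

```python
def make_parity(packets):
    """Build one XOR parity packet over a group of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild a single missing packet (the None entry) by XORing the
    parity packet with every packet that did arrive."""
    missing = bytearray(parity)
    for pkt in received:
        if pkt is not None:
            for i, b in enumerate(pkt):
                missing[i] ^= b
    return bytes(missing)

# Sender transmits a group of three packets plus their parity;
# the second packet is lost in transit, then rebuilt by the receiver.
group = [b"ABCD", b"EFGH", b"IJKL"]
parity = make_parity(group)
restored = recover([b"ABCD", None, b"IJKL"], parity)
```

Because the loss is repaired before TCP ever notices it, the sender never sees a retransmission timeout and never shrinks its congestion window, which is exactly the throttling behavior the appliances are trying to avoid.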
Klages concluded by saying, “Improving the packet loss issue has allowed Monotype to consider new applications, primarily Microsoft’s Office Communications Server and video conferencing. We would never have even considered these apps without WAN acceleration and optimization in place.”
While not every service provider and application use case will require an organization to deploy WAN optimization products, the issues that Monotype Imaging experienced are by no means unique. A positive wide area network experience goes beyond sheer bandwidth and needs to factor in the quality of the connections available to your headquarters and remote sites. Simply throwing additional bandwidth at a problem rarely resolves it; a deeper level of troubleshooting and management is required to support the applications your organization wants to run over those links.