The key to making the most of wide area network (WAN) links lies in grooming and reducing the traffic that travels across them, and in avoiding as many potential sources of delay as possible.
The following WAN bandwidth optimization tools and techniques play important roles in enabling WAN optimization appliances to move the most traffic and achieve the highest throughput across an organization's WAN links:
- Protocol substitution or protocol proxy
- Hardware compression
- Compression/symbol dictionaries (aka deduplication)
- Object caching
- Traffic shaping and management
- Traffic prioritization and grooming
- Forward error correction
Each of these WAN bandwidth optimization techniques is described in order below.
Protocol substitution or protocol proxy
Any chatty protocol (a protocol that involves lots of back-and-forth messaging between peers, or clients and servers) typically doesn't behave well when extended across wide-area links. Where outright protocol substitution isn't feasible, many WAN bandwidth optimization devices terminate protocol connections for things such as CIFS (Common Internet File System) locally, then substitute another, more streamlined protocol to encapsulate key traffic elements across wide-area links. In practice, a 30 MB file transfer may take as long as seven minutes across a WAN link using CIFS, but that delay can be reduced to under a minute using Riverbed's wide-area file services (WAFS) instead.
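A back-of-envelope model shows why chattiness, not raw bandwidth, dominates transfer time over a WAN. The sketch below is illustrative only: the block sizes, round-trip time and link speed are assumptions, not measurements, and real protocols pipeline requests in more complex ways.

```python
# Illustrative model: a protocol that pays one round trip per block versus a
# streamlined proxy that batches requests. All numbers are assumptions.

def transfer_time(file_bytes, block_bytes, rtt_s, bandwidth_bps):
    """Seconds to move a file when every block costs one round trip."""
    round_trips = file_bytes // block_bytes
    serialization = (file_bytes * 8) / bandwidth_bps
    return round_trips * rtt_s + serialization

FILE_SIZE = 30 * 1024 * 1024     # 30 MB file, as in the CIFS example above
RTT = 0.080                      # 80 ms WAN round-trip time (assumed)
BANDWIDTH = 10_000_000           # 10 Mbit/s link (assumed)

chatty = transfer_time(FILE_SIZE, 4 * 1024, RTT, BANDWIDTH)        # 4 KB blocks
streamlined = transfer_time(FILE_SIZE, 1024 * 1024, RTT, BANDWIDTH)  # 1 MB batches

print(f"chatty:      {chatty / 60:.1f} min")
print(f"streamlined: {streamlined / 60:.1f} min")
```

Under these assumed numbers the chatty transfer takes roughly ten minutes while the batched one finishes in well under a minute, which is the same order-of-magnitude gap the CIFS example describes.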
Hardware compression
In general, compression refers to any mathematical pattern analysis and bit or string substitution technique that analyzes traffic contents and replaces longer patterns or strings with shorter ones. Usually, these are obtained by applying various encoding techniques that seek to eliminate repetition in data blocks and replace repeated elements with short, symbolic pointers to original content as a way to reduce the volume of data that transits a WAN link. When this kind of volume reduction is handled in device hardware, it runs fastest, which explains why hardware compression is generally considered mandatory in state-of-the-art WAN optimization devices.
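The effect is easy to demonstrate with any general-purpose lossless compressor. This sketch uses Python's standard zlib library on a deliberately repetitive payload; real appliances use their own codecs in hardware, but the principle is the same.

```python
import zlib

# Repetitive traffic (here, the same HTTP request 100 times) compresses
# dramatically, because encoding replaces repeated patterns with short
# back-references.
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 100

compressed = zlib.compress(payload, level=6)
print(len(payload), "->", len(compressed), "bytes")

# Lossless: decompression restores the original byte stream exactly.
assert zlib.decompress(compressed) == payload
```

Highly repetitive traffic like this can shrink by an order of magnitude or more; already-compressed content (images, encrypted streams) sees little or no gain.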
Compression/symbol dictionaries (aka deduplication)
Compression dictionaries are collections of arbitrarily long strings (or even entire files) that appliances on each end of a WAN link need to exchange only once, after which they may be associated with short, unique symbols that may be 64 to 256 bits in length. Once the dictionaries for a pair of devices are synchronized, as repeated patterns or content are detected in outgoing traffic, they will be replaced with a unique symbol that references the original uncompressed information in a dictionary, then sent across the WAN link. The receiving device will then replace each symbol it recognizes in incoming traffic with its copy of the original information to restore the content to its original form.
This WAN bandwidth optimization technique eliminates the need to send duplicate strings or files across a WAN link, which is why it's often called deduplication.
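The dictionary exchange described above can be sketched as a pair of peers that chunk outgoing data and swap each previously seen chunk for a short hash-based symbol. The chunk size, the 64-bit symbol width and the wire format here are simplifying assumptions for illustration, not any vendor's actual protocol.

```python
import hashlib

CHUNK = 4096  # fixed-size chunks; real appliances often use variable boundaries

class DedupPeer:
    """One end of an appliance pair sharing a symbol -> chunk dictionary."""

    def __init__(self):
        self.dictionary = {}

    def encode(self, data):
        """Replace chunks already in the dictionary with their symbols."""
        out = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            sym = hashlib.sha256(chunk).digest()[:8]  # 64-bit symbol (assumed)
            if sym in self.dictionary:
                out.append(("ref", sym))              # send the symbol only
            else:
                self.dictionary[sym] = chunk
                out.append(("raw", sym, chunk))       # send the chunk once
        return out

    def decode(self, stream):
        """Restore the original bytes, learning new chunks as they arrive."""
        data = bytearray()
        for item in stream:
            if item[0] == "raw":
                _, sym, chunk = item
                self.dictionary[sym] = chunk
                data += chunk
            else:
                data += self.dictionary[item[1]]
        return bytes(data)

sender, receiver = DedupPeer(), DedupPeer()
msg = b"A" * 8192 + b"B" * 4096

wire1 = sender.encode(msg)                 # first pass: chunks cross the link
assert receiver.decode(wire1) == msg

wire2 = sender.encode(msg)                 # repeat content: symbols only
assert all(kind == "ref" for kind, *_ in wire2)
```

After the first exchange, resending the same content costs only a few 8-byte symbols instead of 12 KB of data, which is the whole point of the technique.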
Object caching
Object caching involves exchanging and managing stored collections of software objects between pairs of devices and represents another way to implement shared compression and symbol dictionaries. In addition, this approach generally associates some kind of refresh interval or session timeout/age-out information with objects in the cache to force them to be refreshed whenever such intervals expire.
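The refresh-interval behavior can be sketched as a cache that records when each object was stored and refetches it once a time-to-live expires. The class name, TTL value and fetch callback below are illustrative assumptions.

```python
import time

class ObjectCache:
    """Object cache with age-out: entries older than the TTL are refetched."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}                       # key -> (object, time cached)

    def get(self, key, fetch):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is None or now - entry[1] > self.ttl:
            obj = fetch(key)                  # simulates a pull across the WAN
            self.store[key] = (obj, now)
            return obj
        return entry[0]                       # served locally, no WAN traffic

fetches = []
def fetch_remote(key):
    fetches.append(key)                       # count trips across the link
    return f"contents of {key}"

cache = ObjectCache(ttl_seconds=60)
cache.get("report.pdf", fetch_remote)         # miss: crosses the WAN
cache.get("report.pdf", fetch_remote)         # hit: served from the cache
assert fetches == ["report.pdf"]
```

Only the first request crosses the link; subsequent requests within the TTL are served locally, and the age-out forces a refresh so stale objects don't linger.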
Traffic shaping and management
WAN optimization devices can apply all kinds of traffic shaping and management techniques to speed time- or latency-sensitive packets on their way while relegating time- or latency-insensitive packets to available bandwidth that might otherwise go unused over time. When traffic shaping is applied to a set of packets (which is usually called a flow or a stream) it imposes additional delays on some packets so that they conform to a predefined set of constraints called a traffic contract or a traffic profile. This lets WAN devices control the volume of traffic sent across a link over a specific period (known as bandwidth throttling) or the maximum rate at which traffic may transit the link (known as rate limiting). Sometimes, more complex regimes may also be applied, such as the generic cell rate algorithm used to shape traffic on ATM networks.
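One classic shaping mechanism behind rate limiting is the token bucket, sketched below. The rate and burst size are illustrative assumptions; real shapers run per-flow buckets at line rate in hardware.

```python
class TokenBucket:
    """Token-bucket shaper: delays non-conforming packets to honor a
    traffic contract defined by a sustained rate and a burst allowance."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8          # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes         # bucket starts full

    def delay_for(self, packet_bytes, elapsed_s):
        """Seconds the shaper must hold this packet before sending it."""
        self.tokens = min(self.capacity, self.tokens + elapsed_s * self.rate)
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return 0.0                    # conforming: send immediately
        deficit = packet_bytes - self.tokens
        self.tokens = 0.0
        return deficit / self.rate        # delay until enough tokens refill

# Assumed contract: 1 Mbit/s sustained rate, one full-size packet of burst.
bucket = TokenBucket(rate_bps=1_000_000, burst_bytes=1500)
d1 = bucket.delay_for(1500, elapsed_s=0.0)   # fits the burst: no delay
d2 = bucket.delay_for(1500, elapsed_s=0.0)   # bucket empty: must wait
print(d1, d2)
```

The first packet consumes the burst and goes immediately; the second is held for 12 ms, exactly the time a 1,500-byte packet takes to earn its tokens back at 1 Mbit/s.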
Traffic prioritization and grooming
Some traffic needs to go faster than other traffic or, at least, be subject to minimal or predefined ceilings on latency. Prioritization essentially pushes such traffic to the head of all the queues under its control and helps speed such packets on their way. This is a natural consequence of quality of service (QoS) regimes or of service-level agreement (SLA) guarantees for latency, throughput, response time and so forth. WAN optimization devices play key roles in helping to define, monitor and manage QoS and other priority schemes.
Traffic grooming ensures not only that bandwidth is subject to priority but also that unwanted or potentially dangerous protocols are either blocked from accessing a WAN link or limited to exceedingly small bandwidth allocations. Think of various peer-to-peer protocols that have no legitimate business use, or streaming multimedia protocols for watching movies or videos that have no normal place at work. Traffic grooming can prevent such protocols from consuming precious bandwidth. Many experts believe that tiny (rather than zero) allocations for such protocols are desirable because they allow the traffic to keep flowing, which makes it possible to trace it back to its senders and receivers.
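Prioritization and grooming can be sketched together as a scheduler that dequeues strictly by traffic class and refuses disallowed protocols outright. The class names, priority values and blocked set below are illustrative assumptions, not a standard mapping.

```python
import heapq

PRIORITY = {"voip": 0, "web": 1, "bulk": 2}   # lower number = sent first
BLOCKED = {"p2p"}                              # groomed off the WAN link

class Scheduler:
    """Strict-priority packet scheduler with a grooming policy."""

    def __init__(self):
        self.queue = []
        self.seq = 0                  # tie-breaker keeps FIFO within a class

    def enqueue(self, protocol, packet):
        if protocol in BLOCKED:
            return False              # grooming: dropped before using bandwidth
        heapq.heappush(self.queue, (PRIORITY[protocol], self.seq, packet))
        self.seq += 1
        return True

    def dequeue(self):
        return heapq.heappop(self.queue)[2]

sched = Scheduler()
sched.enqueue("bulk", "backup-1")
sched.enqueue("voip", "rtp-1")
sched.enqueue("p2p", "torrent-1")     # rejected by the grooming policy
sched.enqueue("web", "http-1")

order = [sched.dequeue() for _ in range(3)]
print(order)                          # voice first, bulk last
```

Voice jumps the queue regardless of arrival order, the bulk backup waits, and the peer-to-peer packet never reaches the link at all. Production schedulers typically add weighted fairness so low-priority classes cannot be starved indefinitely.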
Forward error correction
Forward error correction (FEC) is a method of error control in data transmission in which the transmitter sends redundant data and the destination uses only the portion of the data that arrives without apparent errors. Strict latency requirements for some kinds of packets, especially those used for voice, video or multimedia communications, require packets that age beyond a certain threshold to be discarded. WAN optimization devices can add error correction bits to all such packets without imposing excessive overhead on this kind of traffic; that redundant data can then be used to reconstruct discarded packets on the receiving end of an appliance pair. This helps control jitter and keeps streaming communications and voice traffic smoother and more intelligible, even when errors are corrected at the tail end of a set of packet transfers. As long as enough traffic gets through to permit error correction to work, the resulting traffic will be smooth enough to deliver a satisfactory user experience.
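The simplest illustration of the principle is XOR parity: one extra packet per group lets the receiver rebuild any single lost packet without waiting a round trip for a retransmission. Production appliances use stronger codes (e.g., Reed-Solomon) and handle variable packet sizes; this sketch assumes equal-length packets purely for clarity.

```python
from functools import reduce

def xor_packets(packets):
    """XOR a list of equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_packets(group)           # redundant packet sent with the group

# Packet 2 is lost in transit. XOR-ing the survivors with the parity
# packet reconstructs it locally, with no retransmission round trip.
received = [group[0], None, group[2]]
survivors = [p for p in received if p is not None]
rebuilt = xor_packets(survivors + [parity])
assert rebuilt == b"pkt2"
```

The cost is one redundant packet per group of three (33% overhead here); real deployments tune the group size to trade overhead against the loss rate they expect on the link.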
What IT managers should know about WAN bandwidth optimization techniques
When a WAN device combines multiple optimization tools and techniques to maximize wide area bandwidth and throughput and minimize latency and data loss, an organization can make more and better use of its WAN links and can often accommodate growth across existing links without having to acquire additional bandwidth capacity. The payback is lower recurring communications costs, which usually more than offset the costs involved in acquiring, deploying and maintaining this kind of optimization hardware.
This was first published in June 2010