Today's office looks nothing like that of a decade ago. Decentralization has become the name of the game, with remote and branch office users requiring more network access. Consequently, there is a large and growing market for WAN optimization appliances, as businesses look to lower latency, manage quality of service (QoS) for critical and recreational applications, and increase performance.
However, ever-increasing network traffic rates and a myriad of ways to evade application-layer classification make implementing high-performance WAN optimization a difficult task. To combat the performance trade-off required for high-speed QoS, leading appliance vendors are turning to dedicated content processing platforms to bridge the gap.
The rise of the branch office
There has been a dramatic change in the layout of offices around the globe, with our increased ability to communicate over vast distances and the rising cost of inner-city real estate combining to drive the growth of branch and remote offices. Email has replaced the memo as the most ubiquitous means of conveying information, and many meetings take place at least partially online. Along with the ever-increasing number of Internet applications required to stay competitive (think VoIP, sales tracking, direct market campaigning, Web research and distributed file systems), few companies can manage without a decent WAN.
Appliances keep traffic up and costs down
Although available bandwidth has exploded in recent years, it's still a major cost for many network managers. This, in turn, has created a massive market for network appliances that keep traffic up and costs down. These devices typically deliver:
- Compression, whereby information is compacted as much as possible to increase overall throughput
- Monitoring and reporting on traffic flows to gain a clear understanding of the types of traffic on a given WAN link, drilling down to individual TCP connections
- QoS control, by applying a set of corporate policies to WAN traffic (e.g., throttling peer-to-peer traffic while giving priority to VoIP).
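As a rough illustration of the first item, a general-purpose codec such as Python's standard zlib shows the kind of space saving compression yields on repetitive traffic. This is illustrative only; real appliances use far more sophisticated cross-flow, dictionary-based schemes.

```python
import zlib

# Repetitive traffic, such as many similar HTTP requests, compresses well.
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 50
compressed = zlib.compress(payload)

# The compressed form is much smaller, and decompresses losslessly.
ratio = len(compressed) / len(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes")
```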
The last item is one of the most difficult to implement. In order to control individual connection traffic rates, each TCP connection must be classified in real time in terms of which application it belongs to.
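Traditionally, that classification relied only on a connection's 5-tuple, chiefly its port numbers. A minimal sketch of this Layer 3/4 approach (the port table and application names are illustrative, not drawn from any particular product):

```python
# Illustrative well-known-port table for Layer 3/4 classification.
WELL_KNOWN_PORTS = {
    80: "http",
    443: "https",
    25: "smtp",
    5060: "sip",  # VoIP signaling
}

def classify_by_port(five_tuple):
    """five_tuple = (src_ip, src_port, dst_ip, dst_port, protocol)."""
    _, src_port, _, dst_port, _ = five_tuple
    # Check the server-side (destination) port first, then the client side.
    for port in (dst_port, src_port):
        if port in WELL_KNOWN_PORTS:
            return WELL_KNOWN_PORTS[port]
    return "unknown"

print(classify_by_port(("10.0.0.5", 49152, "203.0.113.9", 80, "tcp")))  # http
```

As the next section explains, this approach breaks down as soon as applications stop using their registered ports.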
Tunneling and obfuscation
The prevalence of HTTP as a Layer 7 protocol has led to the adoption of "HTTP tunneling" to bypass Layer 3/4 firewall rules. Whereas applications could originally be identified by their source and destination ports, many now simply pack their information into HTTP packets, obfuscating the real application. This makes it increasingly difficult to correctly classify:
- HTTP traffic on a non-standard port such as 31796
- Non-HTTP traffic such as KaZaa tunneling over HTTP on port 80
- Traffic using dynamically allocated ports such as passive FTP.
Effective QoS requires that the appliance perform deep-packet inspection and actually read the packet payloads in order to defeat this type of obfuscation. Next-generation appliance vendors have taken this approach, and many now use advanced signature databases to classify traffic irrespective of its TCP 5-tuple.
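A toy version of such payload-based matching might look like the following; the signatures here are simplified stand-ins for a commercial signature database, and match on content rather than port:

```python
import re

# Hypothetical signature database: each entry maps an application to a
# regex matched against the first payload bytes, regardless of port.
SIGNATURES = [
    ("http", re.compile(rb"^(GET|POST|HEAD|PUT) \S+ HTTP/1\.[01]")),
    ("ssh",  re.compile(rb"^SSH-\d\.\d")),
    ("smtp", re.compile(rb"^(HELO|EHLO) ")),
]

def classify_payload(payload):
    """Return an application label based on payload content alone."""
    for app, pattern in SIGNATURES:
        if pattern.search(payload):
            return app
    return "unknown"

# HTTP is identified even if it runs on a non-standard port such as 31796.
print(classify_payload(b"GET /index.html HTTP/1.1\r\n"))  # http
```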
Managing QoS can degrade performance
The process of traffic classification based on Layer 7 data requires two steps: parsing the protocol stack to extract the packet payload, then matching that payload against a known signature library for classification. This process is typically run at the start of every TCP connection, with the classification then applying to the entirety of the session.
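The classify-once-per-connection behavior can be sketched as a flow table keyed by 5-tuple; `classify_payload` here is a trivial stand-in for the real signature-matching engine:

```python
def classify_payload(payload):
    # Stand-in for the signature-matching engine described above.
    return "http" if payload.startswith(b"GET ") else "unknown"

flow_cache = {}  # 5-tuple -> application label

def classify_packet(five_tuple, payload):
    """Inspect the payload only for the first packet of a flow;
    later packets reuse the cached verdict for the whole session."""
    if five_tuple not in flow_cache:
        flow_cache[five_tuple] = classify_payload(payload)
    return flow_cache[five_tuple]
```

Once a flow is classified, subsequent packets incur only a hash lookup, so the expensive matching cost is paid once per connection.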
TCP's slow-start behavior, in which the congestion window grows from a small initial value over the first round trips, makes this process extremely sensitive to latency; a delay in processing the first few packets constrains window growth and thereby throughput for the whole connection. As traffic speeds continue to increase, this poses a challenging trade-off for appliance vendors: the increased processing required to manage QoS degrades the throughput of the system as a whole.
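A back-of-envelope model makes the sensitivity concrete. Assuming a simplified slow start (congestion window starting at one segment and doubling each round trip), adding even a small classification delay to the first few round trips stretches the entire transfer. All numbers below are illustrative:

```python
def transfer_time(total_segments, rtt_ms, extra_ms=0.0, delayed_rounds=0):
    """Time to send total_segments under simplified slow start,
    with extra_ms of added latency on the first delayed_rounds trips."""
    sent, cwnd, t, rnd = 0, 1, 0.0, 0
    while sent < total_segments:
        t += rtt_ms + (extra_ms if rnd < delayed_rounds else 0.0)
        sent += cwnd
        cwnd *= 2  # window doubles each round trip during slow start
        rnd += 1
    return t

base = transfer_time(1000, rtt_ms=50)                          # -> 500.0 ms
slow = transfer_time(1000, rtt_ms=50, extra_ms=20, delayed_rounds=3)  # -> 560.0 ms
```

In this toy model, 20 ms of classification delay on just the first three round trips adds 12% to the total transfer time.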
High-speed content classification
In order to combat this trade-off, leading appliance vendors have been looking for ways to accelerate the content-processing stage. The key issue is that matching packet payloads against a traffic signature database takes CPU cycles, memory, and other resources that a typical appliance simply doesn't have to spare -- all its power is dedicated to processing the network stack and passing packets on. This has driven leading vendors to deploy dedicated content processing platforms as part of their high-end WAN optimization appliances.
By offloading the signature-matching task to a dedicated co-processing platform, the appliance CPU can spend more system resources on processing the network stack, a task for which it is best suited. Such a platform must, of necessity, be able to handle the entire traffic signature database, scale effectively in terms of TCP streams, and be easy to integrate into the appliance itself.
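In software terms, the offload pattern resembles handing the matching job to a separate execution resource so the forwarding path never blocks. The thread pool below is merely a stand-in for a hardware co-processor, and all names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def signature_match(payload):
    # Stand-in for the job dispatched to the co-processing platform.
    return "http" if payload.startswith(b"GET ") else "unknown"

classifier = ThreadPoolExecutor(max_workers=4)
pending = {}  # 5-tuple -> Future holding the classification verdict

def on_first_payload(five_tuple, payload):
    # Submit the matching job and return immediately: the main CPU
    # keeps processing the network stack while the match runs elsewhere.
    pending[five_tuple] = classifier.submit(signature_match, payload)

def verdict(five_tuple):
    # Apply the verdict to the flow once the co-processor has finished.
    fut = pending.get(five_tuple)
    return fut.result() if fut and fut.done() else "pending"
```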
The market for WAN optimization appliances is large and growing, but effective QoS requires Layer 7 deep-content inspection to defeat obfuscation techniques such as HTTP tunneling. Building a high-performance WAN optimization appliance therefore requires a system capable of wire-speed, low-latency content inspection. By installing a co-processing platform dedicated to these CPU-intensive tasks, the appliance as a whole can be turbo-charged to gigabit or multi-gigabit speeds.
About the author:
Mick Johnson is the Product Marketing Manager at Sensory Networks, an OEM provider of high-performance network security acceleration technology. The company's hardware acceleration products include a broad range of chipsets, accelerated software libraries, PCI acceleration cards and appliance platforms for antivirus, antispam, antispyware, content filtering, network monitoring and QoS, firewalls and intrusion detection/prevention systems. Mick holds a BSc (with University Medal) in Computer Science from the University of Sydney, and can be reached at firstname.lastname@example.org.
This was first published in June 2006