Leverage your network with WAN optimization

Today's global and mobile businesses rely on the WAN more than ever. But bandwidth use has skyrocketed, and recent trends toward centralized applications and server consolidation have created severe congestion in the network. WAN optimization and acceleration can overcome these challenges and help enterprises cost-effectively maximize network resources. Learn about the benefits of WAN optimization, how it works, which applications are the best fit for it, and how to integrate it with security and network management.

In this E-Guide:

   Server consolidation
   Application acceleration and WAN optimization
   Acceleration architectural challenges
   When do acceleration and optimization work best?
   Placement is critical

  Server consolidation  

The modern day enterprise network has many forces at work, and almost all of them are creating more traffic in more places. Mobility makes it possible for one person to connect via many different devices through different types of networks. Real-time applications flood networks with large amounts of traffic that demands high priority. And the globalization of today's business climate means that networks must be available to branch offices in all parts of the world at any time of day.

Branch offices are important to the business. They need the same quality IT services as corporate headquarters and can't be treated as second-class locations. When branch offices lose connectivity, there is an immediate impact on the overall business. The fear is that providing excellent service to branch offices can also create a money pit. It is not just server consolidation that has made managing distributed networks more difficult. The move to Web browser-based applications has increased the size of each transaction. If that were not enough, the HTTP protocol has some inefficiency built into it that can make it slower than older client/server applications.

The growth of server-based applications has done wonders for productivity and provides important functionality to the people in distributed locations. File servers allow users to quickly retrieve important business data. Email servers such as Microsoft's Exchange provide fast and efficient email service. Having these and other servers in branch offices has made good response time, and thus productivity, the norm.

But the growth of servers and applications in the branch office has a dark side. Maintenance and problem resolution are expensive. It takes the IT staff extra time and expensive tools to remotely diagnose problems. The remoteness leads to frustration for both the branch office workers and the IT staff. Remote servers waste resources if they are running at low utilization, which is a common occurrence. Backup and recovery takes longer and uses expensive WAN resources when the server is remote. The security of the server is also challenging and makes it harder to meet many of the regulatory requirements of providing data protection.

All these reasons have led to the desire to move remote servers to the data center. The IT staff is located there and can react quickly when a problem arises. Backing up or restoring a server is faster when it is at the data center. It is easier to apply best practices and ensure that the data on the server is secure in the data center. With all servers at the data center, the IT staff can take advantage of server virtualization technologies, such as VMware, to combine several servers into one.

It doesn't matter whether it is called server consolidation or data center consolidation -- the concept solves many problems. However, moving servers to the data center is not the perfect solution. The workers in branch offices frequently see poor response time that negatively affects productivity and morale. Consolidation can also affect the budget, because servers residing in the data center and transmitting all data to branch offices require significant WAN resources.

The question is: How does a good infrastructure design become a better infrastructure design? The answer is incorporation of new technologies that allow an enterprise to capture all the benefits of server consolidation while solving its problems. In this case, the solution at hand goes by two names: WAN optimization and application acceleration. Both names refer to virtually identical technological solutions. If a vendor wants to emphasize WAN bandwidth savings and resulting cost savings, it focuses on WAN optimization. If the vendor wants to highlight improvement in response time and productivity, it focuses on application acceleration.

  Application acceleration and WAN optimization  

Acceleration applies many techniques to solve the twin problems of poor branch office response time and the extra bandwidth required. The techniques can be grouped into two general categories. The first consists of "generic" techniques. Generic techniques apply to all the data going to the branch office, no matter the protocol. The benefit is that the technique helps CAD/CAM, file and Web traffic equally. The primary generic techniques include TCP/IP protocol optimization, bandwidth management and shaping, quality of service (QoS), and compression. Some of these optimization techniques have been around for a while; the biggest recent improvements are in the area of compression.

Older compression techniques generally reduced the amount of data sent by two to three times, while newer techniques, called dictionary compression or de-duping, can reduce bandwidth requirements by 10 to 50 times. Applying these newer compression algorithms means that despite the large bandwidth requirements for server consolidation, the overall utilization of a WAN link could be less than it was before consolidation. Response time is also improved because the overall amount of data that needs to be sent is decreased and because the smaller compressed packets are automatically combined into larger packets, reducing the number of packets sent.
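The gain from combining small packets before compressing can be illustrated with Python's standard zlib module standing in for an accelerator's compression engine (the packet contents here are invented for the sketch):

```python
import zlib

# Fifty small HTTP-style packets with heavy cross-packet redundancy.
packets = [b"GET /app/data?id=%d HTTP/1.1\r\nHost: hq.example.com\r\n" % i
           for i in range(50)]

# Compressing each packet by itself pays per-packet overhead and cannot
# exploit redundancy shared between packets...
separate = sum(len(zlib.compress(p)) for p in packets)

# ...while combining the packets first lets one pass exploit all of it.
combined = len(zlib.compress(b"".join(packets)))

print(f"separate: {separate} bytes, combined: {combined} bytes")
assert combined < separate
```

The same principle is why accelerators merge many small compressed packets into fewer large ones before putting them on the WAN.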

The second group of techniques that accelerators apply to improve response time and solve the problems of consolidation are "protocol specific." Many protocols, including Microsoft's Common Internet File System (CIFS) and HTTP, are not very efficient. This inefficiency is unnoticed in a LAN environment because of the speed of the LAN and the short distances traveled. Over the slower WAN, however, protocol inefficiencies can affect response time. Accelerators understand the protocols and apply techniques that overcome their shortcomings. For example, Microsoft file servers can experience close to LAN-like service with the combination of generic techniques and CIFS-specific acceleration.

Accelerators can do wonders, but they don't help with certain types of traffic. Video, such as training films, is not helped much by accelerators because video is already highly compressed. Voice traffic can actually suffer: there is little an accelerator can do for it, and trying to accelerate it can slow it down. It is best to have the accelerator recognize voice traffic and pass it directly through at a high priority.

In dictionary compression, also known as de-duping, the accelerator learns patterns from the data flowing through it and stores them in a large cache, located both in memory and on a disk drive. The patterns are generally 100 characters long. Accelerators are located at the data center and the branch office, and they both learn the same patterns from the data. The first time the data passes the accelerator, it can only apply older compression techniques; the real advantage comes when it sees the pattern the second time. When the pattern shows up again -- this can be in any data, including data totally unrelated to the first instance -- the accelerator substitutes a reference number for the entire pattern. The reference number refers to the pattern it has stored, and because the accelerator on the other end has learned the same pattern, it can easily rebuild the message.

For example, if a PowerPoint presentation is attached to an email, the first time it is sent to the branch office there is some data reduction. When the file is sent back to the data center with a few changes, the accelerator can use its pattern database to remove all the parts that haven't changed and send only reference numbers to those parts along with the changes. The result is that a file that was 5 MB can be reduced to a few kilobytes.
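The mechanism can be sketched in a few lines of Python. This toy codec splits the stream into roughly 100-byte chunks, as the article describes, and replaces any chunk it has seen before with a short reference; both "ends" build the same cache, so references can always be resolved. Real products use far more sophisticated, variable-length pattern matching:

```python
import hashlib

CHUNK = 100  # the article notes patterns are roughly 100 characters long

class DedupCodec:
    """Toy dictionary ("de-dup") codec. Both ends build the same pattern
    cache, so repeated chunks travel as short references."""

    def __init__(self):
        self.cache = {}  # digest -> chunk bytes

    def encode(self, data):
        out = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            key = hashlib.sha1(chunk).digest()
            if key in self.cache:
                out.append(("ref", key))    # seen before: send 20-byte reference
            else:
                self.cache[key] = chunk
                out.append(("raw", chunk))  # first sighting: send the literal
        return out

    def decode(self, tokens):
        parts = []
        for kind, value in tokens:
            if kind == "ref":
                parts.append(self.cache[value])   # look the pattern up
            else:
                self.cache[hashlib.sha1(value).digest()] = value
                parts.append(value)
        return b"".join(parts)

sender, receiver = DedupCodec(), DedupCodec()
doc = b"quarterly sales figures for the northeast region " * 40

first = sender.encode(doc)    # first pass: literals, little savings
second = sender.encode(doc)   # second pass: nothing but references
assert receiver.decode(first) == doc
assert receiver.decode(second) == doc
```

On the second pass every 100-byte chunk travels as a 20-byte digest, which is where the 10x-to-50x reductions come from when real traffic repeats.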

  Acceleration architectural challenges  

Acceleration can do wonders for response time and significantly reduce bandwidth requirements, but several issues must be addressed for a successful implementation. The first is a good understanding of which applications are using the WAN.

Gone are the days when identifying traffic by port number was enough. Knowing that Web applications are using port 80, for example, tells you little. Web-based applications using the same port number can include those that are mission critical along with those that are time wasters. Accelerating all Web applications may mean that music-sharing applications run faster. Before applying acceleration, the network group needs to implement application monitoring tools that report on the applications that are using the network, not just the ports that are being used. This information will allow the accelerator to accelerate business applications before non-critical applications. It is also important because, in many cases, network managers will not be aware of all the applications using the network.
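A minimal sketch of the difference between port-based and application-aware classification, with invented hostnames standing in for a real signature database:

```python
def classify(port, payload):
    """Toy deep-packet classifier: the port alone only says "web"; the
    payload distinguishes business traffic from time-wasters. The
    hostnames are illustrative, not a real signature set."""
    if port != 80:
        return "non-web"
    headers = payload.split(b"\r\n\r\n")[0]
    if b"Host: erp.example.com" in headers:
        return "business-critical"   # accelerate this first
    if b"Host: tunes.example.com" in headers:
        return "recreational"        # don't spend accelerator resources here
    return "unclassified-web"

req = b"GET /invoices HTTP/1.1\r\nHost: erp.example.com\r\n\r\n"
print(classify(80, req))   # both requests arrive on port 80, yet differ
```

A port-number rule would have treated both hosts identically; the payload is what separates them.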

The next issue is what to do about encrypted traffic. The movement of applications to Web interfaces has made it easier to encrypt the traffic using Secure Sockets Layer (SSL). There are many good reasons to use encryption, but it is impossible to apply many of the acceleration and optimization techniques, such as dictionary compression, to encrypted traffic. If business critical traffic, or a significant amount of overall traffic, is encrypted, then an accelerator that can decrypt traffic, accelerate it and then re-encrypt it is needed. There are accelerators that can perform this function, but not all do it equally well.

Securing the accelerator is also necessary because of the new capabilities of dictionary compression techniques and file caching. The dictionary compression file has a copy of all the patterns that have passed through the appliance. With file caching, a copy of the file is stored on the appliance. If someone hacks into the accelerator or runs out the door with it, then it is possible that the sensitive data could be compromised, and traffic could even be recreated from the stored patterns. This is not likely, because the patterns are short and nothing in the accelerator relates one pattern to another, making it very difficult to reconstruct a file. The solution is to encrypt both the cache and the compression files. This feature is available from many acceleration vendors, but not all of them.

Another architectural issue is transparency. This issue has two layers. The first is how the traffic is packaged when it is sent between the two accelerators. The most common way is to create a tunnel between the two accelerators with all the accelerated traffic having a new TCP/IP header added to the packet. The transparency issue is that any monitoring or security device between the two accelerators will no longer see the traffic as coming from the client or server. This loss of visibility causes problems for monitoring and security equipment. The solution is to move all the monitoring or security devices before the accelerator. Some of the acceleration vendors do not create tunnels between the accelerators and thus do not have this problem.

A larger transparency issue is created by the accelerators. Accelerators significantly change the traffic by compressing it and combining multiple packets into one larger packet. When any monitoring or security device that performs deep packet inspection looks into the packet, what it will see is nothing like what the client or server sent. Because compressing packets is inherent in the acceleration and optimization process, the only solution is to place all security and monitoring devices before the accelerator.

Fast application response time is meaningless if branch connectivity to the data center is lost in the event of an outage or disaster. Providing backup connectivity has always been difficult and expensive. Even if two service providers are in the area, their actual infrastructure often follows the same route out of the building and may be subject to the same backhoe accidents or other disasters.

A new alternative is wireless connectivity from cellular vendors. The connectivity many mobile workers use to get a broadband connection can also be used to connect a branch office. Branch office routers are available that integrate this option directly into the router. The biggest advantage of this option is that the cellular last-mile infrastructure is completely separate from landline facilities. The speeds are not as high as a normal landline connection, but wireless can provide significant bandwidth that allows the office to continue working.

  When do acceleration and optimization work best?  

Why not accelerate and optimize all your traffic? On the surface, it would appear to be a good idea. Accelerating all traffic means everything will have faster response time, leading to better productivity and happier users, right? All the accelerators on the market can greatly compress the traffic, thus lowering the line utilization and pushing the next costly bandwidth upgrade further out into the future.

While it seems like a good idea, it really isn't. There are some types of traffic that won't benefit from running through an accelerator or WAN optimization box. The first is VoIP. Voice traffic is very latency sensitive. Delaying the traffic is never a good idea, and that is exactly what an accelerator will do: accelerators gain their efficiency by combining packets and then significantly compressing them. Holding back VoIP traffic will only introduce jitter. VoIP traffic is already compressed, so there is little or no compression benefit.

The best scenario is for VoIP traffic to bypass the accelerator. There is one exception: If the accelerator is performing quality of service (QoS) functions for the network, then the VoIP traffic should still travel through it. It should bypass all the accelerator functions except for QoS and go straight to the front of the output queue. If the accelerator doesn't allow VoIP traffic to bypass the processing, then skip the accelerator altogether and instead let your router perform the QoS function.
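In practice, voice is usually recognized by its DSCP marking. A toy dispatch function, assuming the conventional Expedited Forwarding code point for VoIP (the packet representation here is simplified to a ToS byte):

```python
EF = 46  # DSCP Expedited Forwarding, the usual marking for VoIP

def handle(packet):
    """Toy dispatch: EF-marked voice skips compression and packet
    combining and goes straight to the priority queue; everything
    else gets the full acceleration treatment."""
    dscp = packet["tos"] >> 2   # DSCP is the top six bits of the ToS byte
    if dscp == EF:
        return "priority-queue"   # QoS only, no acceleration
    return "accelerate"

print(handle({"tos": EF << 2}))   # voice bypasses the accelerator
print(handle({"tos": 0}))         # bulk data is accelerated
```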

Video also doesn't get full benefit from acceleration. The reason is that video is already compressed. Additional compression will not result in any gain; it will just take time and use up the accelerator's resources. This is true for any traffic that has already been compressed. Unlike VoIP, a case can be made for passing video traffic through the accelerator, because most video traffic uses TCP. Many accelerators can apply techniques that improve TCP performance at the protocol level. They implement a better version of TCP's fast-start algorithm, allowing a TCP session to take advantage of the entire bandwidth available and speeding up transmission. Accelerators also have improved error recovery, allowing for a smoother session with higher throughput if there are problems. One accelerator company even boasts that its proprietary TCP acceleration techniques alone can significantly reduce the time it takes to transmit video traffic. Additionally, some accelerators have a built-in content delivery network (CDN) solution that improves the performance of some video. Before using an accelerator for video traffic, make sure that the product can apply just the TCP improvement and not compression to the traffic.
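The "already compressed" point is easy to demonstrate with zlib as a stand-in for any compressor: a second pass over compressed output saves essentially nothing, which is what an accelerator would discover about video:

```python
import zlib

text = b"The quick brown fox jumps over the lazy dog. " * 200

once = zlib.compress(text)    # redundant data: large reduction
twice = zlib.compress(once)   # already-compressed data: no further gain

print(len(text), len(once), len(twice))
```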

Although most traffic can benefit from acceleration, that doesn't mean the accelerator can handle it. Many accelerators cannot process UDP traffic. If UDP is important, make sure the product you select can handle it. Even if it can, don't expect UDP to see the same level of benefit that TCP receives.

There is a case when traffic should not be accelerated even if there would normally be a benefit from accelerating it. Accelerators come in all sizes and can handle varying amounts of traffic. If an accelerator is overloaded, the results will not be pretty. If the traffic grows beyond the limits of the accelerator, it is best to select some traffic not to be accelerated. The best answer is to get a bigger accelerator or add more of them, but if that is not possible, some traffic should take the bypass.

  Placement is critical  

The accelerator is out of the box. It has been configured and a place in the rack has been found. It's powered up and plugged into the network. All that remains is to direct traffic to it. Stop and think twice, because if the accelerator sits at the wrong point in the packet flow, it can break network management and security.

The problem is that accelerators change the data they receive: they substitute a pattern reference for parts of the message, compress it and even combine multiple packets into one packet. Application data and application headers, such as URLs, can be hidden from downstream devices. Combining packets also means that important state information is lost. All this makes it impossible for security devices -- such as firewalls, anti-virus, IPS, data-loss prevention, application firewalls, and any application-level security devices -- to perform deep-packet inspection and analysis. They will still work, but they won't find anything. The same applies to network management devices. A significant part of the application reporting that is critical to running the network will disappear. Tools will only be able to report on TCP and nothing more.

The solution is to place all security and monitoring devices before the accelerator. This will change many network designs. Many of the security devices and management probes are the last things before the router, but now the accelerator needs to be the last. It gets even more complicated if the security and management devices are integrated into the router.

Vendors understand these problems and have added features to help. Many vendors produce their own traffic statistics on the un-accelerated traffic, and some can output NetFlow information. Some allow third-party applications to run inside the accelerator.

A better solution is to use the accelerator as a control point that passes the un-accelerated data to security devices and receives information about how to handle that data. For example, if a virus is detected, the anti-virus device tells the accelerator to discard the packets or disrupt the connection, stopping an accelerated version of the virus from being sent. A protocol called the Internet Content Adaptation Protocol (ICAP) is the key to performing this control point function. Unfortunately, this protocol is not widely enabled in products yet.
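The control-point pattern itself is simple, whatever protocol carries it. A minimal sketch in Python -- this models the flow only, not the actual ICAP wire format, and the "virus signature" and reversing "accelerator" are toy stand-ins:

```python
def scan(data):
    """Stand-in for an ICAP-style security service; True means clean.
    The signature check here is a toy, not a real anti-virus engine."""
    return b"VIRUS" not in data

def control_point(data, accelerate):
    """The accelerator hands the un-accelerated data to the scanner
    first; only traffic the scanner clears gets accelerated and sent."""
    if not scan(data):
        return None   # scanner told the accelerator to drop the connection
    return accelerate(data)

# A dummy "accelerator" that just reverses bytes, standing in for
# de-duping and compression.
sent = control_point(b"quarterly report", lambda d: d[::-1])
blocked = control_point(b"...VIRUS...", lambda d: d[::-1])

print(sent, blocked)
```

The point is that the scanner sees the original bytes, while only the clean, accelerated version ever reaches the WAN.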

The key point is to understand the effect an accelerator will have on the network environment and make sure it fits in without causing problems.

About the author: Robin Layland is president of Layland Consulting. As an industry analyst and consultant, Robin has covered all aspects of networking from both the business and technical sides and has published more than 100 articles in leading trade journals, including Network World, Business Communication Review, Network Magazine and Data Communications. Prior to his current role, Robin spent a combined 15 years at American Express and Travelers Insurance in a wide range of jobs, including network architect, technical support, management, programming, performance analysis and capacity planning.

This was first published in January 2009
