Developing an effective and flexible WAN strategy doesn't end with selecting the right optimization solution and putting it in place across your network. The next step is to plan for the future.
For the most part, this means continually monitoring the activities and performance of the WAN long after the system has gone through initial pilot testing and is in general operation.
This SearchEnterpriseWAN.com WAN Nation series will look at selection and deployment issues, including the importance of launching relevant and user-centric pilot projects. The series will also highlight the importance of continued monitoring to future-proof your WAN investment, and outline FTP acceleration alternatives to full-scale optimization.
Matching WAN optimization, acceleration options to network needs
Organizations looking to amp up application performance have lots of options today, including WAN optimizers, application delivery controllers, FTP accelerators, wide-area file services, managed optimization services, and overlay network services.
Any of these options may help if you are experiencing persistent problems in delivering good application performance to your remote end users. Some can help ensure the performance of Software as a Service (SaaS) solutions -- or even an organization's own applications hosted on external cloud resources -- to users anywhere on your network. But to really solve performance problems, you need to pick the right flavor of WAN optimization.
The first step on the road to performance improvement is to determine your needs. What are the persistent problems and who is experiencing them? This requires going beyond squeaky-wheel problem reporting to careful testing. When a user says soft-phone audio quality is bad, is it because of the WAN? Or does it relate more to task prioritization in the desktop operating systems? Or does the problem actually lie in the TCP stack?
Performance is an end-to-end phenomenon, and optimizing the middle won't fix the endpoints. Look to see whether other users in the same location are experiencing the same performance problems as those who report them. Also, look to see whether users in other, similar locations are experiencing similar problems with the same applications.
Be sure to document test results and network performance numbers in all the locations, and compare results to the same tests conducted at other locations not experiencing the problems. Are the afflicted sites experiencing higher packet losses or more variability in network performance (jitter), or are they simply short of bandwidth?
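To make that comparison concrete, the per-site numbers can be boiled down to a few figures. The sketch below (site names and RTT samples are hypothetical) summarizes ping results per location, computing packet loss and a simple jitter figure as the mean absolute difference between consecutive round-trip times:

```python
# Illustrative sketch: summarize ping results per site so packet loss
# and jitter can be compared across locations. Site names and RTT
# samples below are hypothetical stand-ins for real test data.

def summarize(rtts_ms, sent):
    """rtts_ms: RTTs (ms) of replies received; sent: probes sent."""
    received = len(rtts_ms)
    loss_pct = 100.0 * (sent - received) / sent
    mean_rtt = sum(rtts_ms) / received
    # Jitter as the mean absolute difference between consecutive RTTs.
    jitter = sum(abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])) / (received - 1)
    return {"loss_pct": loss_pct, "mean_rtt_ms": mean_rtt, "jitter_ms": jitter}

sites = {
    "afflicted_branch": ([42.1, 43.0, 88.5, 41.9, 95.2, 42.3, 42.8, 90.1], 10),
    "healthy_branch": ([41.8, 42.0, 42.2, 41.9, 42.1, 42.0, 42.3, 42.1, 42.0, 42.2], 10),
}

for name, (rtts, sent) in sites.items():
    s = summarize(rtts, sent)
    print(f"{name}: loss={s['loss_pct']:.0f}%  "
          f"rtt={s['mean_rtt_ms']:.1f}ms  jitter={s['jitter_ms']:.1f}ms")
```

A side-by-side table of these three numbers for afflicted and healthy sites usually makes it obvious whether the problem is loss, jitter, or plain bandwidth starvation.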
WAN prioritization: QoS, CoS and their effect
The second step is to look at options for remediation. Endpoint issues aside, fixing network traffic may not require adding new optimization gear or services. In some cases, bandwidth is available, but high-requirements traffic (like voice, video, and even that associated with remote desktops) is tossed around in the rapid flux of other applications' traffic.
The first, fast, and relatively cheap thing to try here (assuming the current network equipment allows) is setting up class-of-service (CoS) and quality of service (QoS) features to prioritize real-time traffic over other classes and to push the most forgiving things, like email, into a below-normal priority class.
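On a Cisco IOS router, for example, this kind of prioritization might look something like the following. This is an illustrative config fragment, not a recommended policy: the class names, percentages, and interface are assumptions to be adapted to your own traffic mix and platform.

```
class-map match-any REALTIME
 match dscp ef              ! voice bearer traffic
class-map match-any BULK
 match protocol smtp        ! email can tolerate delay
!
policy-map WAN-EDGE
 class REALTIME
  priority percent 30       ! strict-priority queue for real-time traffic
 class BULK
  bandwidth percent 5       ! below-normal share for forgiving traffic
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output WAN-EDGE
```

Equivalent features exist on most enterprise routers and many firewalls, so this option rarely requires new hardware.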
If that is impossible or insufficient, three options remain:
1. Reduce demand on the WAN through application re-engineering.
2. Reduce demand on the WAN by moving applications closer to users.
3. Optimize the WAN.
The first choice is great for folks who rely mostly on applications they develop themselves. The second option runs counter to the movement that is still sweeping through most organizations -- to centralize IT in an effort to wring the greatest efficiencies out of IT resources and staff.
So WAN optimization is the best solution for most. In our latest research, Nemertes found nearly 73% of organizations currently deploying some WAN optimization. To select the right kind of optimization, it helps to have the two taxonomies of optimization in mind: optimization patterns and optimization techniques.
Picking a pattern for network performance
The patterns for deploying optimization are: symmetric, asymmetric, carrier/cloud/managed, and overlay.
- Symmetric deployments put an appliance (or a soft client) at both ends of the connection to be optimized; Riverbed Steelheads or Cisco WAAS nodes are good examples.
- Asymmetric optimization controls only one end of the conversation; BlueCoat PacketShapers or Anagran Flow Managers are deployed asymmetrically.
- Carrier/cloud/managed deployments put symmetric or asymmetric optimizations into the hands of the WAN or Internet provider and can involve customer premises equipment (CPE) or be all in the carrier cloud. AT&T, Verizon Business and Tata, among others, provide carrier/cloud optimization.
- Overlay optimization involves handing specific traffic off to a separate network (neither your own WAN nor the Internet) for delivery; it's mostly used for media content delivery, but providers are branching into application services too. Overlay providers include Akamai, Internap, and Limelight.
The techniques available are content-reducing, aimed at cutting the amount of data that has to move across the wire, or behavioral, aimed at improving the behavior of traffic crossing the network. Content-reducing optimizations include caching and compression. Behavioral optimizations include:
- Traffic shaping, or controlling the amount of bandwidth any application or user gets.
- Protocol accelerations, which can, for example, cut roundtrips out of a client-server handshake sequence and therefore speed operations.
- Traffic integrity fixes, which improve error correction and prevent retransmits.
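To see why content reduction pays off on chatty, redundant traffic, consider a quick compression sketch. The payload below is a hypothetical stand-in for repetitive application data; real WAN optimizers use dictionary-based deduplication across flows, which this simple zlib example only approximates:

```python
import zlib

# Illustrative sketch: how much a content-reducing technique such as
# compression can shrink repetitive traffic before it crosses the WAN.
# The payload is a hypothetical stand-in for chatty application data.
payload = (b"GET /reports/q3/summary HTTP/1.1\r\n"
           b"Host: apps.example.com\r\n\r\n") * 200

compressed = zlib.compress(payload, level=6)
ratio = len(compressed) / len(payload)
print(f"original={len(payload)}B compressed={len(compressed)}B ratio={ratio:.2%}")

# Content reduction must be lossless: verify the round trip.
assert zlib.decompress(compressed) == payload
```

Behavioral techniques, by contrast, leave the bytes alone and change how they move: fewer round trips, smarter queuing, better loss recovery.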
IT can select which techniques (compression, acceleration, etc.) are most appropriate for remediation, based on the applications having the most problems. It can then move through a vendor selection process that emphasizes fixing the core problem.
A network sometimes has more than one problem in delivering applications, of course, so it is important that your chosen platform resolve as many as possible and not interfere with whatever you have to layer on to resolve what remains.
Careful pilot projects critical as WAN optimization takes flight
Once an organization has decided that optimization is the solution to its WAN problems, defined a set of requirements, and selected an appropriate optimization solution architecture, the next step is to develop and launch an effective pilot project.
The basic goal of a WAN pilot project is to test drive optimization techniques that are designed to help reduce the physical volume of information on the network, through caching, compression and so on; or improve the behavior of data on the network through acceleration, traffic shaping and other techniques. Optimizers can be hardware-based (appliances), software only (soft client), virtual, server process, or service-based (managed services and outsourced activities).
Optimization pilot projects are very important for pinpointing gaps in a particular solution. In some cases, an initial solution will resolve acceleration problems but fail to address such things as wide-area file services.
Many of the organizations that Nemertes speaks with end up shopping for a second-generation solution to cover the gaps in their first generation. One way organizations can avoid this situation is through more thorough piloting during the initial deployment and launch phases.
Piloting for packet loss, latency
A complete pilot will involve an ongoing bake-off between at least two candidate solutions. Establishing a solid set of objective criteria is critical to developing a meaningful and successful WAN optimization pilot. This includes a list of known problems and a set of network metrics that will let IT establish not only that a new device or service is helping performance on known pain points but also that it is not hurting performance.
In creating an objectives list, IT should look at everything from bandwidth consumed to packet loss, latency through the devices, and jitter in traffic delivery. Sometimes the solution is no solution and may only add to latency problems, as one manufacturing firm's network architect reported when trialing symmetric optimizers. He ultimately re-architected applications to fix their performance issues.
This checklist of problems to be resolved and of criteria for judging the effectiveness of each solution must be the first and most heavily weighted factor in the ultimate selection at pilot's end. However, it cannot be the only thing considered. After all, where performance improvements are similar, other factors have to be taken into account. These include:
- Start-up and ongoing maintenance costs
- Operating expenses: How hands-on is the solution?
- Vendor overhead: Will a solution mean adding a new vendor, with the associated overhead for maintaining such a relationship?
- Deepening the stack: Will a solution mean adding a box to the closet everywhere you need the optimization? (Do you want to do that, if a similar-performing option doesn't? Do you care if someone else, such as a service provider, has to manage the boxes?)
- Scaling out: If you find you need to add optimization to more locations later, how well will this solution scale out?
- Future problems: Looking ahead to technologies you are not yet deploying (perhaps video conferencing or mobile access to enterprise apps), how well can the candidate solutions help you with problems you anticipate but do not yet have?
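One way to keep the checklist honest at pilot's end is a weighted scoring comparison. In the sketch below, the criteria mirror the list above, with performance on known pain points weighted most heavily; the vendor names, weights, and 1-5 scores are all hypothetical:

```python
# Illustrative sketch: weighted scoring of pilot candidates.
# Criteria, weights, and scores (1-5) are hypothetical; performance
# on known pain points carries the heaviest weight, per the checklist.
weights = {
    "known_pain_points": 0.40,
    "cost": 0.15,
    "opex_hands_on": 0.10,
    "vendor_overhead": 0.10,
    "scaling_out": 0.15,
    "future_problems": 0.10,
}

candidates = {
    "vendor_a": {"known_pain_points": 5, "cost": 3, "opex_hands_on": 4,
                 "vendor_overhead": 3, "scaling_out": 4, "future_problems": 3},
    "vendor_b": {"known_pain_points": 4, "cost": 4, "opex_hands_on": 3,
                 "vendor_overhead": 4, "scaling_out": 3, "future_problems": 4},
}

def score(scores):
    """Weighted sum of a candidate's criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
for name in ranked:
    print(f"{name}: {score(candidates[name]):.2f}")
```

The point is not the arithmetic but the discipline: agreeing on weights before the bake-off keeps the final decision from being driven by whichever vendor demos best.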
Pilots should span some significant business cycles to determine whether problems are associated with such things as a quarter's end, conclusion of a major project, a company audit, start of a new school year, and so on. Devices tested only during "normal conditions" haven't really been tested at all.
Networking with end users
As organizations collect performance data for the candidate solutions, they should also be speaking regularly with end users in a structured and organized way: for example, through conversations guided by a questionnaire to ensure some consistency in data collection -- but not simple fill-in-the-blank Web survey stuff that can miss something not explicitly addressed. This will not only help ensure that IT catches any problems an optimizer might inadvertently cause, but -- when combined with the application traffic visibility that most solutions provide -- it can point to other problems and solutions.
At one defense contractor we encountered, for instance, that visibility raised a telling question: why were people in Maryland pulling email from California instead of Virginia?
Users sometimes begin to behave differently once performance improves. Improving the performance of file sharing across the WAN, for example, can reduce the attachment of files to email messages, improving the performance and reducing the disk usage of the mail servers.
Data collection before and after deployment
While collecting decision data over the course of the pilot, IT should also begin planning the full rollout: How many locations will get appliances or services? In what order should the deployment be conducted? (Hint: start where the best performance is needed but actual performance is worst, plus the site of whoever is funding the deployment. That way, as soon as the criteria -- quantitative and qualitative -- point to a winner, IT can hit the ground running on the full rollout.)
It is equally important to continue to collect data (perhaps less frequently) after the deployment to ensure that the solution is working as well in production as it did in pilot and to monitor for the development of new problems or new classes of traffic and users.
With proper piloting and a fact-driven selection process, any organization that needs optimization (and can afford it) should be able to select a solution that suits its specific needs and organization. Without them, chances for a misfire and a need to repeat the selection process are high, raising costs and increasing dissatisfaction with IT.
Future-proofing WAN optimization solutions: Keep an eye on activities
WAN optimization is a near-ubiquitous strategy for improving application performance across long-distance network links, whether that network is the Internet or a corporate WAN. It is also an effective way to improve performance without boosting bandwidth, decentralizing resources, or re-architecting the entire WAN infrastructure.
The process does not end once an optimization strategy and solution has been implemented, however. The next step is to plan for the future. For the most part, this means continually monitoring the activities and performance of the WAN long after the system has gone through initial pilot testing and is in general operation.
It is important to keep an eye on both the network and the WAN optimizers (that can provide lots of extra visibility into network and application usage patterns) and continue discussing application and network performance with users. Doing so will let IT discover whether anything on the WAN is still performing badly and whether changes in application usage lead to new performance problems.
Taking a close look at application usage across the network can also help IT spot problems in the making or identify applications that will cause problems if their use spreads. They can then work out a plan to circumvent problems and find a way to protect the performance of business-critical applications.
Avoiding MAPI mishaps
For example, while the Messaging Application Programming Interface (MAPI) protocol common to email can be effectively accelerated, the resulting more responsive email may lead to increased use of mail attachments for file sharing, and therefore create problems for email systems and the systems administrators responsible for their operation.
When designing and implementing optimization or acceleration solutions, you have to be aware of the impact on both the network and the users -- especially when planning for future upgrades and improvements.
Organizational optimization needs also change and evolve as the result of many factors aside from shifts in work habits. These may include:
- Adding voice and video to a network, or shifting to a virtual desktop delivery model. This can make WAN optimization a necessity for many organizations, since there is simply no other way to guarantee the requisite levels of performance. The optimization required -- with a focus on packet loss and latency mitigation, and graceful traffic shaping -- may be different from what the organization required before, which may have centered essentially on traffic volume reduction.
- Centralizing applications by moving them out of branch or regional offices and into primary data centers. This places applications further from end users, so optimization is often required to ensure continued good performance. However, the type of optimization required may be different and is not just focused on speeding up backups from regional server rooms to central data centers.
- Corporate mergers or acquisitions. This can introduce whole new classes of users and performance issues and further complicate matters by, for example, adding tricky file synchronization approaches to virtual desktop optimization.
When needs change dramatically, there is always the chance that the incumbent solution will no longer completely meet an organization's needs. A solution that is designed primarily for file synchronization, for instance, won't be much good for managing video conferencing traffic, and vice versa. If the organization has to find a new solution, it must consider three options:
- Upgrade with the existing solution (if this is possible).
- Replace the existing solution.
- Layer on additional solutions -- an asymmetric traffic shaper to supplement compression, say, or a managed service to supplement in-house deployments.
The file acceleration antidote
If the need for a new kind of WAN optimizer (for example, file synchronization) is limited to a few locations, then an overlay optimization solution from the likes of Akamai may make more sense than deploying an internal solution. If accelerating virtual desktop delivery is the new problem, it may make sense to explore solutions from Wyse, Citrix or Expand that focus sharply on that problem. Using a managed solution may make sense if the need is restricted to locations not currently optimized and the organization wants to keep a tight rein on up-front expense and the burden on IT management.
In order to successfully maintain and fine-tune a WAN optimization deployment, you have to do three things: Watch performance, talk to users, and be consistent. It also helps to be data driven and flexible enough to take the next logical upgrade or improvement step when necessary.
If an organization carefully considers the optimization solutions applied to well-quantified WAN performance problems and then keeps a close eye on the evolution of those solutions, the network will be in good shape to withstand any new changes, applications or user demands that come down the road.
FTP accelerator may be more of a quick than long-term fix
Most companies have scaled down spending on IT and network improvements over the past year, primarily because of the weak economy, although the pace of network activity and demands on the network show no signs of a slowdown.
In fact, WAN traffic at most companies -- including FTP file transfers -- has increased an average of 65% over the past year or so, according to a study from Aberdeen Research, especially traffic delivering applications and data to remote users.
One of the top challenges for these companies is transferring large files between network locations, the research company noted in a report released earlier this year. Since adding more bandwidth is not now a viable option because of the expense and unreliable ROI (47% of the companies that increased their bandwidth over the past two years reported no improvement in applications performance, Aberdeen said), many companies are looking at ways to just speed up the transfer of files from one point to another.
For these companies, FTP or file acceleration alternatives not only solve an immediate problem, they may also answer most of the current needs when it comes to network congestion and traffic, according to industry experts. However, applying such fixes may not actually solve the underlying problems, and may even complicate things by sidestepping the bigger picture.
"FTP acceleration can help, but you really don't know what's going on, and it may not be that FTP traffic is low, but there is a peer-to-peer problem," noted Ed Ryan, vice president of products for Exinda Networks, a maker of WAN optimization and application acceleration products. More effective and complete acceleration comes from deep packet inspection of TCP activities at the applications layer, as well as a healthy dollop of heuristic and behavior analysis. The result, Ryan said, is an accurate classification of which applications are using what bandwidth.
Looking for more in optimization
The overall objectives in optimization and acceleration also have a lot to do with the types of solutions deployed over a wide area network. When the Providence Engineering and Environmental Group went looking for an optimization solution to link its headquarters in Louisiana and sister site in Texas, the company immediately ruled out basic FTP acceleration, network manager Wesley Corie said, because the goal was disaster recovery and failover.
"We weren't looking at acceleration as much as data redundancy and maximizing access to applications, so FTP acceleration wasn't the answer," Corie explained. The company decided on a more complete optimization solution from Ecessa Corp. and is now looking into supplementing that with an optimization platform from another vendor.
Not all FTP acceleration solutions are alike, however. There are some, in the so-called next generation class, which analyze file attributes, transfer distance and network conditions to quickly adapt file transfers and more fully utilize existing network infrastructures. One of these, available from Aspera Inc., employs the company's patented fast transfer technology (called fasp) and provides a visual dashboard to view file transfers and bandwidth utilization and to control transfer speeds and assign priorities on-the-fly, explained Francois Quereuil, director of marketing at Aspera.
"It's definitely not just a speed issue. You have to add visibility and auditing capabilities to support the movement of files rather than applications," he stated, pointing out that the technology is designed to handle very large files like those common to healthcare, life sciences, government and the entertainment industry.
Still, purveyors of WAN optimization solutions tend to dismiss basic FTP acceleration as a band-aid approach that is fast losing its cachet – especially as more hybrid FTP solutions evolve and the cost of optimization plummets.
"There are a lot of small companies that can make FTP transfers or make TCP run faster," Silver Peak Systems president and CEO Rick Tinsley pointed out. "But the industry has been through enough experiments, with either applications-specific accelerators or protocol-specific accelerators, that haven't really stood the test of time."
- Tim Scannell
About the author: John Burke is a principal research analyst with Nemertes Research, where he focuses on software-oriented architectures and management issues.