Centralization and server consolidation continue to be key initiatives for IT organizations everywhere. The benefits are clear: simplified IT management, increased data compliance and control, and reduced hardware, software and management costs.
Unfortunately, these initiatives are usually accompanied by new and complex application performance challenges. If these challenges are not met, they can and usually do result in project delays or outright failures, additional costs, and dissatisfied end users and business leaders. The reason is that centralizing applications and services increases the distance between end users and those services. As that distance grows, latency and packet loss increase, while the connections between end users and services are constrained by the limited bandwidth of a wide area network (WAN) link. The best solution is not to buy more bandwidth or add server processing power.
Instead, the ideal solution is to implement wide area network optimization techniques and to do so at the start -- not toward the end of your centralization project. Below, I will discuss the most common techniques used in wide area network optimization and how each works to optimize traffic.
1. Compression: Reduce the amount of data sent over the network
The role of compression is to reduce the size of a file prior to transmission or storage. One factor influencing the effectiveness of compression is the amount of redundancy in the traffic. Applications that transfer highly redundant data, such as text and HTML on Web pages, will benefit significantly from advanced compression. Applications that transfer data that has already been compressed, such as VoIP or JPEG images, will see little improvement in performance from compression.
2. Caching: Store downloaded network data locally for subsequent retrieval and reuse
A copy of information is kept locally, on the WAN optimizer, with the goal of avoiding or minimizing the number of times that information must be fetched from a remote site. Caching can take the form of either byte caching or object caching. With byte caching, the sender and the receiver maintain large disk-based caches of byte strings previously sent and received over the WAN link. As data is queued for the WAN, it is scanned for byte strings already in the local cache. Any strings resulting in cache hits are replaced with short tokens that refer to their cache locations, allowing the receiver to reconstruct the file from its copy of the cache.
Object caching stores copies of remote application objects in a local cache server that is generally on the same LAN as the requesting system. If the cache contains a current version of the object, the request can be satisfied locally at LAN speed and latency. Most of the latency involved in a cache hit results from the cache querying the remote source server to ensure that the cached object is up to date.
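The byte-caching idea can be sketched in a few lines. This toy version uses fixed-size chunks and short hash tokens; real WAN optimizers use content-defined chunk boundaries and collision-safe token schemes, so treat this purely as an illustration:

```python
import hashlib

CHUNK = 64  # toy fixed chunk size; real products use variable boundaries

def encode(data: bytes, cache: dict) -> list:
    """Replace previously seen chunks with short tokens before sending."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        token = hashlib.sha1(chunk).digest()[:8]  # 8-byte token vs 64-byte chunk
        if token in cache:
            out.append(("hit", token))            # send only the token
        else:
            cache[token] = chunk
            out.append(("miss", chunk))           # first sighting: send raw bytes
    return out

def decode(stream: list, cache: dict) -> bytes:
    """Expand tokens back into bytes using the receiver's copy of the cache."""
    parts = []
    for kind, payload in stream:
        if kind == "hit":
            parts.append(cache[payload])
        else:
            cache[hashlib.sha1(payload).digest()[:8]] = payload
            parts.append(payload)
    return b"".join(parts)

sender_cache, receiver_cache = {}, {}
doc = b"quarterly report, nothing changed. " * 100

first = encode(doc, sender_cache)        # first transfer populates both caches
assert decode(first, receiver_cache) == doc

second = encode(doc, sender_cache)       # re-send: every chunk is a cache hit
assert decode(second, receiver_cache) == doc
print("second transfer all hits:", all(kind == "hit" for kind, _ in second))
```

On the second transfer only 8-byte tokens cross the (simulated) WAN, which is the mechanism that makes repeated or near-repeated transfers so cheap.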
3. Buffer tuning: Ensure the network is never idle if data is queued pending transmission
Tuning TCP's flow control buffers sets them to the optimal size for a given network's bandwidth and round-trip delay (the bandwidth-delay product). TCP achieves flow control using the sliding window algorithm, which takes two important parameters into consideration: the first is the receiver's advertised window size, which informs the sender of the current buffer size of the TCP receiver; the second is a congestion window, which limits the number of bytes a TCP flow may have in the network at any given time. More advanced TCP tuning also sets retransmission timer values more aggressively than the defaults.
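A minimal sketch of buffer tuning at the socket level. The link figures (50 Mbit/s, 80 ms round trip) are assumptions chosen for illustration, and the operating system may clamp or adjust the requested sizes:

```python
import socket

# Size the send/receive buffers to the bandwidth-delay product (BDP) so the
# sender can keep the pipe full instead of stalling while it waits for ACKs.
bandwidth_bps = 50_000_000                  # assumed 50 Mbit/s WAN link
rtt_s = 0.08                                # assumed 80 ms round-trip time
bdp_bytes = int(bandwidth_bps / 8 * rtt_s)  # bytes that can be "in flight"

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(f"requested {bdp_bytes} byte buffers, kernel granted {granted}")
sock.close()
```

If the buffers are smaller than the BDP, the sender exhausts its window and idles for part of every round trip, so the link never reaches its rated throughput regardless of how much bandwidth is available.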
4. Protocol spoofing: Reduce the number of round trips necessary for a transaction
This refers to a network optimization technique in which a client makes a request of a distant server, but the request is responded to locally by the WAN optimizer. Common Internet File System (CIFS) and Server Message Block (SMB) are the core protocols used to transfer files and browse remote directory structures. Both are examples of chatty protocols.
For example, CIFS makes zero assumptions about the resiliency or reliability of the underlying transport protocol and therefore incorporates its own end-to-end acknowledgement of data transfers. CIFS also allows a maximum of 60 kilobytes of data to be read or written at one time, even if the underlying transport protocol uses a sliding window that could accommodate much larger segments of data. These two attributes of CIFS cause the response time for large file transfers to increase drastically as the latency between the client and the server increases. Protocol spoofing algorithms address the chattiness of these protocols at the source by reducing the number of round trips required, employing predictive algorithms that identify files and subdirectories the client is likely to access next.
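The latency penalty is simple to estimate: ignoring bandwidth entirely, every 60 KB block costs at least one round trip, so the wait grows linearly with RTT. A back-of-the-envelope sketch:

```python
# Why a chatty protocol suffers over a WAN: each 60 KB read incurs at least
# one round trip, so latency alone dominates large transfers at high RTT.
def transfer_time_s(file_bytes, rtt_s, block_bytes=60 * 1024):
    round_trips = -(-file_bytes // block_bytes)  # ceiling division
    return round_trips * rtt_s                   # round-trip wait only

file_size = 100 * 1024 * 1024                    # 100 MB file
for rtt_ms in (1, 40, 200):                      # LAN vs. regional vs. long-haul WAN
    t = transfer_time_s(file_size, rtt_ms / 1000)
    print(f"RTT {rtt_ms:>3} ms -> at least {t:6.1f} s of pure round-trip wait")
```

A 100 MB file needs about 1,707 round trips at 60 KB per request: under 2 seconds of waiting on a 1 ms LAN, but over 5 minutes at 200 ms, even on an otherwise idle link. Read-ahead and local acknowledgement collapse most of those round trips.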
5. QoS: Overcome common packet delivery problems inherent in over-subscribed networks
Quality of Service (QoS) is a tool that provides better service to selected packets belonging to a pre-specified class of service (CoS). This assumes that applications have been classified for specific levels of service and that their data traffic can be identified as belonging to that class. QoS is then applied by raising the priority of some packets or by limiting the resources available to others.
Congestion management tools raise the priority of a packet by providing a unique queue in routers and/or switches for each class separately and servicing the queue for each class in different ways. The queue management tool used for congestion avoidance raises priority by dropping lower-priority packets before higher-priority ones. Policy shaping provides priority to packets by limiting the throughput of other queues. Congestion management, queue management, link efficiency and shaping/policy tools provide QoS within a single network element.
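The core scheduling idea behind class-based queuing can be sketched with a priority queue. This toy version uses Python's heapq; the packet names and class numbers are made up for illustration, with lower class numbers receiving service first:

```python
import heapq

# Each packet carries a class priority; the scheduler always services the
# highest-priority (lowest-numbered) class first, with arrival order as the
# tie-breaker within a class.
queue = []
for arrival, (pkt, cls) in enumerate([("bulk-1", 3), ("voip-1", 0),
                                      ("web-1", 1), ("bulk-2", 3),
                                      ("voip-2", 0)]):
    heapq.heappush(queue, (cls, arrival, pkt))

sent = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(sent)
```

Even though the bulk packet arrived first, both VoIP packets leave the queue ahead of it, which is exactly the behavior that keeps latency-sensitive traffic usable on a congested link.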
6. Application delivery: Offload computationally intensive communications tasks from servers and clients to application delivery controllers
These solutions are typically referred to as being asymmetric because an appliance is only required in the data center and not in the branch offices. Application delivery controllers (ADCs) perform computationally intensive tasks, such as the processing of Secure Sockets Layer (SSL) traffic, which frees up server resources. ADCs act as server load balancers (SLBs) that balance traffic over multiple servers. Another common network offload technology is TCP Offload Engine. This is a special network interface card that performs TCP and IP protocol stack processing on the card, thus minimizing the workload of the machine's CPU.
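Of these functions, server load balancing is the simplest to sketch. A toy round-robin balancer, with hypothetical server names, might look like:

```python
import itertools

# Round-robin is the simplest policy an ADC's server load balancing (SLB)
# function might apply: hand each incoming request to the next server in turn.
servers = ["app-01", "app-02", "app-03"]   # hypothetical back-end pool
next_server = itertools.cycle(servers)

assignments = [(f"request-{i}", next(next_server)) for i in range(6)]
for req, srv in assignments:
    print(req, "->", srv)
```

Production ADCs layer health checks, session persistence, and load-aware policies on top of this basic rotation, but the principle of spreading requests across a pool is the same.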
When used effectively, each of these wide area network optimization techniques can overcome the challenges introduced by increased latency and bandwidth constraints created by consolidation and centralization efforts.