Optimizing WAN bandwidth for the enterprise

There are several alternatives available to help solve the bandwidth dilemma.

We live in interesting times. Over the past several years, the evolution of technology and economics has driven a fundamental shift in the way the enterprise must manage ISP bandwidth utilization. New applications require higher throughput, yet the aftermath of Y2K conformance issues, coupled with an economic downturn, has left executives desensitized to IT concerns or has prompted a "less is more" attitude. The IT administrator is left with an ever-increasing need for bandwidth, accelerating user demand, and more stringent restrictions on network operating costs.

Despite the current environment, there are several alternatives available to help the IT administrator address this common predicament, and new technologies are just around the corner to help solve the bandwidth dilemma.

Multi-link strategies
Traditionally, the most frequent solution was to over-specify link speed, which allowed bandwidth overhead above the average utilization level. As this model evolved, ISPs responded with "burstable" billing plans, which provide a dedicated access level plus additional bandwidth-on-demand billed at a higher burst rate. However, the intermediate bandwidth gaps between traditional circuit architectures (DS1, DS3, OC3 etc.) -- and the large cost differences between them -- often proved untenable. Companies requiring bandwidth in excess of these circuit types were forced to migrate to the next higher circuit.

An alternative to this trend is multi-link architecture. In this model, n-number of links to a single ISP are provisioned, allowing throughput at approximately the sum of the individual links. This has proven a successful means of adding bandwidth without the loop costs involved in migrating to the next circuit tier. For example, an enterprise requiring between 2 and 3 Mbps of bandwidth might choose a 2xT1 deployment, which will yield anywhere from 2.6 to 3 Mbps in most cases. The cost savings between two DS1 loops and a single, barely utilized DS3 loop are significant.
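As a rough illustration of the trade-off, here is a back-of-the-envelope comparison of usable throughput and cost per megabit for a 2xT1 bundle versus a lightly used DS3. The monthly loop prices and the 90% bundling-efficiency factor are placeholder assumptions for illustration, not quoted rates.

```python
# Rough cost-per-Mbps comparison: 2xT1 bundle vs. a lightly used DS3.
# Circuit line rates are standard; loop prices and the 90% efficiency
# factor are illustrative assumptions only.

T1_MBPS = 1.544

def bundle_throughput(num_links, link_mbps, efficiency=0.90):
    """Approximate usable throughput of an n-link bundle."""
    return num_links * link_mbps * efficiency

def cost_per_mbps(monthly_cost, usable_mbps):
    return monthly_cost / usable_mbps

two_t1_cost = 2 * 400.0    # hypothetical monthly cost of two DS1 loops
ds3_cost = 3000.0          # hypothetical monthly cost of one DS3 loop

two_t1 = bundle_throughput(2, T1_MBPS)   # roughly 2.8 Mbps usable
print(f"2xT1: {two_t1:.2f} Mbps usable, ${cost_per_mbps(two_t1_cost, two_t1):.0f}/Mbps")
# A DS3 provisioned to satisfy a 3 Mbps requirement leaves most of its 44.736 Mbps idle.
print(f"DS3 at 3 Mbps of demand: ${cost_per_mbps(ds3_cost, 3.0):.0f}/Mbps effective")
```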

There is a drawback, however. Varying latencies between the circuits can lead to packets arriving at their destination out of sequence, resulting in re-transmissions and the potential for congestion collapse. The Multilink Point-to-Point Protocol (MLPPP, defined in RFC 1990) was developed to address the link bundling and packet re-ordering issues inherent in a multi-link architecture. MLPPP allows the collection of links to act as a single logical circuit. Although it places a substantial load on router performance, MLPPP has proven a popular means of adding bandwidth.
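The sketch below illustrates, in conceptual terms, the receive-side work that makes MLPPP processor-intensive: fragments carry sequence numbers, and anything that arrives early is buffered until the missing piece shows up. This is a Python illustration of RFC 1990-style re-sequencing, not an implementation of the protocol itself.

```python
# Conceptual re-sequencing of multilink fragments by sequence number.
# Early arrivals are buffered until the expected fragment appears.

def resequence(fragments):
    """Yield payloads in sequence order.

    `fragments` is an iterable of (sequence_number, payload) tuples in
    arrival order, interleaved across the member links.
    """
    expected = 0
    pending = {}
    for seq, payload in fragments:
        pending[seq] = payload
        while expected in pending:       # release everything now in order
            yield pending.pop(expected)
            expected += 1

# Fragments 0-4 arrive out of order because the two links differ in latency.
arrivals = [(0, "f0"), (2, "f2"), (1, "f1"), (4, "f4"), (3, "f3")]
print(list(resequence(arrivals)))   # ['f0', 'f1', 'f2', 'f3', 'f4']
```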

Another common multi-link methodology uses fair-weighted round-robin scheduling, which load-shares traffic on a per-packet basis across n-number of links. Though this method does not re-sequence packets that arrive out of order, it places a much lower load on the router's processor and yields excellent performance when packet sizes remain relatively constant and per-link latencies are similar, as they tend to be where the service provider has newer facilities in the area. Also, because each link is conceptually independent of the others, a failure of one link has little possibility of affecting the others. In the worst case, the remaining links will continue to function, although the total available bandwidth will be reduced until the downed link returns to service.
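Conceptually, per-packet weighted round-robin is as simple as the sketch below: build a repeating dispatch order proportional to each link's weight and hand each packet to the next link in the cycle. The link names and weights are assumptions for illustration; a real router performs this in its forwarding path, which is why the processor cost stays low.

```python
# Per-packet weighted round-robin across a set of links (conceptual).
from itertools import cycle

def build_schedule(links):
    """links: mapping of link name -> integer weight.
    Returns an endlessly repeating dispatch order."""
    order = []
    for name, weight in links.items():
        order.extend([name] * weight)
    return cycle(order)

# Two equally weighted T1s; weights could differ for unequal links.
schedule = build_schedule({"t1-a": 1, "t1-b": 1})
for packet_id in range(6):
    print(f"packet {packet_id} -> {next(schedule)}")
```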

With the success of multi-link architecture established, the prescient network administrator will then conclude that provisioning links to several ISPs will yield not only additional throughput but increased resiliency as well. This is known as multi-homing. When multiple connections are provided by multiple service providers, the foundation has been laid for a high-availability enterprise network. The core principle behind multi-homing is therefore redundancy rather than cost-effective bandwidth expansion. Announcing the availability of a given address space through multiple ISPs requires the implementation of Border Gateway Protocol version 4 (BGPv4). While BGPv4 is quite effective at announcing network reachability, it has no native ability to dynamically groom traffic utilization across multiple links. Using BGP to artificially steer traffic across multiple links can be a complex and arduous task, and it is not for the neophyte.

Finding a "best fit" for the enterprise
There is no single best practice in architecting such a network. The truth of the matter is that some measure of compromise is required, unless cost is not a factor. IT administrators and network architects must begin with a solid conceptual and empirical understanding of several component factors. These include:

  • What is the size and profile of the user base? This will include:
    • Host users
    • Web servers or other resources resident on the LAN
    • Remote application services used by local hosts
    • VoIP or other streaming protocols
  • What is the utilization profile? Are there multiple applications in constant use, or are there variances in traffic patterns based on time-of-day? (A rough sizing sketch follows this list.)
  • What quality of service is required? Remember that static content, such as transferred Word documents or e-mail, offers much more leeway in terms of throughput than does VoIP.
  • Will the existing access routers support the required load? Consider:
    • Number of WAN ports configurable on the platform
    • Processor capabilities
    • Maximum installable RAM (vendors generally specify a minimum of 128 MB to support full BGP route announcements)
    • Platform vendor's proprietary protocols
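
One way to turn the checklist above into numbers is a simple demand roll-up compared against provisioned capacity, as in the sketch below. Every per-application figure and user count is a placeholder assumption; substitute measured values from your own utilization profile.

```python
# Back-of-the-envelope peak-demand estimate vs. usable link capacity.
# All per-application bandwidth figures (kbps) are placeholder assumptions.
PROFILE_KBPS = {
    "browsing_per_host": 50,       # average per active host
    "hosted_web_servers": 800,     # aggregate for LAN-resident servers
    "remote_app_session": 120,     # per remote application session
    "voip_call": 90,               # per call, codec plus IP overhead (approx.)
}

def peak_demand_mbps(active_hosts, app_sessions, concurrent_calls):
    kbps = (active_hosts * PROFILE_KBPS["browsing_per_host"]
            + PROFILE_KBPS["hosted_web_servers"]
            + app_sessions * PROFILE_KBPS["remote_app_session"]
            + concurrent_calls * PROFILE_KBPS["voip_call"])
    return kbps / 1000.0

demand = peak_demand_mbps(active_hosts=60, app_sessions=10, concurrent_calls=8)
capacity = 2 * 1.544 * 0.90        # usable throughput of a 2xT1 bundle
print(f"Estimated peak demand: {demand:.2f} Mbps vs. {capacity:.2f} Mbps provisioned")
print("Capacity sufficient" if demand <= capacity else "Consider more links or a larger circuit")
```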

When considering the multi-homing option, remember that a single access router constitutes a single point of failure. Adding redundant platforms will not only add to initial deployment cost, but also factor into ongoing operational expenses and the general complexity of the solution. Note that not all ISPs support all protocols; some prefer to maintain "vendor-agnostic" policies. In many cases, this is a benefit, but if your final architecture relies on a proprietary technology, be sure the ISP is aware of this and understands its implications.

Think about other factors specific to the company's business. For instance, is your inbound and outbound traffic requirement asymmetric? Are there periodic spikes in traffic at known intervals? Is your bandwidth demand subject to fluctuation from some outside event, such as weather, advertising campaigns, press releases, etc.? What is your growth plan over 12 to 18 months?

Next, evaluate the ISP. Quality ISPs have quality engineering staff available -- don't hesitate to involve them. They should be happy to discuss the options in depth.

Finally, consider a hybrid architecture. For example, a series of multiple links to multiple providers could be provisioned, such as a 2x2xT1, where two T1s are provided by two separate ISPs. This allows the best of all possible worlds -- increased bandwidth with loop cost savings, resiliency via multi-homing, and flexible options when migrating circuits. The caveat here is that as the circuit architecture becomes increasingly complex, so does bandwidth management and BGP allocation. This can translate into additional administrative workload on the part of your staff or third-party systems integrator.

Market choices for hybrid architectures
Recognizing the need for hybrid architectures, several new technologies have recently become available, enabling least-cost routing (LCR) over multiple links. By assigning multiple metrics to each link (such as bandwidth cost, burst cost, hop count, etc.), these systems can dynamically allocate traffic across multiple links. The concept is analogous to an LCR table mapped in a PBX: the system looks at the state of link utilization at any given interval and directs traffic across a particular link based upon the metrics assigned. Administration is typically done via a user-friendly GUI, reducing the need for on-site BGP expertise.
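The decision logic behind such a system can be sketched as a weighted scoring of each link's current metrics, with the next flow steered onto the lowest-scoring (least-cost) link. The metric names, weights, and per-link figures below are assumptions chosen purely to show the idea, not the behavior of any particular product.

```python
# Illustrative least-cost-routing decision across multiple links.
links = {
    "isp_a_t1": {"cost_per_mb": 0.10, "burst_cost": 0.00, "utilization": 0.85, "hop_count": 6},
    "isp_b_t1": {"cost_per_mb": 0.12, "burst_cost": 0.05, "utilization": 0.40, "hop_count": 8},
}

# Relative importance of each metric; lower total score is better.
WEIGHTS = {"cost_per_mb": 10.0, "burst_cost": 20.0, "utilization": 5.0, "hop_count": 0.2}

def link_score(metrics):
    """Weighted sum of a link's current metrics."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

scores = {name: round(link_score(m), 2) for name, m in links.items()}
best = min(scores, key=scores.get)
print(scores)                      # {'isp_a_t1': 6.45, 'isp_b_t1': 5.8}
print("route next flow via", best)
```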

No two businesses, requirements, or solutions are ever the same, and there will always be a compromise between cost and performance. The proper tools, experience, and an intimate knowledge of the company's network are key to determining which elements are more critical in the long term. With the sum of these strategies and technologies, the savvy network architect can satisfy both the demand for bandwidth and the corporate CFO.

About the author: Teejay Riedl is the director of corporate training at Telkonet Inc.


This was first published in February 2009
