Preparing the WAN for remote backups

Before running remote backups across your wide area network (WAN), you must decide whether to add bandwidth or deploy a WAN optimization controller (WOC). This introduction to remote office backups explains how to prepare your network so that backups perform well.

Before you begin remote backups across the WAN

Once you've settled on the idea of backing up systems remotely -- and you really should, for business continuity at least, if not also to simplify a distributed backup infrastructure and reduce the associated operating and capital costs -- you have a host of questions to answer. Which backup tools you will use is of course a critical one, but related to it -- and often neglected in initial planning and purchasing -- is the question of how you will make your wide area network (WAN) ready for the new job.

Preparing the WAN for remote backups

Remote backup projects succeed or fail based on the performance of the remote backups across the WAN. If bandwidth is lacking, or if quality of service (QoS) requirements are not met, then backups will not complete in their allotted windows.

So, the critical first step is to test remote backups across real WAN connections. Initial testing has to take place during lowest-possible traffic conditions, of course, but after that you must also test during typical traffic periods. If backups will run every evening, test during peak evening traffic periods.
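A quick way to frame that testing is simple arithmetic: given the nightly change volume and the throughput actually measured across the WAN, does the transfer fit the window? The sketch below is illustrative only; the data volume, link speed and 0.7 efficiency factor are hypothetical planning figures, not measurements.

```python
def fits_window(data_gb, throughput_mbps, window_hours, efficiency=0.7):
    """Estimate whether a nightly backup fits its window.

    efficiency discounts protocol overhead and competing traffic;
    0.7 is a hypothetical planning figure, not a measured value.
    """
    usable_mbps = throughput_mbps * efficiency
    transfer_hours = (data_gb * 8 * 1000) / usable_mbps / 3600
    return transfer_hours <= window_hours, transfer_hours

# Hypothetical remote site: 40 GB nightly change, 20 Mbit/s link, 8-hour window
ok, hours = fits_window(40, 20, 8)   # ok is True; roughly 6.3 hours
```

If the estimate comes in close to the window edge, that is exactly the case where testing under typical evening traffic matters most.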

If everything performs well in testing, you may have no further concerns for years. If not -- or if after a period of successful operation, performance begins to suffer -- look at network bandwidth and performance data.

If the problem is mainly one of bandwidth -- you don't see many packet drops and retransmissions, and it is simply taking too long to push the requisite number of bits through the WAN connection -- there are many options for addressing it. Adding bandwidth is the obvious choice, and a common one, but it is also often the more expensive one. Reducing demand is the other main choice and can be approached in several ways. Application re-architecture is one option: The organization might shift from a local-mailbox model for email to Web-based access only in order to reduce the amount of data to be backed up at remote sites. In the backup infrastructure, IT might look at client-side data deduplication technology for remote sites with high backup traffic volume.
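To make the deduplication idea concrete, here is a minimal sketch of client-side dedup using fixed-size chunking and SHA-256 fingerprints. It is an illustration only -- real products typically use content-defined (variable-size) chunking and a fingerprint index shared with the backup server.

```python
import hashlib

def dedup_chunks(data: bytes, chunk_size: int = 4096):
    """Fixed-size chunking with SHA-256 fingerprints: only chunks
    not already fingerprinted get queued for transmission."""
    seen = set()      # in practice, an index shared with the backup server
    to_send = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:
            seen.add(digest)
            to_send.append(chunk)
    return to_send

# Highly redundant data (common in backup streams) dedupes well:
data = b"A" * 4096 * 100 + b"B" * 4096
unique = dedup_chunks(data)   # 101 chunks shrink to 2 unique chunks
```

The payoff is largest at remote sites where day-to-day change is small relative to total data, since repeated chunks never cross the WAN at all.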

What you need to know about remote backups across the WAN

On the WAN, IT can reduce bandwidth demand via compression. Using a WAN optimization appliance from Blue Coat Systems, Cisco, Riverbed or others to compress traffic can cut traffic volume by more than 60% for some backup streams.
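How much compression helps depends heavily on the data: repetitive, text-like streams compress well, while already-compressed media barely shrink. A sketch of measuring the potential saving, with an illustrative payload:

```python
import zlib

def compression_ratio(payload: bytes, level: int = 6) -> float:
    """Return the fraction of traffic saved by compressing the payload."""
    compressed = zlib.compress(payload, level)
    return 1 - len(compressed) / len(payload)

# Repetitive, text-like data (typical of logs and many backup streams)
# compresses very well; pre-compressed media would show little saving.
sample = b"2010-05-01 INFO backup job 42 completed OK\n" * 2000
saving = compression_ratio(sample)
```

Running a representative sample of your own backup data through a check like this gives a rough ceiling on what a compression appliance could save, before you commit to one.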

If IT decides to increase bandwidth, again, there are many options. Nemertes sees organizations increasingly using carrier Ethernet to bring higher, more easily scalable bandwidth to the locations that need it. Those with the greatest volumes and the wherewithal may use dedicated fiber, say between data centers in a metro area, to provide the connections. Using dense wave-division multiplexing (DWDM), which partitions the fiber by sending different streams across it on distinct optical wavelengths, an organization can dedicate multi-gigabit links to backup that are completely isolated from other network traffic on the same fiber, ensuring that backup won't interfere with anything else (or be interfered with). Many systems integrators and managed service providers in the backup space use DWDM to pull backup data to their data centers or to help customers move data among their own.


An interesting newer possibility is to use cheap consumer-grade Internet connectivity (cable modems, low-end DSL) to supplement existing WAN links. By bringing in high-bandwidth, low-cost links and using an appliance (e.g., from Talari Networks) to multiplex WAN traffic across the whole set, IT can sometimes meet the need for more bandwidth at a significantly lower price than by expanding existing WAN links. This approach can also provide high-bandwidth connectivity in places where an organization's WAN carrier offers no MPLS or Ethernet services.
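The multiplexing idea can be sketched as a weighted round-robin: flows are spread across the available links in proportion to capacity. The link names and speeds below are hypothetical, and real appliances also weigh measured latency and loss, not just raw capacity.

```python
import itertools

def build_schedule(links):
    """Weighted round-robin over heterogeneous WAN links.

    links: dict of link name -> capacity in Mbit/s. Each link appears
    in the cycle in proportion to its capacity, so a 50 Mbit/s cable
    link carries five times the flows of a 10 Mbit/s MPLS link.
    A capacity-only sketch; real appliances also track latency and loss.
    """
    unit = min(links.values())
    slots = []
    for name, mbps in links.items():
        slots.extend([name] * round(mbps / unit))
    return itertools.cycle(slots)

# Hypothetical mix: an existing MPLS link plus two consumer broadband links
schedule = build_schedule({"mpls": 10, "cable": 50, "dsl": 20})
assignments = [next(schedule) for _ in range(16)]
```

Over 16 flows this schedule sends 10 across the cable link, 4 across DSL and 2 across MPLS, matching the 5:2:1 capacity ratio.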

If the remote backup problems derive more from network performance -- issues like excessive packet drops and retransmits, or effects from latency -- IT has to look at managing traffic priority and behavior. At the most basic level, staff can try using Class of Service (CoS) on WAN routers and/or from MPLS providers to accord backup traffic higher priority on the network. Determining what is lower priority may not be easy, however, and CoS granularity can be very coarse; there may be only three or four classes available.
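At the host level, CoS treatment starts with marking: setting the DSCP bits in the IP header so that routers configured for CoS can classify the traffic. A minimal sketch follows; the AF21 code point is just an example, and marking does nothing unless the routers and the MPLS provider are configured to honor it.

```python
import socket

# DSCP code points (RFC 2474); the TOS byte carries DSCP in its top six bits.
DSCP_AF21 = 18    # example "assured forwarding" class for the backup stream

def marked_socket(dscp: int) -> socket.socket:
    """Create a TCP socket whose packets carry the given DSCP marking.
    The network must be configured to honor the mark; on its own,
    marking guarantees nothing."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s

s = marked_socket(DSCP_AF21)
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)   # 72 == 18 << 2
s.close()
```

In practice the marking is more often applied by the backup appliance or an access router's classification policy than by the application itself, but the effect on the packet header is the same.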

If CoS settings can't fix the problem, IT should look again at optimization. In addition to compression, WAN optimization appliances can apply acceleration techniques to increase throughput or mitigate latency. Most, for example, can do dual TCP termination (faking parts of the TCP handshakes in the appliances on both ends to reduce the number of round-trip delays). Many can also layer on error-correction logic to mitigate packet loss and minimize retransmits.
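The latency point is worth quantifying: a single TCP connection's throughput is capped at roughly window size divided by round-trip time, regardless of how fat the pipe is. A worked example with illustrative numbers:

```python
def tcp_throughput_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP connection's throughput: window / RTT.
    Shows why latency, not just raw bandwidth, can throttle backups."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

# A 64 KB window across an 80 ms cross-country link caps a single
# connection well below even a modest WAN circuit's capacity.
ceiling = tcp_throughput_ceiling_mbps(64 * 1024, 80)   # ~6.55 Mbit/s
```

This is the arithmetic that latency-mitigation features attack: by shortening the effective round trip (or enlarging the effective window), the appliances raise this ceiling without touching the circuit itself.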

When consolidating to a centralized remote-backup strategy, network capacity and performance have to be taken into consideration from the planning phase onward. Ultimately, increasing bandwidth may be necessary but should not be the default solution. It is equally important that IT keep in mind the full range of options for guaranteeing good backup performance.

John Burke, Principal Research Analyst, Nemertes Research

About the author
John Burke is principal research analyst with Nemertes Research. With nearly two decades of technology experience, he has worked at all levels of IT, including end-user support specialist, programmer, system administrator, database specialist, network administrator, network architect and systems architect. He has worked at Johns Hopkins University, the College of St. Catherine, and the University of St. Thomas.

This was first published in May 2010
