This tip takes a look at various techniques and technologies to protect and maintain data accessibility and data consistency in remote office/branch office (ROBO) environments. While disaster recovery (DR) and business continuance (BC) often get the headline-news status associated with catastrophes, timely data protection is relevant to organizations of all sizes on a regular basis, addressing file corruption caused by viruses or program errors, as well as other common threats.
Within the scope of this tip, data protection refers to making regular copies or backups of data, either locally or to remote locations including hot or cold sites and managed service providers (MSPs), to ensure data availability and accessibility when needed. Data protection also covers local and remote mirroring or replication, point-in-time (PIT) copies or snapshots, and archiving to preserve data for possible future or compliance use.
Various threats, risks and issues
Applicable threats are for the most part the same for ROBO environments and large enterprises. What differs is the relative scale of the environment and the resulting impact and disruption to your business. Business needs and capabilities should be aligned with varying recovery time objectives (RTO) and recovery point objectives (RPO) for different applications.
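One way to picture aligning applications with different RTO and RPO targets is a simple tiering exercise. The sketch below is a hypothetical illustration: the tier thresholds, tier descriptions and application names are assumptions, not prescriptions, and a real assessment would weigh business impact and cost in far more detail.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    rto_hours: float   # how long the business can tolerate being down
    rpo_hours: float   # how much recent data the business can afford to lose

def protection_tier(app: App) -> str:
    """Map assumed RTO/RPO thresholds to an illustrative protection approach."""
    if app.rto_hours <= 0.25 and app.rpo_hours <= 0.25:
        return "synchronous replication / hot standby"
    if app.rto_hours <= 4:
        return "asynchronous replication + frequent snapshots"
    if app.rto_hours <= 24:
        return "daily disk-based (D2D) backup"
    return "periodic backup to tape or MSP"

# Hypothetical branch-office application mix
apps = [
    App("order processing", rto_hours=0.1, rpo_hours=0.1),
    App("branch file share", rto_hours=8, rpo_hours=4),
    App("project archive", rto_hours=72, rpo_hours=24),
]
for app in apps:
    print(f"{app.name}: {protection_tier(app)}")
```

The point of such an exercise is not the exact thresholds but forcing each application to declare its tolerance for downtime and data loss before a protection technology is chosen.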
Many DR and BC plans have been built around the possibility of remote or severe events, such as those in the headline news involving fire, flood, hurricane or other major destruction. However, most businesses are more likely to be hit by some type of disaster that does not make headline news: accidental equipment or software failure, power failure, loss of access to an otherwise intact facility, misconfiguration, or a chain of events that cascades into a disruption to your business.
Technologies and techniques
There are many approaches incorporating various technologies and techniques to protect data in ROBO environments. The size of your budget, the specific threats you are seeking to protect against, how much data you have and how often it changes -- along with personal preferences -- will determine how you should go about protecting ROBO data.
Technology options include backing up to local tape or disk, to a remote location (including a hot or cold standby site) or to an MSP. Other techniques include local or remote mirroring and replication combined with point-in-time snapshot copies (full or partial), depending on your RTO, RPO and budget requirements. Media options include magnetic tape, hard disk drives, removable hard disk drives (RHDD) and optical media. Disk-based backup, also known as disk-to-disk (D2D), is becoming a popular choice for local and remote backup, replacing or supplementing tape-based backup. A hybrid example is disk-to-disk-to-tape (D2D2T), which could combine local or remote D2D with local or off-site tape backup.
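The D2D2T flow described above can be sketched in a few lines: stage a fast disk copy first for quick restores, then package the staged copy for the slower tape (or off-site) tier. This is a minimal illustration, not a backup product; the directory names are made up, and a tar archive stands in for an actual tape write.

```python
import shutil
import tarfile
import tempfile
import time
from pathlib import Path

def d2d_stage(source: Path, staging: Path) -> Path:
    """Disk-to-disk step: fast local copy, the first restore point."""
    dest = staging / f"{source.name}-{time.strftime('%Y%m%d')}"
    shutil.copytree(source, dest)
    return dest

def d2t_archive(staged: Path, tape_dir: Path) -> Path:
    """Disk-to-'tape' step: a tar archive stands in for a tape write."""
    archive = tape_dir / f"{staged.name}.tar"
    with tarfile.open(archive, "w") as tar:
        tar.add(staged, arcname=staged.name)
    return archive

# Demo with throwaway directories standing in for branch data, backup disk and tape.
base = Path(tempfile.mkdtemp())
source = base / "branch-data"
source.mkdir()
(source / "report.txt").write_text("quarterly figures")
staging, tape = base / "backup-disk", base / "tape"
staging.mkdir()
tape.mkdir()

staged = d2d_stage(source, staging)
archive = d2t_archive(staged, tape)
print(staged.name, archive.name)
```

The design point D2D2T captures is that the disk tier serves most day-to-day restores quickly, while the tape tier provides cheaper long-term and off-site retention.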
Local and remote data mirroring or replication can be implemented using host server-, appliance-, network-, or storage system-based solutions. Distance is a friend and a foe of data storage protection. From a positive standpoint, distance enables survivability and continued access to data. The downside to distance for data protection is the penalty in terms of expense, performance (bandwidth and latency), and increased complexity. When you look at networks to span distances, bandwidth is important, but latency is critical for timely data movement to ensure data consistency and coherency. Refer to the SearchStorage.com tip called "Bridging the Gap" to learn more about applicable technologies and issues for spanning distance to support data movement and replication.
Wide area file services (WAFS), which facilitate remote access to data, are a variation on traditional data replication optimization (DRO) technologies such as SAN extension or channel extension. Where DRO technologies exist to accelerate performance and maximize bandwidth for movement of data to support replication, mirroring and remote tape backup, WAFS solutions focus on improving the productivity of users accessing centralized data from remote offices.
WAFS, also known by vendor-centric marketing names such as wide area data management (WADM) and wide area application services (WAAS), is generically a collection of services and functions that accelerate and improve access to centralized data. For environments looking to consolidate servers and storage resources away from ROBO locations, WAFS can be an enabling technology, and it can coexist in hybrid environments to enhance backup of distributed data. DRO technologies, on the other hand, complement remote data replication from NAS and traditional storage, along with D2D2T remote backup.
General tips and considerations
For applications with an RPO and RTO at or near zero, near-real-time or synchronous data communication will be needed. The enemy of synchronous data transmission, however, is latency, and latency increases with network congestion and distance. The trade-off for spanning larger distances or meeting budget constraints is asynchronous, time-delayed data communication.
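A back-of-the-envelope calculation shows why distance penalizes synchronous replication: each write must wait for a round trip to the remote site before it completes. The figure of roughly 5 microseconds of one-way propagation delay per kilometer of optical fiber is a common approximation, and the fixed per-write overhead used below is an assumption; real links add switch, router and protocol overhead that varies by environment.

```python
FIBER_US_PER_KM = 5.0  # approx. one-way propagation delay in optical fiber

def sync_write_penalty_ms(distance_km: float, overhead_ms: float = 0.5) -> float:
    """Added latency per synchronous write: round-trip delay plus an
    assumed fixed overhead for equipment and protocol handling."""
    round_trip_ms = 2 * distance_km * FIBER_US_PER_KM / 1000
    return round_trip_ms + overhead_ms

for km in (10, 100, 1000):
    print(f"{km:>5} km: +{sync_write_penalty_ms(km):.2f} ms per write")
```

Even under these optimistic assumptions, a 1,000 km link adds roughly 10 ms to every synchronous write, which is why longer distances usually push the design toward asynchronous replication.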
Learn more about protecting and maintaining accessibility for ROBO data in the SearchNetworking webcast "ROBO Data Protection for Networking Pros" and the podcast "Emerging Trends and Topics for ROBO Data Protection" along with other TechTarget tips and Ask-the-Expert Q&As. You can also learn more about storage and data infrastructure topics in my book "Resilient Storage Networks – Designing flexible scalable data infrastructure" (Elsevier) to enhance your knowledge around storage-related topics including data protection, management tools, protocols, storage interfaces, SAN, NAS, MAN, WAN, LAN and virtualization in a technology- and vendor-neutral format.
About the author: Greg Schulz is founder and senior analyst with the IT infrastructure analyst and consulting firm StorageIO. A 25-year IT veteran, Greg has worked with applications, servers, databases, networks, storage, DR/BC, performance and capacity planning, and associated management tools in IBM Mainframe, OpenVMS, Unix, Windows and other environments. Greg is also the author and illustrator of Resilient Storage Networks and has contributed material to Storage magazine and other TechTarget publications.