A data center interconnect has historically replicated data from a primary data center to a disaster recovery site or backup data center. However, virtualization and cloud computing are transforming the role of a data center interconnect, and wide area network (WAN) managers must adjust their approach to these increasingly critical WAN links.
Traditional data center interconnects have replicated data to idle backup sites that come online only during primary data center failures or traffic spikes. Virtualization and cloud computing will enable enterprises to load balance compute resources dynamically across multiple sites. CIOs now view backup data centers as additional production sites with a pool of resources for supporting applications and services. The data center interconnect will no longer carry a steady stream of replication traffic; instead, new forms of traffic, both sustained and bursty, will traverse these links. WAN managers need to understand the changing environment within data centers and prepare for increased demand on the WAN links that interconnect multiple data centers.
WAN options for traditional data center interconnect requirements
The minimum requirement for a traditional data center interconnect between the primary and secondary data centers is a high-speed wide area network link. Typical WAN solutions offered by service providers include MPLS, Metro Ethernet and VPLS. Each of these solutions provides high bandwidth, low latency and Layer 2 or Layer 3 access between data centers. Enterprises can further enhance these links for data center interconnect with WAN optimization appliances at each endpoint. WAN optimization makes transfer protocols more efficient and reduces the volume of traffic through compression and deduplication.
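The effect of those two techniques can be illustrated with a toy sketch: split a replication stream into chunks, skip any chunk the far end has already received, and compress what remains. The 4 KB chunk size, the `hash()` fingerprint and the sample data are illustrative assumptions, not how any particular appliance works.

```python
import zlib

CHUNK = 4096  # assumed chunk size for this sketch

def optimize(payload: bytes, seen: set) -> bytes:
    """Send only the chunks the far end has not already seen, compressed."""
    out = bytearray()
    for i in range(0, len(payload), CHUNK):
        chunk = payload[i:i + CHUNK]
        digest = hash(chunk)   # stand-in for a cryptographic fingerprint
        if digest in seen:
            continue           # deduplicated: the peer already stores this chunk
        seen.add(digest)
        out += chunk
    return zlib.compress(bytes(out))

# A replication stream in which whole pages repeat between transfers
stream = (b"A" * CHUNK + b"B" * CHUNK) * 10   # 20 chunks, only 2 unique
seen: set = set()
wire_bytes = optimize(stream, seen)
print(len(stream), "->", len(wire_bytes))
```

Here 20 chunks collapse to 2 unique ones before compression, which is why a second nightly replication of mostly unchanged data can cross the WAN at a fraction of its raw size.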
Carrier Ethernet options also work well for disaster recovery solutions because the link between sites is managed within the service provider network rather than within the enterprise data center. In the event of a problem at the primary data center, the remaining enterprise sites can stay connected and traffic can be redirected to a secondary data center as a failover.
Data center interconnect for a virtualized environment
In a truly virtualized environment, secondary data centers are no longer backup sites, but part of the larger pool of computing resources available to the infrastructure manager. WAN managers may find that an existing data center interconnect does not meet the needs of this new cloud architecture.
Virtual machines and workloads migrate across physical servers to optimize utilization and minimize power consumption. Virtual machine mobility has profound implications for both data center design and WAN requirements. Enterprises can move virtual machines across data centers to bring an application or services closer to end users or to offload processing to an off-site cloud service.
Virtual machine mobility requires a Layer 2 network, whether between servers in the same data center or across a data center interconnect. A Layer 2 network is the functional equivalent of a physical Ethernet switch. Within a single Layer 2 domain, virtual machines can move to different hardware while retaining the same Layer 3 TCP/IP network information. Converged storage technologies, such as Fibre Channel over Ethernet (FCoE), also favor a flat, low-latency network.
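One concrete reason Layer 2 adjacency matters: after a live migration, a VM typically announces its unchanged IP address at its new location with a gratuitous ARP, a broadcast that only reaches hosts in the same Layer 2 domain. The sketch below builds such a frame with the standard library; the MAC and IP values are illustrative assumptions.

```python
import struct

def gratuitous_arp(mac: bytes, ip: bytes) -> bytes:
    """Build an ARP announcement: sender and target IP are the same address."""
    eth = b"\xff" * 6 + mac + b"\x08\x06"            # broadcast dst, ARP ethertype
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)  # Ethernet/IPv4, opcode 1 (request)
    arp += mac + ip                                   # sender MAC / sender IP
    arp += b"\x00" * 6 + ip                           # target MAC unknown, target IP = own IP
    return eth + arp

vm_mac = bytes.fromhex("0050569a0001")   # example MAC, kept across the migration
vm_ip = bytes([10, 0, 0, 42])            # VM keeps 10.0.0.42 after the move
frame = gratuitous_arp(vm_mac, vm_ip)
print(len(frame))
```

The frame is 42 bytes before Ethernet padding. If the destination host sat behind a Layer 3 boundary instead, this broadcast would never arrive, and the VM would need a new address, breaking live sessions.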
Virtual private LAN service (VPLS) is the common approach to delivering a flat Layer 2 network across large distances. Available from many service providers, VPLS delivers WAN connectivity between data centers while making the disparate data centers appear as a single flat network. VPLS tunnels Layer 2 traffic between data centers across a carrier’s MPLS network, which adds a layer of configuration complexity and overhead to the wide area network. In addition, to present an any-to-any mesh network, a VPLS tunnel has to be configured between every pair of sites in an enterprise’s data center network. “MPLS is very static, while VPLS can get pretty complex in setting up the mesh network,” said Andre Kindness, senior analyst for Forrester Research.
Cisco’s Overlay Transport Virtualization (OTV) is one approach offered to simplify these interconnects. OTV creates Layer 2 tunnels between data center switches over an existing Layer 3 WAN, hiding the underlying WAN from the data center environments.
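Conceptually, this style of "MAC-in-IP" transport wraps a complete Ethernet frame inside an IP packet, so the WAN in between only ever routes IP. The sketch below is a simplification for illustration, not Cisco's actual encapsulation: real OTV adds its own shim header, and the IP header fields here (including the GRE protocol number and elided checksum) are assumptions.

```python
import struct

def encapsulate(l2_frame: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
    """Wrap a whole Layer 2 frame as the payload of a minimal IPv4 packet."""
    total_len = 20 + len(l2_frame)
    ip_header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, total_len,   # version/IHL, TOS, total length
        0, 0,                 # identification, flags/fragment offset
        64, 47, 0,            # TTL, protocol (47 = GRE, simplified), checksum elided
        src_ip, dst_ip,
    )
    return ip_header + l2_frame

# An Ethernet frame: dst MAC, src MAC, IPv4 ethertype, payload (all illustrative)
frame = b"\x02" * 6 + b"\x04" * 6 + b"\x08\x00" + b"payload"
packet = encapsulate(frame, bytes([192, 0, 2, 1]), bytes([198, 51, 100, 1]))
print(len(frame), "->", len(packet))
```

The inner frame, MAC addresses and all, rides untouched across the routed WAN, which is what lets the two data centers behave as one Layer 2 domain without a carrier-managed VPLS mesh.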
Local and wide area network automation is essential to enterprises using multiple data centers as part of a unified, virtualized environment, Kindness said. Most WAN managers make network configuration changes, such as applying quality of service (QoS) and other policies, manually. When a data center interconnect supports hundreds or thousands of daily virtual machine migrations, a manual approach is impossible.
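The shape of the automation Kindness calls for can be sketched as an event-driven loop: a migration event arrives, and the VM's network policy is reapplied to its new switch port with no human in the path. The event format, policy table and `apply_policy` hook below are all assumptions for illustration; a real system would call a controller or device API such as NETCONF.

```python
# Illustrative sketch of event-driven QoS automation for VM migrations.

POLICIES = {"voip-vm": {"qos_class": "EF", "rate_limit_mbps": 50}}

applied = []  # records what a real controller would push to devices

def apply_policy(switch_port: str, policy: dict) -> None:
    # Stand-in for a controller/device API call (e.g. NETCONF or REST).
    applied.append((switch_port, policy))

def on_vm_migrated(event: dict) -> None:
    """React to a migration event by reapplying the VM's policy at its new port."""
    policy = POLICIES.get(event["vm"])
    if policy:
        apply_policy(event["dst_port"], policy)

# A day's worth of migrations that no one could configure by hand
for i in range(1000):
    on_vm_migrated({"vm": "voip-vm", "dst_port": f"dc2/eth{i}"})
print(len(applied), "policies applied")
```

A thousand migrations become a thousand automatic policy pushes; the same burst handled manually would mean a thousand change tickets.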