Multiprotocol Label Switching (MPLS) can speed up the flow of network traffic and make it easier to manage. MPLS is flexible, fast, cost-efficient and allows for network segmentation and quality of service (QoS). MPLS also offers a better way of transporting latency-sensitive applications like voice and video. While MPLS technology has been around for several years, businesses are now taking advantage of service provider offerings and beginning their own corporate implementations. Get a head start with our technology overview.
Multiprotocol Label Switching (MPLS) is a standards-approved technology for speeding up network traffic flow and making it easier to manage. MPLS involves setting up a specific path for a given sequence of packets, identified by a label put in each packet, thus saving the time needed for a router to look up the address to the next node to forward the packet to. With reference to the OSI model, MPLS allows most packets to be forwarded at Layer 2 (switching) rather than at Layer 3 (routing). In addition to moving traffic faster overall, MPLS makes it easy to manage a network for quality of service (QoS). For these reasons, the technique is expected to be readily adopted as networks begin to carry more and different mixtures of traffic. (Definition courtesy of Whatis.com.)
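The forwarding idea above can be made concrete with a minimal Python sketch (not from the article; all names and values are illustrative): conventional Layer 3 forwarding scans the routing table for the longest matching prefix, while an MPLS label-switching router does a single exact-match lookup on the incoming label and swaps it for the outgoing one.

```python
# Illustrative sketch, not a real router implementation: contrasts a
# longest-prefix-match IP lookup with a single exact-match label lookup.
import ipaddress

# Conventional Layer 3 forwarding: scan routes for the longest matching prefix.
routes = {
    "10.0.0.0/8": "core-1",
    "10.1.0.0/16": "edge-2",
}

def ip_lookup(dst):
    matches = [(ipaddress.ip_network(p).prefixlen, nh)
               for p, nh in routes.items()
               if ipaddress.ip_address(dst) in ipaddress.ip_network(p)]
    return max(matches)[1] if matches else None

# MPLS forwarding: one exact-match lookup on the incoming label yields the
# outgoing label and next hop (the "label swap").
lfib = {100: (200, "core-1"), 101: (300, "edge-2")}

def label_lookup(in_label):
    return lfib[in_label]

print(ip_lookup("10.1.2.3"))   # edge-2 (the /16 beats the /8)
print(label_lookup(100))       # (200, 'core-1')
```

The dictionary lookup stands in for the fixed-size label table that lets most packets be forwarded without a routing-table search.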
MPLS is called multiprotocol because it works with the Internet Protocol (IP), Asynchronous Transfer Mode (ATM) and frame relay network protocols. The claim to fame of MPLS is "any-to-any" connectivity. This statement generally implies a comparison to permanent virtual circuit (PVC)-based technologies such as frame relay and ATM, where each site has a physical circuit connecting it to the "cloud." Logical circuits are then configured on the physical circuits to create virtual circuits connecting sites together.
If you were to purchase a full mesh of virtual circuits connecting every site to every other site, you would essentially have the same any-to-any connectivity offered by MPLS. Under the covers, of course, it's quite different, because packets are label switched and traffic engineered instead of being circuit-switched and provisioned. (From MPLS -- what voice managers need to know by Tom Lancaster)
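The cost gap between a full PVC mesh and MPLS any-to-any connectivity is easy to quantify: a full mesh of n sites needs n(n-1)/2 virtual circuits, while MPLS needs only one attachment circuit per site. A quick sketch (illustrative, not from the article):

```python
def full_mesh_pvcs(n_sites):
    """Virtual circuits needed to connect every site directly to every other site."""
    return n_sites * (n_sites - 1) // 2

def mpls_attachments(n_sites):
    """With MPLS, each site needs only one connection into the cloud."""
    return n_sites

# The mesh cost grows quadratically; the MPLS attachment count grows linearly.
for n in (5, 20, 100):
    print(n, full_mesh_pvcs(n), mpls_attachments(n))
```

At 20 sites the difference is already 190 PVCs versus 20 access circuits, which is why few companies ever bought a full mesh.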
Migrating to MPLS: Decision factors
Although most providers are still sticking to basics when it comes to deployment and features, it's a good idea for the engineering groups within organizations to know how they should prepare their current networks for transition.
Like any significant business decision, a number of qualifying factors usually drive a potential migration to MPLS. Several common reasons are:
- Converged services capabilities (voice, video, data).
- Any-to-any connectivity without the high cost of individual circuits.
- Advanced features for ingress and egress routing policies (load sharing, policy routing).
- Secure flexibility of adding future businesses and partners (multiple VPN support).
- Circuit consolidation (frame, T-X, ATM).
These highlight some of the most common criteria, but it is important that you know the drivers behind your company's decision to move toward an MPLS solution because some MPLS or protocol features may or may not be supported by the provider. It's also important because it can determine the overall network design moving forward. (From Migrating to MPLS by Doug Downer)
A service provider view ... Courtesy of Informit
An MPLS-based network consists of routers and switches interconnected via transport facilities such as fiber links. Customers connect to the backbone (core) network through multiservice edge (MSE) routers. The backbone comprises the core routers that provide high-speed transport and connectivity between the MSE routers. An MSE router contains different types of line cards and physical interfaces to provide Layer 2 and Layer 3 services, including ATM, FR, Ethernet, and IP/MPLS VPNs.
In the incoming direction, line cards receive packets from external interfaces and forward them to the switching fabric. In the outgoing direction, line cards receive packets from the switching fabric and forward them to the outgoing interfaces. The switching fabric, the heart of the router, is used for switching packets between line cards. The IP/MPLS control-plane software, the brain of a router, resides in the control processor card. The phrase IP/MPLS control plane refers to the set of tasks performed by IP routing and MPLS signaling protocols. IP routing protocols are used to advertise network topology, exchange routing information, and calculate forwarding paths between routers within (intra) and between (inter) network routing domains. Examples of IP routing protocols include Open Shortest Path First (OSPF), Intermediate System-to-Intermediate System (IS-IS), and Border Gateway Protocol (BGP). MPLS signaling protocols are used to establish, maintain, and release label-switched paths (LSP). Examples of MPLS signaling protocols include BGP, Label Distribution Protocol (LDP), and Resource Reservation Protocol (RSVP). The IP control plane may also contain tunneling protocols such as Layer 2 Tunneling Protocol (L2TP) and Generic Routing Encapsulation (GRE).
Should your enterprise WAN migrate to MPLS?
- Learn about the different classes of MPLS services to find the best MPLS/VPN for your WAN.
- Should your company consider building MPLS networks into its WAN?
- Learn how to prepare enterprise WANs for MPLS/VPN integration.
Because redundant network elements add to the overall network cost, service providers typically employ different levels and types of fault tolerance in the edge and core network. For example, the core network is generally designed to protect against core router failures through mesh connectivity, which allows alternative paths to be established and used quickly in the face of a failure; in the core, additional routers and links provide the fault tolerance. In contrast, on the edge, thousands of customers are often connected through a single router, and the edge router usually represents a single point of failure; most service providers consider it the most vulnerable point of their network once the core is protected. On the edge, instead of adding routers and links as in the core, redundancy within the edge router (redundant control processor cards, redundant line cards, and redundant links such as SONET/SDH Automatic Protection Switching [APS]) is commonly used to provide fault tolerance.
Once the decision has been made to move toward MPLS, the next step is designing your network to support the change and prepping your infrastructure to handle it. There are typically four ways a client can communicate with an MPLS VPN provider: BGP, OSPF, RIPv2 and static routing. Of these choices, BGP is recommended for most organizations because it provides the most flexibility and control of prefixes within the VPN. (From Migrating to MPLS: Decision factors by Doug Downer)
MPLS transport options
Multiprotocol Label Switching (MPLS) transport is a funny thing -- what it means depends on whom you are talking to at the time. If you are talking to an engineer responsible for designing and developing MPLS services for a carrier, he or she will more than likely discuss MPLS in terms of MPLS backbone transport.
MPLS backbone transport is analogous to both frame relay and ATM WAN circuits, in that MPLS, frame and ATM all use the concept of virtual circuits. Frame relay uses permanent virtual circuits (PVCs) between the WAN routers, ATM uses VPI/VCIs, and MPLS uses label-switched paths (LSPs).
There is a major difference, however. The LSPs on the MPLS backbone are built between the provider's routers (called PE or provider edge routers). With traditional ATM and frame WAN backbones, you had to build these PVCs between all of your WAN routers or use a hub-and-spoke topology to enable traffic flows from remote site to remote site. With MPLS transport as the WAN, a customer can connect one interface to the MPLS cloud and have access to all of the remote WAN routers over one single physical and logical interface. The concept of sub-interfaces that are found in most ATM and frame WAN architectures goes away.
Having seen a little of how the carrier transport has changed, it makes sense to discuss how this affects the access (transport) links that connect to the MPLS backbone. The access circuit now becomes the transport between your sites and the carrier's MPLS router sitting on the MPLS backbone.
Interestingly enough, you can (in theory) connect your sites with multiple types of WAN access circuits, because a router sits between the access circuits and the MPLS backbone. For example, say you have a three-site WAN where each site has a local circuit option to the MPLS cloud. You could provision one circuit as ATM, another as frame relay and another as Ethernet, and each of the sites could talk to the others over the carrier's MPLS core using IP. Legacy WANs required like interfaces because the virtual circuits were built from customer router to customer router, not from customer router to provider router. You could mix circuit types this way, but it is not generally recommended. The point is that the transport options remain the same for legacy and MPLS WANs, but one is carried over routers (MPLS) and the other over frame and ATM backbones. (From MPLS transport options by Robbie Harrell)
Labeled transport MPLS
Labeled transport or carrier's carrier MPLS solutions allow one provider to utilize another provider's backbone for transport only. Carriers have been selling transport to other ISPs and carriers for years, so this is not a new concept. The difference is that the labeled transport service allows one carrier to transport another carrier's private IP traffic via MPLS labels. This allows for the extension of MPLS Layer 3 VPNs across geographically diverse locations without the purchase of expensive long-haul transport. In the past, if a carrier wanted to expand its geographic presence, it had to purchase backbone transport from other carriers or build out the infrastructure itself. In addition, availability requirements created the need for diversity and redundancy, which can be very costly. For instance, if you wanted to expand geographically from the United States to Europe or South America with diverse links, the cost to lease or build out the infrastructure required to interconnect the IP PoPs in each area would be very high.
With the advancements of MPLS technology in the form of label transport, carriers can sell IP transport via MPLS label exchange. The service is essentially an extended label switched path across a carrier's backbone that transparently delivers IP traffic from other carriers. The benefit of this service is that one carrier can purchase backbone transport across another provider's backbone that already has the redundancy and resiliency built in at the transport and IP layer. This can be significant in terms of expanding geographically to areas outside of the current provider's footprint. (From Labeled transport MPLS by Robbie Harrell)
A true benefit of MPLS technology is the ability to provide quality of service (QoS) guarantees over an IP backbone. QoS on an MPLS backbone is used to provide the predictable, guaranteed performance metrics required to transport real-time and mission-critical traffic. The providers have an overall QoS architecture that is used to deliver a subset of QoS services to each customer. The provider will have multiple classes of service that customers must align with in order to leverage the provider's MPLS offering. The provider must have an MPLS QoS architecture that provides end-to-end guarantees for each class of service for each customer. Cisco, for instance, has defined two models that can be used independently or together to provide these end-to-end guarantees. Cisco defines these as the point-to-cloud and point-to-point models.
The point-to-cloud or "hose" model allows the provider to provision an ingress committed rate (ICR) and egress committed rate (ECR) for each VPN. The ICR and ECR dictate how much bandwidth is allowed to enter and exit the service provider backbone within a VPN. The best way to describe the scenario is with an example. Let's say we have a VPN with four sites. Each of these sites can communicate with any of the other sites, forming an any-to-any mesh. The sites are labeled "Site 1" through "Site 4." I will use Site 1 as the basis for discussion. The ICR can be set to allow Site 1 to inject only 50 Mbps of traffic into the cloud. This traffic can be destined to any of the other three sites. The ECR can be set to allow only 30 Mbps of traffic to exit the cloud to Site 1 from the other three sites. These parameters can be set for each class of service. The provider's backbone will provide bandwidth and delay guarantees for the traffic thresholds as configured for the ICR and ECR. This model allows the provider's QoS to be transparent to the customer. The customer can dictate the amount of traffic that is sent to each of the provider's classes of service. The customer does not have to match the provider's classifications, and the customer's markings are preserved.
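The hose-model accounting in the example above can be sketched in a few lines of Python (a toy model, not a provider implementation; the site names and rates are the hypothetical ones from the example): each site's total injected traffic is checked against its ICR, and the total traffic headed toward a site is checked against its ECR, regardless of which sites it comes from.

```python
# Toy model of the "hose" QoS parameters from the four-site example.
# ICR caps what a site may inject into the cloud (to any destination);
# ECR caps what may exit the cloud toward that site (from any source).
ICR = {"Site 1": 50, "Site 2": 50, "Site 3": 50, "Site 4": 50}  # Mbps
ECR = {"Site 1": 30, "Site 2": 30, "Site 3": 30, "Site 4": 30}  # Mbps

def admitted(flows):
    """flows: list of (src, dst, mbps). True if no hose is oversubscribed."""
    ingress = {s: 0 for s in ICR}
    egress = {s: 0 for s in ECR}
    for src, dst, mbps in flows:
        ingress[src] += mbps
        egress[dst] += mbps
    return (all(ingress[s] <= ICR[s] for s in ICR) and
            all(egress[s] <= ECR[s] for s in ECR))

# Site 1 sends 20 Mbps each to Sites 2 and 3: 40 Mbps ingress, within its 50 Mbps ICR.
print(admitted([("Site 1", "Site 2", 20), ("Site 1", "Site 3", 20)]))  # True
# Sites 2-4 each send 15 Mbps toward Site 1: 45 Mbps egress exceeds the 30 Mbps ECR.
print(admitted([(s, "Site 1", 15) for s in ("Site 2", "Site 3", "Site 4")]))  # False
```

Note how the check never looks at site pairs, only per-site totals; that aggregate view is exactly what makes the hose model simpler and more scalable than the pipe model.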
The point-to-point or "pipe" model allows the provider to build virtual QoS pipes between the customer edge (CE) routers that are used to provide bandwidth and delay guarantees. This is analogous to legacy ATM and Frame Relay PVC meshes; however, here the provider is responsible for the virtual mesh. Once the virtual QoS tunnels are established, the provider can offer traffic engineering across the virtual mesh. Each tunnel has its own QoS characteristics, so CE-to-CE QoS guarantees are established prior to transmission of data. This is a more granular approach to QoS and adds complexity to the provider's configuration. The pipe model is not as scalable as the hose model because it requires CE-to-CE pipes for each customer. (From MPLS QoS models by Robbie Harrell)
Best practices for MPLS interoperability of customer QoS with provider QoS
When planning an internal QoS architecture that will utilize an MPLS VPN as the WAN backbone, it is beneficial to consider the provider's architecture. Service providers have enabled QoS with their MPLS VPN services, which allows them to offer multiple classes of service with SLA guarantees. The service provider's QoS architecture dictates how a customer's applications are serviced over the backbone. No matter what your internal QoS architecture is, it will have to be modified to match the provider's ability to deliver within its service parameters. The reality is that the provider has one QoS architecture while its customers may have many different ones. The provider is not going to adapt its architecture to the client's -- therefore it makes sense to understand what providers offer before committing to an internal QoS plan.
- Most providers support only three or four classes of service. Some provide five, but the majority fall into the first group. Limit your number of traffic classes to three or four. You can put any applications you want into these classes; the provider doesn't care. The provider will tell you what the performance guarantees are for each class, and it is up to you to decide which applications require which guarantees.
- Providers use DSCP for classification and marking. In most cases, when you deploy an MPLS VPN WAN, you will need to classify and mark your QoS traffic before you hand it off to the provider. Some providers offer managed services and will handle the edge router configurations; others provide templates. The customer is responsible for identifying important traffic and initiating a classification scheme. Use DSCP markings rather than IP precedence at the WAN router edge. Most providers follow some form of assured forwarding (AF) markings for their classes, so you can easily match the provider's scheme. If you don't know the provider's markings beforehand, use the DSCP standards. (From MPLS: Interoperability of customer QoS with provider QoS by Robbie Harrell)
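A small Python sketch shows what aligning with standard DSCP markings looks like in practice (the three-class scheme and application names are hypothetical; the codepoint arithmetic follows RFC 2597 for assured forwarding and RFC 3246 for expedited forwarding):

```python
# Hypothetical three-class scheme mapped to standard DSCP codepoints.
# AF class x with drop precedence y encodes as DSCP = 8*x + 2*y (RFC 2597);
# EF, typically used for voice, is DSCP 46 (RFC 3246).
def af(x, y):
    return 8 * x + 2 * y

APP_CLASSES = {
    "voip": 46,        # EF: low-latency queue
    "erp":  af(3, 1),  # AF31 = 26: business-critical data
    "bulk": 0,         # best effort
}

def tos_byte(dscp):
    """DSCP occupies the upper six bits of the former IPv4 ToS byte."""
    return dscp << 2

print(APP_CLASSES["erp"])          # 26
print(tos_byte(APP_CLASSES["voip"]))  # 184
```

The shift in `tos_byte` is why a DSCP of 46 appears as 184 (0xB8) in packet captures; knowing that mapping helps when verifying that your markings survive the handoff to the provider.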
Meeting QoS guarantees that ensure reliable voice transmissions can be the biggest challenge of deploying voice over IP (VoIP) traffic. Implementing MPLS can help enterprises rise to that challenge because the protocol offers network engineers a great deal of flexibility and the ability to send voice and data traffic around link failures, congestion and bottlenecks.
MPLS is useful as an enabler for VoIP because it provides ATM-like capabilities within an IP network. Unlike the expensive ATM links that would otherwise be required to support VoIP, MPLS provides guaranteed services utilizing IP quality of service on the carrier's backbone. This service and the ability to converge VoIP onto the data network present a tremendous opportunity to reduce operational costs and consolidate infrastructures.
Real-time services are application services that are susceptible to delay, packet loss and jitter. VoIP and video over IP are considered real-time applications. While other applications such as SAP are also vulnerable to delay, VoIP and video over IP (with its accompanying audio) are the real focus: if you cannot deliver these applications with a high degree of confidence that packets will not be dropped, delayed or jittered, you cannot deploy them at all. (From Carrier MPLS support for VoIP by Robbie Harrell)
Before VoIP came along, few companies ever created a full mesh, because there was a cost associated with each permanent virtual circuit (PVC). Instead, companies used a hub-and-spoke topology, requiring traffic from one remote office to another to pass through the hub site and consume both its inbound and outbound bandwidth. This was generally acceptable, because before VoIP one remote office rarely needed to talk to another.
MPLS security analysis
MPLS Security Analysis, Chapter 3 of the Cisco Press book MPLS VPN Security by Michael H. Behringer and Monique J. Morrow, describes how MPLS provides security (VPN separation, robustness against attacks, core hiding, and spoofing protection), how the different Inter-AS and Carrier's Carrier models work, and how secure they are compared to each other. It discusses which security mechanisms the MPLS architecture does not provide and how MPLS VPNs compare in security to ATM or Frame Relay VPNs.
However, packets don't fly directly between any two sites. In reality, your packets are probably riding a good old-fashioned frame relay or ATM network to get from your office to your carrier's closest MPLS POP. As someone with a keen interest in low latency, you'll want to ask some pointed questions about where the MPLS POP is actually located and what the Layer 2 path to get there looks like, because your packets could be taking a scenic route that won't show up on traceroutes.
Another significant difference between PVCs and MPLS is that MPLS really provides no mechanical equivalent of committed information rate (CIR), committed burst size (Bc) or excess burst size (Be). However, your WAN provider may like those concepts so much that it tries to replicate the functionality with some truly bizarre policing and remarking configurations. Make sure you understand how the provider has implemented QoS, exactly how it treats packets that exceed the committed data rate (CDR), and what the thresholds are.
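To see what such a policing configuration actually does to your packets, here is a simplified single-rate policer in the spirit of RFC 2697 (a sketch for intuition, not any vendor's implementation; the rates and sizes are made up): tokens refill at the committed rate, a burst within Bc conforms, overflow within Be is typically remarked down, and anything beyond that is dropped.

```python
# Simplified single-rate, two-bucket policer (loosely following RFC 2697's
# srTCM): Tc holds committed tokens up to Bc, and its overflow tops up the
# excess bucket Te up to Be.
class Policer:
    def __init__(self, cir_bps, bc_bytes, be_bytes):
        self.cir = cir_bps / 8.0            # token rate in bytes/second
        self.bc, self.be = bc_bytes, be_bytes
        self.tc, self.te = bc_bytes, be_bytes
        self.last = 0.0

    def police(self, size, now):
        self.tc += (now - self.last) * self.cir
        self.last = now
        if self.tc > self.bc:               # overflow spills into the excess bucket
            self.te = min(self.be, self.te + self.tc - self.bc)
            self.tc = self.bc
        if size <= self.tc:
            self.tc -= size
            return "conform"                # forwarded with its marking intact
        if size <= self.te:
            self.te -= size
            return "exceed"                 # often remarked to a lower class
        return "violate"                    # typically dropped

p = Policer(cir_bps=8000, bc_bytes=1500, be_bytes=1500)  # 1 kB/s committed
print(p.police(1500, 0.0))   # conform: the burst fits within Bc
print(p.police(1500, 0.0))   # exceed: committed bucket empty, excess covers it
print(p.police(1500, 0.0))   # violate: both buckets empty
```

The "exceed" outcome is exactly the case to ask your provider about: whether those packets are remarked, queued differently, or silently dropped.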
Finally, while it was possible for a provisioned PVC or SVC to change its path through the carrier's backbone, it's much more likely to actually happen with traffic-engineered paths in MPLS. You should keep track of how much delay your circuits have, and if you see this number suddenly and mysteriously change, there's a good chance your packets are taking a different path as a result of a failure or the availability of a better path. (From MPLS -- what voice managers need to know by Tom Lancaster)
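Tracking that delay doesn't need anything elaborate; a hypothetical watchdog (illustrative only, with made-up thresholds and samples) can compare each new latency measurement against a recent baseline and flag a sudden shift that suggests the path changed:

```python
# Hypothetical watchdog: flag a sudden shift in measured circuit delay,
# which can indicate the LSP was rerouted after a failure or reoptimization.
def reroute_suspected(samples_ms, window=5, threshold_ms=10):
    """Compare the latest sample to the average of the preceding window."""
    if len(samples_ms) <= window:
        return False                        # not enough history for a baseline
    baseline = sum(samples_ms[-window - 1:-1]) / window
    return abs(samples_ms[-1] - baseline) > threshold_ms

history = [31, 30, 32, 31, 30, 31]
print(reroute_suspected(history))          # False: delay is steady
print(reroute_suspected(history + [55]))   # True: the jump suggests a new path
```

Feeding this from periodic ping or IP SLA measurements gives an early warning that voice quality may be about to change even though the circuit itself never went down.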