Voice and video application deployments are booming. Organizations reporting the most success with their voice and multimedia deployments share two key characteristics: They invest sufficient resources in up-front network design, and they build monitoring and management capabilities that let them predict problem areas proactively and react quickly to unforeseen problems.
In the first part of this expert lesson, we address the key requirements for the successful engineering of an enterprise network to support voice and multimedia applications. In the next section, we discuss best practices for a management strategy that meets typical requirements for fast troubleshooting and a high level of end-user satisfaction.
In this guide:
|Building the multimedia network|
Two applications that characterize multimedia networks are VoIP and video. Each requires particular consideration when designing networks. VoIP is unlike any other application ever deployed over data networks for one simple reason: For the last 40 years or so, most people have become accustomed to high call quality when using a landline telephone. Thanks to the reliability of the public switched telephone network (PSTN), as well as enterprise digital PBXs, users have an expectation that when they pick up a telephone handset, they will get a dial tone, the call will go through, and the call quality will be excellent. Thus, when replacing digital PBXs with VoIP, success depends primarily on the ability to recreate the current user experience. All the new features in the world won't make users happy if call quality is worse than it was with the previous system.
Delivering a high level of voice quality in a VoIP world depends mainly on three factors: latency, jitter and echo. Meeting all of these challenges requires a network infrastructure that can reliably deliver multimedia packets with a minimal amount of delay.
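Jitter, in particular, can be quantified at the receiving end. As a minimal sketch, the following implements the RFC 3550 interarrival-jitter estimate that RTP receivers use; the timestamps are illustrative values in milliseconds:

```python
# RFC 3550 interarrival jitter: a running average of the variation in
# packet transit time, smoothed with a 1/16 gain per packet.
def interarrival_jitter(send_times_ms, recv_times_ms):
    jitter = 0.0
    for i in range(1, len(send_times_ms)):
        # D = difference between the arrival spacing and the send spacing
        d = (recv_times_ms[i] - recv_times_ms[i - 1]) - \
            (send_times_ms[i] - send_times_ms[i - 1])
        jitter += (abs(d) - jitter) / 16.0
    return jitter

# Perfectly paced packets show zero jitter...
assert interarrival_jitter([0, 20, 40, 60], [5, 25, 45, 65]) == 0.0
# ...while one packet arriving 5 ms late nudges the estimate upward.
assert abs(interarrival_jitter([0, 20, 40], [5, 25, 50]) - 0.3125) < 1e-9
```

The 1/16 smoothing factor keeps the estimate stable against a single outlier while still tracking sustained degradation, which is why monitoring tools report it rather than raw per-packet variation.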
Video conferencing presents an even larger challenge. Not only do you have to ensure that latency and jitter concerns are met, but video bandwidth requirements range from 128 Kbps, for simple desktop conferencing, up to 6 Mbps per screen for immersive telepresence. A three-screen, two-room telepresence session could require up to 18 Mbps of available bandwidth per location. From a provisioning perspective, room-based and telepresence systems are fairly easy to engineer: just make sure you have enough bandwidth available to support all fixed locations. The introduction of desktop video presents more of a wild card because user locations, and thus bandwidth demands, can vary. In addition, many desktop video applications are based on a peer-to-peer model allowing direct connectivity between users, further complicating network architecture plans because of the lack of predictability of bandwidth requirements and traffic flows.
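The fixed-location arithmetic above can be sketched as a simple provisioning calculation. The 20% headroom figure below is an illustrative engineering assumption, not a standard:

```python
# Back-of-the-envelope WAN provisioning for fixed video endpoints.
def site_video_bandwidth_mbps(screens, mbps_per_screen, headroom=0.0):
    """Bandwidth a site needs for its video screens, plus optional headroom."""
    return screens * mbps_per_screen * (1.0 + headroom)

# Three-screen telepresence room at 6 Mbps per screen, as cited above:
assert site_video_bandwidth_mbps(3, 6.0) == 18.0
# Same room with a hypothetical 20% engineering headroom:
assert abs(site_video_bandwidth_mbps(3, 6.0, headroom=0.2) - 21.6) < 1e-9
```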
One-way video is more forgiving, since the impact of jitter and latency can be masked by buffering at the receiving end. Still, even streaming or surveillance video can require large amounts of bandwidth, depending on desired quality.
Above the network layer, many organizations face compliance and governance mandates that require recording and archiving of voice and video sessions, meaning additional storage needs as well as systems for cataloguing and indexing stored files. Video sessions alone can require huge amounts of disk space, with an average one-hour video session needing anywhere from 300 to 600 megabytes, depending on quality.
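A rough sizing exercise shows how quickly archive requirements accumulate. The session counts and retention period below are hypothetical; the 450 MB/hour rate is the midpoint of the range cited above:

```python
# Rough archive sizing for recorded video sessions.
def archive_gb(sessions_per_day, hours_per_session, mb_per_hour, days):
    """Total storage in GB for a given recording volume and retention window."""
    return sessions_per_day * hours_per_session * mb_per_hour * days / 1024.0

# Ten one-hour sessions a day, retained for 90 days, at 450 MB/hour:
assert round(archive_gb(10, 1, 450, 90), 1) == 395.5
```

Even this modest recording volume approaches 400 GB per quarter, before any redundancy or indexing overhead.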
Supporting video and VoIP
Fortunately, a number of technologies and architectural approaches exist to enable network managers to support video and VoIP on both the WAN and the LAN.
In the WAN, the standard of choice is MPLS (Multiprotocol Label Switching), with more than 74% of participating organizations in the recent Nemertes benchmark, Advanced Communication Services, relying on it as their primary WAN technology. MPLS enables network architects to leverage class-of-service features to ensure that priority is given to latency-sensitive traffic types. MPLS's any-to-any architecture means that every site on an MPLS cloud is one hop away from any other, eliminating the hub-and-spoke network designs that are ill-suited for peer-to-peer traffic. Finally, MPLS offers cost savings for many organizations versus existing dedicated-line, frame relay or ISDN services. Many organizations also deploy WAN optimization products to provide granular control over application performance.
Many MPLS service providers support IP multicast, a technology useful for large one-way video applications to limit the need for all session participants to establish their own connections with the video server. Instead, using multicast, the network replicates traffic from a single source and limits traffic flows to only those users subscribing to a particular feed. Enterprises with large-scale video distribution needs can take advantage of content-delivery services offered by vendors such as Akamai, AT&T and Savvis, to name a few. These services take video streams and replicate them across a network of global servers, allowing receivers to obtain streams from their closest distribution point, thus reducing the need for enterprise video servers to support large numbers of individual connections.
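The savings from multicast come down to how many copies of the stream the source must originate. As a first-order model (ignoring per-branch replication inside the provider network), with hypothetical figures:

```python
# Streams the video source must originate for a one-way broadcast:
# unicast serves every viewer individually; multicast sends one copy
# and the network replicates it toward subscribed receivers.
def source_streams(viewers, multicast):
    return 1 if multicast else viewers

def source_load_mbps(viewers, stream_mbps, multicast):
    return source_streams(viewers, multicast) * stream_mbps

# A 2 Mbps all-hands stream to 500 viewers:
assert source_load_mbps(500, 2.0, multicast=False) == 1000.0  # 1 Gbps out
assert source_load_mbps(500, 2.0, multicast=True) == 2.0      # one copy
```

The same arithmetic explains the appeal of content-delivery services: pushing replication out to distribution points keeps the origin's load flat as the audience grows.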
The LAN gets a bit less attention, since many still assume that Ethernet provides ample bandwidth and prioritization isn't necessary. However, given that many uplinks are over-subscribed, and given the aforementioned growing bandwidth requirements for applications such as HD video and telepresence, even in the LAN, a class-of-service strategy is necessary to meet performance requirements. Most approaches rely on Layer 2 prioritization, using standards such as 802.1p/802.1Q to separate voice packets into their own virtual LAN. Ethernet switches prioritize voice and/or video VLANs over other non-latency-sensitive traffic. Layer 2 prioritization is matched to Layer 3 prioritization schemes, typically based on DiffServ code point (DSCP) markings that tag packets with their appropriate priority level.
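At Layer 3, the marking itself is just six bits in the IP header's TOS/Traffic Class byte. A minimal sketch of how an application might set a standard DSCP value on its own traffic (the network must still be configured to trust and act on the marking):

```python
import socket

# DSCP occupies the upper six bits of the TOS byte, so the socket-level
# value is the code point shifted left by two.
def dscp_to_tos(dscp):
    return dscp << 2

EF = 46    # Expedited Forwarding: the standard marking for voice bearer traffic
AF41 = 34  # Assured Forwarding 41: commonly used for interactive video

assert dscp_to_tos(EF) == 0xB8
assert dscp_to_tos(AF41) == 0x88

# Mark a UDP socket's outbound packets as EF (no traffic is sent here).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(EF))
sock.close()
```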
Resiliency approaches for multimedia traffic vary by individual organizational requirements. For voice, the typical approach is a local survivable gateway that may be embedded in a branch office router or delivered as a standalone appliance. This device enables the remote site to continue to place and receive calls even in the event of a WAN failure. Within larger sites, resiliency approaches generally involve provisioning of backup power sources for switches, routers, gateways, and video and voice servers, as well as redundant servers for immediate failover response.
|Managing the multimedia network|
Once a network has been designed to accommodate multimedia, what then? One of the overlooked areas of network design is ensuring that the network can be adequately managed so that it delivers -- and, more importantly, continues to deliver -- a minimum acceptable level of service to end users. Ideally, that level of service exceeds the expectations of the user, but at the very least it must deliver the services that the user wants at a level of quality that ensures that the services are usable.
However, managing a multimedia network is not as simple as slapping on some special-purpose tools for application performance management and monitoring. Managing for multimedia requires management in depth, with particular attention given to ensuring a stable platform of functionality for the higher levels of service required by multimedia.
Management in depth begins with element management. Element management depends on telemetering every network-attached device so that the performance of that device can be monitored. Tools to do this are generally provided by the infrastructure vendors such as Cisco and Juniper, and each of these tools provides good telemetry on vendor-specific devices and basic telemetry on devices from other vendors.
Information and control at this level is essential in a multimedia environment where the condition of a network element such as a server or router can determine the rate at which it is able to transmit a multimedia data stream. Using the appropriate management tools allows IT to determine whether a degrading element will adversely affect a data stream and correct the problem before the user is aware of it.
Probably even more important than simple element management is network management. Management tools at this level provide insights into complex problems that arise when devices communicate over a network, such as the issues introduced in router buffers by noise or packet loss along transmission paths. Technology to do this has been around for nearly 20 years now and can be found in management platforms like HP OpenView, BMC Patrol, IBM Tivoli, EMC Smarts and CA EITM.
Although such management platforms offer a "single pane of glass" to manage the networked infrastructure, they often fail to provide a customized look at infrastructure particular to multimedia. For this reason, the platforms are often augmented with tools designed specifically to manage multimedia.
IP telephony monitoring and management
In the case of VoIP, switch vendors -- among them Alcatel-Lucent, Avaya, Cisco, Mitel, NEC, Nortel, ShoreTel and Siemens -- provide varying levels of IP telephony monitoring and management. The majority of enterprises rely upon their IP telephony switch vendors to monitor their voice performance.
In addition, IP telephony specialty tools, built from the ground up to monitor, manage and troubleshoot IP telephony as an application, are often added to the vendor offerings. Vendors in this space include Brix Networks, Infovista, Integrated Research (better known by its product name, Prognosis), NetIQ, and -- most recently -- EMC, through a partnership with Integrated Research to use its Prognosis products.
Important concepts in network management are fault isolation and root cause analysis. Fault isolation can be far from trivial when elements contend for network resources. Simple slowdowns in application performance can often be traced through several elements and involve a great deal of data collection.
Fault isolation is built on root cause analysis, which provides automation to determine why a fault may be occurring. The major platforms all contain powerful root cause analysis engines. HP OpenView, for example, contains a root cause analysis engine that was originally obtained from Riversoft, while EMC incorporates an engine originally developed by SMARTS. Root cause engines are highly compute-intensive and require substantial servers to run effectively.
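The core idea behind these engines can be illustrated with a toy suppression rule over a dependency graph: an alarm is treated as a symptom rather than a root cause if something it depends on is also alarmed. The topology and alarms below are hypothetical, and real engines use far richer correlation models:

```python
# Toy root-cause suppression over a dependency chain.
def root_causes(depends_on, alarmed):
    """Alarmed nodes whose upstream dependency is not itself alarmed."""
    return {node for node in alarmed
            if depends_on.get(node) not in alarmed}

depends_on = {"crm-app": "app-server",
              "app-server": "access-switch",
              "access-switch": "core-router"}
alarms = {"crm-app", "app-server", "access-switch"}

# Only the switch has no alarmed upstream dependency, so the engine
# reports it as the root cause and suppresses the downstream symptoms.
assert root_causes(depends_on, alarms) == {"access-switch"}
```

Collapsing three alarms into one actionable cause is exactly what makes these engines valuable, and also why they demand so much topology data and compute.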
A principal issue associated with both fault isolation and root cause analysis is the overhead that they introduce into the network. Both capabilities call for highly telemetered infrastructure. In some cases, this requires the deployment of active agents or appliances to gather telemetry. This can impose a significant overhead on network investment. The result is that most networks are under-telemetered.
Managing application performance
Once the network is adequately monitored and managed, tools can be added to manage application performance. These typically take the form of application performance monitoring tools and bandwidth and performance management applications.
Performance monitoring sounds as if it should be fairly simple. In any meshed, internetworked infrastructure, however, it can be very hard to do. Typically, tools approach performance monitoring from two directions: modeling and synthetic transactions. With modeling, a model of the network is constructed in the tool's memory, telemetry is taken at specific loading points, and the measurements are applied to the logical model. When the model of the network chokes, it is a safe assumption that the network itself is also experiencing difficulties. The model can even be operated on as if it were the network to assist in troubleshooting and corrections.
Synthetic transactions involve putting transactions on the network that exercise specific parts of the network to see what kind of response is generated. These transactions are synthetic, in the sense that they do not represent real user demand activities, but they are real in the sense that they use the actual applications under consideration and ride the same network that real transactions use.
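A synthetic-transaction monitor boils down to running a scripted probe on a schedule, recording its latency, and flagging SLA breaches. In this skeleton the probe is a trivial stand-in; a real probe would exercise an actual application call, such as placing a test VoIP call or fetching a page:

```python
import time

def measure(probe, runs=5):
    """Run the probe repeatedly and record each run's latency in seconds."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        probe()
        latencies.append(time.perf_counter() - start)
    return latencies

def breaches(latencies, sla_seconds):
    """Return the runs that exceeded the SLA threshold."""
    return [l for l in latencies if l > sla_seconds]

samples = measure(lambda: sum(range(1000)))  # trivial stand-in probe
assert len(samples) == 5
assert breaches(samples, sla_seconds=1.0) == []
```

Because the probes are scheduled rather than user-driven, they catch degradation at 3 a.m. as readily as at peak load, which is precisely their value over passive monitoring.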
Above performance monitoring lives application performance management. This is the level at which multimedia applications are managed. While conventional transaction traffic may require some performance management, multimedia applications almost certainly will, because multimedia is highly sensitive to delays introduced by network loading. Application performance management can be delivered as a standalone product or embedded within a WAN optimization platform from vendors including Blue Coat, Cisco, Coyote Point, Expand Networks, F5 and Riverbed.
One approach that these vendors use to manage performance is to give priority to multimedia applications for network resources. Such application acceleration tools read the headers on data packets to identify traffic types, then give preferential treatment to the ones with headers that correspond to delay-sensitive traffic. Typically, application acceleration tools require that some thought be given to the status that various kinds of traffic will have so that priority tables can be constructed and applied to applications as they cross the network.
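The classify-then-prioritize logic can be sketched as strict-priority queuing: classify each packet by its DSCP marking, then always drain higher-priority queues first. The queue names and the DSCP-to-queue table below are illustrative, and real appliances combine strict priority with weighted schemes to avoid starving low-priority traffic:

```python
from collections import deque

QUEUE_FOR_DSCP = {46: "voice", 34: "video"}   # everything else: best effort
QUEUE_ORDER = ["voice", "video", "best-effort"]

queues = {name: deque() for name in QUEUE_ORDER}

def enqueue(packet):
    """Classify on the DSCP marking and place the packet in its queue."""
    queues[QUEUE_FOR_DSCP.get(packet["dscp"], "best-effort")].append(packet)

def dequeue():
    """Strict priority: voice drains before video, video before best effort."""
    for name in QUEUE_ORDER:
        if queues[name]:
            return queues[name].popleft()
    return None

for pkt in [{"id": 1, "dscp": 0}, {"id": 2, "dscp": 46}, {"id": 3, "dscp": 34}]:
    enqueue(pkt)

# Despite arrival order, the EF-marked voice packet leaves first.
assert [dequeue()["id"] for _ in range(3)] == [2, 3, 1]
```

Building the priority table is the part that requires the up-front thought described above: the code is trivial, but deciding which applications belong in which queue is an organizational decision.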
The downside of application acceleration is that there is a certain amount of overhead associated with predefining application priority. Priorities for multimedia are rarely an issue, but priorities for non-multimedia traffic can be. For example, is email really a lower priority than database access? Who will scream if certain applications run slower in the presence of VoIP traffic?
As can be seen, managing for multimedia involves much more than applying some special-purpose tools to video or voice applications. Most of the tools necessary to ensure quality of service depend on the foundation of good element and network management. Consequently, the investment required to adequately manage multimedia can be considerable. Research from Nemertes shows that costs to manage VoIP can range from $25,000 for small companies, to $50,000 for midsized companies, to several hundred thousand dollars for large companies, and as much as $2 million for global enterprises.
The bottom line is that enterprises must manage at every level of the networked infrastructure to ensure service quality. Management in depth is a requirement and will ensure that multimedia works as intended.
About the authors:
Irwin Lazar is the principal analyst and program director for unified communications and collaboration at Nemertes Research. His background is in network operations, network engineering, voice-data convergence, and IP telephony. A Certified Information Systems Security Professional (CISSP) and sought-after speaker and author, Mr. Lazar is a columnist for No Jitter and Collaboration Loop, as well as the now-defunct Business Communications Review magazine. He is a regular speaker at events such as Interop, VoiceCon, and Enterprise 2.0. Mr. Lazar serves as the conference director for FutureNet (formerly MPLScon), the chair for the Network World IT Roadmap Web 2.0 track, and is on the advisory board for the Enterprise 2.0 conference.
Mike Jude is a research analyst with Nemertes Research, where he advises enterprises, carriers and vendors, conducts research and delivers strategic seminars. His area of expertise at Nemertes is wireless technologies and mobility strategies. Dr. Jude brings 30 years of experience in technology management in manufacturing, wide-area network design, intellectual-property management, and public policy. Jude holds degrees in electrical engineering and engineering management, as well as a doctorate in decision analysis. Jude is a respected author of numerous industry-defining studies and has written columns for Business Communications Review, eWeek, TechTarget and Network World Fusion. He is the co-author of The Case for Virtual Business Processes: Reduce Costs, Improve Efficiencies and Focus on Your Core Business.
This was first published in November 2008