Network monitoring tool makers have recently touted the addition of NetFlow technology to their wares, even though the technology has been around for several years.
Within the past month, folks like Netreo and Network General have made key product announcements surrounding their forays into NetFlow, a data collection feature built into routers and switches by Cisco and others. Apparent Networks, Coradiant, NetQoS, NetScout, Network Physics and others have been preaching the benefits of flow data for months -- in some cases, years.
Network monitoring, in general, determines whether network devices are working properly and whether packets are traversing the network at an acceptable speed. Adding NetFlow data to the mix lets networking admins measure round-trip time, packet loss and delay, while also showing how well applications and services are being delivered. The new tools monitor flow data, which ties traffic back to specific applications.
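As a rough illustration of how flow records tie traffic volumes back to applications, the sketch below aggregates bytes per application by mapping well-known destination ports. The record layout here is hypothetical and greatly simplified; real NetFlow v5/v9 exports are binary and carry many more fields (timestamps, interface indexes, TCP flags, and so on).

```python
from collections import defaultdict

# Hypothetical, simplified flow records as a collector might decode them
# from a router's NetFlow export. Each record summarizes one traffic flow.
flows = [
    {"src": "10.0.0.5", "dst": "10.0.1.9", "dst_port": 80,  "proto": "tcp", "bytes": 52000},
    {"src": "10.0.0.7", "dst": "10.0.1.9", "dst_port": 443, "proto": "tcp", "bytes": 310000},
    {"src": "10.0.0.5", "dst": "10.0.2.3", "dst_port": 445, "proto": "tcp", "bytes": 18000},
    {"src": "10.0.0.9", "dst": "10.0.1.9", "dst_port": 80,  "proto": "tcp", "bytes": 4000},
]

# Mapping well-known destination ports to application names is one simple
# way flow-based tools tie traffic volumes back to specific applications.
PORT_APPS = {80: "HTTP", 443: "HTTPS", 445: "SMB"}

def bytes_by_application(flows):
    """Sum bytes per application, bucketing unknown ports as 'other'."""
    totals = defaultdict(int)
    for f in flows:
        app = PORT_APPS.get(f["dst_port"], "other")
        totals[app] += f["bytes"]
    return dict(totals)

print(bytes_by_application(flows))
# {'HTTP': 56000, 'HTTPS': 310000, 'SMB': 18000}
```

Commercial tools do this at far larger scale and with deeper classification, but the principle is the same: per-flow counters are rolled up along whatever dimension the admin cares about.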
According to Yankee Group senior vice president Zeus Kerravala, the addition of flow-based monitoring, despite its wide availability for some time now, illustrates a shift in the way networking pros want to monitor their networks and what information they want from the tools they use.
"What you're seeing is a drive for network management vendors to not just tell you that there's a problem but actually tell you what the problem is and then help you solve it," Kerravala said.
He estimated that roughly 90% of troubleshooting time is spent isolating a problem, and the use of flow metrics can cut that time to a fraction.
"More intelligent management systems can cut the mean time to identification way down," he said. "So use of NetFlow can provide more intelligence."
Last month, Netreo released the latest version of its OmniCenter monitoring tools with the addition of NetFlow analysis, dubbed OmniCenter Flow. Netreo CEO Kevin Kinsey said the addition of NetFlow traffic monitoring lets IT gain visibility into network traffic patterns, tweak infrastructure, allocate resources and costs, and monitor traffic volumes.
OmniCenter Flow is available both as an appliance and in Software-as-a-Service form.
According to Jon Friese, senior network specialist at Mitsubishi Motors, the ability to see flow traffic has given him better insight into how his network is running, though it can often be complicated.
"NetFlow monitoring offers a lot of value for our IT organization, but it can be a tremendous hassle to deploy and manage," Friese said.
For the past few months, he said, Mitsubishi has used OmniCenter Flow.
"NetFlow is so verbose it's often difficult to readily identify the data we need," he said. "We were looking for something that would give us a clear view into our network traffic yet be easy to install and virtually maintenance and admin free."
Kinsey said the goal of OmniCenter Flow is to add a level of simplicity that was lacking from other NetFlow network monitoring tools. He said that in many instances, networking pros are confronted with interface and data overload.
OmniCenter Flow is controlled through an AJAX-enabled Web interface, and metrics are presented in graphical reports. Users can define criteria to visualize specific applications, sites, subnets, interfaces, conversations and user-traffic volumes. Site-based reporting gives visibility into remote-location traffic and applications, while long-term trending graphs can show the mix of applications over time, with a three-year history.
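The kind of user-defined, site-based criteria described above can be sketched as follows. The site definitions and record format here are hypothetical, meant only to show the idea of bucketing flows by subnet for per-site reporting, not any vendor's actual configuration model:

```python
import ipaddress
from collections import defaultdict

# Hypothetical site definitions -- the sort of user-defined criteria a
# flow-reporting tool lets admins configure for per-site visibility.
SITES = {
    "headquarters": ipaddress.ip_network("10.0.0.0/24"),
    "branch-east":  ipaddress.ip_network("10.0.1.0/24"),
}

# Simplified flow records: source address and byte count only.
flows = [
    {"src": "10.0.0.5",   "bytes": 52000},
    {"src": "10.0.1.12",  "bytes": 9000},
    {"src": "10.0.1.30",  "bytes": 21000},
    {"src": "192.168.5.4", "bytes": 500},
]

def bytes_by_site(flows):
    """Roll up flow byte counts by the site whose subnet contains the source."""
    totals = defaultdict(int)
    for f in flows:
        addr = ipaddress.ip_address(f["src"])
        site = next((name for name, net in SITES.items() if addr in net), "other")
        totals[site] += f["bytes"]
    return dict(totals)

print(bytes_by_site(flows))
# {'headquarters': 52000, 'branch-east': 30000, 'other': 500}
```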
Mitsubishi's Friese said he uses NetFlow monitoring daily to find bottlenecks and investigate bandwidth issues. Within a short time, he was surprised by the level of detail he could see in his network traffic.
"There's a lot more Microsoft network traffic than I thought," he said. "Why are some machines broadcasting that shouldn't be."
Friese said that being able to see spikes on an interface and the traffic on certain VLANs and subnets helps him find problems. He can also pinpoint traffic by user or application, or by using other metrics.
"It brings things to light," he said, adding that he can use historical comparisons to determine whether or not a traffic spike should be occurring. For example, at the end of the month, servers and interfaces are "really getting pumped," but if that level of activity happens at the beginning of the month, he said, he knows there's some kind of trouble. "Is this a normal spike or an abnormal spike?"
Network General last month also integrated NetFlow data into its network architecture, networkDNA, which aggregates network information from various vendor data sources. The vendor also released new APIs for its NetVigil and Visualizer lines that provide access to application and network performance data by pulling information from its networkDNA performance management database (PMDB).
NetFlow data added to the PMDB can be compared with business-level IT service availability and performance using baselines and trends, giving IT a single workflow for monitoring and managing data as well as a solid understanding of the relationships between application and network performance across the infrastructure.
James Messer, Network General's director of technology marketing, said the ability to interpret NetFlow gives insight into application performance. It provides a context for isolating and solving capacity-related issues without the complexity of managing varied sources of flow data. Users can review historical performance, find patterns, analyze anomalous trends, and better predict future network and application behavior, he said.
Kevin Watts, manager of voice and data services for the University of Alberta, said, "NetFlow delivers the granular information needed, allowing me to see how various applications impact my network performance as a whole."
Kerravala concluded that this visibility and granularity are essentially what networking pros are looking for when they turn to flow-based data.
"You can see down to a flow level," he said. "So you can get the same level of info as agent-based systems without the agents."