In my tip on QoE benchmarks, we looked at some metrics prioritized by respondents in EMA research from last year. The rankings were topped by availability, followed by response time.
In this tip, we're going to take the bull by the proverbial horns and look in more depth at metrics truly targeted at QoE. The first thing to say is that in terms of customer priorities, our respondents got it wrong. Any number of studies have shown that end users care more about degraded response time than intermittent availability issues. This has more to do with human psychology than network engineering. End users typically believe that a complete failure in availability will soon be remedied, whereas they feel isolated and unsure that any action will be taken if their response time is degraded. Moreover, degraded response time tends to persist far longer than most availability issues, especially in 2008. So, in this case, their perception really is reality.
And by the way, end-to-end network latency is of course only an approximation of response time. In the next tip, we'll dive more deeply into unique technologies -- most notably synthetic transactions and observed transaction response. But for now, suffice it to say that while both apply, for true QoE it's important to capture observed response time from the end-station out.
Yet response time can also be problematic in other ways. Averaged response time over a day or a week or a month may not be very meaningful in itself. Inconsistent response times, even with faster overall averages, can be far more disruptive to the rhythms of working and communicating than somewhat slower but more consistent service delivery. And those terrible spikes that alienate users can occur within a single minute -- so capturing response time at that granularity not only helps to catch the moments that alienate users but also helps to provide insight into where the problems lie.
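To make the point concrete, here is a minimal Python sketch with made-up per-minute response-time samples. The average blends the good minutes with the terrible ones, while a percentile view exposes the spikes that users actually remember.

```python
import math

# Hypothetical per-minute response-time samples (ms); two bad minutes
# are buried among otherwise healthy ones.
samples_ms = [110, 120, 105, 115, 108, 2500, 112, 118, 109, 2600]

average = sum(samples_ms) / len(samples_ms)

# 95th percentile, nearest-rank method: sort, then take the value at
# rank ceil(0.95 * n).
ranked = sorted(samples_ms)
p95 = ranked[math.ceil(0.95 * len(ranked)) - 1]

print(f"average: {average:.1f} ms")  # blends good and terrible minutes
print(f"p95:     {p95} ms")          # shows what the worst minutes feel like
```

The numbers are illustrative only, but the pattern is the one described above: the same data set can look merely sluggish on average while hiding minute-long outages in experience.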
And response times, as central as they are, are just one metric for QoE. For instance, setting the appropriate response-time goal should be predicated not on pure-play speed but on appropriateness and cost. This, of course, is where SLAs can come in -- and they should be based on what you know to be true about the needs of your customers, not what you presume to be true. For some applications, such as email, your customers may care more about flexibility in accessing their mail between wireless and tethered environments than about absolute response time.
Even availability can be a challenge. The availability of the "network" is in itself a far from obvious discussion. In Figure 1, below, you can see a number of components -- servers, hubs, routers and database transactions -- that can all affect availability and, of course, performance. The math is easier for availability, which tends to aggregate across components, as Figure 1 demonstrates. Performance metrics can be more complex, and -- depending on the specifics of timing and parallel activities -- they may or may not aggregate.
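The aggregation math is worth seeing once. For components in series (each must be up for the service to be up), availabilities multiply -- so a chain of individually respectable components can still disappoint end to end. A minimal Python sketch, with hypothetical availability figures:

```python
# Hypothetical availability figures for components in the delivery chain.
# For serial components, end-to-end availability is the product of the
# individual availabilities.
components = {
    "client LAN": 0.999,
    "WAN link":   0.998,
    "router":     0.9995,
    "server":     0.995,
    "database":   0.997,
}

end_to_end = 1.0
for name, availability in components.items():
    end_to_end *= availability

print(f"end-to-end availability: {end_to_end:.4f}")
```

With these made-up numbers, five components each at 99.5% or better still combine to roughly 98.85% -- below every individual component, which is exactly why "the network is up" and "the service is available" are different claims.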
Figure 1: End-to-End Availability
And yes, MTTR and MTBF do affect end-user experience -- statistics that you need to understand whole-cloth through your service organization if not directly through your own internal metrics. In other words, if you really care about QoE, you should understand MTTR and MTBF as they affect the service consumer, not simply as they are relevant to one of the components in the network.
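The standard steady-state relationship -- availability = MTBF / (MTBF + MTTR) -- shows why repair time matters to the service consumer as much as failure rate. A small Python sketch with hypothetical figures:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Same failure rate, but a slow repair process drags down what the
# service consumer actually experiences. (Figures are hypothetical.)
fast_repair = availability(1000, 1)   # repaired in an hour
slow_repair = availability(1000, 24)  # repaired in a day

print(f"fast repair: {fast_repair:.4f}")  # ~0.9990
print(f"slow repair: {slow_repair:.4f}")  # ~0.9766
```

Two organizations with identical MTBF can deliver very different experiences, which is the point above: these statistics matter as they affect the consumer, not as properties of any one box.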
But other metrics will come into play with various degrees of relevance. These include flexibility and choice of service -- something in which network planning plays a role. Data security is another core value that people may not think of in QoE, but for certain applications and certain information, it can be a prime customer concern -- one for which, when cost is a factor, your customers may be willing to pay more, for more absolute guarantees. And speaking of cost -- well, cost effectiveness and even visibility into usage and cost justification are of increasing interest to business clients who may themselves be expected to contribute to the value of their service. Mobility is another QoE attribute, more important for some applications than others, as I've already indicated. And frankly, the list goes on.
The main point to remember is that each application and each customer set may suggest different QoE parameters. This means dialog (either direct or indirect through your service organization), and that dialog should be iterative, as business demands and requirements change. You can save yourselves a lot of time, money and grief just by making sure up front that you've invested in listening to your customers' top requirements, and then instrument to support those, versus scattering your efforts in an introverted and uninformed manner. In this way, QoE is a little bit like being a good partner in a marriage -- doing what's right for the two of you, not just doing what you believe to be the right thing without asking.
About the author: Dennis Drogseth is the vice president of Enterprise Management Associates (EMA), an IT management research, analysis and consulting firm. Having joined EMA in 1998, Dennis currently manages the New Hampshire office and has been a driving force in establishing EMA's New England presence. He brings 24 years of experience in various aspects of marketing and business planning for systems and network solutions. He directs a team of analysts who focus on the development of the Networked Services Management practice areas, spanning performance, availability and service management across enterprise and telecommunication markets. His team also addresses accounting, billing, QoS, outsourcing and other disciplines related to these markets.
This was first published in February 2009