Do Your Metrics Measure Up?
by Michael Perla
June 2003
At a recent networking function, I had a discussion with someone who worked in finance for one of the Big 4 accounting firms. Since he specialized in the valuation of companies, I asked him about the models he used; he brushed the question aside and told me it was all about the assumptions, not the models, per se. I couldn't agree more. I have often called the rise of measuring everything the deification of numbers. It's not that I disagree with the well-known axiom that you can't manage what you don't measure; it's just that I have to ask how, and why, you're measuring it.
That premise is the basis of this article. Although putting each of your metrics through its paces may not seem exciting, those measures are the basis for any business case you construct or objective you set.
Each metric or measure—marketing or otherwise—should be vetted in light of its reliability, validity, sensitivity, responsiveness, cost/benefit, comprehension, and balance.
Reliability
In research, reliability means repeatability or consistency. A measure is considered reliable if it would give you the same results over and over again, given that what you are measuring is not changing.
If you weigh yourself on a scale that is always 10 pounds off (and your weight is stable), you have a reliable measure; it just isn't valid for the intended purpose. But I'm getting ahead of myself. Each measure in your company should be tested to determine whether it's measured in a reliable and tenable fashion.
In other words, are people measuring the area using the same methods, processes, decision models, assumptions, and so on?
If the way you measure something is not clearly outlined, it leaves room for interpretation, introduces more "noise," or error, and is less likely to be considered reliable. The push for reliability is the fundamental basis for standardized processes, and reliability is a necessary but insufficient condition for a measure to be valid.
For example, in sales, pipeline analysis (e.g., I’m in the Qualify or Develop stage) is meaningless without a clearly defined sales process and stage gate criteria.
Test: What evidence is there that the measure is reliable and is being measured in a consistent, repeatable, and defendable manner?
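The scale analogy above can be sketched in a few lines of code. In this illustrative example (the weights, readings, and function names are hypothetical, not from the article), a scale that is always 10 pounds off shows zero spread across repeated readings (reliable) but a large bias (not valid), while a noisy scale shows the reverse:

```python
import statistics

TRUE_WEIGHT = 150.0  # hypothetical stable true value

# A scale that is always 10 pounds off: consistent (reliable) but biased (not valid).
biased_scale = [TRUE_WEIGHT + 10.0 for _ in range(5)]

# A scale that wanders widely around the true value: roughly unbiased but unreliable.
noisy_scale = [140.0, 162.0, 148.0, 159.0, 143.0]

def reliability_spread(readings):
    """Smaller spread across repeated readings means a more reliable measure."""
    return statistics.stdev(readings)

def bias(readings):
    """Distance of the average reading from the true value: the validity gap."""
    return statistics.mean(readings) - TRUE_WEIGHT

print(reliability_spread(biased_scale), bias(biased_scale))  # 0.0 10.0
print(reliability_spread(noisy_scale), bias(noisy_scale))
```

The point of the sketch is that the two properties are independent: repeatability tells you nothing about whether the measure is centered on the thing you actually care about.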
Validity
Although validity is often thought to mean accurate or reasonable, it must be put in context to be meaningful. A measure is valid for a particular purpose, and it must be reliable in order to be valid. Something may be valid for determining market share, but not for the valuation of the business. Within research, there are a number of ways to test the validity of a particular measure.
Two tests pertinent to business are content validity and predictive validity. Content validity looks at whether other subject-matter experts (finance or marketing personnel, for example) would agree on the use of the measure for its intended purpose.
Predictive validity, a form of criterion validity, looks at whether the measure is predictive of the construct it's intended to provide guidance about. This test is more applicable to a key performance indicator (KPI) that is meant to predict another macro measure, such as revenue.
Test: Assuming the measure is reliable, what evidence is there that it is measuring what it’s intended to measure?
Sensitivity
This criterion focuses on the sensitivity of the measure to changes in the underlying construct. For example, how much does the underlying construct (like campaign costs) need to change (1%, 10%, and so on) before it registers or is detected?
Test: How sensitive is the measure to changes in the underlying construct?
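One way to make the sensitivity question concrete is to look at the granularity at which a metric is reported. In this hypothetical sketch (the dollar figures and function names are illustrative, not from the article), campaign costs reported in whole $1,000 buckets cannot register a 1% change in a $50,000 budget, but do register a 10% change:

```python
def reported_cost(actual_cost, granularity=1_000):
    """Hypothetical metric: campaign cost reported in whole $1,000 buckets."""
    return (int(actual_cost) // granularity) * granularity

def registers(old, new):
    """Does the reported metric detect the underlying change at all?"""
    return reported_cost(old) != reported_cost(new)

baseline = 50_000
print(registers(baseline, baseline * 1.01))  # False: a 1% change is invisible
print(registers(baseline, baseline * 1.10))  # True: a 10% change is detected
```

Choosing the reporting granularity is, in effect, choosing how large a real change must be before your measure notices it.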
Responsiveness
This criterion focuses on whether the value of the measure changes immediately when the underlying construct changes. Today’s lingo would link this criterion to the real-time (or zero-latency) enterprise (RTE) concept.
Many executives would like to be able to pull up a dashboard of metrics on their business and have it be updated in real-time, with each metric being highly responsive to actual changes in their operating reality. How long does it take for your “check engine” light to come on?
Test: Does the value of the measure change immediately when the underlying construct changes?
Cost/Benefit
This criterion focuses on the costs and benefits of measuring a particular construct. For example, how much time does it take to collect, input, and analyze the requisite information (to determine sales cycle time or cost per lead, for example)? What benefits (better resource allocation, for example) accrue from measuring the construct?
Given the alternatives, do the benefits outweigh the costs given the time, focus, and analysis required to measure the construct?
For example, in the sales arena, numerous sales managers ask their sales representatives to collect and input information that they never follow up on or use in their assessment of the business. The costs of this scenario to employee productivity, focus, and trust far outweigh the benefits, given that the information is neither utilized nor paid attention to.
Test: Do the benefits of the measure outweigh all the costs involved in the collection, input, analysis, and reporting of the measure to the requisite stakeholders?
Comprehension
This criterion revolves around how easy it is for the requisite stakeholders to comprehend the measure. The measure may meet all of the other criteria, but if few people can understand it, it is unlikely to take root.
This has happened at some companies that instituted Economic Value Added (EVA), a value-based approach that charges managers a cost of capital for "renting out the balance sheet." EVA is more complicated (a matter of degree) than other approaches, and changes can be difficult to assess reliably and accurately from quarter to quarter. Similarly, if you look at incentive compensation in the sales area, most plans today have been reconfigured to be simpler, easier to understand, and more pragmatic.
The watchwords for today are simplicity, clarity, and priority.
Test: Can the measure be easily described and comprehended by the requisite stakeholders?
Balance
This criterion focuses on the portfolio of measures that you use in evaluating your business, unit, and/or functional area. The popularity of the Balanced Scorecard approach (based on a 1996 book by Robert S. Kaplan and David P. Norton entitled The Balanced Scorecard: Translating Strategy into Action) is in part driven by the need to identify other measures besides financial ones in evaluating the health of a business, hence the term “balanced.”
The balanced scorecard approach, for example, looks at measures oriented around the customer, the financials, internal processes, and learning and people development, along with pertinent variants of those four perspectives.
On the whole, a balanced set of measures should include those that are leading and lagging, causes and effects, in-process and end-process, and inputs and outputs, in each relevant area of the business.
Test: How balanced is the portfolio of measures used to assess the health of the business, unit, and/or functional area?
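A balance check of this kind can be automated once each measure is tagged along the dimensions above. In this sketch (the portfolio, tag names, and measures are hypothetical, not from the article), each measure carries a timing tag (leading vs. lagging) and a scorecard perspective, and a small helper reports which values each dimension covers:

```python
from collections import defaultdict

# Hypothetical portfolio of measures; names and tags are illustrative only.
portfolio = [
    {"name": "qualified leads",   "timing": "leading", "perspective": "customer"},
    {"name": "employee turnover", "timing": "lagging", "perspective": "learning"},
    {"name": "quarterly revenue", "timing": "lagging", "perspective": "financial"},
    {"name": "cycle time",        "timing": "leading", "perspective": "internal"},
]

def coverage(measures, dimension):
    """Group measure names by their value on one tagging dimension."""
    found = defaultdict(list)
    for m in measures:
        found[m[dimension]].append(m["name"])
    return dict(found)

print(coverage(portfolio, "timing"))       # both leading and lagging covered
print(coverage(portfolio, "perspective"))  # all four scorecard perspectives
```

A gap in the output (for example, no leading indicators at all) is exactly the kind of imbalance the test above is asking about.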
Summary
The above criteria can help you to better understand the pros and cons of each measure that is employed to monitor and track the health of a business, unit, and/or functional area. Each measure needs to be evaluated in the context of the portfolio of measures to be useful, as any single measure, just as any single asset class, may be unbalanced or risky in isolation.
The criteria mentioned above may seem like common sense, but they are not common practice. Many companies try to roll-up or trend “apples and oranges” or inconsistent data, leading to erroneous conclusions and risky strategies.
At the top level, a measure is a proxy for something: an IQ score is a proxy for one's intelligence, and employee turnover percentage is a (lagging) proxy for employee satisfaction (see the "map is not the territory" concept).
As these examples illustrate, most measures need to be triangulated, or combined and analyzed along with other measures, in order to present a useful and complete view of a construct or gap.
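The triangulation idea can be sketched as a simple decision rule. In this hypothetical example (the proxies, thresholds, and function name are illustrative assumptions, not benchmarks from the article), no single proxy for employee morale is trusted on its own; a concern is flagged only when at least two independent measures point the same way:

```python
def triangulate(turnover_pct, satisfaction_score, exit_negativity):
    """
    Hypothetical triangulation of three proxies for employee morale.
    Thresholds are illustrative assumptions, not real benchmarks.
    """
    signals = [
        turnover_pct > 15.0,      # lagging proxy: annual turnover percentage
        satisfaction_score < 3.0, # survey score on a 1-5 scale
        exit_negativity > 0.5,    # share of negative exit interviews
    ]
    # Flag a gap only when at least two independent proxies agree.
    return sum(signals) >= 2

print(triangulate(18.0, 2.6, 0.3))  # True: turnover and survey agree
print(triangulate(18.0, 4.2, 0.2))  # False: turnover alone is not conclusive
```

Requiring agreement between measures is one simple way to avoid acting on the noise in any single proxy.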
The essence of measurement is how you operationalize the measure, in other words, how you define and measure it in practice, as well as how you triangulate measures to accurately reflect a certain phenomenon, trend, or divergence.
Lastly, given that measuring is a tool that creates change, each measure should be tested rigorously so as not to dilute focus or waste time and energy on measures that provide little direction, are not actionable, and/or are not closely aligned with the goals and objectives of the organization.