What a recent HBR article gets wrong (and right) about innovation metrics

A recent HBR article (here) claims that using innovation metrics (especially financial metrics) too early will “suffocate” your most promising innovations. The author, Scott Kirsner, asserts that measuring a new product with the same metrics used for established products (e.g., return on sales) leads to implicit (or explicit) benchmarking by business leaders (e.g., the CFO)—and that this results in an unfavorable perception of the innovation.

The suggestion that using the wrong metrics can stifle innovation isn’t new. Financial metrics (e.g., forecast NPV) have long been blamed for stifling innovation—particularly early-stage innovation. Christensen and colleagues1 called them “innovation killers” in a 2008 HBR article, and earlier work2 highlights similar issues. And most innovation practitioners we’ve worked with can supply plenty of anecdotal evidence to the same effect.

Kirsner’s primary recommendation to avoid stifling innovation with the wrong metrics is to create a transition phase between the “garage” (R&D) and the “highway” (core business). During this transition phase, Kirsner recommends beginning with activity metrics (e.g., “number of prototypes created and tested”) and then eventually moving to metrics that demonstrate the impact or value of your innovations.

We have several issues with the article:

  1. It seems to suggest that measurement primarily serves one-way communication to those outside the innovation function (e.g., the CFO). Casting measurement as “fine-tuning a spreadsheet” understates its role when done well: it is the means of developing a shared basis of judgment between your team and your stakeholders. Good measurement informs and debiases key decisions amid the uncertainty and ambiguity that characterize innovation.
  2. It recommends using activity metrics (e.g., “number of customers interviewed”) in the “…early days of developing an idea…”. At best, activity metrics simply show you’ve been “busy.” At worst, they give the (false) impression that innovation activities aren’t adding value to the organization. They can also create perverse incentives3.
  3. It identifies specific metrics as the problem, when the issue is typically not the metrics themselves but how those metrics are used (as highlighted by several of the innovation professionals quoted in the article).

That said, we’re glad to see innovation measurement featured prominently, and there are points in the article we agree with. Measurement for innovation is a balancing act, and improving it can be challenging. And if firms are to improve their innovation performance, it’s critical that they get better at measurement. So, to that end, here’s our advice on how to ensure innovation metrics improve innovation performance rather than suffocate it.

1. Don’t use forecast outcome indicators at the front end of innovation. And don’t use activity metrics.

The front end of innovation is notoriously ambiguous—too ambiguous to support meaningful estimates of outcomes. Definitions of what constitutes the “front end” vary. We typically describe it as the set of activities you undertake to generate hypotheses for a target customer, a target need, and a new product, service, or other innovation.

It is, fairly obviously, impossible to estimate an outcome metric (e.g., an innovation’s forecast NPV) before you have generated these hypotheses. Until the point at which you can estimate outcomes, the shared basis of judgment for a given project or portfolio should be the learning generated. So rather than resorting to distracting activity metrics (as proposed in the HBR article), you need to measure learning—or as close a proxy to learning as you can get. If you’re using a consistent front end process (which you should be!), learning proxies can be developed for each step in that process. The table below provides an example.

2. During innovation project selection, evaluate a broad set of criteria and dig into the assumptions and hypotheses underlying quantitative outcome metrics.

The most common criticism leveled at financial metrics concerns the role they sometimes play in selection—decisions about which new products, services, or other innovations to fund. To understand why, consider a common approach to selecting innovation projects: estimate each proposed innovation’s risk-adjusted NPV (or its rNPV-to-cost ratio), rank the projects in descending order, and fund down the list until the budget runs out. This is precisely how not to use financial metrics. But note that the problem here is not the metrics themselves; it is how they are being used.
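To make that anti-pattern concrete, here is a minimal sketch in Python; the project names, figures, and budget are all hypothetical:

```python
# A deliberately naive funding rule: rank proposals by forecast rNPV
# and fund down the list until the budget runs out.
# Project names, figures, and the budget are hypothetical.
projects = [
    {"name": "Project A", "forecast_rnpv": 12.0, "cost": 4.0},  # values in $M
    {"name": "Project B", "forecast_rnpv": 9.5, "cost": 2.0},
    {"name": "Project C", "forecast_rnpv": 7.0, "cost": 5.0},
    {"name": "Project D", "forecast_rnpv": 3.5, "cost": 1.0},
]
budget = 7.0  # $M

# Rank in descending order of forecast rNPV...
ranked = sorted(projects, key=lambda p: p["forecast_rnpv"], reverse=True)

# ...and fund greedily until the budget is exhausted.
funded, remaining = [], budget
for p in ranked:
    if p["cost"] <= remaining:
        funded.append(p["name"])
        remaining -= p["cost"]

print(funded)  # ['Project A', 'Project B', 'Project D']
```

Every number feeding that sort is a point forecast; nothing in the process asks where the forecasts came from or how fragile they are.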

Firstly, forecast rNPV should never be the sole basis for selecting projects. A plethora of other factors should inform decision makers’ judgment about whether an innovation project is attractive—strategic fit, technical feasibility, strength and uniqueness of the value proposition, etc. Some organizations have had success using innovation scorecards (see this example from the people behind the business model canvas) to help reduce the focus on financial metrics.
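As an illustration only, a simple weighted scorecard might look like the sketch below; the criteria, weights, and 1-5 scale are hypothetical and not taken from the scorecard linked above:

```python
# Hypothetical scorecard: reviewers score each criterion on a 1-5
# scale; weights (summing to 1.0) combine them into a single score.
weights = {
    "strategic_fit": 0.25,
    "technical_feasibility": 0.20,
    "value_proposition_strength": 0.25,
    "financial_attractiveness": 0.15,
    "evidence_quality": 0.15,
}

def scorecard_total(scores):
    """Weighted average of criterion scores (1-5 scale)."""
    return sum(weights[c] * scores[c] for c in weights)

example_project = {
    "strategic_fit": 4,
    "technical_feasibility": 3,
    "value_proposition_strength": 5,
    "financial_attractiveness": 2,
    "evidence_quality": 3,
}
print(round(scorecard_total(example_project), 2))  # 3.6
```

The design point is that financial attractiveness enters as one weighted criterion among several, rather than as the ranking key.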

Secondly, even when other factors are considered, innovation project selection processes are dominated by an “advocacy” mindset—as highlighted by Gary Pisano4. Teams proposing innovations (and their supporters) are focused on building the case, so they emphasize what is known and positive over what is unknown and uncertain. The team and its reviewers are therefore working with different sets of information. When forecast outcome metrics are presented, reviewers tend to focus on the number itself rather than the analysis behind it. Yet the greatest value lies not in the number but in the underlying analysis, which is replete with assumptions and hypotheses. Understanding the qualitative information behind the most important of these assumptions (including the quality of the supporting evidence) can be extremely valuable for developing that shared basis of judgment.
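To see why the analysis matters more than the headline number, here is a minimal sketch (all figures are hypothetical assumptions) of how a forecast rNPV is typically assembled from probability-weighted, discounted cash flows:

```python
# Risk-adjusted NPV: probability-weight each year's forecast cash
# flow and discount it back to today. Every input below is a
# hypothetical assumption a reviewer could (and should) interrogate.
cash_flows = [-2.0, -1.5, 1.0, 4.0, 8.0]  # $M, years 0-4: forecasts, not facts
p_success = [1.0, 0.8, 0.5, 0.4, 0.35]    # assumed probability of reaching each year
discount_rate = 0.12                       # hurdle rate: a policy choice

def rnpv(flows, probs, rate):
    return sum(p * cf / (1 + rate) ** t
               for t, (cf, p) in enumerate(zip(flows, probs)))

print(round(rnpv(cash_flows, p_success, discount_rate), 2))  # ~0.25 ($M)

# Nudge a single assumption (year-4 survival: 0.35 -> 0.25) and the
# headline number flips sign.
p_success[4] = 0.25
print(round(rnpv(cash_flows, p_success, discount_rate), 2))  # ~-0.26
```

Changing one assumption flips the sign of the result, which is exactly why reviewers should probe the assumptions and their supporting evidence rather than anchor on the total.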

3. Change the norms you benchmark against, not necessarily the metric.

Newly launched products or services should not be compared with well-established “cash cows,” but with analogous innovations at similar points in their journey. The issue here is, again, not the metric but how it’s used—in this case, the norm or benchmark against which it’s compared. As with many other aspects of innovation performance reporting, a small amount of education (e.g., of senior leaders) and expectation setting may be necessary. In one example from an analogous context, a global manufacturing firm was just starting to track its product vitality index (PVI). They knew their current PVI would be low and were unsure how quickly they could increase it. So instead of setting a specific target to benchmark against, they decided to focus their leadership on the directionality of their PVI. Once they know how quickly they can move the needle, they’ll be well positioned to set an ambitious (but credible and achievable) target.
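For readers unfamiliar with the metric: a product vitality index is typically computed as the share of revenue coming from recently launched products, with the look-back window (often three to five years) being a firm-specific choice. A minimal sketch with made-up figures:

```python
# Product vitality index: share of current revenue from products
# launched within a look-back window. The window and all figures
# are hypothetical.
WINDOW_YEARS = 5
CURRENT_YEAR = 2024

products = [
    {"launched": 2021, "revenue": 12.0},  # revenue in $M
    {"launched": 2012, "revenue": 80.0},
    {"launched": 2023, "revenue": 3.0},
    {"launched": 2017, "revenue": 25.0},
]

new_revenue = sum(p["revenue"] for p in products
                  if CURRENT_YEAR - p["launched"] < WINDOW_YEARS)
total_revenue = sum(p["revenue"] for p in products)

pvi = 100 * new_revenue / total_revenue
print(f"PVI: {pvi:.1f}%")  # PVI: 12.5%
```

Tracked period over period, it is the direction of that percentage, not its level against an arbitrary target, that the firm in the example asked its leadership to watch.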

1—Christensen, C.M., Kaufman, S.P., and Shih, W.C. (2008). Innovation Killers. Harvard Business Review, 86/1: pp. 98-105.

2—Kerssen-van Drongelen, I.C. and Cook, A. (1997). Design principles for the development of measurement systems for research and development processes. R&D Management, 27/4: pp. 345-357.

3—Muller, J.Z. (2018). The Tyranny of Metrics. Princeton, NJ: Princeton University Press.

4—Pisano, G.P. (2019). Creative Construction: The DNA of Sustained Innovation. New York, NY: PublicAffairs.