
Product Forecasting and the Planning Fallacy – Enrich Consulting

Published by Dr. Richard Sonnenblick

"Experience shows that what happens is always the thing against which one has not made provision in advance."

— John Maynard Keynes[1]

An old cliché in forecasting is that when it comes to the single-valued forecast, the only thing you can say with certainty is that it's wrong. Beyond producing numbers that are plain wrong, a reliance on single-valued forecasts suffers from these problems:

  • Communicates over-confidence about the forecast and your knowledge of the future
  • Discourages a team from seriously considering events that may increase or decrease project value
  • Fosters complacency about the need to develop options and contingency plans

I have a nagging suspicion that none of what I’ve just written is a surprise to you, even if you’re one of those single-valued forecasters. It’s ironic that many people avoid the topic of uncertainty because they are…uncertain about what to do about it. The popular literature on decision-making has only raised awareness of how challenged we all are at estimating uncertainty and risk, and it’s easy to rebuff Monte Carlo simulation as a lot of statistical mumbo-jumbo. We in turn are not surprised when we speak with R&D teams about forecasting and hear a variant of “we don’t estimate uncertainty because we just don’t know enough.”

This brings me to "the planning fallacy," a phrase coined decades ago by psychologists Daniel Kahneman and Amos Tversky.[2] They identified the "inside view" as detrimental to forecasting and decision-making. While the phrase "inside view" might be unfamiliar, I'm sure you can recognize the behavior: in the thick of planning a project, the tendency is to focus on what should happen rather than what could happen. Kahneman and Tversky observed that a reliance on the inside view leads to an under-representation of risk in decision-making. Forecasting from the complementary "outside view" uses past data on similar projects to inform the current exercise, providing a more realistic frame for unpredictable future events. The key to establishing an effective outside view is the use of reference class forecasting.[3]

Reference class forecasting involves compiling historical information on projects similar to your current project, and calculating the historical range for critical forecast variables. In Thinking, Fast and Slow, Kahneman called reference class forecasting “the single most important piece of advice regarding how to increase accuracy in forecasting.” Using the historical range puts you in a more informed position to estimate input values for your current endeavor’s forecast. Your input should only deviate considerably from the historical range if your current project is substantially different from those in the historical record.
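
If your historical projects live in even a simple table, assembling a reference class takes only a few lines of analysis. Here is a minimal sketch in Python; the file name and columns (peak market share for each past launch) are hypothetical, standing in for whatever your own records contain:

    import pandas as pd

    # Hypothetical table of past projects; the columns are illustrative assumptions:
    # project, launch_year, peak_market_share (fraction of the addressable market)
    history = pd.read_csv("historical_projects.csv")

    # The reference class: the distribution of peak market share across past launches
    p10, p50, p90 = history["peak_market_share"].quantile([0.10, 0.50, 0.90])
    print(f"Historical peak share: P10={p10:.0%}, median={p50:.0%}, P90={p90:.0%}")

    # Compare the current project's single-valued estimate against that range
    current_estimate = 0.45  # the team's proposed peak share
    if not p10 <= current_estimate <= p90:
        print("Estimate is outside the historical 10th-90th percentile range; "
              "document why this project should differ from its reference class.")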

Here are some examples of how we’ve used reference class forecasting to improve the accuracy of clients’ project forecasts:

  • Project: Market share for future product introductions
  • Reference class: Historical market share uptake curves
    While working with a chemical manufacturer's project teams, we regularly encountered estimates of sales growth that would capture a majority of a market within just a few years of product launch. We collected past sales data from the company's financial system and compared distributions of historical sales and market share estimates to the project teams' forecasts. Many project teams quickly tempered their sales forecasts when they realized their initial estimates would make their product "the fastest growing launch in company history." (A sketch of this kind of comparison appears after this list.)
  • Project: Risk of future drug candidates
  • Reference class: Clinical drug trial failure rates
    At a pharmaceutical company, we used historical data from both within and outside the company to develop a range of failure rates for drug candidates in different disease areas at different phases of development. Forecasters who deviated substantially from these ranges in their forecasting had to present a compelling case for the difference. While management didn’t accept all of these justifications, they always led to productive conversations about risk and opportunity.
  • Project: Future product price reductions
  • Reference class: Historical price declines for a set of high-technology products
    Managers at a microprocessor company watched many of their project teams forecast three- and five-year sales cycles for their chipsets. This didn't seem right, but the managers lacked the evidence to back up their suspicions. We undertook a price-per-compute-unit assessment covering ten years of price erosion across their product line. As the forecasts were adjusted, an entire division rethought its product line strategy.

In each of these cases, the aggregated historical information was a revelation, helping forecasters temper their optimistic forecasts by grounding them in the reality of the past.
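
To make the first example concrete, here is a minimal sketch of that comparison in Python. The table of historical uptake and its column names are hypothetical; the idea is simply to ask where the team's estimate would rank among every launch the company has ever made:

    import pandas as pd

    # Hypothetical history: one row per past launch, with the market share
    # achieved three years after launch (column names are illustrative)
    uptake = pd.read_csv("historical_uptake.csv")  # columns: product, share_year_3

    team_estimate_year_3 = 0.60  # the project team's proposed year-three share

    # Percentile rank of the proposed estimate within the historical record
    rank = (uptake["share_year_3"] < team_estimate_year_3).mean()
    print(f"The proposed uptake would beat {rank:.0%} of all past launches.")

    # A rank near 100% is the "fastest growing launch in company history" signal
    if rank > 0.95:
        print("This forecast implies a nearly unprecedented launch; revisit the assumptions.")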

How is Reference Class Forecasting Different From Benchmarking?

If you are a seasoned forecaster, you might be thinking by now that reference class forecasting is merely benchmarking gussied up in a new suit. You are partially right, in that both benchmarking and reference class forecasting rely on historical information on similar projects to provide an "outside view" to the analyst or forecaster working on the project. The difference is that reference class forecasting focuses on the distribution (or range) of outcomes, whereas benchmarking usually focuses on the average value from the historical record.

By focusing on the distribution, reference class forecasting acknowledges that the truth of this new project is uncertain, but is likely to fall somewhere within the range of past estimates. The analyst can select the median value from historical data, or can justify a higher or lower value based on a comparison of the current project to the historical projects used, adjusting for inflation or other factors as needed. Alternatively, the analyst can use the reference class distribution as an input to the forecast, and use Monte Carlo to propagate the uncertainty in that distribution through the forecast model. The resulting project value forecast will be a range as well—a direct reflection of the outcomes of past projects.
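
To illustrate the second approach, here is a minimal sketch in Python. The revenue model is deliberately a toy, and the historical shares, market size, and price are made-up inputs; it simply shows how resampling a reference class and pushing the draws through the forecast yields a range rather than a point:

    import numpy as np

    rng = np.random.default_rng(seed=42)
    n_trials = 10_000

    # Reference class: peak market shares observed across comparable past launches
    historical_shares = np.array([0.08, 0.12, 0.15, 0.18, 0.22, 0.25, 0.30, 0.35])

    # Monte Carlo: resample the historical record (a simple bootstrap) and
    # propagate each draw through a toy forecast model
    share = rng.choice(historical_shares, size=n_trials, replace=True)
    addressable_market = 500_000  # units per year (assumed)
    net_price = 120.0             # dollars per unit (assumed)
    annual_revenue = share * addressable_market * net_price

    p10, p50, p90 = np.percentile(annual_revenue, [10, 50, 90])
    print(f"Annual revenue: P10=${p10:,.0f}, median=${p50:,.0f}, P90=${p90:,.0f}")

In practice you would adjust the historical record for inflation and comparability, or fit a distribution to it, but even a raw bootstrap keeps the forecast honest about its spread.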

Where to Start? An Example

Our best plan is to plan for constant change and the potential for instability, and to recognize that the threats will constantly be changing in ways we cannot predict or fully understand.

— Timothy Geithner[4]

As you might expect, the biggest challenge with reference class forecasting is compiling sufficient historical data to inform all of the uncertain parameters used by your current forecasting methodology.[5]
The good news: You probably have a long history of forecasting sales for products in the R&D pipeline. The bad news:

  • Those forecasts are probably not consolidated in one place
  • Those forecasts have not been reconciled against actuals at the net sales level
  • Those forecasts have not been reconciled against actuals at the parameter level (market share, addressable population, pricing, etc.)

We recommend starting modestly, with a review of net sales forecast error across the historical portfolio. The sales variance (actual vs. forecast) can be expressed as a distribution, introducing the R&D team to their first reference class: sales forecast errors. Below is an example of one company's foray into this type of analysis, expressed as a set of probability bands.

Errors in Product Sales Forecasts: These probability bands summarize forecast error across more than 130 products, some already launched at the time of the forecast and some still under development. Isolating just the under-development forecasts yields even larger forecast errors.

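Here is a minimal sketch of how bands like these might be assembled, assuming a hypothetical reconciliation table with one row per product per year and illustrative column names:

    import pandas as pd

    # Hypothetical reconciliation table; columns assumed for illustration:
    # product, years_since_forecast, forecast_sales, actual_sales
    sales = pd.read_csv("sales_forecast_vs_actual.csv")

    # Forecast error as a fraction of actuals: positive means the forecast exceeded actuals
    sales["error_pct"] = (sales["forecast_sales"] - sales["actual_sales"]) / sales["actual_sales"]

    # Probability bands: P10, median, and P90 of error, by years since the forecast was made
    bands = (sales.groupby("years_since_forecast")["error_pct"]
                  .quantile([0.10, 0.50, 0.90])
                  .unstack()
                  .rename(columns={0.10: "P10", 0.50: "median", 0.90: "P90"}))
    print(bands)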

In the chart above, the 10th percentile and median forecast errors only look respectable because they are being dwarfed by the 90th percentile error. The 10th percentile error varies between -20% and -65% and the median varies between 15% and 40%. The forecasting team gleaned several important insights from this analysis:

  • We have a tendency to dramatically overestimate the sales potential of our new products. This is partly due to optimism, and partly due to motivational biases inherent in new product development.
  • Outsized sales estimates lead to a sense of complacency around our pipeline and may mislead executives in strategic discussions about new product planning
  • More realistic forecasts would motivate teams to discuss contingencies and back-up plans more often, which is a good thing: we would become more responsive to feedback during development and product launch
  • We should explore what aspects of each forecast led to the overall errors shown in the chart above, so we can better understand what we think we know, but apparently don’t

In summary, these insights motivated the firm to become more honest, more risk-aware, and more responsive as problems appear during development and product launch. This analysis was just one step in their portfolio journey, but it was an important step because it began an honest investigation into uncertainty (and error) in their new product forecasts.

If you are interested in more examples of forecasting variance, check out this blog post.

Don’t Count on the Bullseye

Single-valued forecasting is like counting on nothing but the bullseye. A bullseye is great, but more likely than not you're going to land somewhere in a range around the center. Don't delve into Monte Carlo or other techniques for considering uncertainty until you've done your homework and understand the plausible ranges for sales, market share, pricing, and other variables in your forecast domain. Reference class forecasting is a great place to start your journey into considering uncertainty, helping you base your uncertain estimates on the relevant history of products most similar to those in your R&D pipeline today.

If you think the idea of a reference class forecast is compelling, but you aren’t sure where to start, drop us a line and we can help you get started.

[1] Letter to Jacob Viner, June 9, 1943, in Collected Writings of John Maynard Keynes, ed. Donald Moggridge, vol. 25 (London: Macmillan, 1980).
[2] Daniel Kahneman, 2011, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux), p. 251.
[3] Flyvbjerg, B., 2006, "From Nobel Prize to Project Management: Getting Risks Right," Project Management Journal, vol. 37, no. 3, August, pp. 5-15.
[4] "Letter from the Chair," 2011 Financial Stability Oversight Council Annual Report, U.S. Department of the Treasury, August 2011.
[5] Flyvbjerg, B., 2008, "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice," European Planning Studies, vol. 16, no. 1, pp. 3-21.

Written by Dr. Richard Sonnenblick, Chief Data Scientist

Dr. Sonnenblick, Planview’s Chief Data Scientist, holds years of experience working with some of the largest pharmaceutical and life sciences companies in the world. Through this in-depth study and application, he has successfully formulated insightful prioritization and portfolio review processes, scoring systems, and financial valuation and forecasting methods for enhancing both product forecasting and portfolio analysis. Dr. Sonnenblick holds a Ph.D. and MS from Carnegie Mellon University in Engineering and Public Policy and a BA in Physics from the University of California, Santa Cruz.