
The “Good Enough” Business Model – Enrich Consulting

Published By Dr. Richard Sonnenblick
A product manager and an analyst are discussing the revenue forecast for a product in development. Poring over the financials, the manager asks:
“Have you considered how SUPR-3 will impact sales of our other SUPR products?”
The analyst replies confidently: “Yes! Right here you can see we are estimating SUPR-3 to take a 20% cannibalization of the existing SUPR product line.”
“But a dollar of revenue from our earlier SUPR products is much less valuable, since we have fatter margins on the new product. Has that been factored in?” asks the manager.
“Oh no, I didn’t think of that,” replies the analyst, taken off guard. “I’m on it.” So away goes the analyst with a job to do: add another input and another algorithm to The Model.
Two days later the analyst meets with the manager once again: “You were right about the difference in profit margins: it reduced the impact of cannibalization by 20% and increased the net present value by 4%.”
“I thought so,” says the satisfied manager. “We are getting closer with this business case.”
Another factor to roll up with the model

They were “getting closer,” but the model still wasn’t “good enough.” Variations on this theme play out every few days over the following month, until a dozen more business factors are explicitly considered in the forecast. Over a quarter or a year, the team might add hundreds of factors as they think of questions, but never remove them as they find answers. Taking out a business factor goes against instinct, as if they were removing value, or intelligence, from the model.


Their goal is to make the model comprehensive, which is admirable but Sisyphean. We humans are a creative bunch and can always think of another factor to add to our models. This is how monster models are born. Monster models are demanding to feed and care for, and decision makers distrust them because of their layers of complexity.

The “Good Enough” Model

The actual goal should be a good enough model: one that provides enough confidence for management to fund the product and enough detail for the project team to develop the product to the next milestone. The key to crafting a good enough model is to use a modeling framework that accounts for uncertainty.

When we build uncertainty into the model, we use simulation methods to represent the uncertainty in the business forecast. Because no input is known precisely, we characterize each critical input not as a single value but as a range with upper and lower limits.
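As a concrete illustration, here is a minimal Monte Carlo sketch in Python. The input names, ranges, and the one-year profit formula are all invented for this example; they stand in for whatever inputs your own forecast uses.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 10_000  # number of simulation trials

# Each critical input is a (low, base, high) range rather than a point
# estimate. Names and numbers are invented for illustration.
peak_revenue = rng.triangular(80, 120, 200, size=N)         # $M per year
cannibalization = rng.triangular(0.10, 0.20, 0.35, size=N)  # share lost
margin = rng.triangular(0.55, 0.65, 0.72, size=N)           # profit margin

# A deliberately simple one-year profit model, just to show the mechanics.
profit = peak_revenue * (1 - cannibalization) * margin

# The "wedge": plausible lower and upper bounds from the simulation.
p10, p50, p90 = np.percentile(profit, [10, 50, 90])
print(f"P10 = {p10:.1f}  P50 = {p50:.1f}  P90 = {p90:.1f}  ($M)")
```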

Communicate forecasts with a wedge, and come clean about what you do, and don’t, know

In the example here, the forecast for revenue is no longer a thin, sharp line. Instead, the forecast is represented as a wedge, defined by the plausible upper and lower bounds for revenue based on the simulation results. Each contingency/scenario that the team faces is evaluated in the context of the wedge. For each consideration, ask the question: Does it significantly move the boundaries of the wedge? If not, then you can remove the contingency/scenario from the model. Because you built the model’s shape as a wedge and not a line, you are no longer distracted by relatively small movements of the base value within the larger forecast wedge. Only factors that appreciably affect the wedge are retained.
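One way to operationalize that test, continuing the illustrative model above: simulate the forecast with and without the candidate factor and retain the factor only if a wedge boundary moves by more than some materiality threshold. The 5% threshold and the toy margin-adjustment factor below are assumptions for the sketch.

```python
import numpy as np

N = 100_000

def simulate(include_factor: bool) -> np.ndarray:
    """Toy profit model; include_factor toggles the candidate contingency."""
    rng = np.random.default_rng(seed=7)  # same seed -> same shared draws
    revenue = rng.triangular(80, 120, 200, size=N)
    cannibalization = rng.triangular(0.10, 0.20, 0.35, size=N)
    profit = revenue * (1 - cannibalization)
    if include_factor:
        # Candidate factor under test (a small margin adjustment, say).
        profit *= rng.triangular(0.95, 1.00, 1.05, size=N)
    return profit

THRESHOLD = 0.05  # materiality: a boundary must move by more than 5%

base_wedge = np.percentile(simulate(False), [10, 90])
test_wedge = np.percentile(simulate(True), [10, 90])
shift = np.abs(test_wedge - base_wedge) / base_wedge

verdict = "retain" if (shift > THRESHOLD).any() else "drop"
print(f"Boundary shifts: {shift.round(3)} -> {verdict} the factor")
```

Reseeding inside simulate() gives both runs identical draws for the shared inputs (common random numbers), so any shift in the wedge is attributable to the candidate factor rather than to sampling noise.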

Excuses not good enough

We sometimes encounter teams who build monster models even when they know they could be using uncertainty analysis to more precisely represent what is known. Here are some of their excuses, and our thoughts on each:

It’s a challenge to effectively communicate a simulation result.
This is true! Using probability bands and distributions to communicate business value will take some getting used to. Keep in mind that using single values may be easier, but that doesn’t make it effective or sufficient. If you can, dig up a few old spreadsheets and compare the forecasts with the actual business outcomes. Demonstrate how using a single-valued estimate was not good enough.

It’s a burden to enter the extra data for the uncertainty analysis.
There is no question that it is more work to enter three numbers (low estimate, base case, high estimate) than one. If you must take a shortcut, one tactic is to express just a scant handful of key inputs with uncertainty, starting with those at the top of your tornado diagram (see the sketch after this list).

It’s futile to estimate uncertainty because it always seems to be underestimated anyway.
We do see many in the industry “phoning in” their forecast uncertainty. Much to our dismay, it is common to add or subtract 10% from the base case and pass that off as the plausible range for any given input. Those teams are missing an important opportunity: when a project team actively discusses sources of uncertainty, the outcome is more than just a set of business inputs. The team discovers ways to mitigate that uncertainty and improves the project plan in the process. Additional insight and value are created when teams review their forecasts retrospectively and learn whether their estimated ranges captured the actual revenue.

It’s our record of what we’ve learned about the business, so the forecast model has to be big.
We see this quite often: if I don’t leave it in, you won’t know I thought about it, even though I did, and it didn’t matter, so I left it out… With a good enough model that includes uncertainty, this is no longer necessary. Documentation saved alongside the model can record each business scenario that was tested within the model and explain why that consideration did, or did not, change the forecast. The reasoning behind each and every consideration doesn’t need to live in the mechanics of the forecast model.
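To make the tornado-diagram shortcut from the second excuse concrete, here is a minimal sketch, again with invented input names and ranges: swing each input between its low and high value while holding the others at their base case, then rank the inputs by the size of the resulting output swing. The inputs at the top of the ranking are the ones most worth expressing with uncertainty.

```python
# Illustrative tornado ranking: swing one input at a time, others at base.
inputs = {  # name: (low, base, high) -- values invented for illustration
    "peak_revenue":    (80.0, 120.0, 200.0),
    "cannibalization": (0.10, 0.20, 0.35),
    "margin":          (0.55, 0.65, 0.72),
}

def profit(peak_revenue, cannibalization, margin):
    """Same toy one-year profit model as above."""
    return peak_revenue * (1 - cannibalization) * margin

base = {k: v[1] for k, v in inputs.items()}
swings = []
for name, (low, _, high) in inputs.items():
    lo = profit(**{**base, name: low})
    hi = profit(**{**base, name: high})
    swings.append((abs(hi - lo), name))

# The widest bars sit at the top of the tornado; give those inputs
# ranges first.
for swing, name in sorted(swings, reverse=True):
    print(f"{name:16s} swing = {swing:6.1f}")
```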

I hope you’ve found these suggestions helpful. The path to actively managing uncertainty in project development is long and steep. Remember that surmounting cultural and organizational inertia takes time, and be ready to use the inaccuracy of past forecasts to get the ball rolling toward better modeling.

For more on this topic, have a look at the Keeping Things Simple post as a case study in how to simplify, simplify, simplify.

Happy Forecasting!


Written by Dr. Richard Sonnenblick Chief Data Scientist

Dr. Sonnenblick, Planview’s Chief Data Scientist, has years of experience working with some of the largest pharmaceutical and life sciences companies in the world. Through this in-depth study and application, he has formulated insightful prioritization and portfolio review processes, scoring systems, and financial valuation and forecasting methods that enhance both product forecasting and portfolio analysis. Dr. Sonnenblick holds a Ph.D. and MS from Carnegie Mellon University in Engineering and Public Policy and a BA in Physics from the University of California, Santa Cruz.