I was speaking with a client at a large government-supported research lab the other day, and he reminded me of the success we enjoyed deploying the Enrich Portfolio System to support their annual portfolio process. When this group approached Enrich, they asked for help picking the winners within their portfolio and justifying budget increases to their sponsors in the federal government. The work was done in two six-week phases.
In the first phase, we imported their current portfolio into the Enrich Portfolio System and classified the projects across a number of dimensions. It turned out that the most important dimension was project type, which included:
- contractually obligated maintenance,
- optional maintenance (leading to risk reductions and efficiency improvements),
- applied research (leading to performance improvements),
- and fundamental research.
Project type was critical because, as it turned out, 90% of their budget was tied up in contractually obligated maintenance. Their strategic mandate was to spend 50% of their budget on new research, pushing the frontiers of science. The lab had projects in this area, but they were all woefully underfunded because of the maintenance obligations. With the Enrich Portfolio System providing a clear view of current spending, management presented their sponsors with a compelling case for more basic research funding.
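The spend-by-type roll-up that makes this kind of gap visible is easy to reproduce outside any particular tool. Below is a minimal sketch in Python with pandas, assuming the portfolio can be exported as a table of projects with a type and an annual budget; the column names and dollar figures are purely illustrative, not the lab's data.

```python
import pandas as pd

# Hypothetical export: one row per project, with its type and annual budget ($).
portfolio = pd.DataFrame({
    "project": ["P1", "P2", "P3", "P4", "P5"],
    "project_type": [
        "contractually obligated maintenance",
        "contractually obligated maintenance",
        "optional maintenance",
        "applied research",
        "fundamental research",
    ],
    "annual_budget": [5.0e6, 3.8e6, 0.5e6, 0.4e6, 0.3e6],
})

# Share of total spend by project type -- the view that exposes a gap between
# maintenance-heavy spending and a mandate to fund new research.
spend_share = (
    portfolio.groupby("project_type")["annual_budget"].sum()
    / portfolio["annual_budget"].sum()
)
print(spend_share.sort_values(ascending=False).round(2))
```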
The second phase of the deployment focused on the rating and ranking of all activities in the portfolio. The team had used a scoring model in the past, but there were too many questions (over 30!) and all projects seemed to score around 7 out of 10, making project prioritization difficult. Our team analyzed their old scoring system and made several changes:
- Assess which questions had highly correlated responses, and cull the redundant questions (a sketch of this screen follows the list).
- Cull questions that were too vaguely worded or did not link directly to group strategy.
- Add explanatory text to every question, spelling out both the intent of the question and the requirements for each possible answer, so that each response is grounded in project realities and respondents are held accountable for the answers they provide.
- Use a different set of questions for maintenance programs and basic research programs, and do not compare these programs directly.
- Remove the complex weighting and scoring scheme that had been used, and replace it with a simple, transparent set of equal weights for all questions, question categories, and respondents.
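As a rough illustration of the correlation screen from the first bullet, the sketch below flags pairs of questions whose responses move together. The `responses` table and the 0.8 cutoff are assumptions for illustration, not details from the engagement or from the Enrich Portfolio System itself.

```python
import pandas as pd

def redundant_question_pairs(responses: pd.DataFrame, threshold: float = 0.8):
    """Return pairs of questions whose responses are highly correlated."""
    corr = responses.corr()                     # pairwise Pearson correlation
    questions = list(corr.columns)
    pairs = []
    for i, q1 in enumerate(questions):
        for q2 in questions[i + 1:]:
            if abs(corr.loc[q1, q2]) >= threshold:
                pairs.append((q1, q2, round(float(corr.loc[q1, q2]), 2)))
    return pairs

# Hypothetical 1-10 responses: one row per project, one column per question.
responses = pd.DataFrame({
    "Q1_strategic_fit": [7, 8, 6, 9, 7],
    "Q2_mission_alignment": [7, 8, 6, 9, 8],    # nearly duplicates Q1
    "Q3_technical_risk": [3, 9, 5, 2, 6],
})
print(redundant_question_pairs(responses))
```

One question from each flagged pair becomes a candidate for culling; choosing which one to keep is still a judgment call about which wording links better to strategy.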
The result was a set of fewer than a dozen questions that measured each project’s direct impact on strategy. Decision makers were skeptical about removing the weighting schemes until we used visualizations like the one shown here to demonstrate that the project ranking was largely invariant to the different weighting schemes proposed.
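For readers who want to reproduce that kind of sensitivity check, here is a minimal sketch: score a portfolio under many random weight vectors and compare each resulting ranking to the equal-weight ranking using a Spearman rank correlation. The score matrix, the number of trials, and the Dirichlet-sampled weights are all assumptions for illustration, not the actual analysis behind the visualization.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_projects, n_questions = 20, 10
scores = rng.uniform(1, 10, size=(n_projects, n_questions))  # hypothetical 1-10 scores

equal_weight_score = scores.mean(axis=1)       # the simple equal-weight scheme

rank_correlations = []
for _ in range(1000):
    weights = rng.dirichlet(np.ones(n_questions))            # random weights summing to 1
    rho, _ = spearmanr(equal_weight_score, scores @ weights) # rank agreement with equal weights
    rank_correlations.append(rho)

print(f"median rank agreement with equal weights: {np.median(rank_correlations):.2f}")
```

If the rank correlations stay high across many weight vectors, the ranking is effectively insensitive to the weighting scheme, which is what made the case for dropping the complex weights.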
With a more transparent and more relevant scoring system in place, management found that project scores were more representative of project value. There was significant spread in the scores, and it was easier to identify the new projects worthy of funding. Management still had to go back to the sponsors and request additional funds, but now they had a much clearer story about the value that sponsors would receive for the funds invested.
The Enrich Portfolio System is an effective cornerstone for the type of largely qualitative portfolio assessment called for here. However, the tool must be combined with knowledge of how to effectively value projects, and how to keep your portfolio discussions and portfolio spending strategically relevant. If you are interested in seeing for yourself how to use the Enrich Portfolio System to perform a pipeline assessment, please contact us for a hands-on, interactive demonstration.