With more projects successfully reaching late-stage development, where the resource requirements are greatest, the demands for funding were growing. Major resource-allocation decisions are never easy. For a company like SB, the problem is this: How do you make good decisions in a high-risk, technically complex business when the information you need to make those decisions comes largely from the project champions who are competing against one another for resources? A critical company process can become politicized when strong-willed, charismatic project leaders beat out their less competitive colleagues for resources.
That in turn leads to the cynical view that your project is as good as the performance you can put on at funding time.
What was the solution? Some within the company thought that SB needed a directive, top-down approach. But our experience told us that no single executive could possibly know enough about the dozens of highly complex projects being developed on three continents to call the shots effectively. In the past, SB had tried a variety of approaches.
One involved long, intensive sessions of interrogating project champions and, in the end, setting priorities by a show of hands. Although the approach looked good on the surface, many people involved in it felt in the end that the company was following a kind of pseudoscience that lent an air of sophistication to fundamentally flawed data assessments and logic.
The company had also been disappointed by a number of more quantitative approaches. It used a variety of valuation techniques, including projections of peak-year sales and five-year net present values.
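To make the comparison concrete, a five-year net present value of the kind mentioned above can be sketched in a few lines; the cash flows and discount rate here are invented for illustration, not SB figures.

```python
# Five-year net present value of a hypothetical drug project.
# All figures are illustrative, not SmithKline Beecham data.

def npv(cash_flows, rate):
    """Discount yearly cash flows (years 1..n) back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Years 1-2: development spending; years 3-5: sales after launch ($M).
flows = [-50, -80, 40, 120, 160]
value = npv(flows, rate=0.10)
print(round(value, 1))
```

Note that the output is a single number: nothing in it reveals the quality of the assumptions behind the cash flows, which is exactly the opacity the company was uncomfortable with.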
But even when all the project teams agreed to use the same approach—allowing SB to arrive at a numerical prioritization of projects—those of us involved in the process were still uncomfortable. There was no transparency to the valuation process, no way of knowing whether the quality of thinking behind the valuations was at all consistent. As we set out to design a better decision-making process, we knew we needed a good technical solution—that is, a valuation methodology that reflected the complexity and risk of our investments.
At the same time, we needed a solution that would be credible to the organization. If we solved the technical problem alone, we might find the right direction, but we would fail to get anyone to follow. That is typically what happens as a result of good backroom analysis, however well intentioned and well executed it is. But solving the organizational problem alone is just as bad.
Open discussion may lead to agreement, enabling a company to move forward. But without a technically sound compass, will it be moving in the right direction? SB needed a solution that was technically sound and organizationally credible. The easy part of our task was agreeing on the ultimate goal: in our case, to increase shareholder value. The hard part was designing a process that the organization's key players would accept as credible. In particular, the traditional advocates in the process—the project teams and their therapy area heads—would have to believe that any new process accurately characterized their projects, including their technical and commercial risks.
Those who were more distant—leaders of other project teams and therapy areas, regional and functional executives, and top management—would require transparency and consistency.

Most organizations think of decision making as an event, not a process. They attach great importance to key decision meetings.
But in most cases, and SB is no exception, the real problems occur before those meetings ever take place.

Phase I: Generating Alternatives

One of the major weaknesses of most resource-allocation processes is that project advocates tend to take an all-or-nothing approach to budget requests. At SB, that meant that project leaders would develop a single plan of action and present it as the only viable approach.
Project teams rarely took the time to consider meaningful alternatives—especially if they suspected that doing so might mean a cutback in funding.
Under the new process, each project team would first define four funding alternatives for its project, including a buy-up and a buy-down relative to the current plan. Then it would brainstorm about what it would do under each of the four funding alternatives. When the team working on one such project was asked to develop alternatives to the current plan, they balked.
Their project had always been regarded as a star and had received a lot of attention from management. They agreed, however, to look at the other alternatives during a brainstorming session. Several new ideas emerged. When the value of those alternatives was later quantified, a new insight emerged. Suddenly it occurred to one team member that by selecting only the most valuable combinations of products and markets, they might both increase value significantly and reduce costs in comparison with the current plan.
That insight led the team to a new and even more valuable alternative than any they had previously considered: target all three markets while cutting back to just one product form in each market. Such results are not an inevitable consequence of insisting on alternatives, but they are unlikely to occur without it.
A Framework for Effective Communication: this three-phase process allows SB to find more value in its portfolio of development projects.

Considering alternatives for projects had other benefits. First, ideas that came out of brainstorming sessions on one project could sometimes be applied to others. Second, projects that would have been eliminated under the previous all-or-nothing approach had a chance to survive if one of the new development plans showed an improved return on investment.
Third, examining a full range of alternatives gave a clearer picture of where each project's value lay; the team could then focus its development work accordingly.

Near the end of this phase, the project alternatives were presented to a peer review board for guidance before any significant evaluation of the alternatives had been performed.
Members of the review board, who were managers from key functions and major product groups within the pharmaceuticals organization, tested the fundamental assumptions of each alternative by asking probing questions: In the buy-down alternative, which trial should we eliminate? Should a once-a-day formulation be part of our buy-up alternative? The discussion session improved the overall quality of the project alternatives and helped build consensus about their feasibility and completeness.
The project teams then revised their alternatives where appropriate and submitted them again for review, this time to the group of senior managers who would, at a later point in the process, make the final investment decisions on all the projects. The group included the chairman of the pharmaceuticals business; the chairmen of the U.

In many organizations, investment alternatives are presented together with an evaluation; in others, the alternatives are evaluated as soon as they are put forth, before they are fully fleshed out.
Although it takes discipline to keep from debating which of the project alternatives are best, it is critical to avoid doing so at this early stage in the process. Premature evaluation kills creativity and the potential to improve decision making along with it. At SB, we wanted to be sure that we had developed a full range of project alternatives before starting to judge their value.
To accomplish that end, the role of the project teams would be to develop the initial alternatives, and the role of management would be to improve the quality of the alternatives by challenging their feasibility, expanding or extending them, or suggesting additional possibilities.

Phase II: Valuing Alternatives

Once we had engineered the process that took us through phase I, we needed a consistent methodology to value each one of the project alternatives.
We chose to use decision analysis because of its transparency and its ability to capture the technical uncertainties and commercial risks of drug development. For each alternative, we constructed a decision tree, using the most knowledgeable experts to help structure the tree and assess the major uncertainties facing each project.
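The rollback logic of such a tree can be sketched as follows; the probabilities and payoffs are hypothetical illustrations, not SB's actual assessments. Each chance node takes the probability-weighted average of its branches, so the risk-adjusted value of an alternative is explicit and auditable.

```python
# Roll back a drug-development decision tree by expected value.
# All probabilities and payoffs ($M of NPV) are hypothetical,
# not SmithKline Beecham's actual assessments.

def expected_value(node):
    """A node is either a terminal payoff (a number) or a list of
    (probability, child) branches; roll back recursively."""
    if isinstance(node, (int, float)):
        return node
    return sum(p * expected_value(child) for p, child in node)

# Phase III succeeds with probability 0.6; on success, the launch
# meets either a strong market (p=0.5, payoff 400) or a weak one
# (p=0.5, payoff 100). Failure writes off remaining spend (-80).
tree = [
    (0.6, [(0.5, 400), (0.5, 100)]),
    (0.4, -80),
]
print(expected_value(tree))  # risk-adjusted value of this alternative
```

Because every probability and payoff sits visibly in the tree, a reviewer can challenge any single assessment without re-deriving the whole valuation.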
How was a large, complex organization going to overhaul its investment process? The answer was: gradually. Before the new process was introduced to the entire project portfolio, it went through two pilot tests—first, on a single development project for migraines and, second, on a subsection of the portfolio that included 10 projects.
It was important to address the anxiety people felt at the prospect of such a major change. Would the new process, project team members wondered, mean a cutback in their funding? Would it mean termination of their projects? In an industry like pharmaceuticals, where a project leader may work on as few as five or six projects in an entire career, such anxieties could not be taken lightly. To manage the issue, Nicholson made a commitment to the teams that during the initial pilot no investment decisions would be made.
The only objective would be to develop the new approach and gauge its viability. During the pilot test, we worked to build a valuation methodology that would win the confidence of its future users. The core of the methodology that prevailed was decision analysis. That approach includes problem framing, the creation of alternatives, the use of decision trees to represent risk, options analysis, sensitivity analysis (to represent different viewpoints and focus attention on value drivers), and key output measures of risk-adjusted return.
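One of those tools, one-way sensitivity analysis, can be sketched as follows. The value model, input names, and ranges below are hypothetical stand-ins, not SB's: each input is swung across its assessed range while the others stay at the base case, and the inputs with the largest swings are the value drivers.

```python
# One-way sensitivity analysis on a toy risk-adjusted value model.
# All inputs and ranges are hypothetical illustrations.

def project_value(peak_sales, success_prob, dev_cost):
    """Toy model ($M): expected payoff minus development cost."""
    return success_prob * peak_sales * 2.5 - dev_cost

base = {"peak_sales": 200, "success_prob": 0.6, "dev_cost": 150}
ranges = {
    "peak_sales": (100, 350),
    "success_prob": (0.4, 0.8),
    "dev_cost": (120, 200),
}

swings = {}
for name, (low, high) in ranges.items():
    lo = project_value(**{**base, name: low})
    hi = project_value(**{**base, name: high})
    swings[name] = abs(hi - lo)

# Rank inputs by their swing in value: the top entries are the
# value drivers that deserve management attention.
for name in sorted(swings, key=swings.get, reverse=True):
    print(name, swings[name])
```

In this toy model, peak sales dominate the uncertainty, so further market research would be worth more than, say, tighter cost control.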
Although such tools are in widespread use, they cannot be applied in cookie-cutter fashion. SB spent considerable effort tailoring the tools to its specific applications. The pilot phase served to develop consensus about all dimensions of the new process, from its general philosophy to the specific templates and formats that were designed.
The project teams and therapy areas were satisfied that the new process could accurately represent their projects, even in complex areas such as the risk associated with product development and commercialization. Regional and functional management as well as decision makers could see that transparency and consistency were built into the process. The pilot was followed by a full-scale rollout of the new process.

The acid test of whether a valuation methodology has won credibility is this: it is credible if it no longer attracts attention to itself.
When that happens, as it did at SB, the attention of the organization shifts from how value should be measured to how more value can be created.
We developed six requirements for achieving credibility and buy-in to the valuation of each alternative: First, the same information set must be provided for every project. We developed templates that are consistent in scope but flexible enough to represent the differences among the projects and their alternatives. Second, the information must come from reliable sources.
Experts from inside and outside the company must be selected before anyone knows what their specific inputs will be regarding the major uncertainties the development team faces.
Third, the sources of information must be clearly documented. The date and place of each interview with an expert must be recorded along with the key assumptions that were made and any important insights that came up in the conversation. Fourth, the assessments must undergo peer review by experienced managers across functions and therapeutic areas.
Those managers can then make comparisons across all projects and gauge, for example, if the project teams are being consistent in assessing similar uncertainties. They can determine, for instance, if the teams are using similar assumptions when assessing the probability of passing key development milestones. Fifth, the valuations must be compared with those done by external industry observers and market analysts to establish that the numbers are realistic.
Sixth, each valuation must include a sensitivity analysis of its major assumptions. Doing so gives management and the project teams a clear understanding of the key value drivers so that they can focus decision making and implementation in ways that add the most value. We agreed early on that the project teams would use ranges rather than single-point forecasts to describe future possibilities. Using ranges enhances credibility by avoiding false precision.
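A simple way to work with such ranges is a three-point shortcut, often attributed to Swanson, that weights the assessed 10th, 50th, and 90th percentiles rather than trusting a single number; the fractile values below are hypothetical.

```python
# Summarize a range forecast with a Swanson-style three-point rule:
# weight the assessed 10th/50th/90th percentiles 0.3/0.4/0.3.
# The fractiles below are hypothetical peak-sales assessments ($M).

def three_point_mean(p10, p50, p90):
    """Approximate mean of an uncertain quantity from three fractiles."""
    return 0.3 * p10 + 0.4 * p50 + 0.3 * p90

low, base, high = 80, 200, 420   # a wide range, honest about uncertainty
estimate = three_point_mean(low, base, high)
print(estimate)
```

Because this range is skewed to the upside, the three-point mean (230) exceeds the base case (200); a single-point forecast pinned at the base case would have quietly discarded exactly that information.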
We increased transparency and consistency in yet another way by having a specially designated group of analysts process the valuation information and draw preliminary insights.
Having this work done by a neutral group was a relief to many project team members, who were rarely satisfied with the previous approaches to valuation, as well as to the top management group, who were tired of trying to make sense of widely disparate types of analysis.
This step was designed to ensure that no surprises would emerge when the decisions were being made.
How SmithKline Beecham makes better resource-allocation decisions