PMBOK speaks about quantitative risk analysis in Chapter 11. There are several issues with this section of PMBOK that I'll present below. But first, why is this of interest to me, and potentially of interest to others?
- The management of technical and programmatic risk is in the domain of project management.
- The tools and processes suggested in PMBOK and syndicated in many other areas have some serious flaws that are at the root of project difficulties.
Opportunities are NOT Positive Risks
PMBOK states in 11.5.2.1.2 that positive risks are opportunities. Harry Jabagchourian and Robert Cvetko of The Boeing Company, Rocketdyne Propulsion & Power, Canoga Park, California have something to say about this in Risk & Opportunity Management: Program & Project Management Success Factors, Fourth National Symposium on Space System Risk Management. This paper provides an introductory overview of the issues of managing both risk and opportunity.
Here Risk and Opportunity are defined - not as opposites - but as two independent domains that must be managed concurrently.
PERT and Probabilistic Risk Management
As noted in a previous posting, the PERT formula (a + 4b + c)/6 is naive and likely to produce wrong results for any non-trivial network schedule. The primary issues with the PERT formula are:
- It assumes statistical independence of the work activities - this is almost never the case.
- It assumes a near-symmetric probability distribution, so the mean of the distribution lands near the "most likely" (mode) estimate. Real task durations are right-skewed: there are far more ways to finish late than to finish early.
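Both problems can be seen in a small simulation. The sketch below (hypothetical numbers: ten identical serial tasks with right-skewed optimistic/most-likely/pessimistic estimates) compares the summed PERT point estimates against a Monte Carlo of the same chain:

```python
import random

# Hypothetical example: ten serial tasks, each with optimistic a,
# most-likely m, and pessimistic b duration estimates in days.
# The estimates are right-skewed, as real task estimates usually are.
tasks = [(5.0, 6.0, 12.0)] * 10

# PERT point estimate per task, (a + 4m + b) / 6, summed over the chain.
pert_total = sum((a + 4 * m + b) / 6 for a, m, b in tasks)

# Monte Carlo: sample each task from a triangular distribution and
# sum along the serial chain; repeat many times.
random.seed(42)
trials = sorted(
    sum(random.triangular(a, b, m) for a, m, b in tasks)
    for _ in range(10_000)
)
p80 = trials[int(0.80 * len(trials))]  # 80th-percentile completion

print(f"PERT total:       {pert_total:.1f} days")
print(f"Monte Carlo P80:  {p80:.1f} days")
```

The 80th-percentile completion from the simulation lands well above the PERT total, because PERT's weighted average systematically discounts the long right tail of each task.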
So What's the Way Out?
The current "best practice" in the defense and space domain is a Monte Carlo simulation of the Integrated Master Schedule (IMS). But the IMS must first be credible. This means:
- All work is defined as producing a tangible "outcome."
- The "health" of the schedule has to be all green. This means no widows or orphans (tasks missing predecessors or successors), no excessively long tasks, no leads or lags. Essentially, the schedule must score "all green" against the DCMA (Defense Contract Management Agency) IMP/IMS 14-Point Assessment.
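A few of these health checks are simple enough to automate. The sketch below illustrates three of them against a hypothetical task record; the real DCMA assessment covers fourteen metrics, and the 44-working-day threshold is the one DCMA uses for its high-duration check:

```python
# A minimal sketch of three DCMA 14-Point style health checks.
# The task records and their field names are hypothetical.

def health_issues(tasks):
    """Return (task_id, problem) pairs for a few basic schedule checks."""
    issues = []
    for t in tasks:
        if not t["predecessors"] and not t["successors"]:
            issues.append((t["id"], "no logic links (widow/orphan)"))
        if t["duration_days"] > 44:  # DCMA flags tasks over 44 working days
            issues.append((t["id"], "high duration"))
        if any(lag != 0 for lag in t["lags"]):
            issues.append((t["id"], "lead or lag on a link"))
    return issues

# Hypothetical three-task schedule fragment.
tasks = [
    {"id": "T1", "duration_days": 10, "predecessors": [], "successors": ["T2"], "lags": [0]},
    {"id": "T2", "duration_days": 60, "predecessors": ["T1"], "successors": [], "lags": [-5]},
    {"id": "T3", "duration_days": 5, "predecessors": [], "successors": [], "lags": []},
]
issues = health_issues(tasks)
for task_id, problem in issues:
    print(task_id, "->", problem)
```

Here T2 is flagged for high duration and a lead on its link, and T3 is flagged as an orphan with no logic links.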
With a credible schedule, the statistics of the task variances must be built. Classifying risk is the best way to do this. DO NOT ask people for the pessimistic and optimistic values; a large body of research shows this produces highly biased results. Instead, build a matrix of risk ranges and their descriptions.
For example:
- Class A: Been done before - Low = -5%, High = +8%
- Class B: Some new development - Low = -10%, High = +12%
- Class C: New development - Low = -10%, High = +15%
- Class D: New technology - Low = -10%, High = +17%
- Class E: New physics - Low = -5%, High = +30%
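The risk-class approach can be sketched in a few lines. Here each task carries a baseline duration and a risk class, and the class (not an interview) supplies the low/high variance range. The class letters and ranges follow the example table above; the task names, baselines, and the triangular sampling choice are illustrative assumptions:

```python
import random

# Risk-class variance ranges from the example table above,
# expressed as fractional adjustments to the baseline duration.
RISK_RANGES = {
    "A": (-0.05, 0.08),  # been done before
    "B": (-0.10, 0.12),  # some new development
    "C": (-0.10, 0.15),  # new development
    "D": (-0.10, 0.17),  # new technology
    "E": (-0.05, 0.30),  # new physics
}

# Hypothetical serial tasks: (name, baseline duration in days, risk class).
tasks = [("design", 20.0, "B"), ("build", 40.0, "C"), ("integrate", 15.0, "D")]

def sample_duration(baseline, risk_class):
    """Sample one duration from a triangular distribution whose spread
    comes from the task's risk class, with the baseline as the mode."""
    low, high = RISK_RANGES[risk_class]
    return random.triangular(baseline * (1 + low),
                             baseline * (1 + high),
                             baseline)

random.seed(7)
totals = sorted(sum(sample_duration(d, c) for _, d, c in tasks)
                for _ in range(5_000))
p50, p80 = totals[2500], totals[4000]
print(f"P50: {p50:.1f} days   P80: {p80:.1f} days")
```

The classification table does the estimating work; the people doing the work only have to answer the far less bias-prone question "which class is this task?"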
These classifications are needed for each major category of development - software, hardware, integration, etc. Then use a Monte Carlo tool (Risk+ and @Risk for Project are my favorites).
The result is a credible model of the variances and programmatic risk areas for the schedule. Use these to manage the project: engage in conversations about the "hot spots" in the schedule and how you're going to "cool down" those hot spots. The simulation is not the solution; it simply points to where solutions are needed.