When I hear what are the odds of success for a project, it tells me there is an understanding gap in how statistical processes are used in projects to produce probabilistic estimates of three things (see the sketch after this list):
- The probability of completing on or before a need date.
  - All projects have a need date for the delivered value.
  - If there is no need date, then either the project is an R&D effort or the stakeholder of the project outcomes cares little about putting that project value to work to recover the investment.
- The probability of completing at or below a needed cost.
  - All projects spend money, usually someone else's money.
  - How much will the project spend? This is NOT the same as the budget for the project.
  - The Estimate to Complete (ETC) and the Estimate at Completion (EAC) are two numbers management uses to assure the project will show up at or below the Budget at Completion (BAC).
- The probability that the delivered technology meets the required specifications.
  - When the Technical Performance Measures are written down, we can assess our progress toward meeting them in a probabilistic manner as well.
  - Will our database server farm be fast enough, big enough, and reliable enough to meet the business need? Waiting until we're done to find out is not a good idea.
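To make the idea concrete, here is a minimal Monte Carlo sketch of all three questions. Each probability is just the fraction of simulated outcomes that meets its target. The distributions, parameter values, and thresholds below are illustrative assumptions, not data from any real project.

```python
# A minimal Monte Carlo sketch of the three probabilistic questions.
# All distributions and thresholds are made-up, illustrative numbers.
import random

N = 100_000  # number of simulation trials

on_time = on_cost = meets_spec = 0
for _ in range(N):
    # Duration in working days: triangular(low, high, mode).
    duration = random.triangular(40, 80, 55)
    # Spend in $K: skewed right, as cost overruns usually are.
    cost = random.triangular(900, 1500, 1050)
    # A Technical Performance Measure, e.g. transactions per second.
    throughput = random.gauss(520, 40)

    on_time    += duration <= 60      # assumed need date: 60 days out
    on_cost    += cost <= 1200        # assumed needed cost: $1.2M
    meets_spec += throughput >= 500   # assumed required spec: 500 tps

print(f"P(on or before need date)  = {on_time / N:.0%}")
print(f"P(at or below needed cost) = {on_cost / N:.0%}")
print(f"P(meets the spec)          = {meets_spec / N:.0%}")
```

Each answer comes back as a confidence number, not a yes or no. That is the form the odds of success question has to take if it is going to have a credible answer.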
First, let's echo Tim Lister's advice...
Risk Management is How Adults Manage Projects
All the World's a Statistical Process
Let's look first at a network of work activities. These are tasks with dependencies, whose durations have naturally occurring variances. These durations can never be exact numbers, since the work is emerging or simply varying naturally. Each activity has its own Probability Distribution Function; these may be similar from activity to activity, but each one is unique. The same holds when there are no dependencies: the probabilistic processes are still in place. Even somewhere like the Toyota production line, no work process takes exactly the same duration twice. If these natural variances are unaccounted for, you're going to be late, likely over budget, and your favorite gadget may not work. This concept is the basis of Statistical Process Control.
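As a small illustration of that natural variance, the sketch below samples a single work process whose duration is assumed to follow a triangular distribution (the numbers are made up) and computes an SPC-style control range from the samples.

```python
# Natural variance in one work process, assuming a triangular
# distribution of duration: (low, high, mode) in days, illustrative only.
import random
import statistics

samples = [random.triangular(3.0, 9.0, 5.0) for _ in range(50_000)]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
print(f"mean duration : {mean:.2f} days")
print(f"std deviation : {stdev:.2f} days")
# SPC-style control range: mean +/- 3 sigma
print(f"control range : {mean - 3*stdev:.2f} to {mean + 3*stdev:.2f} days")
```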
For each process, the upper and lower ranges, along with the Probability Distribution Function, can be used to model the range of possible outcomes, for duration for example. In the picture below, the probability of completing on or before Friday, October 23, 2020 for the IOC (Initial Operating Capability) is 80%.
When we hear what's the bet we'll be successful, this is the number. The bet isn't really the right term, but it's the term that gets used. There is an 80% confidence of completing on or before 23 Oct 2020.
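Where does a number like that come from? In a simulation it is simply a percentile: sort the simulated finish dates and read off the date at the 80% point. Here is a minimal sketch, assuming an arbitrary start date and an illustrative triangular duration distribution, so the printed date is not the one in the chart above.

```python
# The "80% confidence date" is the 80th percentile of the simulated
# finish dates. Start date and distribution are illustrative assumptions.
import random
from datetime import date, timedelta

random.seed(1)
start = date(2020, 7, 1)  # assumed project start
finishes = sorted(
    start + timedelta(days=round(random.triangular(70, 160, 100)))
    for _ in range(10_000)
)

# The finish date at the 80th percentile of the sorted trials.
p80 = finishes[int(0.80 * len(finishes))]
print(f"80% confidence finish date: {p80:%A %d %b %Y}")
```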
We can now connect the dots between individual activities and a network of activities with the next chart. It shows the dependencies, the variance of each activity, and how those variances drive the variance in the outcomes.
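Here is a sketch of that connection, assuming a made-up four-activity network. Each trial samples every activity's duration, a successor starts when its latest predecessor finishes, and the spread of the project finish across trials shows how the individual variances drive the variance of the outcome.

```python
# How activity variances propagate through a dependency network.
# Each trial computes finish[t] = max(finish of predecessors) + duration.
# The network and its (low, mode, high) day estimates are illustrative.
import random
import statistics

# task: (predecessors, (low, mode, high)) -- listed in topological
# order, so predecessors are always computed before their successors.
network = {
    "A": ([],         (5, 8, 15)),
    "B": (["A"],      (3, 5, 9)),
    "C": (["A"],      (6, 10, 20)),
    "D": (["B", "C"], (4, 6, 12)),  # D cannot start until B and C finish
}

def one_trial():
    finish = {}
    for task, (preds, (low, mode, high)) in network.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[task] = start + random.triangular(low, high, mode)
    return finish["D"]  # the project ends when D ends

totals = sorted(one_trial() for _ in range(20_000))
print(f"mean finish     : {statistics.mean(totals):.1f} days")
print(f"80th percentile : {totals[int(0.80 * len(totals))]:.1f} days")
print(f"std deviation   : {statistics.stdev(totals):.1f} days")
```

Notice that the variance of the project finish is wider than the variance of any single activity: merging paths (B and C both feeding D) push the outcome toward the worse of the two, which is exactly why unaccounted-for activity variances make the whole project late.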
In the End the Discussion is About Domain and Context
When we hear about some new approach to making decisions in the absence of estimating the impact of those decisions, ask in what domain that can be possible. By possible I mean: how can we make a decision while ignoring the principles of Microeconomics?
There may be domains in which that is completely possible. Below is a scale of projects I built a while ago when working on an overall Program Governance engagement. From family gardening to building the USS Virginia, there is a huge spectrum of techniques, processes, governance, tools, and approaches to increasing the probability of success. Any discussion about the applicability of an idea has to start with the question: what domain are we in?