There is a post that references a concept I've come to use that puts uncertainty into three classes. That post is not exactly what I said, so let me clarify it a bit.
First some background. I work on an engagement that provides advice to an office inside the Office of the Secretary of Defense (OSD). This office is responsible for determining the Root Cause of program performance for ACAT1 (Acquisition Category 1) programs.
These are large programs, larger than $5B. In most domains outside the ACAT1s this number is ridiculously large. But inside the circle of large defense programs, $5B is really not that much money. For the Joint Strike Fighter, a Congressional Quarterly article and the Government Accountability Office indicated a "Total estimated program cost now $400B," nearly twice the initial estimate. DDG-1000 is $21,214 million; yes, that's $21,214,000,000.
No IT or software development project comes anywhere near that scale. If you're interested, there are reports at RAND and IDA on the current issues. There are certainly multi-million dollar IT projects. The ACA web site is probably going to be in the range of $85M to several hundred million. The facts are still coming in, so anyone who says they know and doesn't work directly on the program probably doesn't know and is making up numbers. GAO will get to the real numbers soon, we hope.
Principles Rule, Practices Follow, Everything Else is BS
The principles of cost and schedule estimating, and of assessing the related technical and programmatic gaps, are the same in every domain and at every scale, from small projects to multi-billion dollar programs. Why? Because it's the same problem no matter the scale. When estimates go wrong, the causes fall into three classes:
- We didn't know
  - We didn't do our homework
  - We ignored what others have told us
  - We ignored the past performance in the same domain
  - We ignored the past performance in other domains
  - We just weren't listening to what people were telling us
  - Our models of cost and schedule growth were bogus, unsound, did not consider the risks, or we just made them up
- We couldn't know
  - We didn't have enough time to do the real work needed to produce a credible estimate
  - We didn't have sufficient skills and experience to produce a credible estimate
  - We didn't understand enough about the problem for our estimate to represent reality
  - We chose not to ask the right questions
  - We chose not to listen
  - We chose not to do our homework, or worse, chose not to do our job
  - Since we're spending other people's money, we decided it's not our job to know how much it will cost and when we'll be done to some level of confidence. We'll let someone else do that for us, and we'll use their estimates in our work.
- We didn't want to know
  - "You can't handle the truth," as Jack Nicholson's character Col. Nathan Jessep so clearly states in the clip from A Few Good Men below.
  - As the political risk and consequences of a project increase, this response becomes more common.
The soliloquy in the movie makes a good point: handling the truth is actually very difficult, in many instances, for almost everyone outside the domain.
We want the simple answer. We want it all to be fine. We really don't want to do the heavy lifting needed to come up with an answer. Many times we don't want an answer at all; we want to just do our job and ignore the fiduciary responsibility to tell others what the cost and schedule impacts are, or even to do our job of discovering what DONE looks like before we start spending other people's money.
So here's the way out of the trap for at least the first two classes:
- We didn't know
  - Do your homework. Look for reference classes for the work you're doing.
  - Come up with an estimate based on credible processes: Wide Band Delphi, 20 questions, lots of ways out there to narrow the gap between the upper and lower bounds of the estimate (a minimal sketch of one bounding approach follows this list).
- We couldn't know
  - Bound the risks with short cycle deliverables.
  - This is called agile.
  - It's also called good engineering as practiced in many domains, from DOD 5000.02 to small team agile development.
- We don't want to know
  - Well, there's no way out of that short of being King.
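To make the bounding idea above concrete, here is a minimal sketch of a three-point Monte Carlo estimate in Python. The work packages and their optimistic, most likely, and pessimistic values are hypothetical, invented for illustration; a real program would draw them from reference class data.

```python
import random

# Hypothetical three-point estimates (optimistic, most likely, pessimistic)
# in weeks for a handful of work packages. The numbers are illustrative only.
tasks = {
    "requirements": (2, 4, 9),
    "design":       (3, 6, 14),
    "build":        (8, 12, 30),
    "test":         (4, 7, 20),
}

def simulate_totals(tasks, trials=10_000):
    """Monte Carlo: draw each task duration from a triangular distribution."""
    totals = []
    for _ in range(trials):
        total = sum(random.triangular(low, high, mode)
                    for (low, mode, high) in tasks.values())
        totals.append(total)
    return sorted(totals)

totals = simulate_totals(tasks)
# Report the estimate as a confidence range, not a single point.
p10, p50, p80 = (totals[int(len(totals) * p)] for p in (0.10, 0.50, 0.80))
print(f"10%: {p10:.1f} wk   50%: {p50:.1f} wk   80%: {p80:.1f} wk")
```

The output is a distribution with explicit bounds, not a single number. Doing your homework, with better reference classes and better three-point inputs, is what narrows the gap between the low and high percentiles.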
But the words used in the original post that referenced my post are not my intent, nor are they part of any process I work in.
- We don't pretend, we can't pretend, we must not pretend to know the future. Instead of pretending, we use well developed and field proven statistical estimating processes. These are documented in guidance and developed through professional societies (also listed in the link). They are calibrated with cardinal values that are themselves statistically adjusted (a sketch of what that calibration can look like follows this list).
- You can only not know the future if it is unknowable. This may be the case. When it is, you need a fail-safe approach. Incremental development is one. Fail-safe systems are another, once the machine has been produced. Fault-Tolerant System Reliability in the Presence of Imperfect Diagnostic Coverage is a small component of a much larger body of knowledge about how to build systems that are robust and do not fail to danger.
- The political "I don't want to know" is far above my pay grade.
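As an illustration of calibration with past-performance data, here is a hedged sketch, again with invented numbers: a raw engineering estimate is adjusted by the cost-growth factors observed in a hypothetical reference class. The real procedures live in the guidance documents; this only shows the shape of the idea.

```python
import statistics

# Hypothetical cost-growth factors (final cost / initial estimate) from a
# reference class of completed programs in the same domain. Invented numbers.
reference_class_growth = [1.15, 1.40, 1.95, 1.25, 1.60, 2.10, 1.35]

def calibrated_estimate(raw_estimate, growth_history):
    """Adjust a raw estimate by the reference class's median growth factor
    and report a one-sigma band from the observed variability."""
    median_growth = statistics.median(growth_history)
    spread = statistics.stdev(growth_history)
    return (raw_estimate * median_growth,
            raw_estimate * (median_growth - spread),
            raw_estimate * (median_growth + spread))

mid, low, high = calibrated_estimate(100.0, reference_class_growth)  # $100M raw estimate
print(f"adjusted: ${mid:.0f}M (roughly ${low:.0f}M to ${high:.0f}M)")
```

The design point is that the adjustment comes from actual past performance, not opinion; a real process would fit a distribution to the reference class and quote percentiles rather than a one-sigma band.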
Here's a list of other posts on this topic. It's a critically important topic, one that deserves detailed analysis, and one that we're obligated to know and use when it's not our money we're spending. It's called Governance.
Here's some more discussion on Estimating for fun and profit.