It is popular to cite references to the estimating problem that are three to four decades old:
- A Software Metrics Survey
- Analysis of Empirical Software Estimation Models
- Software Engineering: Report on a Conference Sponsored by the NATO Science Committee, Garmisch, Germany, 7th to 11th October 1968
Much has happened in the last three to four decades to increase the accuracy and precision of software development effort estimates:
- Databases of past performance
- http://www.nesma.org/
- http://www.isbsg.org/
- http://www.cosmicon.com/
- Modeling algorithms - Monte Carlo simulation and the method of moments (a minimal Monte Carlo sketch appears after this list)
- Reference class forecasting - from past performance and systems engineering models in SysML, and Design Structure Matrix combined with Monte Carlo simulation
- Parametric models - calibrated parametric models adjusted to the work at hand
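To make the Monte Carlo idea concrete, here is a minimal sketch. The task names and the (low, most likely, high) bounds are hypothetical; in practice they would come from a reference class of past performance in the same domain, such as an ISBSG-style database:

```python
import random

# Hypothetical work items: (low, most likely, high) duration in days,
# with bounds informed by past performance in the same domain.
tasks = [
    (3, 5, 10),    # requirements
    (5, 8, 15),    # design
    (10, 15, 30),  # implementation
    (4, 6, 12),    # test
]

def simulate_schedule(tasks, trials=10_000):
    """Sample each task from a triangular distribution, sum the samples,
    and return the sorted distribution of total durations."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.triangular(low, high, mode)
                          for low, mode, high in tasks))
    return sorted(totals)

totals = simulate_schedule(tasks)
print(f"50% confidence: {totals[len(totals) // 2]:.0f} days")
print(f"80% confidence: {totals[int(len(totals) * 0.8)]:.0f} days")
```

The output is not a single number but a distribution, so the estimate can be stated at a level of confidence - exactly what a point guess from 40 years ago could not do.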
So when we hear there is a problem with estimating and the basis of that claim is 30-to-40-year-old reports, we need to be skeptical at best. When those claims are used to sell a book, a workshop, or an entire idea, then some serious questions need to be asked.
Is there any understanding at all of current software estimating techniques as applied, with modern tools and databases, to modern systems rather than 40-year-old FORTRAN systems?
The principles of cost and schedule estimating, and the assessment of the related technical and programmatic gaps, are the same in every domain and at every scale, from small projects to billion-dollar programs. Why? Because it is the same problem no matter the scale.
While there are huge issues with estimating any complex emergent system, those conjecturing that Not Estimating is the solution have not identified the root cause of the problem. That root cause analysis has been done for modern complex systems, and the problem traces to one of three sources:
- We didn't know
- We didn't do our homework
- We ignored what others have told us
- We ignored the past performance in the same domain
- We ignored the past performance in other domains
- We just weren't listening to what people were telling us
- Our models of cost and schedule growth were bogus or unsound, did not consider the risks, or were simply made up
- We couldn't know
- We didn't have enough time to do the real work needed to produce a credible estimate
- We didn't have sufficient skills and experience to produce a credible estimate
- We didn't understand enough about the problem to have our estimate represent reality
- We chose not to ask the right questions
- We chose not to listen
- We chose not to do our homework, or worse, chose not to do our job
- Since we're spending other people's money, we've decided it's not our job to know how much it will cost and when we'll be done to some level of confidence. We'll let someone else do that for us and use their estimates in our work.
- We didn't want to know
- "You can't handle the truth," as Jack Nicholson character Col. Nathan Jessep's so clearly stated below in the clip for A Few Good Men.
- As the political risk and consequences of the project increase, this process becomes more common.
But here's the way out of the trap, at least for the first two:
- We didn't know
- Do your homework. Look for reference classes for the work you're doing.
- Come up with an estimate based on credible processes: Wideband Delphi, twenty questions, and many other ways to narrow the gap between the upper and lower bounds of the estimate (see the three-point sketch after this list)
- We couldn't know
- Bound the risks with short-cycle deliverables.
- This is called agile.
- It's also called good engineering as practiced in many domains, from DoD 5000.02 to small-team agile development.
- We didn't want to know
- Well, there's no way out of that one short of being King.
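As a concrete illustration of narrowing those bounds, here is a minimal sketch of the classic PERT three-point estimate, built from the low, most likely, and high values a Wideband Delphi round might converge on. The numbers are hypothetical:

```python
# Classic PERT beta approximation for a three-point estimate.
def pert_estimate(low, likely, high):
    """Return the expected value and standard deviation."""
    mean = (low + 4 * likely + high) / 6
    std_dev = (high - low) / 6
    return mean, std_dev

# Hypothetical bounds from a Wideband Delphi round, in staff-days.
low, likely, high = 20, 35, 80
mean, sd = pert_estimate(low, likely, high)
print(f"Expected effort: {mean:.1f} staff-days, +/- {sd:.1f} (one sigma)")
```

Each Delphi round pulls the low and high values closer together, which shrinks the standard deviation and raises the confidence in the estimate.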
So take care when you hear about problems from the distant past, possibly from before those conjecturing both the problem and the solution were born.