I've started reading Vasco's book #NoEstimates and will write a detailed deconstruction. I got the Kindle version, so I have a $10 investment at risk. Let's start with some graphs that have been circulating for years and the misinformation in them that forms the basis of the book.
The Chaos Report graph is the 1st one. This graph is from 2004 numbers. That's 12-year-old data. Many times the book uses 12, 16, even 25-year-old reports as the basis of the suggestion that Not Estimating fixes the problems in those reports. The Chaos reports have been thoroughly debunked as self-selected samples using uncalibrated surveys as the units of measure for project failure. Here are a few comments on the Standish reporting process. But first remember, Standish does not say what the units of measure are for Success, Challenged, or Failure. Without the units of measure, the actual statistics of the projects, and the statistical ranges of those projects for each of the three categories, the numbers are essentially bogus. Good fodder for selling consulting services or for use by those with an idea to sell, but worthless for decision making about the root cause of Failure, Challenged, or even Success. Any undergraduate design-of-experiments class would have required all that information to be made public.
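To make the point concrete, here's a minimal sketch (with hypothetical numbers, not Standish data) of why the categories are meaningless without published units of measure: the same cost-overrun data lands in very different Success / Challenged / Failed buckets depending on the thresholds chosen.

```python
# Hypothetical cost overruns (fraction over budget) for 10 projects.
# Illustrative numbers only - not Standish data.
overruns = [0.00, 0.05, 0.10, 0.15, 0.20, 0.35, 0.50, 0.80, 1.20, 2.00]

def classify(overrun, challenged_at, failed_at):
    """Bin a project by cost overrun using explicit thresholds (the 'units of measure')."""
    if overrun >= failed_at:
        return "Failed"
    if overrun >= challenged_at:
        return "Challenged"
    return "Success"

# Two equally plausible threshold choices produce very different headline rates.
for challenged_at, failed_at in [(0.10, 0.50), (0.25, 1.00)]:
    counts = {"Success": 0, "Challenged": 0, "Failed": 0}
    for o in overruns:
        counts[classify(o, challenged_at, failed_at)] += 1
    print(f"thresholds ({challenged_at:.0%}, {failed_at:.0%}): {counts}")
```

Without the thresholds and the underlying statistics, a reported "failure rate" tells you nothing.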
So the 1st thing to read when you encounter data like this is Project Success: A Multidimensional Strategic Concept, Aaron J. Shenhar, Dov Dvir, Ofer Levy, and Alan C. Maltz. Only then start to assess the numbers. Most likely, like the numbers in the book, they're not credible support for the hypothesis. Which, by the way, is never actually stated: there is no testable hypothesis for the claim that you can make decisions in the presence of uncertainty without estimating.
So let's look further at the difficulties with Standish and why NOT to use it as the basis of a conjecture.
- Standish Report
- Standish Report and Naive Statistics
- Finally a Challenge to the Standish Report
- Project Failure Rate
- "The Standish Report: Does it Really Describe a Software Crisis?" Robert Glass, Communications of the ACM, August 2006
- "How Large are Software Overruns? Critical Comments on the Standish Group's Chaos Report," SIMULA Research Laboratory, 2006-03-21
- "Software Cost Overruns: How Large are They and How Should They be Measured?" SIMULA Research Laboratory, 2005-08-30
- The nonexistent Software Crisis - BTW Vasco quotes a NATO report about the software crisis for a 40-year-old process using FORTRAN 77 code developed on batch processing systems. I worked programs like that just out of grad school for missile defense systems. It was a crisis, but that was fixed when IDEs, workstations, and other modern tools came along.
- Go read Scott's assessment to see actual data with actual statistics and stop listening to the bad data, poor research, and unsubstantiated conjectures.
- How Successful Are IT Projects, Really?
- 2010 IT Project Success Rates
- Software Development Success Rates
- Defining Success
A simple Google search would have found all this research and many, many more references. I get the sense Vasco didn't do his homework. The bibliography has very few references on estimating itself: no estimating books, papers, or research sites, just personal anecdotes from a set of experiences as a developer.
The Standish Report's failure mode is described in Darrell Huff's How to Lie With Statistics: self-selected samples from the survey. Standish does not provide any population statistics for their survey.
- How many surveys were sent out?
- How many responded?
- Of those responding, was the sample statistically significant?
- What does it mean, in terms of actual measures of performance, for a project to be troubled?
- If the project was over budget, was the needed budget estimate correct?
- If the project was late, was the original schedule credible?
None of these questions are answered in the Standish reports. #NoEstimates picks these serious statistical sampling errors up and uses them as the basis of the pure conjecture that Not Estimating is going to fix the problems of poorly planned, poorly managed, and poorly margined projects. This would garner a D in a high school statistics class.
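To see why those unanswered questions matter, here's a minimal sketch (with made-up numbers) of how a self-selected survey sample inflates the apparent failure rate even when the underlying population is mostly successful.

```python
import random

random.seed(42)

# Hypothetical population: 10,000 projects, 30% of which actually fail.
# Illustrative only - Standish publishes no population statistics.
population = ["success"] * 7000 + ["failure"] * 3000

def responds(outcome):
    """Self-selection: respondents with a failure story are more likely to answer."""
    p_respond = 0.60 if outcome == "failure" else 0.15
    return random.random() < p_respond

responses = [outcome for outcome in population if responds(outcome)]

true_rate = population.count("failure") / len(population)
survey_rate = responses.count("failure") / len(responses)

print(f"true failure rate:     {true_rate:.0%}")    # 30%
print(f"surveyed failure rate: {survey_rate:.0%}")  # roughly 63% with these response rates
```

The survey is not measuring the population; it is measuring who chose to respond.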
Next comes a chart that makes a similar error. The reference is attributed to Steve McConnell's book but is actually from another source. The No Estimates book does a poor job of keeping its references straight; it is common for it to misattribute a report, a graph, even a phrase. The book needs a professional editor.
The graph below is used to show that estimates are usually wrong. But there is a critical misunderstanding of the data. The category error is repeated by others supporting the No Estimates paradigm; I'll spare the author the embarrassment of naming him here.
- It starts with a straight line called perfect accuracy. There is no such thing, nor can there be. There are two attributes of any estimate: accuracy and precision. There is no such thing as perfect accuracy in the estimating business; an estimate is an approximation of reality. All project work operates in the presence of uncertainty: aleatory uncertainty and epistemic uncertainty. The first (aleatory) can be handled with margin. The second (epistemic) can only be handled with redundancy, prototypes, fail-safe design, fault tolerance, uncertainty buydown, or similar approaches. A short simulation after this list sketches how margin is derived from the aleatory spread.
- The sample projects show they did not meet that perfect accuracy - whatever it might have been. This knowledge can only be obtained after the work has been done, either at the end of the project or during it, cumulative to date.
- But there are two possible sources of the variance between estimate and actual:
- The estimate was in error.
- Work was not performed as needed to meet the estimated completion date.
- The original author of this graph does not say which is the case, or if both are the case.
- The Root Cause analysis of the variances between Estimate and Actual is not available.
- The graph shows a symptom, not the cause. This is a simple, and simple-minded, mistake made when knowledge and experience in Root Cause analysis are missing. Here's a start for fixing that gap.
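As a concrete illustration of the aleatory point above - a minimal sketch using assumed task-duration ranges, not data from the book or the graph - an estimate is a confidence level drawn from a probability distribution, and margin is set from that distribution, not from a single line of "perfect accuracy."

```python
import random

random.seed(1)

# Three tasks with assumed (min, most likely, max) durations in days.
# The triangular spread models aleatory (irreducible) uncertainty.
tasks = [(5, 8, 15), (10, 12, 20), (3, 4, 9)]

trials = 10_000
totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    for _ in range(trials)
)

p50 = totals[int(0.50 * trials)]  # 50% confidence completion
p80 = totals[int(0.80 * trials)]  # 80% confidence completion, i.e. with schedule margin

print(f"50% confidence: {p50:.1f} days")
print(f"80% confidence: {p80:.1f} days (margin = {p80 - p50:.1f} days)")
```

Comparing actuals to a single "perfectly accurate" line says nothing about whether the estimate or the execution was at fault; only the distribution and a Root Cause analysis can tell you that.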
I'm in the early parts of the book and already have a half dozen pages of notes on fallacies, incorrect principles, 30-year-old references, and other serious misunderstandings of how decisions are made in the presence of uncertainty. My short assessment is ...
No Estimates is a concept built on sand.