There is a popular graph showing actual project performance versus estimated project performance in "Schedule Estimation and Uncertainty Surrounding the Cone of Uncertainty," Todd Little, IEEE Software, May/June 2006.
This chart (above) shows data from samples of software development projects and is used by many in the agile community and by #NoEstimates advocates to conjecture that estimates are usually wrong. In fact, estimates can be wrong for many reasons. But knowing why they are wrong is rarely the outcome of the discussion.
This approach misses a critical understanding about assessing the effectiveness of any estimating process. It starts with the notion of an ideal estimate. That is a post hoc assessment of the project. The ideal estimate is only known after the project is over.
Next is another critical issue.
Did the projects (in the graph above) overrun the estimate because the estimate was wrong, or because the project was executed poorly?
In our domain of complex projects, many of which are Software Intensive Systems of Systems, there are four primary Root Causes of Unanticipated Cost and Schedule Growth, shown below.
The top graph shows samples from programs in the business. But it does not show WHY those programs ended up over the initial estimates. The graph is just a collection of data after the fact. What was the Root Cause of this unanticipated cost (and schedule) growth? The paper goes on to speak about an Estimation Quality Factor, but that is only one of the four core Root Causes of cost growth from the PARCA research. The paper also mentions other causes of growth, similar to some of the PARCA causes.
But each project in the graph is not assigned a specific cause (possibly more than one). Without that assignment, it is not possible to determine whether estimating was the Root Cause, or whether, as stated above, one or more of the other three causes put the project above the Ideal line.
For the notion of Ideal, I will assume the estimate was Ideal if the project actuals matched the estimate.
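To make that concrete, here is a minimal sketch of what the Ideal line means, using made-up project numbers (the names and durations are hypothetical): a project whose ratio of actuals to estimate is 1.0 sits on the Ideal line; anything above 1.0 is the overrun Symptom the chart records, and nothing in that ratio says why.

```python
# Minimal sketch of the "Ideal line" idea - hypothetical data, not from the paper.
# A ratio of actuals to estimate of 1.0 means the estimate was Ideal;
# a ratio above 1.0 is the overrun Symptom the chart shows, with no cause attached.

projects = [
    # (name, estimated_months, actual_months) - illustrative numbers only
    ("A", 6.0, 6.2),
    ("B", 9.0, 14.5),
    ("C", 12.0, 11.0),
    ("D", 4.0, 9.0),
]

for name, estimate, actual in projects:
    ratio = actual / estimate  # 1.0 == on the Ideal line
    if ratio > 1.1:
        status = "above Ideal (overrun)"
    elif ratio < 0.9:
        status = "below Ideal (underrun)"
    else:
        status = "on or near the Ideal line"
    print(f"Project {name}: actual/estimate = {ratio:.2f} -> {status}")
```

All the ratio tells us is where a project landed relative to the Ideal line. It says nothing about which Root Cause put it there.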
The problem here is that those advocating that estimates are flawed, a waste, not needed, harmful, or even evil use this chart to support their claim without stating what the Cause of the deviation from the Ideal is. Just that it IS. Not why. There is no basis for making any claims about issues with estimating.
Without the Why, NO corrective action can be taken to improve the performance of the project or of any of the four listed (and many other) processes, including the estimating process for the project. This is the fundamental basis of Root Cause Analysis. And it comes down to this ...
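As a sketch of what that cause assignment would enable, assume (hypothetically) that each overrun project had been tagged with one or more root-cause categories. Only then can the total overrun be partitioned by cause and corrective action aimed at the dominant one. The cause labels and numbers below are placeholders, not the actual PARCA categories or data.

```python
from collections import defaultdict

# Hypothetical overrun records: (project, overrun_months, assigned causes).
# Cause names are placeholders for whatever taxonomy is used (e.g., the four
# PARCA root causes); they are not quoted from the research.
overruns = [
    ("B", 5.5, ["estimating"]),
    ("D", 5.0, ["technical risk", "requirements change"]),
    ("E", 2.0, ["funding instability"]),
]

months_by_cause = defaultdict(float)
for _project, months, causes in overruns:
    share = months / len(causes)  # naive even split across assigned causes
    for cause in causes:
        months_by_cause[cause] += share

# Only with this assignment can we say whether estimating - or something else -
# is driving the overrun, and target corrective action accordingly.
for cause, months in sorted(months_by_cause.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {months:.1f} months of overrun")
```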
Without the Root Cause for the undesirable outcome, no corrective action can be taken. You're just treating the Symptom. Those conjecturing a Cause (say, Estimating is the smell of Dysfunction) have no basis on which to make that statement. It's a house built on sand. The same goes for the Cone of Uncertainty - also popularly misused, since the vertical scale is rarely calibrated for the specific domain; instead, some broad, uncalibrated range is provided for notional purposes.
As a notional concept, the Cone of Uncertainty is useful. As a practical concept, only when the vertical scale is calibrated (Cardinal versus Ordinal) can it be used to assess the uncertainty of the estimates during specific periods of the project. Using it without that calibration is another misuse of statistical data analysis.
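As an illustration of what calibrating that vertical scale could look like, here is a minimal sketch using made-up historical data: for completed projects in one domain, take the actual-to-estimate ratios observed when estimates were made at each phase, and derive the cone's upper and lower bands from empirical percentiles rather than from a generic, uncalibrated range.

```python
import statistics

# Hypothetical history: actual/estimate ratios for estimates made at each
# phase of completed projects in one domain. The numbers are made up.
history = {
    "concept":        [0.55, 0.8, 1.1, 1.6, 2.4, 3.1],
    "requirements":   [0.7, 0.9, 1.0, 1.3, 1.8],
    "design":         [0.85, 0.95, 1.05, 1.2, 1.4],
    "implementation": [0.9, 1.0, 1.05, 1.15],
}

def band(ratios, low_pct=10, high_pct=90):
    """Empirical low/high percentiles of the ratio - the calibrated cone width."""
    cuts = statistics.quantiles(ratios, n=100)  # 99 percentile cut points
    return cuts[low_pct - 1], cuts[high_pct - 1]

for phase, ratios in history.items():
    low, high = band(ratios)
    print(f"{phase:>14}: estimate x {low:.2f} .. x {high:.2f}")
```

With a calibrated band for each phase, the cone reflects this organization's demonstrated uncertainty rather than a notional one.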
There are databases that provide information needed to calibrate that chart as well. But that's another post.
For now, take care when seeing the first chart to ask: do we know the cause for the projects above the Ideal line? If not, then all you can say is that there are a lot of projects in our organization with the Symptom of cost overrun, but we have no way to know why. And therefore we have no way to know how to fix this symptom. We've observed it, but we can't fix it.
To start applying Root Cause Analysis, here is an introduction to the method we use on our Software Intensive Systems.