Todd Little posted a comment on "How To Lie With Statistics," sharing his observations on the chart contained in that original post.
As Todd mentions in his response, the Cone of Uncertainty chart comes from the original work of Barry Boehm, in the paper "Reducing Estimation Uncertainty with Continuous Assessment: Tracking the 'Cone of Uncertainty.'" In this paper Dr. Boehm speaks to the lack of continuous updating of the estimates made early in the program as the source of unfavorable cost and schedule outcomes:
As long as the projects are not re-assessed or the estimations not re-visited, the cones of uncertainty are not effectively reduced [1].
The Cone of Uncertainty is a notional example of how to increase the accuracy and precision of software development estimates with continuous reassessment. For programs in the federal space subject to FAR 34.2 and DFARS 34.201, reporting the Estimate to Complete (ETC) and Estimate at Completion (EAC) monthly is mandatory. This is rarely done in the commercial world, with the expected results shown in Todd's chart for his data and DeMarco's data.
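For reference, the arithmetic behind that monthly ETC/EAC update is the standard earned value relationship. A minimal sketch in Python, with hypothetical numbers rather than program data:

```python
# Minimal sketch of a monthly CPI-based ETC/EAC update (hypothetical numbers).
# BAC = Budget at Completion, EV = Earned Value (BCWP),
# AC = Actual Cost (ACWP), all in the same currency units ($k here).

def eac_etc(bac: float, ev: float, ac: float) -> tuple[float, float]:
    """Return (EAC, ETC) using the common CPI-based forecast."""
    cpi = ev / ac                    # Cost Performance Index
    eac = ac + (bac - ev) / cpi      # Estimate at Completion
    etc = eac - ac                   # Estimate to Complete
    return eac, etc

# Example month: $1,000k budget, $300k of work earned, $350k spent.
eac, etc = eac_etc(bac=1000.0, ev=300.0, ac=350.0)
print(f"EAC = ${eac:,.0f}k, ETC = ${etc:,.0f}k")   # EAC ~ $1,167k
```

The point is not the formula, it's the monthly feedback: each period's actuals revise the forecast of the final outcome.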
The core issue, from current Root Cause Analysis research at PARCA (http://www.acq.osd.mil/parca), where I have worked as a support contractor, is that many of the problems are poor estimates when the program was baselined and failure to update the ETC and EAC with credible information about risks and physical percent complete.
The data reported in Todd's original chart are the results of projects based on estimates that may or may not have been credible. So the analysis of the outcomes of the completed projects is Open Loop ...
... that is, the target estimates measured against the actual outcomes may or may not have been credible estimates to begin with. So showing project overages doesn't actually provide the information needed to correct this problem. The estimate may have been credible, but the execution failed to perform as planned.
With this Open Loop assessment it is difficult to determine any corrective actions. Todd's complete presentation, "Uncertainty Surrounding Cone of Uncertainty," speaks to some of the possible root causes of the mismatch between estimates and actuals. As Todd mentions in his response, this was not the purpose of his chart. Rather, I'd suspect, it was just to show the existence of this gap.
The difficulty, however, is that pointing out observations of problems, while useful to confirm a problem exists, does little to correct the underlying cause of that problem.
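To make the Open Loop point concrete, here is a minimal sketch, with hypothetical productivity numbers rather than field data, of the difference between a forecast that is never revisited and one that is re-forecast each month from cumulative actuals:

```python
# Open-loop vs. closed-loop forecasting, a minimal sketch with hypothetical
# numbers. The project was baselined at 100 units of work in 10 months
# (10 units/month), but actual productivity is 8 units/month.

planned_work, planned_rate = 100.0, 10.0   # the (possibly non-credible) baseline
actual_rate = 8.0                          # what execution actually delivers

done = 0.0
for month in range(1, 11):
    done += actual_rate
    open_loop = planned_work / planned_rate          # never updated: always 10 months
    observed_rate = done / month                     # feedback from actuals
    closed_loop = month + (planned_work - done) / observed_rate
    print(f"month {month:2d}: open-loop {open_loop:4.1f} mo, "
          f"closed-loop {closed_loop:4.1f} mo")

# With actuals fed back, the re-forecast lands on the true 12.5-month
# duration; the open-loop forecast never moves off the original 10 months.
```

Closing the loop doesn't make the original estimate better, it makes the forecast of the outcome credible early enough to act on.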
At a recent ICEAA conference in San Diego, Dr. Boehm and several others spoke about this estimating problem. Several books and papers addressing this issue were presented:
- Software Cost Estimation Metrics Manual, Bradford Clark and Raymond Madachy (Eds.)
- Probability Methods for Cost Uncertainty Analysis: A Systems Engineering Perspective, 2nd Edition, Paul R. Garvey, CRC Press.
Both of these resources, and many more, speak to the Root Causes of both the estimating problem and the programmatic issues of staying on plan.
This is the Core Problem That Has To Be Addressed
We need both good estimates and good execution to arrive as planned. There is plenty of evidence that we have an estimating problem. Conferences (ICEAA and AACE) speak to this, as do government and FFRDC organizations (search for Root Cause Analysis at PARCA, IDA, MITRE, RAND, and SEI).
But the execution side is also a Root Cause. Much research has been done on procedures and processes for Keeping the Program Green. For example, the work presented at ICEAA, "The Cure for Cost and Schedule Growth," addresses more possible Root Causes from our research.
While Todd's chart shows the problem, the community - the cost and schedule community - is still struggling with the corrective action. The chart is ½ the story. The other ½ is the poor performance on the execution side, IF we had a credible baseline to execute against.
To date both sides of the problem are unsolved, and therefore we have Open Loop Control, with neither the proper steering target nor the proper control of the system to steer toward that target. Without corrections to estimating, planning, scheduling, and execution, there is little hope of improving the probability of success in the software development domain.
Using Todd's chart from the full presentation, the core question that remains unanswered in many domains is:
How can we increase the credibility of the estimate to complete earlier in the program?
Meaning:
- In the feasibility stage, what is a credible estimate, and how can that estimate be improved as the program moves left to right?
- What are the measures of credibility? (One possible measure is sketched after this list.)
- How can these measures be informed as the project progresses?
- What are the physical processes to assure those estimates are increasing in accuracy and precision?
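As a sketch of one possible credibility measure, and only a sketch: recompute the completion forecast each period from observed performance and report it with a confidence band, which should narrow as the program moves left to right. The assumptions here (independent, roughly normal period-to-period productivity, hypothetical actuals) are illustrative, not a recommended model:

```python
# One possible credibility measure: the width of the confidence band around
# the completion forecast, recomputed each period from observed performance.
# Assumes independent, roughly normal period productivity; the actuals below
# are hypothetical.
import statistics

total_work = 100.0                           # units of work at baseline
observed = [7.5, 9.0, 8.2, 7.8, 8.6, 8.1]    # hypothetical units/month actuals

for n in range(2, len(observed) + 1):
    sample = observed[:n]
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / n ** 0.5    # standard error of the mean
    remaining = total_work - sum(sample)
    forecast = n + remaining / mean              # months to complete
    # +/- 2 SEM on the rate, propagated to the forecast as a rough band
    lo = n + remaining / (mean + 2 * sem)
    hi = n + remaining / (mean - 2 * sem)
    print(f"after month {n}: forecast {forecast:5.2f} mo, "
          f"band [{lo:5.2f}, {hi:5.2f}] mo, width {hi - lo:4.2f}")
```

As the sample of actuals grows, the band width shrinks, giving a quantitative answer to "is this estimate becoming more credible?" rather than an opinion.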
By the way, the term possible error comes from historical data. And like all How to Lie With Statistics charts, that historical data is self-selected: a specific domain, a classification of projects, and, most importantly, the maturity of the organization making the estimates and executing the program.
Much research has shown the maturity of the acquirer influences the accuracy and precision of the estimates. Our poster child is Stardust, with on-time, on-budget, working outcomes due to both the government and contractor Program Managers' maturity in managing in the presence of uncertainty, which is one of the sources of this material.
[1] Boehm, B., Software Engineering Economics, Prentice-Hall, 1981.