In the estimating business, as with many things in project management, there is confusion about principles, practices, and processes - and sometimes even outright misinformation.
Here's an example used by #NoEstimates advocates. In a source dating from 1986, there is a sentence that says more or less what the slide below says:
A good estimation approach should provide estimates that are within 25% of the actual results, 75% of the time
The book this statement comes from is Conte, S. D., H. E. Dunsmore, and V. Y. Shen, Software Engineering Metrics and Models, Menlo Park, CA: The Benjamin/Cummings Publishing Company, Inc., 1986. The statement is on pages 172-175, open on my desk right now. Steve McConnell abstracted the original page content into those words.
And the words on pages 172 to 175 speak about the Magnitude of Relative Error (MRE) and the Mean Magnitude of Relative Error (MMRE). The term within 25% refers to the relative error - the estimated value compared to the actual value - being no more than 25% of that actual value.
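For the record, here are the definitions as I read them on those pages, written in my own notation (the symbols are mine, not the book's): y_i is the actual value for project i, \hat{y}_i is the estimate, over n projects.

\[
\mathrm{MRE}_i = \frac{\lvert y_i - \hat{y}_i \rvert}{y_i}, \qquad
\mathrm{MMRE} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{MRE}_i, \qquad
\mathrm{PRED}(q) = \frac{\lvert \{\, i : \mathrm{MRE}_i \le q \,\} \rvert}{n}
\]

The quoted criterion is then \(\mathrm{PRED}(0.25) \ge 0.75\): at least 75% of the estimates have a relative error of no more than 25% of the actual result.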
So if the actual value - after we are done - is $25,000, and the estimate fell within 25% of that actual - anywhere between $18,750 and $31,250 - then that's a good start.
In other words, if the error of our estimate is less than 25% of the actual outcome 75% of the time, we're doing pretty well early in the project - possibly on day one. In our NASA Software Intensive System of Systems business, we need an 80% confidence basis of estimate in the proposal - a 20% MRE. Conte, Dunsmore, and Shen's 1986 number is a 75% confidence level.
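To make that criterion concrete, here is a minimal sketch in Python. The (estimate, actual) pairs are invented for illustration, and the function names are mine, not from the book:

import statistics

# Invented (estimate, actual) cost pairs - illustration only.
projects = [
    (21_000, 25_000),
    (48_000, 45_000),
    (40_000, 80_000),
    (11_000, 12_000),
]

def mre(estimate: float, actual: float) -> float:
    """Magnitude of Relative Error: |actual - estimate| / actual."""
    return abs(actual - estimate) / actual

def pred(pairs, q: float = 0.25) -> float:
    """Fraction of projects whose MRE is at most q."""
    return sum(1 for est, act in pairs if mre(est, act) <= q) / len(pairs)

mmre = statistics.mean(mre(est, act) for est, act in projects)
print(f"MMRE       = {mmre:.3f}")
print(f"PRED(0.25) = {pred(projects):.2f}")  # >= 0.75 meets the 1986 criterion

Three of the four invented projects have an MRE at or below 0.25, so PRED(0.25) = 0.75 and this hypothetical sample just meets the Conte, Dunsmore, and Shen benchmark.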
We get these numbers using Monte Carlo Simulation tools and Method of Moments algorithms fed from very large historical databases - the holy grail of empirical forecasting - and applying analogous and parametric models for work that is new. None of that was available in 1986. Having punched holes in stiff cards by the millions for FORTRAN 77 missile defense radar software (Cobra Dane), I can say the estimating processes then were VERY crude compared to today. So using 31-year-old statements, like those used about the 34-year-old NATO Software Crisis, is just bad research. Another example of Doing Stupid Things on Purpose.
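For readers who have never seen the Monte Carlo side of this, here is a toy sketch in Python. The three-point task estimates and triangular distributions below are assumptions for illustration only; the real work draws on large historical databases with parametric and analogous models:

import random

# Toy Monte Carlo simulation of total project cost. Each task carries an
# invented three-point (low, most likely, high) estimate - not data from
# any real program.
tasks = [
    (8_000, 10_000, 15_000),
    (4_000, 6_000, 11_000),
    (7_000, 9_000, 16_000),
]

N = 100_000
totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in tasks)
    for _ in range(N)
)

# The 80th percentile of the simulated totals is a cost we would expect
# to underrun 80% of the time - an 80% confidence basis of estimate.
p80 = totals[int(0.80 * N)]
print(f"80% confidence total cost: ${p80:,.0f}")

Even this toy version shows why an estimate is a probability distribution, not a single number: the 80% confidence figure sits well above the sum of the most likely values.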
The statement used by #NoEstimates advocates does NOT mean that every estimate is within 25% of the actual. It means the Magnitude of Relative Error of the estimate is within 25%, 75% of the time. They would know that if they read the book and stopped echoing someone else's poorly translated mathematics.
This is a serious error in understanding the principles of estimating, and this error is repeated throughout the #NoEstimates community. It's time to put it right.
Please go buy Software Engineering Metrics and Models; it's cheap and packed full of the mathematics needed to actually perform credible estimating on software-intensive systems. And download the paper that followed, "A Software Metrics Survey." While you're at it, buy Estimating Software Intensive System of Systems, and you too can start debunking the #NoEstimates hoax that decisions can be made in the presence of uncertainty without estimating the impact of those decisions.
The only way this can happen is if there is no uncertainty, the future is like the past, there is no risk - reducible or irreducible - and nothing changes.