Pat Richard wrote the blog I wanted to. I've had a conversation of sorts with one of the "leaders" of the #NoEstimates movement, who talked in circles when asked: where is this method applicable outside your examples of a 5-week project and a bug cleanup on another project? The answer back was always "if you're serious, ask me better questions." This pseudo-Socratic method is tiring at best.
Pat's blog has some interesting ideas:
The main argument behind #NoEstimates is that estimating is hard and imprecise; some even say it is wasted time.
I'd suggest this is because estimating is not well developed in the domains where these people work. We provide estimates every month for software-intensive programs - software that has emergent requirements, and sometimes no requirements that can be validated. The INTEL business does this. The enemy writes the requirements, not the buyers of the system.
The precision argument is a red herring. The wrong, and mostly wrong-headed, assumption is that an estimate has to be precise. This is simply naive.
I personally believe that any estimation method, Agile or not, that assumes that past behavior is an indication of future behavior is very faulty.
From a deterministic point of view this is likely correct. From a probabilistic point of view, the past is the forecast of the future. The challenge is to determine the model of the past that can forecast the future with the degree of precision needed to make a decision. I can forecast the future toss of a die with ease: there is a 1-in-6 chance that a 4 will appear when I toss the die a single time. Toss two dice a bunch of times and I can build a probability distribution of the sums of the pairs. This is high school statistics.
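That dice claim is easy to check. A minimal simulation sketch - tossing two dice many times and comparing the empirical distribution of sums against the known theoretical one:

```python
import random
from collections import Counter

random.seed(42)

# Single die: each face has probability 1/6.
# Two dice: the sum follows a known triangular distribution over 2..12.
N = 100_000
sums = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(N))

# Empirical probabilities converge on the theoretical ones.
for s in range(2, 13):
    theoretical = (6 - abs(s - 7)) / 36
    empirical = sums[s] / N
    print(f"sum={s:2d}  theoretical={theoretical:.3f}  empirical={empirical:.3f}")
```

The past tosses (the simulation) forecast the future tosses with exactly the precision the model allows - which is the point being made about estimating in general.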
Now, estimating how long it will take to develop a piece of software requires a modeling process tailored to software development. There are many of these: COCOMO II, PRICE, and SEER come to mind. I just came off a seminar where SEI was speaking to the issues with estimating software-intensive defense programs; QUELCE is their approach. We do this all the time in the embedded-systems world. There is a reference design and a set of reference class forecasts from past projects. Tune the model to generate an estimate. Continue to tune the model to reveal hot spots that need more investigation.
There are issues with this approach of course, not the least of which is that the buyers don't like the number and ask for a better number. This is the source of many Nunn-McCurdy breaches for defense programs.
The bigger question, asked by one of the more obstinate posters on #NoEstimates, is: what is the value of estimating? My gut reaction is that this is a question coming from a programmer. It's not his money, he doesn't like doing estimates - probably because of some trauma from a past project where they held him to the estimate - and he firmly believes they provide no value.
Estimating the cost and schedule on projects where you are spending other people's money is called Governance. You may not like doing it; that poster's strategy was to find customers who didn't ask for an estimate. Nice work if you can get it.
We need to learn to estimate better. We need better estimating processes. We need to remove the political processes from estimating. All of these serve a simple value to the project: it's not our money, and we are obligated to spend the money provided by the owner in a responsible and credible manner.
That conversation went nowhere. It seems there is a refusal to engage at that level. Instead, the approach is like that of early XP (before Agile): well, you're simply asking too many questions; you need to try it and learn how I see the world. There was one poster who loved to use quotes from Yoda in response to our questions about XP. This sounds all too familiar.
Here's another good suggestion from Pat:
One of the points made resonated with me, and I believe it would get the nod from most experienced project managers: estimate in small chunks (my wording).
This has a statistical basis. A duration estimate for a long activity carries less confidence than one for a short activity. Put another way: if my confidence in the duration is a fixed percentage, the resulting absolute variance on a long activity will be greater than on a short one. 15% of 200 days (30 days) is much larger than 15% of 20 days (3 days).
Estimate small blocks of work
The result is a statistically better forecast.
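The arithmetic behind the small-chunks advice can be sketched directly. Assuming the chunk estimates are independent, their errors partially cancel, so the combined spread grows with the square root of the number of chunks rather than linearly:

```python
# A fixed relative uncertainty produces a larger absolute spread
# on long activities than on short ones.
rel_uncertainty = 0.15  # +/- 15% confidence band

long_task_days = 200
short_task_days = 20

long_spread = long_task_days * rel_uncertainty    # 30 days
short_spread = short_task_days * rel_uncertainty  # 3 days

# Ten 20-day chunks estimated independently beat one 200-day estimate:
# with independent errors, the combined standard deviation grows as
# sqrt(n) * per-chunk spread, not n * per-chunk spread.
n_chunks = 10
combined_spread = (n_chunks ** 0.5) * short_spread  # about 9.5 days vs 30 days
print(long_spread, short_spread, round(combined_spread, 1))
```

The independence assumption is doing real work here - correlated estimating errors (a systematic optimism bias, say) erode the benefit - but even partial independence narrows the forecast.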
At the end of the day we need to face up to an important problem in the project business: cost overruns, schedule overruns, and products or services that fail to meet their intended goals. But we also have to ask and answer a critical question - what is the value at risk? That is: what am I willing to invest to find out how much I might lose if my estimate is wrong? If I have no estimate of the cost, I can't even start to have that conversation.
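The value-at-risk question reduces to a one-line expected-loss calculation. All the figures below are hypothetical, chosen only to show the shape of the conversation:

```python
# Value at risk: what am I willing to invest in estimating,
# given what I might lose if the estimate is wrong?
# All numbers are hypothetical, for illustration only.
project_budget = 1_000_000   # dollars committed
prob_overrun = 0.40          # chance the estimate is materially wrong
overrun_fraction = 0.25      # typical size of the overrun when it happens

expected_loss = project_budget * prob_overrun * overrun_fraction

print(f"expected loss from a wrong estimate: ${expected_loss:,.0f}")
```

Spending a few percent of the budget on better estimates is cheap insurance against a six-figure expected loss. Without any estimate at all, none of the three inputs exists and the calculation cannot even be started.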
The #NoEstimate advocates have yet to answer a simple question - where is this idea applicable?
Until then, it's likely they will remain a small community talking themselves into thinking this is a really good idea while nobody is listening. On our large, complex, high-risk programs, we need better estimating processes. #NoEstimates is not the answer without first answering the question: where is this applicable?