Peter Kretzman has a nice post in his series on #NoEstimates. Peter and I share a skepticism of "making decisions in the absence of estimating the cost and impact" of those decisions. In Peter's current post there is a quote that is telling.
Let’s use Ron Jeffries’ statement as an example of this stance:
“Estimates are difficult. When requirements are vague — and it seems that they always are — then the best conceivable estimates would also be very vague. Accurate estimation becomes essentially impossible. Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before.”
One of my three half-time jobs is working in the space and defense program performance management domain, covering both embedded systems and enterprise IT systems. DOD is the largest buyer of ERP on the planet. In this domain we have a formal process for determining what went wrong. The department looking after this is called Performance Assessment and Root Cause Analysis (PARCA). PARCA provides root cause analysis for programs that have, as we would say, gone Nunn-McCurdy - that is, breached the cost-growth thresholds of the Nunn-McCurdy Act.
When you read the RAND and Institute for Defense Analyses (IDA) reports on Nunn-McCurdy breaches, requirements instability is consistently among the top five root causes.
It seems to me - in my narrow-minded program performance management view of the world - that unstable requirements as the source of vague estimates is so obvious a problem that it's startling the #NoEstimates advocates have completely ignored it. It's like the old saw:
Doctor, doctor, it hurts when I do this (make estimates in the presence of vague requirements). Then stop doing that!
The notion of Capabilities Based Planning is missing in many software organizations. Vague requirements are the natural outcome of not having a definitive understanding of what Capabilities the system must provide, in units of measure meaningful to the decision makers. These units are:
- Measures of Effectiveness - the operational measures of success, closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions. MOEs are stated in units meaningful to the buyer, focus on capabilities independent of any technical implementation, and are connected to mission success.
- Measures of Performance - characterize the physical or functional attributes of system operation, measured or estimated under specific conditions. Measures of Performance are the attributes that assure the system has the capability and capacity to perform, and they assess the system to assure it meets the design requirements needed to satisfy the MOEs.
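To make the MOE/MOP distinction concrete, here is a minimal sketch - the class names, fields, and example measures are all hypothetical illustrations, not any DOD standard. The idea is that requirements "have a home" when each MOE is traced to the MoPs that assure it; an MOE with no supporting MoP is exactly the kind of vague requirement that poisons estimates.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: names, fields, and the traceability check
# are illustrative assumptions, not a formal standard.

@dataclass
class MoP:
    name: str
    unit: str          # e.g., "orders/hour"
    target: float      # design requirement, measured under specific conditions

@dataclass
class MoE:
    name: str
    unit: str                                  # stated in units meaningful to the buyer
    mops: list = field(default_factory=list)   # MoPs that assure this MoE

def untraced_moes(moes):
    """Return MOEs with no supporting MoP - requirements with no home."""
    return [m.name for m in moes if not m.mops]

throughput = MoP("peak order throughput", "orders/hour", 5000.0)
fulfillment = MoE("orders fulfilled on time", "% per month", [throughput])
availability = MoE("system available to users", "% uptime")  # no MoP yet

print(untraced_moes([fulfillment, availability]))  # → ['system available to users']
```

A traceability check like this is trivially simple, but running it early surfaces the capabilities nobody has yet pinned to a measurable attribute.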
Without these units, requirements have no home, remain vague, and thereby create the root cause of bad estimates.
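The link between vague requirements and vague estimates can even be shown numerically: widen the bounds on a task's duration and the estimate interval widens with them. Here is a minimal Monte Carlo sketch using a triangular distribution - the durations and bounds are invented for illustration, not drawn from any program data.

```python
import random

random.seed(7)  # fixed seed so the illustration is repeatable

def estimate_interval(low, likely, high, n=10_000):
    """80% interval for a task duration modeled as triangular(low, likely, high)."""
    samples = sorted(random.triangular(low, high, likely) for _ in range(n))
    return samples[int(0.10 * n)], samples[int(0.90 * n)]

# Clearer requirements: narrow bounds around the most likely duration (weeks)
clear_lo, clear_hi = estimate_interval(4, 6, 9)
# Vague requirements: same most likely duration, much wider bounds
vague_lo, vague_hi = estimate_interval(2, 6, 20)

print(f"clear requirements: {clear_lo:.1f}..{clear_hi:.1f} weeks")
print(f"vague requirements: {vague_lo:.1f}..{vague_hi:.1f} weeks")
```

The most likely value is identical in both runs; only the spread of the inputs changes, and the 80% interval of the estimate spreads accordingly. The estimate isn't "wrong" - it faithfully reports the vagueness of the requirements fed into it.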
So what would a logical person do when working on a project that spends other people's money, sometimes lots of other people's money? Not estimate? Does that sound like a corrective action for the root cause of software projects' success shortfalls?
Not to me. It's the doctor, doctor this hurts paradigm. Until the root cause is determined and the corrective actions identified and applied, there can be no credible solution to the estimating problem. And there is a huge estimating problem in our domain - just read the Nunn-McCurdy reports from RAND and IDA (Google "Nunn-McCurdy RAND" or "Nunn-McCurdy IDA" to find them). Similar assessments of root causes can be found for enterprise IT from many sources.
The #NoEstimates advocates are attempting to solve the wrong problem with the wrong approach. They've yet to connect with the core process of writing software for money - the microeconomics of software development. Here's a starting point to address the root cause rather than the symptom. Fixing the symptoms does nothing in the end. It just spends money with no actionable outcomes. And that would be very counter to the principles of Agile.