Chris Chapman has a nice post addressing some of the issues of #NoEstimates. The post tries to explain why we should consider the approach of developing software without an estimate of the duration or cost. Let's look at these concepts in light of a software development project that is spending someone else's money - commercial money or government money.
Hopefully these questions are not a surprise to anyone writing software using other people's money. So let's see if I can work through the concepts that Chris has presented, without completely pissing off the angry voices of #NE. Here are Chris's direct quotes.
- It starts from a premise that you don't explicitly need estimates to deliver quality software if you are capable of developing and shipping into production small slices of functionality. Predictability of output emerges from the team's fast learning of the problem domain, which in turn comes from doing the work in small batches.
- If we build small pieces of functionality, we'll have a reference basis for knowing how long it takes to build similar pieces, or even how long it takes to build collections of small pieces. This is called Reference Class Forecasting (see the first sketch after this list).
- Certainly the process of building small pieces, getting them verified and into the hands of the users, is the best approach for all software projects. No one doubts that. Our complex world does this too. Simple worlds do it better only because they have fewer moving parts and fewer interfaces to manage.
- So the last sentence provides the answer to how much and how long IFF we know something about the project's scope.
- If not, we have no answer to how long and how much, and we'll just be spending our customer's money until we reach the end of the needed functionality, run out of time, or run out of money.
- The critical idea here is the construction of the predictability of outcomes. This is worth repeating. With the predictability of outcomes, we have the basis of estimating. If you choose not to estimate, then you're throwing away good information and the ability to be a hero when your customer asks the right questions.
- #NoEstimates teams actively measure their output to determine if they are slicing story features small enough so that they can be rapidly implemented in working software. To use a weak metaphor, it's like a baker learning to cut balls of dough from a larger blob so that they turn into uniform loaves of bread or buns. Over time, skill increases and the baker becomes a predictable, stable system.
- Outputs are the only measurables. DOD IMP/IMS and 5000.02 measure outputs, using Measures of Effectiveness (MoE), Measures of Performance (MoP), and Technical Performance Measures (TPM). This is then the basis of forecasting future performance from the past, using the reference class of this past performance.
- We can now make estimates to complete (ETC), estimates at completion (EAC), and the capacity for work - throughput - assuming there is a steady-state process in place. This is done in other domains by assessing these measures against upper and lower bounds and the defined compliance levels within those bounds. This is also the basis of Reference Class Forecasting.
- This builds on the long-observed improvements that are gained from applying Lean and systems thinking to knowledge work. In turn, this requires having an understanding of queuing theory, flow, constraints, batch sizes, and complexity thinking. All this said, however, it isn't inaccessible: it just takes some diligent, hard work. No magic or silver bullets, I'm afraid.
- For simple queuing, Little's Theorem produces an estimate of how long you will wait in a queue (see the second sketch after this list). This is a simple but effective way to estimate the performance of the system, since the result is not influenced by the arrival process distribution, the service distribution, the service order, or practically anything else.
- The flow and constraints provide us with the ability to forecast throughput and the resulting time for the input queue to be empty - the definition of done, assuming no rework.
- Oh by the way, the diligent, hard work is what the professional Program Planning and Controls staff does every single day on the programs we work. It's called being accountable for our customer's money.
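To make the reference class forecasting idea concrete, here is a minimal Python sketch - the first sketch promised in the list above - with invented cycle-time data, not anyone's prescribed method. The past observations of similar small pieces are the reference class; the percentiles are the calibrated answer to how long a similar piece takes.

```python
import statistics

# Hypothetical reference class: observed cycle times (days) for past
# stories of similar size in this domain. Invented numbers for illustration.
past_cycle_times = [3, 5, 4, 6, 3, 8, 5, 4, 7, 5, 6, 4]

# Calibrate the class: a typical value and an 80th-percentile upper bound.
median = statistics.median(past_cycle_times)
p80 = statistics.quantiles(past_cycle_times, n=5)[-1]  # 80th percentile

print(f"A similar piece: ~{median} days typical, "
      f"~{p80:.1f} days at the 80th percentile")
```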
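And the second sketch: Little's Theorem, as named in the list. L = λW, so W = L/λ - the average time in the system is the average number of items in it divided by the average arrival rate, with no dependence on the arrival or service distributions. The numbers below are made up for illustration.

```python
# Little's Law: L = lambda * W, so W = L / lambda.
# Holds for any stable queue, regardless of arrival or service
# distributions or the order of service.

def time_in_system(avg_items: float, arrival_rate: float) -> float:
    """Average time an item spends in the system: W = L / lambda."""
    return avg_items / arrival_rate

# Hypothetical numbers: a backlog averaging 30 stories, with 5 new
# stories arriving per week, means ~6 weeks from arrival to done.
print(time_in_system(avg_items=30, arrival_rate=5), "weeks")  # 6.0 weeks
```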
So Now For the Punch Line
Let's assume we work for a customer that has a governance process where budgeting for projects is part of everyday life. With the processes described above from Chris's #NE post we can easily answer: how much budget will we need to allocate for this project? And once the budget is authorized and allocated to the project, when can we expect to start returning value to those who funded it?
This is one of those WTF results. The #NE paradigm, as described in the post, is standard incremental development, on fine-grained boundaries, with sufficient reference class forecasting calibration to establish a basis for future estimates. Just like you'd find in any IMP/IMS, rolling wave, work package, Earned Value Management 0/100 Earned Value Technique (EVT) program we work for DOD, DOE, or NASA. WTF? This is how we already do things. We calibrate the capacity for work - within each software-intensive reference class, e.g. Avionics, Life Support, Communications, Rendezvous and Dock - then use those calibrated capacities, through a model, to construct the estimates for our future.
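As a sketch of that calibration - with hypothetical classes and invented numbers, not program data - group past performance by reference class, calibrate a capacity for each, and roll the remaining work up through those capacities:

```python
# Hypothetical calibrated capacities, in items per week, one per
# software-intensive reference class -- invented numbers for illustration.
capacity = {"Avionics": 4.0, "Life Support": 6.5, "Communications": 5.0}

# Remaining items in each class's queue.
remaining = {"Avionics": 48, "Life Support": 26, "Communications": 35}

# Each class is estimated against its own calibrated capacity,
# never against another class's past performance.
weeks = {cls: remaining[cls] / capacity[cls] for cls in capacity}
for cls, w in weeks.items():
    print(f"{cls}: {w:.1f} weeks")

print(f"Parallel teams: ~{max(weeks.values()):.1f} weeks; "
      f"one team in series: ~{sum(weeks.values()):.1f} weeks")
```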
You can't have a forecast to completion - the estimate of ETC or EAC - without knowing the underlying capacity for work (assuming no rework). Get this measurement and you're all set to forecast the future completion date and cost (assuming constant dollars) IFF you know the number of items in the queue. These items are of course the Stories in the queue, if you follow Vasco's advice. They can also be Function Points, SysML features, Interfaces, even SLOC from memory-constrained flight avionics Handel-C FPGAs.
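Here is a minimal sketch of that forecast under the stated assumptions: the queue depth is known, past weekly throughput is the calibrated capacity for work, dollars are constant, and there is no rework. Resampling past throughput - a simple Monte Carlo - produces an ETC in weeks, and a hypothetical burdened weekly rate turns that into an EAC-style cost range. All numbers are invented for illustration.

```python
import random
import statistics

random.seed(7)

# Assumptions, all hypothetical: 120 Stories remain in the queue, past
# weekly throughput samples are the calibrated capacity for work, and
# the team costs a constant $40,000 per week (constant dollars, no rework).
stories_remaining = 120
weekly_throughput = [6, 9, 7, 8, 5, 10, 7, 8, 6, 9]
cost_per_week = 40_000

def weeks_to_empty_queue(remaining: int, samples: list[int]) -> int:
    """Resample past throughput until the queue is empty."""
    weeks = 0
    while remaining > 0:
        remaining -= random.choice(samples)
        weeks += 1
    return weeks

# Monte Carlo: many simulated futures, each built only from past performance.
trials = [weeks_to_empty_queue(stories_remaining, weekly_throughput)
          for _ in range(10_000)]

deciles = statistics.quantiles(trials, n=10)
p50, p80 = deciles[4], deciles[7]  # 50th and 80th percentiles
print(f"ETC: ~{p50:.0f} weeks (50%), ~{p80:.0f} weeks (80%)")
print(f"EAC: ${p50 * cost_per_week:,.0f} to ${p80 * cost_per_week:,.0f}")
```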
In the End
This has been a tortuous journey, exacerbated by some who poorly defined the very purpose, using more platitudes than I've encountered in some time. One of these is the common platitude of simple-minded agile to deliver early and often. Which is only the case when the receiver of the software - from the queue - can actually accept the software, AND the software doesn't age while waiting to be consumed. While some may find this strange, in complex, interconnected systems like those found in ERP, embedded processors, and SW/HW integrated systems, this is common. The order of assembly is critical.
A much better approach, using exactly the same processes, is to deliver as planned. The plan states the need date for the software and the order in which the software is needed. The whole notion of priorities of features is the basis of Capabilities Based Planning and the systems engineering processes that are mandated by our procurement process.
But I can just hear a few voices saying if this is working for you, just move on. Which of course is complete BS, since every project, especially every software project, on the planet is troubled in some way. So it doesn't work for us, or for anyone. No one has a lock on the solution. Especially those without sufficient understanding that what they are saying is nearly identical to existing processes.
What has been will be again, what has been done will be done again; there is nothing new under the sun. - Ecclesiastes 1:9
- What has been - reference class forecasting for the capacity for work, based on actual performance of the work.
- What has been done will be done again - the approach described by Chris in the post and Vasco's YouTube, relabeled as #NoEstimates, is actually using Stories as the Basis of Estimate to calibrate the reference class.
- There is nothing new - of course it appears new, but had the needed homework been done - again guided by Chris's post and Vasco's YouTube - the vocal objections to questioning would not have resulted.