Vasco Duarte has a nice presentation about his notion of No Estimates. It's clear and concise and answers the mail on what the heck No Estimates is all about. The answer is - there are good ideas for flow-based projects where work chunks are similarly sized and arrival rates equal exit rates for the service provider (the development engine). This is the basis of Little's Law (link below). Can't go wrong here. This, by the way, is how many processes in many domains work. Work Packages with internal activities performed in the order needed to produce outcomes, planned and executed by the Work Package team, with the Work Packages on baseline for a Rolling Wave of 6-9 months, and each Work Package crossing only one accounting period. This of course is not the domain Vasco works in - not sure what that is - but much shorter cycles can be found in many places where agile processes are also found.
Everything in the talk is more or less viable in many domains - decomposing the work into same-sized chunks of work, putting this work in a queue, and servicing the workload in a steady manner. Assuming the capacity for work is constant, AND the number of arrivals on the queue is the same as the number leaving the queue, then Little's Law holds and you can forecast how long it will take to empty the queue at any point in time.
Notice you can ESTIMATE how long it will take to finish the work in the queue, knowing the length of the queue, the arrival rate, and the exit rate. Some in the #NE community assert they only use Little's Law to forecast throughput, but they're leaving 2 of the 3 attributes off the table. Maybe because if they used LL to forecast WIP, exit rates, and variances in service times, they'd be doing estimating, and that's not allowed in the No Estimates community. Smiley face goes here!
Little's Law says:
In a queuing system consisting of discrete objects (stories), objects arrive at some rate to the system. The objects form a queue (the backlog), receive service (development of the stories), and exit (when the story is complete and 100% working).
Little's Law says that under steady state conditions, the average number of objects in a queuing system equals the average rate at which objects leave the system multiplied by the average time each object spends in the system.
This is one of those obvious statements that can be used to estimate how long it takes to do something.
If we know the service rate, which Vasco shows later, and the arrival rate, which may or may not be under our control, we can know how much Work In Progress there is. And we can forecast how long it will take to complete all the objects waiting in the queue.
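As a sketch, this forecast is just Little's Law rearranged - time in system = queue length / throughput. The numbers below are illustrative only, not from Vasco's talk:

```python
# Little's Law: L = lambda x W (items in system = throughput x time in system).
# Rearranged, W = L / lambda gives an ESTIMATE of the time to drain the queue.

def weeks_to_complete(queue_length, throughput_per_week):
    """Estimated weeks to finish all items now in the queue, assuming steady
    state: similar-sized items and arrival rate equal to exit rate."""
    return queue_length / throughput_per_week

# 60 stories in the backlog, and the team completes 5 stories per week
print(weeks_to_complete(60, 5))  # 12.0 weeks
```

Note the word "Estimated" in the docstring - that is all the forecast is.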
The Statistical Process Control notion mentioned in the talk is, by its name, statistical in nature, meaning the arrival rate and the exit rate (as a result of the work being done) each have a probability distribution. As well, SPC assumes - and this must be the case - that a conforming outcome is present. That is, no rework. Or any rework goes back on the queue, and since it is likely to be of a different size, it will spoil the uniformity. Agile does this well, since working software is one of the conditions for success. But of course agile doesn't speak to the impact of rework from failed quality, since that work simply goes back on the queue as another story. This dilutes the performance measures and is one of the reasons pure agile is not well matched with Earned Value based performance management processes.
With Little's Law, the structure is laid out with similar-sized work, steady arrivals, and steady departures. My Six Sigma course workbook says one of the uses is:
Estimating Waiting Times: If we are in a grocery queue behind 10 persons and estimate that the clerk is taking around 5 minutes per customer, we can calculate that it will take us 50 minutes (10 persons x 5 minutes/person) to start service. This is essentially Little's Law. We take the number of persons in the queue (10) as the "inventory." The inverse of the average time per customer (1/5 customers/minute) provides us the rate of service, or the throughput. Finally, we obtain the waiting time as the number of persons in the queue divided by the processing rate (10/(1/5) = 50 minutes).
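The grocery-queue arithmetic, written out with the same numbers as the workbook example:

```python
# 10 people ahead of us; the clerk takes about 5 minutes per customer.
people_ahead = 10
minutes_per_customer = 5

# Throughput is the inverse of the service time: 1/5 customers per minute.
throughput = 1 / minutes_per_customer

# Little's Law: waiting time = "inventory" / throughput = 10 / (1/5)
wait_minutes = people_ahead / throughput
print(wait_minutes)  # 50.0 minutes
```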
Notice the term estimating - again. The math Vasco describes is used for estimating waiting times, throughputs, and departure rates. Here's the first example of redefining a term in common use and then saying we're not estimating in No Estimates.
The core concepts of the talk are sound, but there are other gaps at the second level:
- Measuring the performance of the service in the queuing system (the developers) and the arrival rate of similar-sized work packages ignores the question of how long the project will take IF we don't know that the items in the queue are the "all in" requirements.
- If, and this is a big if, you can break down the needed capabilities - I'll avoid calling them requirements, since stories are similar to the elements of the Concept of Operations and Use Cases - then the Little's Law queuing system will be a good forecast - read estimate - of the total time needed to complete.
- There is mention of the Standish report, but a failure to connect it to the root cause of the failures. It may or may not have been bad estimates. Since Standish uses self-selected samples, it's statistically weak, as has been discussed many times. But without the root cause, Vasco's statement is just speculation. The notion of self-selected, uncontrolled samples is simply bad statistics, even while Vasco likes to invoke statistics when referring to Deming. It's the Root Cause Analysis that is missing from most populist descriptions of the problem. It is just assumed that the problem can be solved by the solution in hand. In the absence of an actual root cause analysis, your hammer is looking for a nail to hit. This appears to be the role of #NoEstimates: estimate making is poor, so let's not make estimates.
- Black swans are misrepresented. BS's are NOT statistical variances, but unknown and likely unknowable probabilities of occurrence. If we have the statistics, we don't have a BS. This is a common misunderstanding of statistical processes found in several communities. BTW, BS's are very common in Australia.
- When Vasco states "In my view it's quite impossible to get projects on time using estimates," he may want to look for evidence of how those of us who do show up on time, on budget, with working software deal with the uncertainties and variances that are natural and event based. But that's much like Standish, using self-selected anecdotal examples to make a point. In Standish's case, to sell consulting. In Vasco's case, to sell an idea in the absence of evidence outside his own experience.
- Rational Unified is NOT waterfall. Another misstatement. Self-selected projects are again simply weak statistics. This is an example of why estimating is done poorly and what Vasco rails against. It's called selection bias. It's in play in talks and in estimates of work effort. It's everywhere. It's a natural human process and requires serious effort to avoid. Anchoring and Adjustment is the formal description.
- Just as an aside, the mention of Barry Boehm seems a bit off base. Barry is well known in aerospace, defense, and academic circles. That Vasco hasn't heard of him means he doesn't work in those domains. Barry was in Building O6 at TRW at the same time I was. I didn't work with him - I was a lowly software engineer on a radar system - but his reputation echoed in the hallways. Barry moved to SEI at the same time my boss did, when A&D crashed in the 80's. Barry is currently at USC (my alma mater for graduate school - Systems Management), and here's a seminal paper possibly useful for the discussion of estimating in the presence of uncertainty: "Reducing Estimating Uncertainty with Continuous Assessment," Pongtip Aroonvatanaporn, Chatchai Sinthop, Barry Boehm, Center for Systems and Software Engineering, University of Southern California, Los Angeles, CA 90089.
- The factual accuracy of the Columbus story of "challenging a group to stand an egg on its end" is in serious doubt. Fifteen years earlier a similar story was used to describe the construction of the large dome at Santa Maria del Fiore by Filippo Brunelleschi, as told by Martin Gardner (May-June 1996), "The great egg-balancing mystery," Skeptical Inquirer 20(3). By the way, the Skeptical Inquirer is a good place to start when using any popular notion of how something works in public presentations or writing. We are so conditioned to believe what we have learned, when in fact it is not true or is a version of the truth that has become truth.
- The flat earth analogy is also misinformed. It is a modern misconception that the prevailing cosmological view during the Middle Ages saw the Earth as flat instead of spherical. According to Stephen Jay Gould, "there never was a period of 'flat earth darkness' among scholars (regardless of how the public at large may have conceptualized our planet both then and now). Greek knowledge of sphericity never faded, and all major medieval scholars accepted the Earth's roundness as an established fact of cosmology." This use of analogies - wrong analogies - undermines the actual usefulness of Little's Law based schedule estimating, which is used in many places. Vasco needs to do a bit more googling before basing the message on this notion.
- The software crisis quote was from a 1968 NATO report used in the early days of computer science. Edsger Dijkstra used the term in his 1972 ACM Turing Award Lecture. At that time there were no development tools, no real high-level languages, no formal processes, and a very weak understanding of the connections between requirements, size, complexity, and poor outcomes. It's like using 45-year-old behaviour in the public health domain - smoking, e.g. - as the basis of today's policy.
So while these last few quotes and stories serve as a reminder of the failings of human thought processes, they are likely not actually true. Like David Anderson's (Kanban) statement, used in the same manner, about the frog sitting in slowly heating water. The frog will jump out when the water reaches a temperature too hot. He will NOT get boiled. I attribute these approaches to the well-placed need to tell a story everyone can relate to. But the story rarely has any basis in fact.
This is harsh criticism and likely very annoying to those using this approach to make a point, but if you're going to give advice on how to spend other people's money, at least get the underlying basis of the reasons straight.
So what should we think of #NoEstimates as explained here?
- Understand a bit about queuing theory, Little's Law and beyond. If the size of the work has a variance larger than can be absorbed by the Service, the work will start to become late.
- If the Service has a variance larger than the arrival rate variance of the similar-sized work, the work will start to slow down and its delivery will be delayed.
- So Little's Law is very dependent on knowing, with some level of confidence, that the arrival rate, size of work, and capacity for work are capable of sustaining the stability needed in Vasco's talk.
- Knowing this requires that we can control the arrival rate and size, and that we can control the throughput of the Service. And if not, we must be able to estimate the variances to assure the system will remain stable under changes in these three variables.
- The numbers used late in the talk don't have variances on them. It may well be that those variances are within the normal deviation for work of this type and are simply the aleatory uncertainty of the process. This means you can't do anything about it, and you need schedule and cost margin to protect the delivery date. This is completely lost on many of the software intensive programs we work, especially in ERP. This is why a little bit of math is dangerous.
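A minimal Monte Carlo sketch of that last point, with hypothetical numbers: the same mean throughput with different variance produces different completion-date confidence, which is exactly the margin question the point forecast hides.

```python
import random

def completion_percentiles(backlog, mean_rate, sd, trials=5000, seed=1):
    """Simulate draining `backlog` items at a weekly throughput drawn from a
    normal distribution (floored at a small positive value), and return the
    P50 and P80 number of weeks to finish."""
    random.seed(seed)
    results = []
    for _ in range(trials):
        remaining, weeks = backlog, 0
        while remaining > 0:
            remaining -= max(0.1, random.gauss(mean_rate, sd))
            weeks += 1
        results.append(weeks)
    results.sort()
    return results[len(results) // 2], results[int(trials * 0.8)]

# No variance: the point forecast is exact - P50 == P80 == 12 weeks.
print(completion_percentiles(60, 5, 0))
# Same mean, high variance: the P80 date typically needs margin over the P50.
print(completion_percentiles(60, 5, 3))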
WE NEED TO ESTIMATE THESE VARIABLES AND THEIR VARIANCES TO HAVE CONFIDENCE THE SYSTEM WILL BE STABLE
Historical data is fine for making the determination of the variables, but then we need to control the arrival rate, size, and capacity for work.
So in the end Vasco's talk is informative, useful, and can be applied in several domains - with the assumptions that arrival rate, size, and work capacity can be defined and the variances known. Good work. But is that the same as Not Making Estimates? Anyone working in the process control business would say either Vasco is redefining the term estimate, or he is pushing the estimating process back upstream to assure the work consumed by the process meets the constraints of Little's Law.
Here's the Trouble with These Approaches
They're pseudo-mathematical - voodoo math, some might say. They're based on weak if not wrong analogies. They ignore the conditions under which they must perform. And most critically, they ignore the mandatory need to have some not-to-exceed estimate of the funding for the project before it starts. Little's Law based planning systems work very well on production lines, or on development processes that resemble production lines - PayPal, for example, or maintenance systems.
Since there is never a domain, context, or discussion of project authorization processes based on committed funding in governance, we can't really make a determination of the applicability outside of personal anecdotes.
How to move beyond the limitation of No Domain, No Context
The problem of poor software development performance is for the most part a bad management problem. A Dilbert boss who has failed to understand the five core tenets of successful projects: (1) what does done look like, (2) do we know how to get to done, (3) do we have enough of what we need to reach done, (4) what impediments will we encounter along the way, and (5) how can we measure physical percent complete.
If we don't know what done looks like, we need to start with something we do know about. This is incremental and iterative development of anything. In software it can be called agile. In building spacecraft (which are software intensive) it is called increasing the maturity of the deliverables through progressive elaboration. In construction there is Lean Construction. In pharma there is progressive development of the efficacy of the drug. Etc., etc.
If we don't know what our past performance was, either through reference class forecasting or actual past performance, we can't calibrate the needed variables for the queuing system. If we don't have some notion of the underlying statistical processes and the resulting probability distributions, we can't have insight into the behaviour of the system and will be surprised by the result. We'll call that a Black Swan, but in fact we were just too lazy to go do our homework.
But we must remember that for the approach suggested by Vasco, arrival rate, work size, and capacity for work must be stable, and the arrival rate cannot exceed the exit capacity of the Service. When that is the case, the estimate at completion is available from the system.
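Under those stability conditions, the "estimate at completion" the system yields is just observed throughput applied to the remaining queue. A sketch, with hypothetical numbers:

```python
def estimate_at_completion(weekly_completions, remaining_items):
    """Forecast remaining weeks from the observed average throughput -
    valid only while arrival rate, item size, and capacity stay stable."""
    avg_throughput = sum(weekly_completions) / len(weekly_completions)
    return remaining_items / avg_throughput

history = [4, 5, 6, 5, 5]  # stories completed in each of the last 5 weeks
print(estimate_at_completion(history, 40))  # 8.0 weeks at an average of 5/week
```

If the history stops being stable - arrival surges, item sizes drift - the average throughput no longer predicts anything, which is the whole point of the stability caveat.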
So if #NoEstimates is Vasco's description in the talk, this is very understandable. But estimating the cost before starting is still a need in many domains, and estimating the cost at completion requires the stability of the queue and service system. Now we can start to find domains and contexts where this is applicable.