The presentation "Quantifying the Impact of Agile Practices," given by Larry MacCherone at the RallyOn 2013 conference, presents some results on the impact of estimating. The chart below shows four estimating types, including No Estimates, the sample sizes for each type, and the components that make up each estimating type.

The *Software Development Performance Index* (SDPI) scale on the left ranges - by eyeball measurement - from 46 to 55.

The higher the number, the *better* the performance of the process. The presentation describes the components of the index in more detail.

But first another piece of information ...

Teams doing Full Scrum have 250% better Quality than teams doing No Estimating

**But are these differences meaningful statistically?**

Let's start with several reading assignments before answering.

* *How to Lie with Statistics*, Darrell Huff - this is a *must have* book for anyone working in an environment where numbers are used to make decisions.
* *Statistics: A Very Short Introduction*, David J. Hand, Oxford University Press - this is a short summary of all the other books on statistical processes sitting on my office shelf.
* *The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty*, Sam Savage - another *must have* book to learn that those tossing around numbers are likely unaware of the flaws in their logic.

**Let's start with the numbers from the chart**

Since the raw underlying data is not available, we can't compute a p-value from the population samples, but there is a simple question that can be asked.
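If the per-team SDPI scores behind the chart were available, a standard significance check would be a two-sample test between any pair of estimating types. Here's a minimal sketch of Welch's t-statistic using only the Python standard library; the sample values are invented for illustration, not MacCherone's actual data.

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and its degrees of freedom.

    Does not assume equal variances between the two groups.
    """
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    se2 = va / na + vb / nb               # squared standard error of the difference
    t = (ma - mb) / se2 ** 0.5
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical per-team SDPI scores (made up; the real data isn't published)
full_scrum = [55, 52, 57, 54, 53, 56]
no_estimates = [46, 49, 44, 47, 48, 45]

t, df = welch_t(full_scrum, no_estimates)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The t-statistic and degrees of freedom would then be compared against a t-distribution to get the p-value - which is exactly the step we can't take without the raw samples.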

Are there any statistically significant differences between the 4 SDPIs? If you look below at the *quick and dirty* assessment of the only data available, it looks like all 4 approaches are within single-digit variances of each other. Not that useful, actually.
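That *quick and dirty* check amounts to comparing the eyeballed index values directly. A short sketch, using hypothetical readings of the chart (only the 46-to-55 range comes from the post; the labels and intermediate values are assumptions for illustration):

```python
# Eyeballed SDPI values read off the chart -- the 46 and 55 endpoints come
# from the post; the other labels and values are hypothetical placeholders.
sdpi = {
    "Full Scrum": 55,
    "Partial Scrum": 52,       # hypothetical
    "Light Estimating": 49,    # hypothetical
    "No Estimates": 46,
}

# All pairwise differences between the four index values
names = list(sdpi)
diffs = {
    (a, b): abs(sdpi[a] - sdpi[b])
    for i, a in enumerate(names)
    for b in names[i + 1:]
}

largest = max(diffs.values())
print(f"largest pairwise difference: {largest}")
```

Every pairwise gap is a single digit on an index in the mid-50s - which is the point: without the sample variances behind each bar, those gaps tell us nothing about significance.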

So the critical question still remains:

How can you make a decision in the presence of uncertainty without estimating the impact of that decision?