Continuing the previous Trouble with Charts post, here's a quick overview of what was missing from the CA report. The chart below is from Larry MacCherone's presentation, The Impact of Agile Quantified. Follow Larry's link back for other presentations on this topic.
I picked a sample to show how a point value is of little help when it is produced from a collection of samples - in this case, tens or thousands of them - without variance bands for the sample space.
Larry's charts include error bars for each column. I overlaid a distribution curve - assuming the shape to be symmetric and normal - on those bars, and marked what I thought would be the 2nd standard deviation of the curve. Here's what that means: the band from the mean out to 2 standard deviations on either side (the 2nd SD) contains 95.4% of all the samples in the sample space.
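To see where the 95.4% figure comes from, here's a minimal simulation; the mean and SD are made-up values, not numbers from Larry's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a symmetric, normal sample space (illustrative values only)
samples = rng.normal(loc=100.0, scale=15.0, size=100_000)

mean = samples.mean()
sd = samples.std()

# Fraction of samples falling within +/- 2 standard deviations of the mean
within_2sd = np.mean(np.abs(samples - mean) <= 2 * sd)
print(f"Fraction within 2 SD: {within_2sd:.3f}")  # ~0.954 for a normal distribution
```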
What this tells us, when we line up the five classes - in this case, iteration length - is that the performance index for Quality is pretty much the same for 4 of the 5. The 1-week iteration is lower, but the 2-, 3-, 4-, and 5-week iterations all fit inside the same range, more or less - that range being the spread from the mean out to the 2nd SD.
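Here's a sketch of that overlap check. The means and SDs are hypothetical stand-ins in the spirit of the chart, not values read off Larry's error bars:

```python
# Hypothetical (mean, SD) of the Quality index per iteration length
quality = {
    "1 week": (0.60, 0.04),
    "2 weeks": (0.82, 0.05),
    "3 weeks": (0.84, 0.06),
    "4 weeks": (0.83, 0.05),
    "5 weeks": (0.81, 0.06),
}

# Build the mean +/- 2 SD band for each class, then check pairwise overlap.
# With these numbers the 1-week band sits apart; the 2-5 week bands all overlap.
bands = {k: (m - 2 * s, m + 2 * s) for k, (m, s) in quality.items()}

names = list(bands)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        lo = max(bands[a][0], bands[b][0])
        hi = min(bands[a][1], bands[b][1])
        print(f"{a} vs {b}: {'overlap' if lo <= hi else 'distinct'}")
```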
The data for the study was drawn from actual projects using Rally. But as a user of Rally on a large Federal program, I've seen some issues with the data in general from hands-on experience that bring the credibility of the report into doubt:
- Our teams vary widely in their maturity - so their performance varies as well
- The POs vary widely in their maturity - so their ability to definitize the needed Features varies
- The funding profiles vary widely - so staffing availability varies as a function of time
- Pretty much everything else varies as well
A sample space of 60,000 projects may or may not be of any use unless the underlying drivers of the data are normalized in some way.
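One way to do that normalization is to stratify on a known driver before comparing means. A sketch using synthetic data and assumed column names (team_maturity, iteration_weeks, quality_index), not the real Rally schema:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 60_000

# Synthetic stand-in for the study data - columns and effect sizes are assumptions
df = pd.DataFrame({
    "iteration_weeks": rng.choice([1, 2, 3, 4, 5], size=n),
    "team_maturity": rng.choice(["low", "medium", "high"], size=n),
})
maturity_effect = df["team_maturity"].map({"low": -0.1, "medium": 0.0, "high": 0.1})
df["quality_index"] = 0.8 + maturity_effect + rng.normal(0, 0.05, size=n)

# Compare iteration lengths *within* each maturity stratum, not across the pooled
# data, so the maturity driver doesn't masquerade as an iteration-length effect
summary = (
    df.groupby(["team_maturity", "iteration_weeks"])["quality_index"]
      .agg(["mean", "std", "count"])
)
print(summary)
```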
Never use a single point number for any decision making without first determining the variance of that number. It may be that there is statistically no difference between any of the options you're considering.
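As a sketch of that check, here's a Welch's t-test on two options whose point values differ but whose variance swamps the gap (the numbers are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two options that look different as point values (0.82 vs 0.84),
# but the spread within each option is much larger than the gap
option_a = rng.normal(0.82, 0.10, size=50)
option_b = rng.normal(0.84, 0.10, size=50)

print(f"Point values: A={option_a.mean():.3f}, B={option_b.mean():.3f}")

# Welch's t-test: is the difference bigger than the variance can explain?
t_stat, p_value = stats.ttest_ind(option_a, option_b, equal_var=False)
print(f"p-value = {p_value:.3f}")  # typically > 0.05 here: no detectable difference
```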