Thanks to Mike Griffiths for his post "Lies, Damn Lies, and Statistics" about the Standish Chaos reports and the PMI's introduction of an Agile PMP certification.
So we're starting out again with the Standish numbers and all the inherent "bad statistics" they represent. Here's the familiar starting point for deconstructing those numbers. Adding to this "pile" is a figure from another report on "agile adoption."
I'm continually amazed (not really, since this has been going on for a while) at the naive and simple-minded approaches to reporting numbers.
- The choices for agile methods are the common ones.
- CMMI is not a method, never has been, never will be. Are the people at Forrester and Dr. Dobb's really that lame?
- ISO 9000 is not a method either, so the same question applies.
- What the F@#$ do they mean by Waterfall?
- Is "iterative development" agile? They don't say.
- "No method" got the largest choice by a 3 times margin over any agile method!
So everyone, go right now to your favorite bookstore and buy a paperback copy of Darrell Huff's How to Lie With Statistics. In it you'll see how Forrester and Dr. Dobb's committed several major - meaning stupid - blunders in their very weak attempt to make a point. Yes, this annoys me beyond rational reason, since it is a complete failure to recognize even rudimentary statistical sampling concepts.
- No sample size - it could be that all the samples are statistically the same; there's no way to tell (see the sketch after this list).
- Aggregating counts of independent items until the sum makes their point.
- Mixing apples and Corgis and calling it a survey is the best way to confuse everyone into buying your message.
- Picking ANY method might increase performance - there's no way to tell, since the authors clearly have a planned message.
- No baseline statistical analysis - analysis of variance - so you can make up anything you want around the numbers, and the uninformed (I wanted to use the word ignorant) reader will believe you.
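To make that first bullet concrete, here's a minimal sketch in Python with entirely made-up numbers - neither survey publishes its sample size, so everything here is an assumption for illustration. The point: the same headline gap between two methods is pure noise at one sample size and highly significant at another, and without n the reader can't tell which world they're in.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: is the gap between two reported
    'success' rates bigger than sampling noise alone would produce?"""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (erf form)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up numbers: "agile" reports ~45% success, "waterfall" ~39%.
for n in (50, 2000):  # same gap, two different ASSUMED sample sizes
    z, p = two_proportion_z(round(0.45 * n), n, round(0.39 * n), n)
    print(f"n={n} per group: z={z:.2f}, p={p:.3f}")
```

The same six-point gap gives p around 0.69 at n = 50 per group and p under 0.001 at n = 2,000 per group. A survey that omits n has told you nothing either way.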
Make the madness stop
So here's the killer question, not answered by the survey or by Mike. With a known year-on-year variance for project success or failure, is any improvement, or lack thereof, within the normal statistical variance of the sample space? Without knowing the sample space, the sample size, or even the sampling method, there is ABSOLUTELY no way to derive any useful information whatsoever from the Standish report, the Forrester / Dr. Dobb's survey, or any projection of project improvement from the PMI initiative. These reports, and the decision making based on them, are just - well, how should I say this - complete nonsense.
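Here's that killer question in miniature - a toy Python check with made-up rates and an assumed per-year sample size, since the reports publish neither. If every year-on-year value sits inside the plus-or-minus two standard-error band around the mean, the "trend" is indistinguishable from sampling noise.

```python
import math

# Hypothetical Chaos-style "success rate" series, one value per year.
rates = [0.29, 0.32, 0.35, 0.32, 0.37]  # made-up percentages
assumed_n = 365                          # ASSUMED projects sampled per year

mean = sum(rates) / len(rates)
# Standard error of a single year's proportion at the assumed sample size
se = math.sqrt(mean * (1 - mean) / assumed_n)
print(f"mean rate = {mean:.3f}, +/-2*SE noise band = +/-{2 * se:.3f}")

for year, rate in enumerate(rates, start=1):
    verdict = "within noise" if abs(rate - mean) <= 2 * se else "outside noise"
    print(f"year {year}: {rate:.2f} -> {verdict}")
```

With these illustrative numbers, every single year lands inside the noise band - an eight-point "improvement" over five years that no competent analyst would call a trend. That's the analysis these reports would have to show, and don't.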
If this is as good as it's going to get in the attempts at "increasing the probability of project success," I'll repeat Eric's quote from South Park - "da F@#$". I'll continue to use my favorite word around these types of charts and reports (not Mike's, but Forrester's and Dr. Dobb's) - LAME. This work would get a D in a high school statistics class. And that would be generous.
If you're on the agile side of the conversation, please don't use this approach. Tell PMI to adhere to its ethics statement and stop encouraging false data. If you're on the side where the specific software development method has an impact but is not the real problem, shout as loud as you can - Make It Stop.
The profession of software development, and the management of software development, is NEVER going to improve if we don't stop falling for these distorted - did I mention LAME - approaches to improving our lot in life. I know Bill Nichols will be chiming in soon, along with Kailash Awati, John Goodpasture, Pat Richard, Ron Rosenhead, Dan Ward, and other virtual colleagues who earn their living "by the numbers." To all those "agile thought leaders" who make use of this crap: please don't encourage Dr. Dobb's or anyone else to continue.