Ascertaining the success and applicability of any claim that falls outside the accepted practices of business, engineering, or governance processes requires careful testing of the idea through tangible evidence that it will actually do what it is conjectured to do.
The structure of this checklist is taken directly from Scientific American's essay on detecting scientific baloney, but it sure feels right for many of the outrageous claims found in today's software development community about approaches to estimating the cost, schedule, and likely outcomes of projects.
How reliable is the source of the claim?
Self-pronounced experts often appear credible at first glance, but when examined more closely, the facts and figures they cite are distorted, taken out of context, long out of date, mathematically wrong, missing a critical domain and context basis, or occasionally even fabricated. In many instances the data used to support the claims are weak or poorly formed, relying on surveys of friends or hearsay, small population samples, classroom experiments, or worse, anecdotal evidence in which the expert extends personal experience to a larger population.
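To make this concrete, here's a small simulation - the "population" of project overruns below is invented purely for illustration - showing how far off a handful of personally observed projects can be from the truth:

```python
# Sketch: how badly a tiny sample can misstate a population value.
# The "population" of project overruns is simulated; all numbers
# are illustrative, not real data.
import numpy as np

rng = np.random.default_rng(1)
population = rng.lognormal(mean=0.0, sigma=0.75, size=100_000)
true_mean = population.mean()

# Five "projects I've personally seen" -- drawn ten separate times
for _ in range(10):
    sample = rng.choice(population, size=5)
    print(f"sample mean: {sample.mean():5.2f}  (true mean: {true_mean:.2f})")
# The sample means scatter widely around the truth: personal
# experience with a handful of projects is weak evidence.
```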
Does this source often make similar claims?
Self-pronounced experts have a habit of going well beyond the facts and generalizing their claims to a larger population of problems or domains. Many proponents of ideas make claims that cannot be substantiated within a testable framework. This is the nature of early development in the engineering world. Of course, some great thinkers do frequently go beyond the data in their creative speculations. But when those creative thinkers are used to support the new claims, it is more likely that the hard work of testing the claim outside of personal experience hasn't been performed. The refrain goes: "They said agile wouldn't work, so my conjecture is getting the same criticism, and I'll be considered just like those guys when I'm proven right."
Have the claims been verified by another source?
Typically, self-pronounced experts make statements that are unverified, verified only by a source within their own private circle, or whose conclusions are based primarily on anecdotal information. We must ask: who is checking the claims, and who is checking the checkers? Outside verification is as crucial to good business decisions as it is to good methodology development.
How does the claim fit with what we know about how the world works?
Any specific claim must be placed into a larger context to see how it fits. When people claim that a specific method, approach, or technique results in significant benefits, dramatic changes in an outcome, and the like, they are usually not presenting the specific context for the application of their idea. Such a claim is typically not supported by quantitative statistics either. There may be qualitative data, but this is likely to be biased by the experimental method as well as by the underlying population of the sample statistics. In most cases to date, the sample size is minuscule compared to that needed to draw correlations, let alone causal connections, to the conjectured outcomes.
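To put a number on "minuscule," here is a sketch of a standard sample-size calculation using the usual normal approximation; the effect size, significance level, and power below are illustrative assumptions, not data from any actual study:

```python
# Minimal sketch: sample size per group needed for a two-sample
# comparison of means, via the standard normal approximation.
# Effect size, alpha, and power are illustrative assumptions.
from scipy.stats import norm

def required_n_per_group(effect_size, alpha=0.05, power=0.80):
    """n per group to detect a standardized effect (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A "modest" improvement (d = 0.3) needs roughly 175 projects per
# group; a handful of anecdotes cannot support such a claim.
print(round(required_n_per_group(0.3)))
```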
Has anyone gone out of their way to disprove the claim, or has only confirmatory evidence been sought?
This is the confirmation bias: the tendency to seek confirmatory evidence and to reject or ignore disconfirmatory evidence. The confirmation bias is powerful, pervasive, and almost impossible to avoid. It is why methods that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are critical. When self-selected communities treat external criticism as harassment, as "you're simply not getting it," or as "talking to those people is like talking to a box of rocks," the confirmation bias is in full force.
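One concrete way to attempt to falsify a claim, rather than collect confirmations of it, is a simple permutation test asking whether an observed improvement could be chance alone. The cycle-time numbers below are fabricated solely to show the mechanics:

```python
# Sketch of a falsification attempt: a permutation test asking
# whether an observed before/after difference could be chance.
# The cycle-time data are fabricated for illustration only.
import numpy as np

rng = np.random.default_rng(42)
before = np.array([12.0, 15.0, 11.0, 14.0, 13.0, 16.0])  # days
after = np.array([10.0, 13.0, 12.0, 11.0, 14.0, 9.0])    # days

observed = before.mean() - after.mean()
pooled = np.concatenate([before, after])

count = 0
trials = 10_000
for _ in range(trials):
    rng.shuffle(pooled)  # relabel the data at random
    diff = pooled[:6].mean() - pooled[6:].mean()
    if diff >= observed:
        count += 1

# A large p-value means the "improvement" is indistinguishable
# from noise -- the claim has not survived a serious test.
print(f"p-value: {count / trials:.3f}")
```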
Does the preponderance of evidence point to the claimant's conclusion or to a different one?
Evidence is the basis of all confirmation processes. The problem is that evidence alone is necessary but not sufficient. The evidence must somehow be "predicted" by the process, fit the process model, or otherwise participate in the process in a supportive manner.
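Bayes' rule captures this: evidence supports a claim only to the degree the claim predicts that evidence better than the alternatives do. A toy calculation, with invented probabilities:

```python
# Toy Bayes' rule calculation: evidence counts for a claim only
# insofar as the claim predicts it better than the alternative.
# All probabilities below are invented for illustration.
p_claim = 0.5                # prior: claim true
p_evidence_if_true = 0.6     # the claim predicts the observation weakly
p_evidence_if_false = 0.5    # the observation is nearly as likely
                             # if the claim is false
posterior = (p_evidence_if_true * p_claim) / (
    p_evidence_if_true * p_claim + p_evidence_if_false * (1 - p_claim)
)
# The posterior barely moves (0.5 -> ~0.55): evidence that fits
# many explanations is weak support for any one of them.
print(f"posterior: {posterior:.3f}")
```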
Is the claimant employing the accepted rules of reason and tools of research?
Unique and innovative ways of conducting research, processing data, and "conjecturing" about the results are not statistically sound. In almost every discipline there are accepted mechanisms for conducting research. One of the first courses taken in graduate school is quantitative methods for experiments; this course sets the ground rules for conducting research in the field.
Is the claimant providing an explanation for the observed phenomena or merely denying the existing explanation?
This is a classic debate strategy: criticize your opponent and never affirm what you believe, to avoid criticism. "Show us your data" is the starting point for engaging in a conversation about a speculative idea.
If the claimant proffers a new explanation, does it account for as many phenomena as the old explanation did?
This concept is usually lost on "innovative" claims. The need to explain previous results is mandatory. Without this bridge to past results, a new suggested approach has no foundation for acceptance.
Do the claimant's personal beliefs and biases drive the conclusions, or vice versa?
All claimants hold social, political, and ideological beliefs that could potentially slant their interpretations of the data, but how do those biases and beliefs affect their research in practice? Usually such biases and beliefs are rooted out during peer review, or the paper or book is rejected. In the absence of peer review - and self-publishing is popular these days - there is no external assessment of the ideas, and the author simply reinforces the confirmation bias.
So the next time you hear a suggestion that appears to violate the principles of business, economics, or even physics, think of these questions. Now let's apply them to the #NoEstimates suggestion that we can make decisions in the absence of estimates - that is, that we can make decisions about a future outcome without estimating the cost to achieve that outcome and the impact of that outcome.
The core question is: how can this conjecture be tested beyond the personal anecdotes of those proffering the notion that decisions can be made in the absence of estimates? Certainly those making the claim have no interest in performing that test. It is incumbent on those attempting to apply the notion to first test it for validity, applicability, and simple credibility.
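Such a test need not be expensive. A minimal Monte Carlo sketch - the task ranges below are invented for illustration - shows how little machinery even a coarse probabilistic estimate requires, which is the baseline any "decisions without estimates" approach would have to beat:

```python
# Minimal Monte Carlo duration estimate: sample each task from a
# three-point (triangular) range and sum. The task ranges are
# invented for illustration; they are not real project data.
import numpy as np

rng = np.random.default_rng(7)

# (low, most likely, high) durations in days -- hypothetical tasks
tasks = [(3, 5, 10), (8, 12, 20), (2, 4, 9), (5, 8, 15)]

trials = 100_000
totals = np.zeros(trials)
for low, mode, high in tasks:
    totals += rng.triangular(low, mode, high, size=trials)

# The 80th percentile is a defensible commitment; the naive sum of
# "most likely" values (29 days) would be exceeded most of the time.
for q in (50, 80, 95):
    print(f"P{q}: {np.percentile(totals, q):.1f} days")
```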
A final recommendation: watch Ken Schwaber's talk and review his slides for a model of evidence-based discussions around improving the business of software development. And read the book he gave away at the end of the talk, Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management.