There's the normal wave of uninformed opinions floating around again. I was reminded to look back at an essay that explains how to sort out claims that would be considered baloney.
How reliable is the source of the claim?
Self-proclaimed experts often appear credible at first glance, but when examined more closely, the facts and figures they cite are distorted, taken out of context, or occasionally even fabricated. In many instances the statistics, numbers, or supposed facts used to support the claims are weak or poorly formed. Relying on surveys, small population samples, classroom experiments, or, worse, anecdotal evidence, the expert extends personal experience to a larger population.
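To make the weakness concrete, here is a minimal sketch (plain Python, with a hypothetical 60% observed rate and made-up sample sizes) showing how the 95% margin of error shrinks as a survey grows, using the normal approximation for a proportion:

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an observed proportion (normal approximation)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical: 60% of respondents say the method "worked for them".
p_hat = 0.60
for n in (12, 50, 400, 1500):  # classroom experiment up to a real survey
    moe = margin_of_error(p_hat, n)
    print(f"n={n:5d}: 60% +/- {moe:.1%} -> [{p_hat - moe:.1%}, {p_hat + moe:.1%}]")
```

With a dozen respondents the interval runs from roughly 32% to 88%, which is compatible with almost any conclusion. That is the statistical meaning of extending personal experience to a larger population.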
Does this source often make similar claims?
Self-proclaimed experts have a habit of going well beyond the facts, generalizing the claim to a larger population of problems or domains. Many proponents of a method make claims that cannot be substantiated within a broader framework. This is the nature of early development in the method world. Of course, some great thinkers do frequently go beyond the data in their creative speculations.
Have the claims been verified by other sources?
Typically self-proclaimed experts make statements that are unverified, or verified only by a source within their own belief circle, or whose conclusions are based primarily on anecdotal information. We must ask: who is checking the claims, and who is checking the checkers? Outside verification is as crucial to good methodology development as it is to good science.
How does the claim fit with how we know the world works?
Any specific claim must be placed into a larger context to see how it fits. When people claim that a specific method results in significant benefits, dramatic changes in an outcome, and the like, they are usually not presenting the specific context for the application of their methodology. Such a claim is typically not supported by quantitative statistics either. There may be qualitative data, but this is likely to be biased by the experimental method as well as by the underlying population of the sample.
Has anyone gone out of the way to disprove the claim, or has only supportive evidence been sought?
This is the confirmation bias: the tendency to seek confirmatory evidence and to reject or ignore disconfirmatory evidence. The confirmation bias is powerful, pervasive, and almost impossible to avoid. It is why the methods of science that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are critical.
Does the preponderance of evidence point to the claimant's conclusion or to a different one?
Evidence is the basis of all scientific theory confirmation. The problem is that evidence alone is necessary but not sufficient. The evidence must somehow be "predicted" by the theory, fit the theoretical model, or otherwise participate in the theory in a supportive manner.
Is the claimant employing the accepted rules of reason and tools of research, or have these been abandoned in favor of others that lead to the desired conclusion?
Unique and innovative ways of conducting research, processing data, and "conjecturing" about the results are not scientifically sound. In almost every discipline there are accepted mechanisms for conducting research. One of the first courses taken in graduate school is quantitative methods. This course sets the ground rules for conducting research in the field.
Is the claimant providing an explanation for the observed phenomena or merely denying the existing explanation?
This is a classic debate strategy: criticize your opponent and never affirm what you believe, so your own position cannot be criticized.
If the claimant proffers a new explanation, does it account for as many phenomena as the old explanation did?
This concept is usually lost on "innovative" researchers. The need to explain previous results is mandatory. Without this bridge to past results, a new theory has no foundation for acceptance.
Do the claimant's personal beliefs and biases drive the conclusions, or vice versa?
All claimants hold social, political, and ideological beliefs that could potentially slant their interpretations of the data, but how do those biases and beliefs affect their research in practice? Usually such biases and beliefs are rooted out during peer review, or the paper or book is rejected.
These guidelines come from Michael Shermer's "Baloney Detection" essay in the November 2001 Scientific American. In the current domain and time, such baloney claims include:
- I'm doing earned value, but instead of a system that could be validated by the DCMA, I've got my own way of doing it in Excel.
- Lots of the agile baloney claims have disappeared, but my favorite is "earned value is worthless for IT projects."
- You can manage projects using social media like FB and Twitter; you don't need all those fancy and annoying charts and tables.
- And of course my favorite: the Standish Survey. No sample size numbers, no non-respondent numbers, no variances within the sample bins. In many cases such "variances" show large overlaps (see the sketch after this list).
- And another favorite: how to lie with statistics.
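As a concrete illustration of that overlap problem, here is a sketch with made-up rates and assumed sample sizes (the actual n values are exactly what the survey does not publish), showing how two bins that look different can have overlapping 95% confidence intervals:

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96):
    """95% confidence interval for a proportion (normal approximation)."""
    moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - moe, p_hat + moe

# Hypothetical bins: reported "challenged project" rates with assumed n.
bins = {"bin A": (0.53, 90), "bin B": (0.46, 85)}  # (rate, assumed sample size)

intervals = {name: proportion_ci(p, n) for name, (p, n) in bins.items()}
for name, (lo, hi) in intervals.items():
    print(f"{name}: [{lo:.1%}, {hi:.1%}]")

(lo_a, hi_a), (lo_b, hi_b) = intervals.values()
overlap = min(hi_a, hi_b) - max(lo_a, lo_b)
print(f"intervals overlap by {overlap:.1%}" if overlap > 0 else "no overlap")
```

With these assumed numbers the intervals overlap by about 14 points, so the 7-point "difference" between the bins is well inside the noise. Without the published sample sizes, a reader cannot even run this check.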