Evaluating the evidence of good project management processes is critically important to developers, managers, and those paying for the software.
These 12 points help separate principles, practices, and processes from pseudoscience, bad mathematics, willful ignorance of management principles and governance, and outright hoaxes.
1. Sensational Headlines
|
7. Unrepresentative Samples
|
 |
Headlines are designed to entice readers into clicking. When we read "estimates are the smell of dysfunction," most of us click through to find out why.
Clickbait headlines are a common #NoEstimates approach to attracting readers.
Don't fall for the Bait
|
 |
Selecting a sample that represents the larger population is needed to draw valid conclusions.
If the sample differs from the population as a whole, the conclusions may well differ too. Personal anecdotes are not statistical samples.
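To illustrate, here is a minimal Python sketch with invented numbers: a "population" of task durations in which one vocal team is unusually fast. A convenience sample of that team's anecdotes misleads, while a random sample tracks the population.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: task durations (days) across an organization.
# Most teams finish in ~5 days, but one vocal team finishes in ~2.
population = [random.gauss(5, 1) for _ in range(950)] + \
             [random.gauss(2, 0.5) for _ in range(50)]

# Convenience sample: only the vocal team's anecdotes.
anecdotes = population[-20:]

# Random sample drawn from the whole population.
random_sample = random.sample(population, 100)

print(f"population mean:    {statistics.mean(population):.2f}")
print(f"anecdote mean:      {statistics.mean(anecdotes):.2f}")
print(f"random sample mean: {statistics.mean(random_sample):.2f}")
```

The anecdote mean lands near 2 days while the population and random-sample means sit near 4.85; a conclusion drawn from the anecdotes alone would be wrong.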
Demand Statistically Sound Data
|
2. Misinterpreted Results
|
8. No Control Groups
|
 |
Distorting the findings for the sake of a headline and a good story misrepresents the results.
This is common when personal anecdotes are presented, or when the underlying data is not visible or is masked.
Demand evidence beyond personal anecdotes
|
 |
Without a control group, observations are just observations, not data.
The observations may not be connected to a cause; all that is seen is the symptom.
Demand the root cause of observed results, either positive or negative
|
3. Conflict of Interests
|
9. No Independent Testing Used
|
 |
When exploring and researching, look for the author's motivation.
If the corrective action is stated first and data is then sought to support that answer, there is a conflict of interest from the start.
Follow the money
|
 |
To prevent bias, the results need independent testing: someone other than the original researcher compares a system known to behave properly with the misbehaving system under study.
Trust But Verify
|
4. Correlation and Causation
|
10. Cherry Picked Results
|
 |
Do not confuse correlation with causation.
Correlation between two variables doesn't mean one causes the other.
Correlation is not causation; causation requires a mechanism
|
 |
Cherry-picking is selecting data that supports the conclusion while ignoring data that does not.
If an argument draws its conclusion from a selection of the results, rather than all of them, that is cherry-picking.
Show me your raw data and its source
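A tiny sketch with invented cycle times shows how reporting only the best results flips the story:

```python
import statistics

# Hypothetical cycle times (days) for ten projects under a new process.
cycle_times = [4, 12, 9, 3, 11, 10, 5, 13, 12, 11]

all_mean = statistics.mean(cycle_times)   # the honest number
cherry_picked = sorted(cycle_times)[:3]   # report only the 3 best
picked_mean = statistics.mean(cherry_picked)

print(f"mean of all results:    {all_mean:.1f} days")   # 9.0 days
print(f"mean of picked results: {picked_mean:.1f} days")  # 4.0 days
```

The full data says 9 days on average; the cherry-picked subset says 4. Only access to the raw data exposes the gap.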
|
5. Speculative Language
|
11. Unreplicated Results
|

|
Speculations from the research are just that: speculations.
Words such as may, could, and might show the work lacks hard evidence for its conclusions.
Data supports the language
|
 |
Results of any investigation need to be repeatable by others and tested over a range of conditions and domains to ensure they are generalizable.
Extraordinary claims require extraordinary evidence – this means more than one independent study.
Personal anecdotes are not case studies, nor are they data
|
6. Sample Size Too Small
|
12. Lack of Citations
|
 |
When samples come from personal anecdotes and contain only 15 to 30 values, the result is low confidence.
Conclusions drawn from them must be weighed with this in mind.
Suspicion is warranted if a larger sample was possible but avoided.
Do the math for sample size - Pearson's Coefficient
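The tagline says to do the math; one simple piece of that math (a stand-in for the Pearson's-coefficient reference above) is the standard error of the mean, which sets the width of a ~95% confidence interval and shrinks only with the square root of the sample size. The standard deviation below is an illustrative assumption.

```python
import math

# Half-width of a ~95% confidence interval for a mean: 1.96 * s / sqrt(n).
# The same sample standard deviation is assumed for every n, for comparison.
s = 4.0  # illustrative standard deviation (days)

for n in (15, 30, 100, 400):
    half_width = 1.96 * s / math.sqrt(n)
    print(f"n = {n:4d}:  mean +/- {half_width:.2f}")
```

Going from 15 samples to 400 cuts the interval from roughly +/-2 days to under +/-0.4: quadrupling precision requires sixteen times the data, which is why anecdote-sized samples carry so little weight.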
|
 |
Research into any topic will have references and citations for similar work.
The conjectured outcome should be evaluated with these references in mind.
Show me the peer-reviewed results and references
|
This poster is derived from A Rough Guide to Spotting Bad Science (www.compoundchem.com), adapted to software development processes.