When we hear *I know a CEO who uses this method and she's happy with the outcomes*, several core fallacies are wrapped into one.

The first is the self-selection problem of statistics. This is the Standish problem: send out a survey and tally the results from the surveys that came back. Don't publish how many surveys went out and how many were returned, or, most critically, what the population of potential respondents was and whether the number of responses represents a proper statistical sample, with high enough confidence to draw any credible conclusions from the returned surveys.
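A quick back-of-the-envelope calculation shows why the response count matters. The sketch below uses the standard margin-of-error formula for an estimated proportion at 95% confidence; the function names and the example numbers are my own, chosen only to illustrate the point.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n responses.

    p=0.5 is the worst case (widest interval); z=1.96 is the 95% z-score.
    """
    return z * math.sqrt(p * (1 - p) / n)

def required_sample(margin, p=0.5, z=1.96):
    """Responses needed to hit a target margin of error (large population)."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# A survey with only 30 returns carries a margin of error of roughly
# +/- 18 percentage points -- far too wide to support a headline claim.
print(f"n=30 -> +/-{margin_of_error(30):.1%}")
print(f"+/-5% target -> need {required_sample(0.05)} responses")
```

And this only covers sampling error; it says nothing about the bias introduced when the respondents select themselves, which no sample size can fix.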

An anecdote is a *sample of one from an unknown population*.

The next is the anecdotal sample: *I know a guy that...*, offered to suggest that because you know someone who supports your conjecture, the conjecture is somehow supported.
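The anecdote is not just a tiny sample; it is a selected one. People repeat the story of the person who is happy, so the cited sample of one is drawn from the successes. The simulation below uses entirely hypothetical numbers (a method that satisfies only 30% of adopters) to show how this selection makes the anecdote uninformative about the population.

```python
import random

random.seed(7)

# Hypothetical population: the method satisfies only 30% of its adopters.
TRUE_SUCCESS_RATE = 0.30
adopters = [random.random() < TRUE_SUCCESS_RATE for _ in range(1_000)]

# Unbiased view: the actual success rate across all adopters.
actual_rate = sum(adopters) / len(adopters)
print(f"Actual success rate: {actual_rate:.0%}")

# Anecdotal view: the storyteller cites someone who is happy, so the
# "sample of one" is drawn only from the successes -- it reports success
# by construction, regardless of the true rate.
happy_adopters = [a for a in adopters if a]
anecdote = random.choice(happy_adopters)
print(f"The anecdote reports success: {anecdote}")
```

The anecdote comes back positive every time, whether the true rate is 30% or 90%; it carries no information about which.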

Both are forms of *cherry picking*. The result is a lot of back-and-forth over the original conjecture with no basis in evidence for that conjecture.

When you encounter such a conjecture, apply Sagan's BS detection kit:

- Seek independent confirmation of alleged facts.
- Encourage an open debate about the issue and the available evidence.
- In our domain and most others, there are no authorities. At most, there are experts.
- Come up with a variety of competing hypotheses explaining a given outcome. Considering many different explanations will lower the risk of confirmation bias.
- Quantify whenever possible, allowing for easier comparisons between hypotheses' relative explanatory power.
- Every step in an argument must be logically sound; a single weak link can doom the entire chain.
- When the evidence is inconclusive, use Occam's Razor to discriminate between hypotheses.
- Pay attention to falsifiability. Science does not concern itself with unfalsifiable propositions.

When hard questions draw pushback instead of answers, you'll know those making the claims have no evidence and are essentially BS'ing their constituents.