I've had a few conversations over the past weeks about "cause and effect" processes: "If you do this (X), then this will result (Y)." "X causes Y." "X and Y are related in some way." "When X appears, we should see Y."
Really, how can we tell? Is there something else that might have caused Y? Maybe Y occurs randomly a number of times, and what you saw was a random occurrence. This situation is common in many areas:
- Process improvement
- Health supplements
- Quality improvement programs
- Consulting services
- Performance tuning of hardware
- Pedaling style changes from our new cycling coach
So how can we tell that the cause, X, was somehow connected with the effect, Y?
Measures of Statistical Significance, that's how
Remember those high school statistics classes? Now you can put that memory to work. OK, maybe not. But try to remember what kind of questions were asked, so we can sort out those trying to convince us that X caused Y, when in fact they don't have a clue what caused Y, other than that they are trying to sell you X.
The notion of inferential statistics tells us how likely it is that a given result occurred by chance alone. This doesn't actually tell us what the cause was - not by itself. But if we can rule out a random result, then at least we can go looking for the real cause.
One way to start is to state the inverse of the cause and effect. This is called the Null Hypothesis: a proposal that there is no association between the cause and the effect. If the Null Hypothesis is TRUE, then there is no connection between cause and effect. Any study of cause and effect should conduct an experiment that gathers data used to test the Null Hypothesis. That is, we want a study that "rejects the Null Hypothesis."
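Testing a Null Hypothesis doesn't require fancy tools. Here is a minimal sketch of one common approach, a permutation test, using made-up cycle-time numbers (the data and the scenario are purely illustrative, not from any real study): if the "before"/"after" labels don't matter (the Null Hypothesis), shuffling them shouldn't change much.

```python
import random
import statistics

# Hypothetical (made-up) data: process cycle times before and after a change.
before = [42, 45, 41, 47, 44, 46, 43, 48, 45, 44]
after = [40, 43, 39, 44, 41, 45, 40, 42, 43, 41]

# Observed effect: how much did the mean cycle time drop?
observed = statistics.mean(before) - statistics.mean(after)

# Null Hypothesis: the change had no effect, so the "before"/"after" labels
# are interchangeable. Shuffle the labels many times and count how often a
# difference at least this large shows up by chance alone.
random.seed(1)
pooled = before + after
n_trials = 10_000
at_least_as_big = 0
for _ in range(n_trials):
    random.shuffle(pooled)
    fake_before = pooled[:len(before)]
    fake_after = pooled[len(before):]
    if statistics.mean(fake_before) - statistics.mean(fake_after) >= observed:
        at_least_as_big += 1

# p-value: the fraction of pure-chance shuffles that beat the observed effect.
p_value = at_least_as_big / n_trials
print(f"observed difference: {observed:.2f}, p = {p_value:.4f}")
```

A small p here means random shuffling almost never reproduces the observed difference, so we reject the Null Hypothesis - which still doesn't prove the change was the cause, only that chance alone is an unlikely explanation.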
The measure of this test is called the Level of Significance. It is the probability of incorrectly rejecting the Null Hypothesis - that is, of saying there is a difference between two groups of things (cause and effect) when in fact there is not. The Level of Significance is represented by the letter p. p is the probability of a Type I error - rejecting the Null Hypothesis when we should not have rejected it.
By convention (the convention of statistical analysis for a general population of random occurrences), p < 0.05 is considered significant. That is, there is a 1 in 20 chance that the finding from our experiment was due to chance alone. The lower the value of p, the lower the chance that the result is a random one.
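That "1 in 20" is worth seeing with your own eyes. The sketch below (all numbers invented for illustration; the test is a plain two-sample z-test using the normal approximation) draws two groups from the very same process over and over - so the Null Hypothesis is TRUE every time - and counts how often a test at the 0.05 level still declares a "significant" difference. Those are Type I errors.

```python
import math
import random

def two_sided_p(group_a, group_b):
    """Two-sample z-test p-value (normal approximation)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

random.seed(2)
trials = 2_000
false_rejections = 0
for _ in range(trials):
    # Both groups come from the same distribution: the null is true by design.
    a = [random.gauss(100, 15) for _ in range(30)]
    b = [random.gauss(100, 15) for _ in range(30)]
    if two_sided_p(a, b) < 0.05:
        false_rejections += 1

false_reject_rate = false_rejections / trials
print(f"false 'significant' findings: {false_reject_rate:.1%}")
```

The rate comes out near 5%, which is exactly what p < 0.05 promises: run enough experiments on pure noise and about 1 in 20 will look like a discovery. That is why a single "significant" result from someone selling you something deserves skepticism.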
Why Do We Care About This in Project Management?
Because there are lots of people out there selling snake oil. Making claims about their "magic beans" and how those beans can improve your life, reduce cost, increase productivity, extend your life and grow hair where there was no hair before.
When you encounter these people it is good to ask them a few questions:
- What is the null hypothesis for the assertion you are making?
- What is the confidence level for this assertion?
In other words:
Can you show me there is a statistical connection between what you want me to do, buy, change, or otherwise do differently, and an actual tangible measurable outcome from this purchase or change?
If not, you may still have something worth looking at, but please stop speaking to me in terms of cause and effect, because there may be no cause and effect - just a random (but desirable) occurrence of your effect.