Dan Ward posted about Gray's Paradox, describing his experience at the Baltimore Aquarium:
> The truth is, the animals have it right and we've got it wrong. When our math indicates one thing and observation reveals something else, um, that means our math is wrong. Rather than saying things like "Bumblebees shouldn't be able to fly" or "dolphins shouldn't be able to swim that fast," we should go with something like "Our understanding of physics is wrong / incomplete / etc." But let's not blame the animals when their behavior doesn't comply with our "laws."
Dan's example demonstrates classic "anchoring and adjustment." What was missing from Gray's assessment of the dolphin's swimming speed was an understanding of how much force the dolphin could actually produce while swimming. Gray "assumed" the dolphin had some special "anti-drag" property, needed to explain how the dolphin could swim at the observed speed using only the hypothesized force.
Of course the hypothesis was wrong, and Gray failed to question it in the presence of the "fact" of the dolphin swimming at the observed speed. Gray was "anchored" in the perception that the dolphin did not have enough muscle force to swim that fast.
This "anchoring and adjustment" process (first developed by Tversky and Kahneman) is persistent in everything we do. It is part of how our brains work. Anyone claiming not to be subject to "anchoring and adjustment" in any process involving making decisions in the presence of uncertainty - project estimating, gas field capacity estimates, or financial performance estimates for example - is fooling themselves and those around them.
This is the reason that 3-point estimates collected from subject matter experts (most likely, optimistic, pessimistic) are seriously flawed, no matter who says they are not. Examples are all around us (a short simulation sketch after the list shows the effect):
- The original estimate for the cost of the Joint Strike Fighter is now known to be off by a factor of two, even though calibrated Rayleigh distribution models showed this would be the case. Anchored by past performance and un-adjusted for the complexities of the software systems.
- The risk associated with drilling in the Gulf was known to be many times higher than the "subject matter experts" stated. Anchored by past near-shore experience and un-adjusted by the numbers from the North Sea.
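To make the flaw concrete, here is a minimal sketch in Python. The P-ML-O numbers are purely hypothetical, not from either program above: turn an expert's three points into the triangular distribution most scheduling tools assume, and look at where the mean and 80th percentile land.

```python
import random

# Hypothetical P-ML-O estimate (days) from a subject matter expert.
# The anchor is the most likely value; the bounds are small adjustments
# away from it - the classic anchoring-and-adjustment signature.
optimistic, most_likely, pessimistic = 20, 25, 35

# Triangular distribution implied by the three points.
samples = sorted(random.triangular(optimistic, pessimistic, most_likely)
                 for _ in range(100_000))

mean = sum(samples) / len(samples)
p80 = samples[int(0.80 * len(samples))]
print(f"mean = {mean:.1f} days, 80th percentile = {p80:.1f} days")
# Both land within a few days of the anchor. Calibrated historical data
# routinely shows tails far beyond the "pessimistic" bound - outcomes
# the triangular model assigns zero probability.
```

Everything interesting happens outside the triangle: the model literally cannot produce an outcome past the anchored pessimistic bound, which is exactly where the JSF and Gulf surprises lived.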
So NO 3-point estimates gathered by asking people for Low, Most Likely, and High. Bounds for the variance can ONLY be gathered using calibrated ordinal values derived from models that connect the non-linear drivers of cost and schedule in a network, assessed through Monte Carlo. Consider your project schedule a Bayesian stochastic network and treat it accordingly. As well, these models can be "tuned" to match the emerging reality of performance. Individual P-ML-O data is a one-shot deal, lost to analysis once gathered. And if that's not enough to discourage you, when you come back to collect the personal opinion data again, the estimator will have moved from the original anchor into a new adjustment phase, with no way to correlate past performance with the future forecast.
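Here is what "a network assessed through Monte Carlo" can look like in miniature. This is a sketch under stated assumptions, not anyone's production tooling: the task names and scale parameters are invented, and the Rayleigh distribution (the family the JSF example cites) stands in for whatever calibrated distribution your past-performance data actually supports.

```python
import random

def rayleigh(scale):
    # A Rayleigh distribution is a Weibull with shape parameter 2.
    # In practice the scale would be calibrated from past performance.
    return random.weibullvariate(scale, 2)

# task: (predecessors, scale parameter in days) - illustrative values,
# listed in dependency order so predecessors are computed first.
network = {
    "design":    ([], 15.0),
    "build_sw":  (["design"], 35.0),
    "build_hw":  (["design"], 22.0),
    "integrate": (["build_sw", "build_hw"], 12.0),
}

def one_run():
    # Propagate sampled durations through the network; a task starts when
    # its last predecessor finishes, so merge points take a max.
    finish = {}
    for task, (preds, scale) in network.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[task] = start + rayleigh(scale)
    return finish["integrate"]

runs = sorted(one_run() for _ in range(50_000))
print(f"P50 = {runs[len(runs) // 2]:.0f} days")
print(f"P80 = {runs[int(0.80 * len(runs))]:.0f} days")
```

The max at each merge point is one of the non-linear drivers: even when every task's own distribution looks tame, parallel paths joining push the completion date to the right. And because the model is parameterized, re-running it with scale values updated from actuals is how it gets "tuned" to emerging performance - something a one-shot pile of P-ML-O opinions can never support.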
Just Say NO to personally gathered 3-point estimates
Then start reading John Goodpasture's series on Thomas Bayes. Then go to Kailash Awati's posts on probability and statistics in project management, and then visit articles on probability theorems. When you get through these sources, read Peter Bernstein's Against the Gods and google the original Tversky and Kahneman papers; you'll have a different view of the 3-point problem and of those suggesting "it's good enough" or "it works for me."