When there is a discussion of PERT, the Central Limit Theorem, estimating durations, effort, and cost, or any other probabilistic or statistical aspect of a project, there is usually a fundamental flaw in the thought process.
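To make that statistical machinery concrete, here is a minimal sketch of the standard PERT three-point calculation and the Central Limit Theorem rollup that sits behind it. The task values are hypothetical, for illustration only.

```python
# Standard Beta-PERT three-point estimate with a CLT rollup across tasks.
# The (optimistic, most_likely, pessimistic) durations below are hypothetical.
from math import sqrt

tasks = [(3, 5, 10), (8, 12, 20), (2, 4, 9)]  # durations in days

def pert_mean(o, m, p):
    """Beta-PERT expected value: weighted toward the most likely estimate."""
    return (o + 4 * m + p) / 6

def pert_sigma(o, p):
    """Beta-PERT standard deviation approximation."""
    return (p - o) / 6

# CLT rollup: means add; variances (not standard deviations) add.
total_mean = sum(pert_mean(o, m, p) for o, m, p in tasks)
total_sigma = sqrt(sum(pert_sigma(o, p) ** 2 for o, _, p in tasks))

print(f"Expected duration: {total_mean:.1f} days, sigma: {total_sigma:.1f} days")
```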
When we estimate, we are ALWAYS subject to anchoring and adjustment. It is built into our nature. No matter how well you rationalize the process for capturing estimates, you cannot avoid anchoring and adjusting. Recognition of and correction for anchoring and adjusting is the only way out. The estimates are still biased, but now they are recognized as biased.
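As a hypothetical illustration of that recognition and correction, the sketch below compares past estimates with actuals and applies the observed bias ratio to a new, still-anchored estimate. The data and the single-ratio correction are assumptions for illustration, not a prescribed method.

```python
# Hypothetical sketch: recognize an anchoring bias from past performance,
# then partially correct a new estimate with the observed bias ratio.
past_estimates = [10.0, 20.0, 15.0, 8.0]   # what we said (days) -- illustrative
past_actuals   = [13.0, 24.0, 19.0, 11.0]  # what it took (days) -- illustrative

# Observed bias: average ratio of actual to anchored estimate.
bias_ratio = sum(a / e for a, e in zip(past_actuals, past_estimates)) / len(past_estimates)

new_estimate = 12.0  # a fresh, still-anchored estimate
corrected = new_estimate * bias_ratio

print(f"Anchored estimate: {new_estimate:.1f} days, "
      f"bias-corrected: {corrected:.1f} days (ratio {bias_ratio:.2f})")
```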
The core of the Kahneman and Tversky thesis is the contention that people frequently form estimates by starting with a given, easily available reference value — which could be arbitrary — and adjusting from that value. An estimate, therefore, would be "anchored" to that value.
Most estimators are in fact "irrational beings" in the sense of risk taking, bounding of outcomes, and their deviations from the "norm," whatever that "norm" is. This is the basis of "models," specifically calibrated models.
This takes us back to the core issue: how do we construct a credible Performance Measurement Baseline? If you're not using estimating models built and validated by subject matter experts and calibrated against past performance, then the credibility of that baseline is in question.
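One way a calibrated estimating model can feed a credible baseline is a Monte Carlo rollup, with the baseline set at a chosen confidence level. The sketch below assumes triangular distributions and an 80% confidence target; the distributions and the work package values are illustrative and would, in practice, be calibrated against past performance.

```python
# Minimal sketch: Monte Carlo rollup of work package costs to set the
# Performance Measurement Baseline at a chosen confidence level.
# (low, most_likely, high) cost per work package, in $K -- hypothetical values.
import random

work_packages = [(40, 55, 90), (120, 150, 240), (60, 75, 130)]

def simulate_totals(trials=10_000):
    """Draw total program cost from triangular distributions, one per package."""
    return sorted(
        sum(random.triangular(lo, hi, ml) for lo, ml, hi in work_packages)
        for _ in range(trials)
    )

totals = simulate_totals()
p80 = totals[int(0.80 * len(totals))]  # baseline set at the 80th percentile
print(f"PMB at 80% confidence: ${p80:,.0f}K")
```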