The Planning Fallacy is well documented in many domains. Bent Flyvbjerg documented the issue in his book Megaprojects and Risk: An Anatomy of Ambition. But the Planning Fallacy is more complex than just the optimism bias: many of the root causes of cost overruns are rooted in the politics of large projects.
The planning fallacy is ...
...a phenomenon in which predictions about how much time will be needed to complete a future task display an optimistic bias (underestimate the time needed). This phenomenon occurs regardless of the individual's knowledge that past tasks of a similar nature have taken longer to complete than generally planned. The bias only affects predictions about one's own tasks; when outside observers predict task completion times, they show a pessimistic bias, overestimating the time needed. The planning fallacy requires that predictions of current tasks' completion times are more optimistic than the beliefs about past completion times for similar projects and that predictions of the current tasks' completion times are more optimistic than the actual time needed to complete the tasks.
The critical notion here is about one's own estimates. This is the critical reason for:
- Reference class forecasting
- Parametric cost modeling
- Monte Carlo Simulation using probability distribution functions from past performance data (a minimal sketch appears after this list)
- Independent Cost Evaluations (ICE)
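Here's a minimal sketch of that Monte Carlo approach, assuming task durations follow triangular distributions calibrated from past performance data. The tasks and (low, most likely, high) parameters are hypothetical, chosen only for illustration.

```python
# Monte Carlo simulation of total duration, drawing each task's duration
# from a triangular distribution calibrated from past performance data.
# Task names and (low, most_likely, high) values are hypothetical.
import random

tasks = {
    "requirements": (10, 15, 30),   # days
    "design":       (15, 25, 50),
    "build":        (30, 45, 90),
    "integrate":    (10, 20, 60),
}

def simulate_once(tasks):
    """One trial: draw every task's duration and sum them."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())

trials = sorted(simulate_once(tasks) for _ in range(10_000))
print(f"P50 = {trials[5_000]:.0f} days, P80 = {trials[8_000]:.0f} days")
```

The sum of the most-likely values here is 105 days; the P80 result lands well above that. The gap is the optimism bias made visible.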
With all that said, there is still a large body of evidence that estimating remains a major problem.†
I have a colleague who is the former Cost Analysis Director of NASA. He has three reasons projects get into cost, schedule, and technical trouble:
- We couldn't know - we're working in a domain where discovery is actually the case. We're inventing new physics, discovering drugs that have never been discovered before. We're doing unprecedented development. Most people using the term "we're exploring" likely don't know what they're doing, and those paying are paying for that exploring. Ask yourself if you're in the education business or actually the research and development business.
- We didn't know - we could have known, but we just didn't. We couldn't afford to know. We didn't have time to know. We were incapable of knowing because we're outside our domain. Would you hire someone who didn't do his homework when it comes to providing the solution you're paying for? Probably not. Then why accept "we didn't know" as an excuse?
- We don't want to know - we could have known, but if we did, that information would cause the project to be canceled.
The Planning Fallacy
Daniel Kahneman (Princeton) and Amos Tversky (Stanford) described it as “the tendency to underestimate the time, costs, and risks of future actions and overestimate the benefit of those actions”. The results are time and cost overruns as well as benefit shortfalls. The concept is not new: they coined the term in the 1970s, and much research has taken place since; see the Resources below.
So the challenge is to not fall victim to this optimism bias and become a statistic in the Planning Fallacy.
How do we do that?
Here's our experience:
- Start with a credible systems architecture showing the topology of the delivered system:
- By credible I mean not a paper drawing on the wall, but a SysML description of the system and its components. SysML tools can be had for free, along with commercial products.
- Defining the interactions between the components is the critical step in discovering where the optimism lives. The Big Visible Chart from SysML needs to hang on the wall for all to see where these connections take place.
- Without this BVC, the optimism is "It's not that complicated; what could possibly be the issue with our estimates?"
- It's the interfaces where the project goes bad. Self-contained components have problems for sure, but when connected to other components this becomes a system of systems, and the result is an N² problem (see the sketch below).
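A quick way to see why the interfaces dominate: components grow linearly, but potential interfaces grow as N². A minimal sketch, with the component counts chosen only for illustration:

```python
# Components grow linearly; potential pairwise interfaces grow as N^2.
def potential_interfaces(n: int) -> int:
    """Upper bound on pairwise interfaces among n components."""
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(f"{n:>3} components -> {potential_interfaces(n):>4} potential interfaces")
```

Twenty components means up to 190 interfaces, and each one is a place for an optimistic assumption to hide.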
- Look for reference classes for the components (a reference class forecasting sketch appears after these questions):
- Has anyone here done this before?
- No? Do we know anyone who knows anyone who's done this before?
- Is there no system like this system in the world?
- If the answer to that is NO, then we need another approach - we're inventing new physics, and this project is actually a research project - act appropriately.
- Do we have any experience doing this work in the past?
- No? Then why would we get hired to work on this project?
- Yes, but we've failed in the past?
- No problem. Did we learn anything?
- Did we find the Root Cause of the past performance problems and take the corrective actions?
- Did we follow a known process (Apollo Root Cause Analysis) for that Root Cause Analysis and the corrective actions?
- No? Then you're being optimistic that the problems won't come back.
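When a reference class does exist, use it to correct the inside view. Here's a minimal sketch, assuming we have actual-versus-estimated cost ratios from similar completed projects; the ratios and the percentile choice are hypothetical.

```python
# Reference class forecasting sketch: correct an inside-view estimate with
# the distribution of actual/estimated ratios from similar past projects.
inside_view_estimate = 1_000_000  # our bottom-up cost estimate, in dollars

# actual cost / estimated cost for completed similar projects (hypothetical)
reference_class_ratios = [0.95, 1.10, 1.25, 1.30, 1.45, 1.60, 1.80, 2.10]

def uplift(ratios, percentile=0.8):
    """Ratio at the given percentile of the reference class."""
    ordered = sorted(ratios)
    return ordered[min(int(len(ordered) * percentile), len(ordered) - 1)]

p80_forecast = inside_view_estimate * uplift(reference_class_ratios)
print(f"P80 outside-view forecast: ${p80_forecast:,.0f}")
```

The outside view doesn't replace the bottom-up estimate; it tells us how much projects in our class have historically underestimated.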
- Do we have any sense of the Measures of the system that will drive cost?
- Effectiveness - the operational measures of success that are closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions.
- Performance - the measures that characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
- Key Performance Parameters - represent the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the program.
- Technical Performance Measures - determine how well a system or system element is satisfying, or is expected to satisfy, a technical requirement or goal.
- All the ...ilities - reliability, maintainability, availability, and the rest.
- Without understanding these we have no real understanding of where the problems are going to be, and the natural optimism comes out (a TPM tracking sketch appears below).
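These measures only control optimism when they're tracked against a plan. Here's a minimal sketch of tracking a Technical Performance Measure; the measure, values, and tolerance band are hypothetical.

```python
# Track a Technical Performance Measure against its planned value.
# The measure, numbers, and tolerance band are hypothetical.
from dataclasses import dataclass

@dataclass
class TPM:
    name: str
    planned: float    # planned value at this point in the schedule
    actual: float     # measured or estimated current value
    tolerance: float  # allowed fractional variance before escalation

    def variance(self) -> float:
        return (self.actual - self.planned) / self.planned

    def needs_action(self) -> bool:
        return abs(self.variance()) > self.tolerance

mass = TPM("vehicle dry mass (kg)", planned=1200.0, actual=1290.0, tolerance=0.05)
print(f"{mass.name}: variance {mass.variance():+.1%}, "
      f"{'corrective action needed' if mass.needs_action() else 'within band'}")
```

When the actual drifts outside the tolerance band, the variance is a fact rather than an opinion, and the estimate gets updated instead of defended.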
- Do we know what technical and programmatic risks are going to be encountered in this project?
- Do we have a risk register?
- Do we know both the reducible and irreducible risks to the success of the project?
- Do we have mitigation plans for the reducible risks?
- For reducible risks without mitigation plans, do we have Management Reserve?
- For irreducible risks, do we have cost and schedule margin? (A sketch of the reserve and margin arithmetic appears below.)
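Here's a minimal sketch of that reserve and margin arithmetic, separating reducible risks (mitigation plans or Management Reserve) from irreducible uncertainty (cost margin). The risk entries and numbers are hypothetical.

```python
# Reducible risks: expected value of the unmitigated ones funds the
# Management Reserve. Irreducible uncertainty is covered by margin.
# All entries and numbers are hypothetical.
reducible_risks = [
    # (name, probability of occurrence, cost impact if it occurs, mitigated?)
    ("vendor part slips",      0.30, 200_000, True),
    ("test facility conflict", 0.20, 150_000, False),
]

management_reserve = sum(p * impact
                         for _, p, impact, mitigated in reducible_risks
                         if not mitigated)

baseline_cost = 2_000_000
margin_fraction = 0.15  # assumed here; ideally derived from past cost variance
irreducible_margin = margin_fraction * baseline_cost

print(f"Management Reserve: ${management_reserve:,.0f}")
print(f"Cost margin for irreducible uncertainty: ${irreducible_margin:,.0f}")
```

The point is the separation: mitigation plans and Management Reserve handle risks we can buy down; margin handles the natural variance we cannot.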
- Do we have a Plan showing the increasing maturity of the deliverables of the project?
- Do we know what Accomplishments must be performed to increase the maturity of the deliverable?
- Do we know the Criteria for each Accomplishment, so we can measure the actual progress to plan? (A sketch appears after these questions.)
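Here's a minimal sketch of measuring progress as criteria met per Accomplishment, rather than effort spent or time passed; the accomplishments and criteria counts are hypothetical.

```python
# Physical percent complete from Accomplishments and their Criteria.
# Accomplishment names and criteria counts are hypothetical.
plan = {
    "preliminary design complete": {"criteria_total": 4, "criteria_met": 4},
    "prototype integrated":        {"criteria_total": 5, "criteria_met": 2},
}

def percent_complete(plan):
    """Progress measured by criteria met, not by effort spent or time passed."""
    total = sum(a["criteria_total"] for a in plan.values())
    met = sum(a["criteria_met"] for a in plan.values())
    return met / total

print(f"Physical percent complete: {percent_complete(plan):.0%}")
```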
- Have we arranged the work to produce the deliverables in a logical network - or some other method like Kanban - that shows the dependencies between the work elements and the deliverables?
- This notion of dependencies is very underrated.
- The Kanban paradigm assumes up front that the work items are independent.
- Verifying there are actually NO dependencies is critical to every process that is based on having NO dependencies (see the dependency-check sketch below).
- It seems rare that those verifications actually take place.
- This is an Optimism Bias in the agile software development world.
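Here's a minimal sketch of that verification, under the assumption that every work item declares what it produces and consumes; the work items and artifacts are hypothetical.

```python
# Verify the "no dependencies" assumption: if one work item consumes an
# artifact another produces, they are NOT independent. Items are hypothetical.
work_items = {
    "api":      {"produces": {"api_spec"}, "consumes": set()},
    "client":   {"produces": {"client"},   "consumes": {"api_spec"}},
    "database": {"produces": {"schema"},   "consumes": set()},
    "reports":  {"produces": {"reports"},  "consumes": {"schema", "api_spec"}},
}

def hidden_dependencies(items):
    """Yield (consumer, producer) pairs that break the independence assumption."""
    producers = {artifact: name
                 for name, io in items.items()
                 for artifact in io["produces"]}
    for name, io in items.items():
        for artifact in io["consumes"]:
            if artifact in producers and producers[artifact] != name:
                yield name, producers[artifact]

for consumer, producer in hidden_dependencies(work_items):
    print(f"'{consumer}' depends on '{producer}' - not independent")
```

If this check prints anything, the independence assumption behind the board is already broken, and the optimism starts there.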
- Do we have a credible, statistically adjusted, cost and schedule model for assessing the impact of any changes?
- "I'm confident our costs will not be larger than our revenue." Sure, right. Show me your probabilistic model.
- No model? Then we're likely being optimistic and don't even know it (a minimal sketch appears after this list).
- Show Me The Numbers.
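Here's a minimal sketch of the probabilistic model being asked for: the probability that cost exceeds revenue, not a comparison of two point estimates. The distribution parameters are hypothetical.

```python
# P(cost > revenue) from Monte Carlo trials over triangular distributions.
# The (low, high, mode) parameters are hypothetical.
import random

def prob_cost_exceeds_revenue(trials=10_000):
    exceed = 0
    for _ in range(trials):
        cost = random.triangular(1.8e6, 3.2e6, 2.2e6)
        revenue = random.triangular(2.0e6, 3.0e6, 2.6e6)
        if cost > revenue:
            exceed += 1
    return exceed / trials

print(f"P(cost > revenue) = {prob_cost_exceeds_revenue():.0%}")
```

The most-likely cost (2.2M) sits below the most-likely revenue (2.6M), so the point comparison says "we're fine." The simulation typically shows a substantial probability of loss anyway. That's the number the decision maker needs.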
So with these and others ... we can remove the fallacy of the Planning Fallacy.
This doesn't mean our project will be successful. Nothing can guarantee that. But the Probability of Success will be increased.
In the end we MUST know the Mission we are trying to accomplish and the units of measure of that Mission in terms meaningful to the decision makers. Without that we can't know what DONE looks like, and only our optimism will carry us along until it is too late to turn back.
Anyone using the Planning Fallacy as the excuse for project failure - not planning, not estimating, not actually doing their job as a project and business manager - will likely succeed in the quest for project failure and get what they deserve: late, over budget, and a gadget that doesn't work as needed.
† Please note that just because estimating is a problem in all domains, that's NO reason to not estimate. Likewise, that planning is a problem is no reason NOT to plan. Any suggestion that estimating or planning is not needed in the presence of an uncertain future - as there is on all projects - willfully ignores the principles of Microeconomics: making choices in the presence of uncertainty based on opportunity cost. To suggest otherwise confirms this ignorance.
Resources
These are some background resources on the Planning Fallacy problem, from the anchoring and adjustment point of view, that I've used over the years to inform our estimating processes for software-intensive systems. After reading through these, I hope you come to a better understanding of the many misconceptions about estimating and the fallacies of how it is done in practice.
Interestingly, there is a poster on Twitter in the #NoEstimates thread who objects when people post links to their own work or the work of others. Please do not fall prey to the notion that everyone has an equally informed opinion, unless you yourself have done all the research needed to cover the foundations of the topics. Outside resources are the very lifeblood of informed experience and the opinions that come from that experience.
- Kahneman, Daniel; Tversky, Amos (1979). "Intuitive prediction: biases and corrective procedures". TIMS Studies in Management Science 12: 313–327.
- "Exploring the Planning Fallacy" (PDF). Journal of Personality and Social Psychology. 1994. Retrieved 7 November 2014.
- Estimating Software Project Effort Using Analogies.
- Cost Estimation of Software Intensive Projects: A Survey of Current Practices.
- "If you don't want to be late, enumerate: Unpacking Reduces the Planning Fallacy". Journal of Experimental Social Psychology. 15 October 2003. Retrieved 7 November 2014.
- A Causal Model for Software Cost Estimating Error, Albert L. Lederer and Jayesh Prasad, IEEE Transactions On Software Engineering, Vol. 24, No. 2, February 1998.
- Assuring Software Cost Estimates? Is It An Oxymoron? 2013 46th Hawaii International Conference on System Sciences.
- A Framework for the Analysis of Software Cost Estimating Accuracy, ISESE'06, September 21–22, 2006, Rio de Janeiro, Brazil.
- "Overcoming the Planning Fallacy Through Willpower". European Journal of Social Psychology. November 2000. Retrieved 22 November 2014.
- Buehler, Roger; Griffin, Dale, & Ross, Michael (2002). "Inside the planning fallacy: The causes and consequences of optimistic time predictions". In Thomas Gilovich, Dale Griffin, & Daniel Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment, pp. 250–270. Cambridge, UK: Cambridge University Press.
- Buehler, Roger; Dale Griffin; Michael Ross (1995). "It's about time: Optimistic predictions in work and love". European Review of Social Psychology (American Psychological Association) 6: 1–32. doi:10.1080/14792779343000112.
- Lovallo, Dan; Daniel Kahneman (July 2003). "Delusions of Success: How Optimism Undermines Executives' Decisions". Harvard Business Review: 56–63.
- Buehler, Roger; Dale Griffin; Michael Ross (1994). "Exploring the "planning fallacy": Why people underestimate their task completion times". Journal of Personality and Social Psychology (American Psychological Association) 67 (3): 366–381. doi:10.1037/0022-3514.67.3.366.
- Buehler, Roger; Dale Griffin; Johanna Peetz (2010). "The Planning Fallacy: Cognitive, Motivational, and Social Origins" (PDF). Advances in Experimental Social Psychology (Academic Press) 43: 9.
- "Hourglass Is Half Full or Half Empty: Temporal Framing and the Group Planning Fallacy". Group Dynamics: Theory, Research, and Practice. September 2005. Retrieved 22 November 2014.
- Pezzo, Stephanie P.; Pezzo, Mark V.; Stone, Eric R. "The social implications of planning: How public predictions bias future plans". Journal of Experimental Social Psychology, 2006, 221–227.
- "Underestimating the Duration of Future Events: Memory Incorrectly Used or Memory Bias?". American Psychological Association. September 2005. Retrieved 21 November 2014.
- "Focalism: A source of durability bias in affective forecasting.". American Psychological Association. May 2000. Retrieved 21 November 2014.
- Jones, Larry R.; Euske, Kenneth J. (October 1991). "Strategic misrepresentation in budgeting". Journal of Public Administration Research and Theory (Oxford University Press) 1 (4): 437–460. Retrieved 11 March 2013.
- Taleb, Nassim Nicholas (2012-11-27). Antifragile: Things That Gain from Disorder. ISBN 9781400067824.
- "Allocating time to future tasks: The effect of task segmentation on planning fallacy bias". Memory & Cognition. June 2008. Retrieved 7 November 2014.
- "No Light at the End of his Tunnel: Boston's Central Artery/Third Harbor Tunnel Project". Project on Government Oversight. 1 February 1995. Retrieved 7 November 2014.
- "Denver International Airport" (PDF). United States General Accounting Office. September 1995. Retrieved 7 November 2014.
- Lev Virine and Michael Trumper. Project Decisions: The Art and Science, Vienna, VA: Management Concepts, 2008. ISBN 978-1-56726-217-9
- Michael and Lev provide the Risk Management tool we use - Risky Project.
- Risky Project is a Monte Carlo Simulation tool for reducible and irreducible risk, driven by probability distribution functions of the uncertainties in the project.
- It is, by the way, an actual MCS tool, not one based on bootstrapping a small number of past samples many times over.
- Anchoring and Adjustment in Software Estimation, Jorge Aranda and Steve Easterbrook, ESEC-FSE’05, September 5–9, 2005, Lisbon, Portugal.
- Anchoring and Adjustment in Software Estimation, Jorge Aranda, PhD Thesis, University of Toronto, 2005.
- Anchoring and Adjustment in Software Project Management: An Experimental Investigation, Timothy P. Costello, Naval Postgraduate School, September 1992.
- Anchoring Effect, Thomas Mussweiler, Birte Englich, and Fritz Strack
- Anchoring, Non-Standard Preferences: How We Choose by Comparing with a Nearby Reference Point.
- Reference points and redistributive preferences: Experimental evidence, Jimmy Charité, Raymond Fisman, and Ilyana Kuziemko
- Anchoring and Adjustment (YouTube), Daniel Kahneman. This anchoring and adjustment discussion is critical to how we ask the questions how much, how big, and when.
- Anchoring unbound, Nicholas Epley and Thomas Gilovich.
- Assessing Ranges and Possibilities, Decision Analysis for the Professional, Chapter 12, Strategic Decision and Risk Management, Stanford Certificate Program.
- This book, by the way, should be mandatory reading for anyone suggesting that decisions can be made in the absence of estimates.
- They can't, and don't accept claims that they can, because they can't.
- Attention and Effort, Daniel Kahneman, Prentice-Hall, The Hebrew University of Jerusalem, 1973.
- Availability: A Heuristic for Judging Frequency and Probability, Amos Tversky and Daniel Kahneman.
- On the Reality of Cognitive Illusions, Daniel Kahneman, Princeton University, and Amos Tversky, Stanford University.
- Efficacy of Bias Awareness in Debiasing Oil and Gas Judgments, Matthew B. Welsh, Steve H. Begg, and Reidar B. Bratvold.
- The framing effect and risky decisions: Examining cognitive functions with fMRI, Cleotilde Gonzalez, Jason Dana, Hideya Koshino, and Marcel Just, Journal of Economic Psychology, 26 (2005), 1–20.
- The Anchoring-and-Adjustment Heuristic: Why the Adjustments Are Insufficient, Nicholas Epley and Thomas Gilovich.
This should be enough to get you started and set the stage for rejecting any half-baked ideas about anchoring and adjustment, planning fallacies, no need to estimate, and the collection of other cockamamie ideas floating around the web on how to make credible decisions with other people's money.