There is a popular notion that project and program overruns come from some core unprofessional behaviour of those constructing the estimate - that there is some basic malfeasance afoot in the cost and schedule business. This conjecture usually comes from those on the outside (far outside) looking in.
An April 18, 2011 report presented some interesting facts from the US DOD domain...
The source of these problems is not unprofessional or incomplete cost estimates. It is bad programmatic assumptions.
One example is the current poster child for cost and schedule overrun - the F-35 Joint Strike Fighter.
The bad assumptions (postulates) include:
- The state of the art was assumed to be much more mature than it actually was. The Technology Readiness Level (TRL) of the critical components of the aircraft was not well understood, so cost and schedule estimates based on TRL models were wrong. A notional sketch of this effect follows this list.
- The vertical lift component of the aircraft was assumed to be modelable in CATIA with simulation tools. It turned out that was not the case.
- The multi-service requirements - the Joint part of Joint Strike Fighter - would be straightforward. The book Illusions of Choice: The F-111 and the Problem of Weapons Acquisition Reform was not on the reading list. It should be mandatory reading for anyone working in new and innovative domains, government or private sector.
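Here is a minimal sketch of why a misjudged TRL wrecks an estimate. The growth multipliers below are invented placeholders, not values from any published TRL cost model; the point is only that budgeting against a claimed TRL 7 when the hardware is really at TRL 4 badly understates the risk-adjusted cost.

```python
# Minimal sketch of how an overstated TRL skews a cost estimate.
# The growth factors are notional placeholders, not values from
# any published TRL-based cost model.
TRL_COST_GROWTH = {
    3: 2.5,   # basic proof of concept: large expected cost growth
    4: 2.0,
    5: 1.6,
    6: 1.3,
    7: 1.15,
    8: 1.05,
    9: 1.0,   # flight proven: little growth expected
}

def risk_adjusted_cost(baseline_estimate: float, trl: int) -> float:
    """Scale a baseline estimate by the expected growth for its TRL."""
    return baseline_estimate * TRL_COST_GROWTH[trl]

baseline = 100.0  # $M, illustrative
claimed_trl, actual_trl = 7, 4
print(f"budgeted at claimed TRL {claimed_trl}: ${risk_adjusted_cost(baseline, claimed_trl):.0f}M")  # $115M
print(f"implied by actual TRL {actual_trl}:   ${risk_adjusted_cost(baseline, actual_trl):.0f}M")    # $200M
```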
The core issue is that there are deeply held programmatic assumptions about getting things done by specific Milestones. These Milestones are baked into the process - Government or Private.
The source of these problems starts with the missing basics.
- Missing or poorly developed requirements - this includes immature requirements and requirements committed to too early in the life cycle of the program. We'll discover the requirements as we go; it'll be OK because we have an open architecture and can adapt to emergent needs.
- Undefined and possibly undefinable dependencies - between the programmatic elements and the increasing technical maturity of the deliverables. We didn't know there was a deep connection between the behaviours of two critical components. That behaviour is not favorable, so neither component now works properly.
- Missing or underanalyzed interoperability assessments - you forgot to tell me we had to work with that other part of the system.
- Requirements instability - we keep discovering newly needed capabilities and the requirements that result from them.
- Lack of stakeholder involvement - if we can't talk to the "real" stakeholders, let's just keep going and they'll catch up.
- Poorly developed Significant Accomplishments and Accomplishment Criteria - we wouldn't know what done looks like even if it walked through the door.
- Lack of a credible Integrated Master Schedule - schedules, we don't need no stinkin' schedules. Without a credible, risk-adjusted, resource-loaded, outcome-measurable schedule, the project is just level of effort, executing until you run out of time, money, or both. The first sketch after this list shows what risk-adjusted means in practice.
- Looking at program performance at too high a level - the overused, silly, and unactionable statement that "the devil is in the details" doesn't really help. But Darrell Huff's How To Lie with Statistics shows how to hide the variance by aggregating it to the top; the second sketch after this list makes that concrete.
- Poor integration testing - unit testing, subsystem testing, and system testing are all necessary but not sufficient. Full-up integration testing - "test as you fly" - is the final proof that the result actually has a chance of working.
- Poor or missing integrated planning - a plan is a strategy for success. The strategy is a hypothesis, and the hypothesis needs constant testing - experiments - to confirm we are not fooling ourselves. This is basic high school chemistry, but it seems very hard in program management.
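First, a minimal sketch of what "risk adjusted" means for a schedule, using a Monte Carlo simulation over uncertain task durations. The task names and three-point durations are hypothetical; the takeaway is that summing the most-likely durations, as a deterministic schedule does, routinely understates the finish date a real confidence level would give you.

```python
# Minimal sketch of a risk-adjusted schedule: Monte Carlo over
# uncertain task durations in a serial chain. Tasks and durations
# are hypothetical examples.
import random

# (min, most_likely, max) durations in working days - illustrative only
tasks = {
    "design":    (20, 30, 55),
    "build":     (40, 60, 110),
    "integrate": (15, 25, 60),
    "test":      (10, 20, 50),
}

def simulate_finish(trials: int = 10_000) -> list[float]:
    """Sample the total duration of the serial chain once per trial."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks.values()))
    return sorted(totals)

totals = simulate_finish()
deterministic = sum(mode for _, mode, _ in tasks.values())
p80 = totals[int(0.80 * len(totals))]
print(f"most-likely sum: {deterministic} days")  # 135 days
print(f"80% confidence:  {p80:.0f} days")        # noticeably longer
```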
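Second, a minimal sketch of Huff's point about aggregation. The WBS element names and numbers are invented: one element is badly over plan, two are under, and the roll-up at the top nets to zero, so the program looks on plan until you look one level down.

```python
# Minimal sketch of hiding variance by aggregating it to the top.
# WBS labels and values are invented for illustration.
planned = {"wbs 1.1": 100, "wbs 1.2": 100, "wbs 1.3": 100}
actual  = {"wbs 1.1": 160, "wbs 1.2":  55, "wbs 1.3":  85}

top_level_variance = sum(actual.values()) - sum(planned.values())
print(f"top-level variance: {top_level_variance}")  # 0 - the roll-up says "on plan"

for wbs in planned:
    print(wbs, actual[wbs] - planned[wbs])  # +60, -45, -15 - trouble below
```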
These are inspired by Mr. Bliss's "Summary of PARCA RCA Findings after One Year," Acquisition Insight Days, 2011.