Glass's book is important in many ways, not the least of which is its treatment of the fallacies of software development management that have crept into the recent discussion around estimating. The discussions in this book are general in nature and apply to all forms of estimating for all forms of projects.
One engagement I work on deals with the essential views of project performance and how to construct those views from data generated by the project's Performance Measurement Baseline (PMB). To produce credible measures of performance, we must start with a credible estimate of future performance against which to compare past performance. If we don't know where we are headed, we can't tell whether we are actually making progress. The notion of not knowing and letting it emerge is very sporty when you are spending other people's money. Doing research, spending your own money, no one cares how you do it. Working in a governance-based domain, they do.
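The performance views built from a PMB rest on a few standard earned value quantities - planned value, earned value, and actual cost. A minimal sketch, with invented numbers, of how past performance is compared to the baseline:

```python
# Earned value basics: compare what was planned (PV), what was earned
# (EV), and what was spent (AC) at a status date. All dollar figures
# below are invented for illustration.

def variances(pv, ev, ac):
    """Return (schedule variance, cost variance, SPI, CPI)."""
    return ev - pv, ev - ac, ev / pv, ev / ac

# Planned $500K of work, earned $400K of it, at an actual cost of $450K.
sv, cv, spi, cpi = variances(500_000, 400_000, 450_000)
print(f"SV={sv:+,} CV={cv:+,} SPI={spi:.2f} CPI={cpi:.2f}")
# SPI < 1: behind schedule; CPI < 1: over cost.
```

Without the baseline (the PV column), neither variance can be computed - which is the point about needing a credible estimate before progress can be measured.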
There are many opinions about the use of estimating. Many of these opinions fail to establish a domain and context for their conjectures. So let's do that now. Software Runaways, one of Glass's other books, deals with many projects in domains similar to the ones we work in: large, software-intensive, complex programs. Programs budgeted in the millions of dollars. Mission-critical, business-critical, safety-critical programs. ERP, CRM, weapons, infrastructure, mega-science, mega-development. In firms and agencies where the IT budget runs from billions to tens of billions. These projects are "manifestly important and nearly impossible," as Edwin Land was fond of saying.
Let's look at what Glass says are some root causes of these runaways.
- Lack of a proper methodology. This claim is common when that methodology is being sold. The methodology may be an antithesis methodology: not doing something, for example. No requirements, no estimates, no management, on purpose.
- Lack of discipline and rigor on the part of the developers. Same as above: without a formal structure for development, we are free to explore alternatives and let the solution emerge.
These are easily dismissed, because research has shown, and continues to show, that they are secondary at best to the core root causes:
- Poor estimates of the cost and schedule
- Unstable requirements
Let's start with Fact 8 of the book: one of the two most common causes of runaway projects is poor estimation. The other, Fact 23, states that one of the two most common causes of runaway projects is unstable requirements.
There is little doubt that project estimates are poor, and not just for software projects. The James Webb Space Telescope (JWST) was estimated to cost $4B. After passing $6B, the current estimate is near $8B. Another program we work on is a life extension of a weapon, first estimated at $3B; the current 2012 estimate presented to Congress is $9B.
The literature is full of criticisms of, and suggested corrective actions for, the Estimating Problem. Early on, Function Points and Source Lines of Code were used to no real benefit. Glass ends (in 2002) with a process he calls human-mediated estimation. This approach is referenced in "An Empirical Study of Maintenance and Development Estimation Accuracy," Kitchenham, Pfleeger, McCall, and Eagan, Journal of Systems and Software, September 2002. As well, "Analysis of Empirical Software Effort Estimation Models" provides background and further references on the estimating issue.
Another critical element of credible estimating is the set of managerial and cultural factors described in "A Bayesian Belief Network Cost Estimation Model That Incorporates Cultural and Project Leadership Factors."
But before moving on, it is important to establish the domain and context of estimating. If the project is 6 weeks long with a $10K budget and a customer that is exploring or discovering requirements as the project unfolds, estimating is of no value. This appears to be the domain of the #NoEstimates movement. But since those proffering #NE give no clear and concise statement of the domain they work in, it's hard to tell.
For software-intensive projects in the ERP, CRM, embedded systems, and product development domains, making credible estimates of cost, schedule, and technical performance is part of business success. A health insurance firm installing $150MM of ERP to integrate dozens of legacy systems needs to know the all-in commitment for cost and schedule in order to run the business. A large construction firm converting dozens of legacy systems to a centralized program management system for $5MM needs to know the total cost and schedule in order to maintain its seamless operations around the world for the delivery of construction management services. A flight avionics firm building the next generation of navigation and guidance software for its airframe clients needs to know cost and schedule in order to hold to its commitment for certification of the commercial aircraft.
In these domains, it is well known that uncertainty assessments of software development costs are strongly biased toward overconfidence. The result is that estimates are typically believed to be more accurate than they really are. This overconfidence leads to poor project planning. To improve cost uncertainty assessments, evidence-based guidelines are needed, based on results from relevant empirical studies.
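One remedy for this overconfidence is to carry the estimate as a distribution rather than a single number. A minimal Monte Carlo sketch - the task cost ranges and percentile targets below are invented for illustration:

```python
import random

random.seed(42)  # reproducible illustration

# Three invented work packages, each with (low, most likely, high) cost
# in $K. The right-skewed ranges reflect that overruns usually exceed
# underruns.
tasks = [(100, 140, 220), (60, 80, 150), (200, 260, 400)]

def one_trial():
    # Draw each task cost from a triangular distribution and sum.
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

trials = sorted(one_trial() for _ in range(10_000))
p50 = trials[len(trials) // 2]
p80 = trials[int(len(trials) * 0.8)]
print(f"P50 ~ {p50:.0f}K, P80 ~ {p80:.0f}K")
# The sum of the "most likely" values (480K) sits below the P80, which
# is why a single point estimate reads as overconfident.
```

An estimate quoted as "P80 cost" makes the residual overrun risk explicit, rather than hiding it behind a point number.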
The software estimating problem was addressed long ago in "Software Development Cost Estimating Approaches," Barry Boehm and Tshilidzi Marwala. Boehm was at TRW when I was there. He is the father of COCOMO, which we used to estimate time and cost for on-orbit software systems with emergent requirements. From another study - "Evidence-Based Guidelines for Assessment of Software Development Cost Uncertainty," IEEE Transactions on Software Engineering, November 2005, vol. 31, no. 11 - come some general guidelines for producing good estimates:
- Do not rely solely on unaided, intuition-based uncertainty assessment processes.
- Do not replace expert judgment with formal uncertainty assessment models.
- Apply structured and explicit judgment-based processes.
- Apply strategies based on an outside view of the project.
- Combine uncertainty assessments from different sources through group work, not through mechanical combination.
- Use motivational mechanisms with care and only if greater effort is likely to lead to improved assessments.
- Frame the assessment problem to fit the structure of the relevant uncertainty information and the assessment process.
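As background on the model-based end of the spectrum, Boehm's Basic COCOMO (mentioned above) relates effort and schedule to size through simple power laws. A sketch using the published Basic-model coefficients - a real application would calibrate them against local historical data:

```python
# Basic COCOMO (Boehm, 1981): effort and schedule as power laws of
# size in KLOC. Coefficients are the published Basic-model values for
# the three original project classes; local calibration is required
# before relying on the numbers.
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def cocomo(kloc, mode="embedded"):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b       # person-months
    schedule = c * effort ** d   # calendar months
    return effort, schedule

effort, months = cocomo(100, "embedded")
print(f"100 KLOC embedded: {effort:.0f} person-months over {months:.1f} months")
```

The point of the model is not the specific numbers but the shape: effort grows faster than linearly with size, which is one reason large programs run away when sized by intuition alone.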
Another useful set of root causes is found in "Regression Models of Software Development Estimation Accuracy and Bias":
- Estimates were provided by a person in the role of software developer instead of project leader.
- The project had as its highest priority time-to-delivery instead of quality or cost.
- The estimator did not participate in the completion of the task.
The same study identified sources of estimating bias:
- Estimates were provided by a person with the role of project leader instead of software developer.
- The estimator assessed the accuracy of own estimates of similar, previously completed tasks to be low (more than 20% error).
In software systems where estimating is a critical success factor, Reference Class Forecasting processes have come into play to address these issues. In "Expert Estimation of Software Development Work: Learning through Feedback," Magne Jørgensen and Dag Sjøberg, Simula Research Laboratory, Norway, four additional guidelines are given:
- Increase the motivation for learning estimation skills.
- Reduce the impact from estimation learning biases.
- Ensure a fit between estimation process and type of feedback.
- Provide learning situations.
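Reference class forecasting, mentioned above, replaces the inside view with the distribution of outcomes from similar completed projects. A minimal sketch, with an invented reference class of actual-to-estimated cost ratios:

```python
# Reference class forecasting sketch: instead of trusting the project's
# own (inside-view) estimate, uplift it by the historical distribution
# of actual/estimated cost ratios from similar completed projects.
# The ratios below are invented for illustration.
past_ratios = sorted([0.9, 1.0, 1.1, 1.15, 1.3, 1.4, 1.6, 1.8, 2.1, 2.5])

def uplift(base_estimate, acceptable_overrun_risk=0.2):
    """Scale the base estimate so that only `acceptable_overrun_risk`
    of reference-class outcomes would still exceed it."""
    idx = int(len(past_ratios) * (1 - acceptable_overrun_risk)) - 1
    return base_estimate * past_ratios[idx]

# A $10M inside-view estimate, accepting a 20% chance of overrun:
print(f"${uplift(10_000_000):,.0f}")
```

This is the "outside view" strategy from the guidelines above: the uplift comes from what comparable projects actually did, not from the team's own optimism.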
Glass goes on to speak about when to perform the estimate in his Fact 9: estimate only after the requirements are well understood. If you're on a project where the requirements are not well understood, or are intentionally left unexplored for emergent purposes, then estimating is of little value.
So Now What
First, when we hear about not estimating as the next big thing in software development, take out your copy of Beyond the Hype and read it cover to cover to see if those speaking actually have a basis for their message. What is the domain, the project size and duration, the criticality of budget and schedule to project success? If there are no answers to those questions, move on.
But if you work on projects where the outcome is manifestly important and nearly impossible - as Land suggests - continuous estimating is likely needed. Learn to do it from others: the literature, books, and modern tools that deal with the objections Glass mentioned in his book 12 years ago.
Glass's Facts for estimating are:
- Fact 8 - poor estimates - learning to estimate better is the obvious response. Start with reference class forecasting and look to the literature. This really means doing your homework before starting.
- Fact 9 - estimates performed too early - baselined capabilities are the source of stable requirements. Requirements change, but changing capabilities means the project is unstable. Estimating is a waste in this situation.
- Fact 10 - estimates made by upper management - those doing the work need to be those doing the estimating. If management can't handle the truth then the project is doomed before it starts.
- Fact 11 - estimates rarely adjusted - estimating is a continuous process. The Estimate At Completion and Estimate To Complete need to be updated on a regular basis. One approach is very agile-like: make micro-adjustments to the estimate from the current productivity numbers. Several #NoEstimates advocates have suggested this, probably without knowing the background, since it is intuitive.
- Fact 12 - since estimates are faulty, there is no reason to be concerned when they are not met - you can't have a performance management system for spending other people's money without a credible goal.
- Fact 13 - there is a disconnect between management and developers - when this is the case, the project is setting itself up for failure.
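The continuous adjustment described under Fact 11 has a standard earned-value form: recompute the Estimate At Completion each period from the cost efficiency observed so far. A minimal sketch with illustrative numbers:

```python
# Fact 11 in earned-value terms: recompute the Estimate At Completion
# (EAC) each period from observed cost efficiency (CPI), rather than
# leaving the original estimate untouched. All figures are invented.

def eac(bac, ev, ac):
    """EAC = actual cost so far + remaining work / observed efficiency."""
    cpi = ev / ac               # cost performance index
    etc = (bac - ev) / cpi      # Estimate To Complete
    return ac + etc

# Budget At Completion $2.0M; $800K of work earned for $1.0M spent.
print(f"EAC = ${eac(2_000_000, 800_000, 1_000_000):,.0f}")
```

Run every status period, this is exactly the "micro-adjustment from current productivity numbers" idea: as CPI moves, the forecast completion cost moves with it.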
And always remember, when engaging in any discussion of a controversial topic - especially when you hear a statement like "you can't forecast the future" or "you can't estimate software development" - to first ask for the domain and context in which that statement is supposed to hold.