For some reason I've been hooked by David Anderson's conjectures about estimating. This has developed over time, mainly because my role on the current project is to aid in the production of a "credible" Integrated Master Schedule (IMS), where estimates of schedule and cost over the life of the program must be made during the proposal. This includes software, hardware, integration, and test.
I'm really cranky about this type of information, and I know why - but I'll tell you why later. First, let's dissect the following paragraph a sentence at a time.
- "Let's assume that software development is the capacity constrained resource in our organization."
  - This is probably a good assumption in almost every environment, since hiring or moving staff on short notice is very difficult.
  - Add to that the required experience and knowledge base, and fixed staff means fixed capacity.
- "It often is!"
  - Yep it is...
- "Even if it isn't we wouldn't want to waste slack capacity which might otherwise be used to absorb variation elsewhere in our system."
  - As Jack says... "The aphorist does not argue or explain, he asserts; and implicit in his assertion is a conviction that he is wiser or more intelligent than his readers."
  - The system is not described in any specific manner, so there's no way to tell if the capacity could be, should be, or would be absorbed.
- "When we spend time estimating something, and we use the capacity constrained resources to do the estimating, we effectively lower our capacity."
  - This of course is only true if the estimating has no value. Since the hypothesis is stated up front - estimating has no value - this is a self-referencing statement, a tautology.
- "We lower the capacity on an activity which isn't client valued."
  - This is true of course IF you see no value in making estimates.
- "The customer doesn't care how long your estimate for a task was."
  - Have you asked the customer if they would like to know when a feature will emerge from the development team?
  - Would this information be useful for their business planning?
  - If they don't care when the feature arrives, then true, the customer doesn't care.
  - I haven't come across such a customer yet in my ERP or embedded software world.
- "Doing the estimate often takes a significant chunk of the time compared to the time it would take to do the work."
  - This is speculation again. For a task that takes say 4 weeks (2 iterations in XP), the estimate should take a few hours at the outset. This is the role of the "planning game" in XP.
  - A few hours for a classification of effort (a 1, 2, 5 ranking is a nice biblical approach) is probably worth it for something 4 weeks long.
  - For a 6 year project, like flying to the moon, estimates may take a bit longer. But it is surely useful for funding purposes to have some estimate of how long it will take and how much it will cost - in say +/- 80% - since we'll have to ask for the money to be committed at some point.
- "Doing the estimate even a few days but more likely a few weeks or months and in some cases years before the work is done means anything learned from the estimating process is lost."
  - Lost in what way? Why would it be lost? The baseline estimate is used to validate later estimates, and actuals are used to validate the estimates. My gut sense here is "doing estimates for money" is not in the experience base, so these conjectures come easy.
  - There are times when estimates a few months ahead are needed. Even times when estimates a few years out (say 2) are needed. For example, how many Handel-C programmer hours will be needed to build the test fixture for the manned spaceflight simulator? Gee, that information might be useful for planning and cost purposes for the Control Account Manager assigned to bring that piece in on time / on budget.
  - Sure there will be changes, but getting the estimate within +/- 20% is a good start to staying on schedule.
- "It gets worse."
  - It always does. The estimate is just that, an estimate. Lots of field data (several thousand post-program assessments) shows that after 15% of the program has passed (time and budget), the Estimate at Completion (EAC) never gets any better (less money and faster completion). A sketch of the EAC arithmetic follows this list.
- "Often we estimate work which gets cut or doesn't get done at all because the estimate is too large."
  - Any field evidence for this?
  - Any correlations between potential cuts and the estimating process?
  - "Doing science" is mentioned in a later post - how about a hypothesis and a test of that hypothesis?
- "Calculating a time on task estimate doesn't create customer valued knowledge"
  - This of course depends. If this labor cost (time on task) is connected to cost in any way, then "time on task" = BCWS. If it is, then the "cost" of the feature is part of the feature selection process. If Feature_A costs 5 times Feature_B, but both deliver close to the same value to the customer, then there needs to be some way to select which one to do first (a toy selection sketch also follows this list).
  - There is always a discussion around "customer value," but David has removed one of the measurements of value - what it costs to deliver that value.
- "Estimating is non-value added"
  - Not in all cases, in fact not in many cases.
  - It does not add value IFF (if and only if):
    - All features are worth nearly the same
    - All features cost nearly the same
    - Capacity is fixed over the period of interest
  - Even in the maintenance world this is rare. It certainly is rare in the development world.
- "Estimating is muda!"
  - Estimates are not muda - stupid estimates may be muda. It's worth stating a core concept again...
  - Don't do stupid things on purpose - like making bad estimates, or wasting 40% of your constrained resource estimating future effort.
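Since EAC and BCWS figure in the responses above, here is a minimal sketch of the standard earned value arithmetic behind an EAC. The numbers are invented for illustration; only the formulas (CPI = BCWP/ACWP, EAC = ACWP + (BAC - BCWP)/CPI) are the common textbook ones.

```python
# Minimal earned value sketch. Numbers are made up for illustration.
#   BCWS - Budgeted Cost of Work Scheduled (the plan to date)
#   BCWP - Budgeted Cost of Work Performed (earned value)
#   ACWP - Actual Cost of Work Performed
#   BAC  - Budget At Completion

bac  = 1_000_000.0   # total budget
bcws =   150_000.0   # planned to date - the 15% point mentioned above
bcwp =   120_000.0   # earned to date
acwp =   140_000.0   # spent to date

cpi = bcwp / acwp    # cost performance index (< 1.0 = over cost)
spi = bcwp / bcws    # schedule performance index (< 1.0 = behind schedule)

# Common EAC formula: assume the remaining work is performed at the
# cost efficiency demonstrated so far.
eac = acwp + (bac - bcwp) / cpi

print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}, EAC = ${eac:,.0f}")
```

With CPI below 1.0 at the 15% point, the EAC is already well above the BAC - and the field data above says it rarely recovers.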
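And the Feature_A / Feature_B point above as a toy sketch - the names and numbers are hypothetical; the point is that without a cost estimate there is no denominator to rank with.

```python
# Toy selection: near-equal customer value, 5x difference in cost.
# Feature names, values, and costs are hypothetical.
features = {
    "Feature_A": {"value": 100, "cost": 50},
    "Feature_B": {"value":  95, "cost": 10},
}

# Rank by value delivered per unit of cost - impossible without a cost estimate.
ranked = sorted(features, key=lambda f: features[f]["value"] / features[f]["cost"],
                reverse=True)
print(ranked)   # ['Feature_B', 'Feature_A']
```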
One of the unstated facts of XP is that the iteration planning process works because the work is diced into chunks that fit inside the iteration boundaries. This is called time box scheduling. If a Feature Story is too big, it is broken down into smaller chunks so the planners (remember the planning game in XPE?) can assign the chunks to an iteration. As well, the customer gets to say which chunk has what priority, so all the portfolio management issues of a normal project management process are pushed onto the customer.
In the end the customer defines a queue of work that has been prioritized and "chunked" to fit the capacity of the development team. But estimating, prioritizing, and planning were done - just not by the developers.
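A minimal sketch of that mechanic, with hypothetical stories, sizes, and capacity - not XP's official algorithm, just the time-box arithmetic the planning game performs:

```python
# Toy time-box scheduling: split oversized stories, then fill fixed-capacity
# iterations from a customer-prioritized queue. All names and numbers are
# hypothetical.
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    points: int     # relative size from the planning game
    priority: int   # customer-assigned; lower = sooner

CAPACITY = 10  # points per iteration (the time box)

def split(story: Story) -> list[Story]:
    """Break a story that won't fit the time box into smaller chunks."""
    chunks, n, remaining = [], 1, story.points
    while remaining > 0:
        size = min(remaining, CAPACITY)
        chunks.append(Story(f"{story.name}.{n}", size, story.priority))
        remaining -= size
        n += 1
    return chunks

def plan(stories: list[Story]) -> list[list[Story]]:
    """Fill iterations in priority order, up to capacity."""
    queue = [c for s in sorted(stories, key=lambda s: s.priority) for c in split(s)]
    iterations, current, used = [], [], 0
    for chunk in queue:
        if used + chunk.points > CAPACITY:
            iterations.append(current)
            current, used = [], 0
        current.append(chunk)
        used += chunk.points
    if current:
        iterations.append(current)
    return iterations

for i, it in enumerate(plan([Story("Invoicing", 23, 1),
                             Story("Search", 5, 2),
                             Story("Reports", 8, 3)]), 1):
    print(f"Iteration {i}: {[s.name for s in it]}")
```

Notice what had to exist for this to work: sizes (estimates), priorities, and a capacity number. The "no estimating" process is sitting on top of all three.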
Here's my Beef
The "estimating is Muda" position does not recognize that estimating is part of decision making. Without estimates of the effort, the priority of this work becomes difficult. All work is then equal. If this IS the case, then estimating would add no value. But David doesn't lay out that background. From the end of the "estimating is Muda post"...
On the other hand, I thoroughly embrace the idea that we analyze and partition our problem space. In FDD, we analyze the work into a domain model and later partition it into components. We further analyze the work into Features and group them into Feature Sets and Subject Areas. All of this analysis work gives us a work breakdown structure which is entirely value added. All the work done analyzing the Feature Plan is value added. It creates knowledge which is used to deliver the customer valued functionality. We then estimate based on feature velocity. An agile estimating technique that takes almost no effort to calculate. Agile estimating based on analysis minimizes waste of capacity in the capacity constrained knowledge worker resources.
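For concreteness, here is roughly what "estimate based on feature velocity" amounts to - a sketch with assumed counts, not FDD's prescribed procedure:

```python
# Velocity-based forecast - "almost no effort to calculate," as the quote says.
# The counts below are assumed for illustration.
features_remaining = 120   # features left across the Feature Sets
velocity = 8               # features completed per 2-week iteration (historical average)

iterations_left = -(-features_remaining // velocity)   # ceiling division
weeks_left = iterations_left * 2
print(f"About {iterations_left} iterations (~{weeks_left} weeks) remaining")
```

Which is still an estimate - the effort has just been amortized into tracking actual throughput.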
So now we're back to estimating, but only using FDD as the process.
An aphorism
I apologize for the negative tone, but the hook I have here is the "set up," the suggestion that being the "bad boy of project management" is somehow a position that conveys better understanding of the problem and the potential solutions. There are more problems than solutions in the business of managing software projects. Adding confusion to the search for solutions seems to remove value from the process.
For a place to look for broader approaches to estimating see... Estimation as Hypothesis
Late Update...
Since timing is everything, the December edition of Crosstalk has an article titled "Agile Software Development for the Entire Project," Granville Miller, pp. 9-12. This article describes how estimating is done in the Microsoft Solutions Framework (MSF)...
Scenarios and quality-of-service requirements in these lists (lists of desired functionality) are prioritized and rough order-of-magnitude estimates are initially provided by the developers.
The electronic version of this issue is not out yet, but when it is, it will describe the mechanisms provided by MSF. Since Crosstalk is the journal of Defense Software Engineering, there is an implicit connection to CMMI - although interestingly the article does not explicitly state this.