In a recent discussion it was mentioned that there is no such thing as a fixed delivery date with fixed features. This of course is not actually the case when you look at complex, high risk, high reward, software intensive programs in the defense, space, and ERP domains. So if it can be done in those domains (granted, sometimes successfully, sometimes not), it can be scaled to fit any other domain.
Along with that conversation was the claim that forecasts and estimates are different things, that forecasts don't have uncertainty and estimates do. This of course is not actually the case when you look at the mathematics of probability and statistics and the information they provide to decision makers.
Here's one place to start...
- Estimates deal with past, current, and future situations.
- Forecasts deal with future situations.
It's that simple. As well, estimates and forecasts have probabilistic behaviours and therefore have confidence intervals. Looking around on my bookshelf, here is some guidance. These are old books I first used in jobs that required probability and statistics to be successful: signal processing, pattern recognition, process control, and, later, project and program management. Everything is probabilistic, driven by underlying statistical processes.
- Probability and Statistics, Julius Blum and Judah Rosenblatt, W.B. Saunders, 1972. Chapter 10, p. 323, is Constructive Methods of Obtaining Point Estimates.
- The Art of Modeling Dynamic Systems: Forecasting for Chaos, Randomness, and Determinism, Foster Morrison. Chapter 19 has a figure showing a forecast and the error bands around that forecast.
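To make the point concrete, here's a minimal sketch in Python (the throughput numbers and the normal-approximation intervals are invented for illustration) showing that both an estimate of a current quantity and a forecast of a future one carry confidence intervals:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are 25 noisy weekly measurements of team throughput
# (story points per week) -- made-up data for illustration.
throughput = rng.normal(loc=30.0, scale=6.0, size=25)

# ESTIMATE of the current mean throughput, with a ~90% confidence
# interval using the normal approximation (z = 1.645).
mean = throughput.mean()
sem = throughput.std(ddof=1) / np.sqrt(len(throughput))
print(f"Estimate: mean throughput = {mean:.1f} +/- {1.645 * sem:.1f} (90% CI)")

# FORECAST of next week's throughput: same mean, but the interval is
# wider, because a single future value includes the week-to-week noise,
# not just the uncertainty in the mean.
pred = 1.645 * throughput.std(ddof=1) * np.sqrt(1 + 1 / len(throughput))
print(f"Forecast: next week = {mean:.1f} +/- {pred:.1f} (90% interval)")
```

The forecast interval is wider than the estimate interval, but both are probabilistic statements. Neither is a point value.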
So it's worth repeating.
All activities on projects are probabilistic, driven by underlying statistical processes. These statistical processes create uncertainty. This uncertainty creates risk. One class of uncertainty is irreducible (aleatory), meaning we can't do anything about it. The other class of uncertainty is reducible (epistemic; epistemology is the study of knowledge), which means we can learn more about this uncertainty and do something about it.
The irreducible uncertainty is handled by margin: cost, schedule, or technical margin. For reducible uncertainty and the resulting risk, we can spend money to fix it. We can build prototypes, run more tests, test often, use test driven development, produce working products every week, get customer buy-in at the close of business every day, do something, anything, to get feedback on the outcomes of the work effort. This is the role of agile software development. Agile DOES NOT address the aleatory uncertainty, unless each sprint or iteration has margin.
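Here's a minimal sketch of how that margin can be sized, assuming triangular duration distributions for three sequential tasks (all numbers invented): run a Monte Carlo over the task durations, commit at the 80th percentile, and carry the difference between that commitment and the median as schedule margin.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000  # Monte Carlo trials

# Three sequential tasks, each with (low, most likely, high) durations
# in days -- invented numbers for illustration.
tasks = [(8, 10, 16), (4, 5, 9), (12, 15, 25)]

# Total project duration per trial: sum of triangular samples.
total = sum(rng.triangular(lo, mode, hi, size=N) for lo, mode, hi in tasks)

p50, p80 = np.percentile(total, [50, 80])
print(f"Median duration (P50): {p50:.1f} days")
print(f"80% confident duration (P80): {p80:.1f} days")
print(f"Schedule margin to carry: {p80 - p50:.1f} days")
```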
For both uncertainty classes - reducible and irreducible - reference classes are the best starting point. These are statistical models of work done in the past. Some can be parametrically adjusted for the new problem, some can just be anchors for guiding the estimating and forecasting processes, some can be the adjustment process after the initial estimate or forecast has been made. This anchoring and adjustment paradigm is well developed in a wide variety of domains - software, hardware, oil & gas, power systems, aircraft, spacecraft. This approach is the basis of all good estimating processes, along with the guidance found in How Many Licks? Or, How to Estimate Damn Near Anything and How to Think About Statistics.
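A hedged sketch of anchor-and-adjust with a reference class (the overrun ratios below are invented, not real project data): take the actual-to-estimated cost ratios from past, similar projects and adjust the new anchor estimate by their empirical quantiles.

```python
import numpy as np

# Actual-cost / estimated-cost ratios from past, similar projects --
# the reference class. These values are invented for illustration.
overrun_ratios = np.array([1.05, 1.20, 0.95, 1.40, 1.10, 1.60, 1.25, 1.15])

anchor = 1_000_000  # initial bottom-up estimate in dollars (the anchor)

# Adjust the anchor by the empirical distribution of past outcomes.
p50, p80 = np.quantile(overrun_ratios, [0.50, 0.80])
print(f"50% confidence cost: ${anchor * p50:,.0f}")
print(f"80% confidence cost: ${anchor * p80:,.0f}")
```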
This notion that estimating or forecasting is fraught with errors, produces lies, and can't or shouldn't be done is simply wrong, misinformed, and actually a bit naive. That's being overly critical, I know, but estimating is done all the time, sometimes well, sometimes not so well. Go learn how first, and then say how it's not working in your domain after you've mastered the process.
So Let's Look At Probability and Statistics on Projects
All elements of projects have statistical behaviour. Projects are dynamic systems, and dynamic means changing. For example, the orbital energy of a satellite changes very little, but battery power versus data transmission load might change a lot. All dynamically behaving variables have statistical properties - a probability distribution function, a mean, mode, median, and other moments.
Many, if not most, attributes of any project are random variables.
Each of these random variables participates in a forecast and an estimate - forecasts being about future behaviours, and estimates about past, present, and future. But both are probabilistically driven by statistical processes. These have a fancy name - stochastic processes.
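To see those statistical properties in code, here's a small sketch (distribution parameters invented) that samples a right-skewed task duration and reports its moments. Note the mean, median, and mode all differ, because real project distributions are rarely symmetric.

```python
import numpy as np

rng = np.random.default_rng(11)

# Sample a right-skewed task duration (triangular: low=5, mode=7,
# high=20 days) -- parameters invented for illustration.
d = rng.triangular(5, 7, 20, size=200_000)

mean, median = d.mean(), np.median(d)
# Crude mode estimate: midpoint of the fullest histogram bin.
counts, edges = np.histogram(d, bins=100)
mode = (edges[counts.argmax()] + edges[counts.argmax() + 1]) / 2

print(f"mean={mean:.1f}  median={median:.1f}  mode={mode:.1f}  "
      f"std={d.std():.1f}  (mean > median > mode: right-skewed)")
```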
When I hear that the difference between an estimate and a forecast is that the forecast doesn't state its uncertainty, I know this is mathematically incorrect.
- Project processes are probabilistic
- The probability distributions are driven by statistical (stochastic) processes
- These processes have moments (mean, median, mode, and higher order) that define the shape of the distribution - Poisson, Weibull, Rayleigh, Normal (Normal never happens on real projects, sorry), Triangle (the one we use most often when we don't know the higher order moments), Beta (used in PERT), etc.
- Each number generated by the underlying probability distribution function, when used in assessing something in the project, has a confidence interval. It has an uncertainty associated with it. Here are some forecasts (a sketch showing how these confidence levels are computed follows this list):
- We forecast we'll complete on or before the 3rd week of November 2014, with 80% confidence.
- We forecast to complete at or below $260MM with 75% confidence.
- We forecast a 30% chance of rain in the Denver Metro area for this coming weekend.
- For estimates we can say things like:
- We estimate that 1,200 homes were damaged in the recent floods in Boulder County. FEMA didn't count them before making that estimate, but the estimate had a confidence interval.
- We estimate that Amendment 66 will not pass, with 85% of the precincts reporting.
- Our Estimate to Complete for the remaining work is $267,000 with 85% confidence.
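Statements like these fall straight out of a Monte Carlo over the underlying distributions. Here's the promised sketch, with invented work-package costs, producing an Estimate to Complete at 85% confidence:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000  # Monte Carlo trials

# Remaining work packages with (low, most likely, high) costs in
# dollars -- invented numbers for illustration.
packages = [(40_000, 55_000, 90_000),
            (80_000, 95_000, 150_000),
            (60_000, 70_000, 120_000)]

# Total Estimate to Complete per trial: sum of triangular samples.
etc = sum(rng.triangular(lo, mode, hi, size=N) for lo, mode, hi in packages)

print(f"Estimate to Complete: ${np.percentile(etc, 85):,.0f} "
      f"at 85% confidence")
```

Change the percentile and you change the confidence you're willing to state; the underlying distribution doesn't change.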