One suggestion from the #NoEstimates community is the use of empirical data of past performance. This is many times called Yesterday's Weather. First, let's make sure we're not using just the averages from yesterday's weather. Even adding the variance to that small sample of past performance can lead to very naive outcomes.
We need to do some actual statistics on that time series. A simple set of R commands will produce the chart below from the time series of past performance data.
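The post mentions R; the same summary statistics can be sketched in Python. The throughput numbers below are invented stand-ins for a past-performance time series, just to show why the average alone is a naive number:

```python
import statistics

# Hypothetical throughput (features completed per iteration) -- a stand-in
# for the past-performance time series discussed above.
throughput = [7, 9, 8, 7, 12, 7, 6, 10, 7, 11]

mean = statistics.mean(throughput)          # the naive "yesterday's weather" number
variance = statistics.variance(throughput)  # sample variance around that mean
mode = statistics.mode(throughput)          # the value that occurs most often

print(f"mean={mean:.1f}, variance={variance:.1f}, mode={mode}")
```

Note the mean (8.4) and the mode (7) already disagree on this small sample; which one you use changes the forecast.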
But that doesn't really help without some more work.
- Is the future really like the past? Are the work products and the actual work performed in the past replicated in the future? If so, this sounds like a simple project: just turn out features that all look alike.
- Are there any interdependencies that grow in complexity as the project moves forward? This is the integration and test problem. Then the system of systems integration and test problem. Simple projects don't usually have this problem. More complex projects do.
- What about those pesky emerging requirements? This is a favorite idea of agile (and correctly so), but simple past performance is not going to forecast the needed performance in the presence of emerging requirements.
- Then there are the externalities of all project work. Where are those captured in the sample of past performance?
- "All big projects have little projects inside them" is a common phrase. Except that collection of little projects needs to be integrated, tuned, tested, verified, and validated, so that all the parts, when assembled, actually do what the customer wants.
Getting Out of the Yesterday's Weather Dilemma
Let's use the chart below to speak about some sources of estimating NOT based on simple small samples of yesterday's weather. This is a Master Plan for a non-trivial project to integrate a half dozen or so legacy enterprise systems with a new health insurance ERP system for an integrated payer/provider solution:
- Reference Class Forecasting for each class of work product.
- As the project moves left to right in time, the classes of product and the related work likely change.
- Reference classes for each of these movements through increasing maturity, and increasing complexity from integration interactions, need to be used to estimate not only the current work but the next round of work.
- In the chart above, work on the left is planned with some level of confidence, because it's work in hand. Work on the right is in the future, so a coarser estimate is all that is needed for the moment.
- This is the planning package notion used in space and defense: only plan in detail what you understand in detail.
- Interdependencies Modeling in Monte Carlo Simulation (MCS)
- On any non-trivial project there are interdependencies.
- The notion of INVEST needs to be tested:
- Independent - not usually the case on enterprise projects
- Negotiable - usually not, since the ERP system provides the core capability to do business. It would be illogical to have half the procurement system.
- We can issue purchase orders and receive goods, but we can't pay for them until we get the Accounts Payable system. We need both at the same time.
- Valuable - yep, why are we doing this if it's not valuable to the business? This is a strawman used by low business maturity projects.
- Estimable - to a good approximation, is what the advice tells us. The term good needs a unit of measure.
- Small - is a domain dependent measure. Small to an enterprise IT project may be huge to a sole contributor game developer.
- Testable - Yep, and verifiable, and validatable, and secure, and robust, and fault tolerant, and meets all performance requirements.
- Margin - protects dates, cost, and technical performance from irreducible uncertainty. By irreducible it means nothing can be done about the uncertainties. It's not the lack of knowledge found in reducible, epistemic, uncertainty. Irreducible uncertainty is aleatory: the natural randomness in the underlying processes that creates the uncertainty. When we are estimating in the presence of aleatory uncertainty, we must account for it. This is why using the average of a time series for making a decision about possible future outcomes will always lead to disappointment.
- First, we should always use the Most Likely value of the time series, not the Average of the time series.
- The Most Likely value - the Mode - is the number that occurs most often of all the possible values that have occurred in the past. This should make complete sense when we consider what value will appear next: why, the value that has appeared Most Often in the past.
- The Average of two numbers 1 and 99 is 50. The average of two numbers 49 and 51 is 50. Be careful with averages in the absence of knowing the variance.
- Risk retirement - epistemic uncertainty creates risks that can be retired. This means spending money and time. So when we're looking at past performance in an attempt to estimate future performance (Yesterday's Weather), we must determine what kind of uncertainties there are in the future and what kind of uncertainties we encountered in the past.
- Were they, and are they, reducible or irreducible?
- Did the performance in the past contain irreducible uncertainties baked into the numbers that we did not recognize?
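The reference class and Monte Carlo ideas in the list above can be sketched in a few lines. Everything here is invented for illustration: the reference class names, the triangular (low, most likely, high) parameters, and the coupling factor between feature work and integration work:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical reference classes: (low, most likely, high) durations in days.
reference_classes = {
    "feature_work":    (3.0, 5.0, 12.0),
    "integration":     (2.0, 4.0, 15.0),  # interdependencies drive the long tail
    "test_and_verify": (2.0, 3.0, 10.0),
}

def sample(lo, mode, hi):
    """Draw one duration from a triangular distribution for a reference class."""
    return random.triangular(lo, hi, mode)

def one_trial():
    """One simulated project outcome, with an assumed coupling between
    feature effort and integration effort to model interdependency."""
    feature = sample(*reference_classes["feature_work"])
    integration = sample(*reference_classes["integration"])
    integration *= 1.0 + 0.1 * max(0.0, feature - 5.0)  # more features, more integration
    test = sample(*reference_classes["test_and_verify"])
    return feature + integration + test

trials = sorted(one_trial() for _ in range(10_000))
mean = statistics.mean(trials)
p80 = trials[int(0.8 * len(trials))]  # an 80% confidence completion time

print(f"mean = {mean:.1f} days, 80th percentile = {p80:.1f} days")
```

The commitment comes from a confidence level (here the 80th percentile), not the average: for right-skewed duration distributions the average understates the risk.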
This brings up a critical issue with all estimates. Did the numbers produced from the past performance meet the expected values, or were they just the numbers we observed? This notion of taking the observed numbers and using them for forecasting the future is an Open Loop control system. What SHOULD the numbers have been to meet our goals? What SHOULD the goal have been? If we don't know that, then there is no baseline to compare the past performance against to see if it will be able to meet the future goal.
I'll say this again - THIS IS OPEN LOOP control, NOT CLOSED LOOP. No amount of dancing around will get over this; it's a simple control systems principle, found here: Open and Closed Loop Project Controls.
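The closed-loop idea is mechanical: compare actuals to a planned steering target each period and act on the error signal, rather than just extrapolating the actuals. A minimal sketch, with invented plan and actual numbers:

```python
# Minimal closed-loop sketch: compare actual physical percent complete to a
# planned steering target each period and act on the error signal.
# The plan, actuals, and tolerance are invented numbers for illustration.

planned = [10, 20, 30, 40, 50]   # cumulative physical percent complete by period
actual  = [ 8, 15, 24, 33, 45]
TOLERANCE = 5                    # assumed variance threshold before acting

status = []
for period, (p, a) in enumerate(zip(planned, actual), start=1):
    error = p - a                # the error signal a closed loop acts on
    action = "corrective action" if error > TOLERANCE else "on track"
    status.append(action)
    print(f"period {period}: plan={p}% actual={a}% error={error}% -> {action}")
```

Open loop control has only the `actual` list; without `planned` there is no error signal, and therefore nothing to steer with.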
- Measures of physical percent complete to forecast future performance with cost, schedule, and technical performance measures - once we have the notion of Closed Loop Control, have constructed a steering target, and can capture actuals against plan, we need to define measures that are meaningful to the decision makers. Agile does a good job of forcing working product to appear often. The assessment of Physical Percent Complete, though, needs to define what that working software is supposed to do in support of the business plan.
- Measures of Effectiveness - one very good measure is the Measure of Effectiveness. Does the software provide an effective solution to the problem? This begs the question, or questions: what is the problem, and what would an effective solution look like were it to show up?
- MOE's are operational measures of success that are closely related to the achievement of the mission or operational objectives evaluated in the operational environment, under a specific set of conditions.
- Measures of Performance - the companion of Measures of Effectiveness are Measures of Performance.
- MOP's characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
- Along with these two measures are Technical Performance Measures
- TPM's are attributes that determine how well a system or system element is satisfying or expected to satisfy a technical requirement or goal.
- And finally there are Key Performance Parameters
KPPs represent the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the program.
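These measures can be rolled up mechanically into a program status. A sketch, where every measure name, actual, goal, and threshold is hypothetical (nothing here comes from a real program):

```python
# Illustrative roll-up of TPM's and KPP's into a program-level status.
# All names and numbers below are hypothetical.

tpms = {
    "response_time_ms": {"actual": 420, "goal": 500, "lower_is_better": True},
    "throughput_tps":   {"actual": 90,  "goal": 100, "lower_is_better": False},
}

kpps = {
    "claims_processed_per_day": {"actual": 9500, "threshold": 10000},
}

def tpm_met(m):
    """Is the technical requirement satisfied (or expected to be satisfied)?"""
    return m["actual"] <= m["goal"] if m["lower_is_better"] else m["actual"] >= m["goal"]

tpm_status = {name: tpm_met(m) for name, m in tpms.items()}
kpp_breach = [name for name, m in kpps.items() if m["actual"] < m["threshold"]]

print(tpm_status)
if kpp_breach:
    print(f"KPP breach on {kpp_breach}: grounds to reevaluate or reassess the program")
```

The asymmetry is the point: a missed TPM is a steering signal, while a missed KPP can put the whole program in question.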
The connections between these measures are shown below.
With these measures, and tools for making estimates of the future - forecasts - using statistical tools, we can use yesterday's weather, tomorrow's models and related reference classes, and the desired MOE's, MOP's, KPP's, and TPM's to construct a credible estimate of what needs to happen. Then we measure what is happening, close the loop with an error signal, and take corrective action to stay on track toward our goal.
This all sounds simple in principle, but in practice of course it's not. It's hard work. But when you assess the value at risk to be outside the tolerance range, where the customer is unwilling to risk their investment, we need tools and processes to actually control the project.