Here's an article, recently referenced in a #NoEstimates twitter post. The headline is deceiving: the article DOES NOT suggest we don't need deadlines, but that deadlines without a credible assessment of their feasibility are the source of many problems on large programs...

**The Core Problem with Project Success**

There are many core Root Causes of program problems. Here are four, from research at PARCA:

- Unrealistic performance expectations, missing Measures of Effectiveness and Measures of Performance.
- Unrealistic Cost and Schedule estimates, based on inadequate risk-adjusted growth models.
- Inadequate assessment of risk and unmitigated exposure to these risks, without proper handling plans.
- Unanticipated Technical issues, without alternative plans and solutions to maintain effectiveness.

Before diving into the details of these, let me address another issue that has come up around project success and estimates. There is a common chart used to show the poor performance of projects, comparing the *Ideal* project performance with the *Actual* project performance. Here's a notional replica of that chart.

This chart shows several things:

- The notion of *Ideal* is just that: notional. All that line says is that this was the baseline Estimate at Completion for the project work. It says nothing about the credibility of that estimate, or the possibility that one or all of the Root Causes above are in play.
- The chart then shows that many projects in the sample population cost more or take longer (which also costs more).
- The term *Ideal* is a misnomer. There is no ideal in the estimating business, just the estimate.
- The estimate has two primary attributes: accuracy and precision.
- The charts (even the notional ones) usually don't say what the accuracy or precision is of the values that make up the line.

So let's look at the estimating process and the actual project performance.

- There is no such thing as the *ideal* cost estimate. Estimates are probabilistic. They have a probability distribution function (PDF) around the Mode of the possible values of the estimate. This Mode is the Most Likely value of the estimate. If the PDF is symmetric (as shown above), the upper and lower limits are usually set at some 20/80 bounds. This is typical in our domain; other domains may vary.
- This says: *here's our estimate, with an 80% confidence*.
- So now if the actual cost, schedule, or some technical parameter falls inside the *acceptable range* (the confidence interval), it's considered GREEN. This range of variances addresses the uncertainty in both the estimate and the project performance.
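As a minimal sketch of what a 20/80-bounded estimate looks like, here is a Monte Carlo sample of a triangular PDF around the Most Likely value. All the numbers are assumed, for illustration only:

```python
import random

random.seed(7)

# Assumed illustrative bounds for a task estimate (in days):
# optimistic, most likely (the Mode), and pessimistic values.
low, mode, high = 8.0, 12.0, 20.0

# Sample the estimate's probability distribution function (PDF).
# A triangular PDF is a common minimal choice when only three
# points are known.
samples = sorted(random.triangular(low, high, mode) for _ in range(100_000))

p20 = samples[int(0.20 * len(samples))]  # lower 20% bound
p80 = samples[int(0.80 * len(samples))]  # upper 80% bound

print(f"Most Likely (Mode): {mode:.1f} days")
print(f"20/80 confidence bounds: {p20:.1f} to {p80:.1f} days")
```

Note that even with a symmetric-looking three-point input, the bounds are not symmetric around the Mode; the estimate is a range with a stated confidence, not a single number.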

But here are three problems. First, there is no cause stated for that variance. Second, the *ideal* line can never be *ideal*. The straight line is the estimate of the cost (and schedule), and that estimate is probabilistic. So the line HAS to have a probability distribution around it: the confidence interval on the range of the estimate. The resulting actual cost or schedule may well be within the acceptable range of the estimate. Third, are the estimates being updated when work is performed or new work is discovered, and are those updates the result of changing scope? You can't state *we did make our estimate* if the scope is changing. This is core *Performance Measurement Baseline* stuff we use every week where we work.

As well, since the *ideal* line has no probabilistic attributes in the original paper(s), nor in the chart shown above, here's how we think about cost, schedule, and technical performance modeling in the presence of the probabilistic and statistical processes of all project work. †

So let's be clear: NO point estimate can be credible. The *Ideal* line is a point estimate. It's bogus on day one and continues to mislead as more data is captured from projects *claimed* to not match the original estimate. Without the underlying uncertainties (aleatory and epistemic) in the estimating model, the *ideal* estimates are worthless. So when the actual numbers come in and don't match the *ideal estimate*, there is NO way to know why. Was the estimate wrong (and all point estimates are wrong), or was one or all of Mr. Bliss's root causes the cause of the actual variance?

So another issue with the *Ideal Line* is that there are no confidence intervals around the line. What if the *actual* cost came *inside* the acceptable range of the *ideal* cost? Then would the project be considered *on cost* and *on schedule*? Add to that the *coupling* between cost, schedule, and the technical performance, as shown above.

The use of the Ideal is Notional. That's fine if your project is Notional.

What's the reason a project, or a collection of projects, doesn't match the baselined estimate? That estimate MUST have accuracy and precision numbers before it is useful to anyone.
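To make the accuracy and precision attributes concrete, here is a small sketch with assumed historical data: accuracy is the average bias of past estimates against actuals, and precision is how tightly those errors cluster.

```python
import statistics

# Assumed illustrative history: estimated vs. actual cost for five
# past projects ($K). Accuracy is how close estimates are to actuals
# on average (bias); precision is the spread of the errors.
estimates = [100, 250, 80, 400, 150]
actuals   = [120, 260, 95, 520, 155]

# Relative error of each estimate: positive means the actual
# exceeded the estimate (cost growth).
errors = [(a - e) / e for e, a in zip(estimates, actuals)]

accuracy = statistics.mean(errors)    # average relative bias
precision = statistics.stdev(errors)  # dispersion of the errors

print(f"Accuracy (mean bias): {accuracy:+.1%}")
print(f"Precision (stdev):    {precision:.1%}")
```

With numbers like these in hand, a baseline estimate can be stated as "our estimates historically run about 15% low, plus or minus 11%", which is something a decision maker can actually use.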

- Essentially that straight line is likely an unquantified *point estimate*. And ALL point estimates are WRONG, BOGUS, WORTHLESS. (Yes, I am shouting on the internet.)
- Don't ever make decisions in the presence of uncertainty with point estimates.
- Don't ever do analysis of cost and schedule variances without first understanding the accuracy and precision of the original estimate.
- Don't ever make suggestions to change the processes without first finding the root cause of why the actual performance has a variance from the planned performance.

So what's the summary so far?

- All project work is probabilistic, driven by the underlying uncertainty of many processes. These processes are coupled; they have to be for any non-trivial project. What are the coupling factors? The non-linear couplings? Don't know these? Then there's no way to suggest much of anything about the time-phased cost and schedule.
- Knowing the reducible and irreducible uncertainties of the project is the *minimal critical success factor* for project success.
- Don't know these? You've doomed the project on day one.

So in the end, any estimate we make at the beginning of the project MUST be updated as the project proceeds. With this *past performance* data we can make improved estimates of the future performance, as shown below. By the way, when the #NoEstimates advocates suggest using past data (empirical data) but don't apply the statistical assessment of that data to produce a confidence interval for the future estimate (a forecast is an estimate of a future outcome), they have only done half the work needed to inform those paying of the likelihood of the future cost, schedule, or technical performance.
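A hedged sketch of doing the whole job with empirical data: resample the past performance (a simple bootstrap) to simulate many possible futures, then read a confidence interval off the outcomes instead of quoting a single forecast. The throughput numbers are assumed, for illustration only:

```python
import random

random.seed(11)

# Assumed illustrative past-performance data: items completed in
# each of the last 10 weekly periods (empirical throughput).
past_throughput = [4, 6, 3, 5, 7, 4, 5, 6, 2, 5]
remaining_items = 40

def weeks_to_finish():
    # Simulate one possible future by resampling the observed weeks.
    done, weeks = 0, 0
    while done < remaining_items:
        done += random.choice(past_throughput)
        weeks += 1
    return weeks

outcomes = sorted(weeks_to_finish() for _ in range(20_000))
p50 = outcomes[len(outcomes) // 2]
p80 = outcomes[int(0.80 * len(outcomes))]

print(f"50% confident: done in {p50} weeks or fewer")
print(f"80% confident: done in {p80} weeks or fewer")
```

The point is the pair of numbers: stating only the median is the half-done version; the 80% figure is what tells those paying how much schedule exposure remains.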

**So Now To The Corrective Actions of The Causes of Project Variance**

If we take the four root causes in the first chart, courtesy of Mr. Gary Bliss, Director of Performance Assessment and Root Cause Analysis (PARCA), let's see what the first approach is to fixing them.

**Unrealistic performance expectations missing Measures of Effectiveness and Measures of Performance**

- Defining the Measures of Performance, the resulting Measures of Effectiveness, and the Technical Performance Measures of the resulting project outcomes is a critical success factor.
- Along with the Key Performance Parameters, these measures define what DONE looks like in units of measure meaningful to the decision makers.
- Without these measures, those decision makers and those building the products that implement the solution have no way to know what DONE looks like.

**Unrealistic Cost and Schedule estimates based on inadequate risk adjusted growth models**

- Here's where estimating comes in. All project work is subject to uncertainty. Reducible (Epistemic) uncertainty and Irreducible (Aleatory) uncertainty.
- Here's how to Manage in the Presence of Uncertainty.
- Both these cause risk to cost, schedule, and technical outcomes.
- Determining the range of possible values for aleatory and epistemic uncertainties means making estimates from past performance data or parametric models.
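A minimal sketch of determining that range of possible values, with all numbers assumed for illustration: aleatory (irreducible) uncertainty is modeled as a continuous spread on the base cost, and epistemic (reducible) uncertainty as discrete risk events that may or may not occur.

```python
import random

random.seed(3)

# Assumed illustrative model (all values in $K):
# aleatory uncertainty = natural variability of the base cost,
# epistemic uncertainty = discrete risk events with a probability.
base_low, base_mode, base_high = 90, 100, 130
risks = [
    (0.30, 25),   # e.g., a vendor slips delivery
    (0.10, 60),   # e.g., a technical approach fails and needs rework
]

def one_trial():
    # Draw a base cost from the aleatory spread...
    cost = random.triangular(base_low, base_high, base_mode)
    # ...then add each epistemic risk event if it "occurs".
    for prob, impact in risks:
        if random.random() < prob:
            cost += impact
    return cost

trials = sorted(one_trial() for _ in range(50_000))
p80 = trials[int(0.80 * len(trials))]
print(f"Risk-adjusted cost at 80% confidence: ${p80:.0f}K")
```

This is the "risk-adjusted growth model" idea in miniature: the budget quoted at 80% confidence is well above the Most Likely base cost, because the model carries both kinds of uncertainty instead of ignoring them.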

**Inadequate assessment of risk and unmitigated exposure to these risks without proper handling plans**

- This type of risk is held in the Risk Register.
- This means making estimates of: the probability of occurrence, the probability of impact, the probable cost to mitigate, the probability of any residual risk, and the probable impact of this residual risk.
- Risk management means making estimates.
- Risk management is how adults manage projects. No risk management, no adult management. No estimating no adult management.
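A sketch of the estimates a Risk Register entry carries, with assumed numbers, comparing the unmitigated exposure against the cost of mitigation plus residual risk:

```python
# Assumed illustrative Risk Register entries. Each field is an
# estimate, not a certainty: probability of occurrence, cost impact
# if it occurs, cost to mitigate, and the residual risk remaining
# after mitigation.
risk_register = [
    # (name, p_occur, impact $K, mitigation $K, p_residual, residual impact $K)
    ("Key supplier slips",     0.40, 80, 10, 0.15, 30),
    ("Integration test fails", 0.25, 120, 20, 0.10, 40),
]

for name, p, impact, mitigate, p_res, res_impact in risk_register:
    unmitigated = p * impact                 # expected exposure if we do nothing
    mitigated = mitigate + p_res * res_impact  # cost of acting, plus residual
    action = "mitigate" if mitigated < unmitigated else "accept"
    print(f"{name}: exposure ${unmitigated:.0f}K, "
          f"mitigated cost ${mitigated:.0f}K -> {action}")
```

Every number that drives the mitigate-or-accept decision here is an estimate, which is the point: no estimates, no risk management.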

**Unanticipated Technical issues with no alternative plans and solutions to maintain effectiveness**

- Things go wrong; it's called development.
- When things go wrong, where's Plan B? Maybe even Plan C?

When we hear that we can't estimate, that planning is hard or maybe not even needed, or that we can't forecast the future, let's ask some serious questions.

- Do you know what DONE looks like in meaningful units of measure?
- Do you have a plan to get to Done when the customer needs you to, for the cost the customer can afford?
- Do you have the needed resources to reach Done for the planned cost and schedule?
- Do you know something about the risk to reaching Done and do you have plans to mitigate those risks in some way?
- Do you have some way to measure physical percent complete toward Done, again in units meaningful to the decision makers, so you can get feedback (variance) from your work to take corrective actions to keep the project going in the right direction?

The answers should be YES to these Five Immutable Principles of Project Success.

If not, you're late, over budget, and have a low probability of success on Day One.

†NRO Cost Group Risk Process, Aerospace Corporation, 2003