In software development, we almost always encounter situations where a decision must be made when we are uncertain what the outcome might be, or even uncertain about the data used to make that decision.

Decision making in the presence of uncertainty is standard management practice in all business and technical domains, from business investment decisions to technical choices for project work.

Making decisions in the presence of uncertainty means making probabilistic inferences from the information available to the decision maker.

There are many techniques for decision making. Decision trees are common, where the probability of an outcome is part of a *branch* of the tree. *If I go left at the branch - the decision - what happens? If I go right, what happens?* Each branch point is a decision. Each of the two or more branches is an outcome. Probabilities are applied to the branches, and the outcomes - which may be probabilistic as well - are assessed for their benefits to those making the decision.
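
The branch-and-outcome idea above can be sketched in a few lines. This is a minimal example with hypothetical probabilities and payoffs (the "build vs. buy" choice and all the numbers are illustrative assumptions, not data from the post):

```python
# Hypothetical two-branch decision: build a component in-house or buy it.
# Each branch has (probability, payoff) outcomes; probabilities sum to 1.0.
branches = {
    "build": [(0.6, 100_000), (0.4, -50_000)],
    "buy":   [(0.9, 40_000),  (0.1, -10_000)],
}

def expected_value(outcomes):
    """Probability-weighted payoff of one branch of the decision tree."""
    return sum(p * payoff for p, payoff in outcomes)

for choice, outcomes in branches.items():
    print(f"{choice}: expected value = {expected_value(outcomes):,.0f}")
```

With these illustrative numbers, "build" has the higher expected value, but also the wider spread of outcomes - which is exactly the kind of trade the decision maker has to assess.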

Another approach is Monte Carlo Simulation of decision trees. Tools we use for many decisions in our domain include Palisade and *Crystal Ball*; there are others. They work like the manual decision-tree process described above, but let you *tune* the probabilistic branching and probabilistic outcomes to model complex decision-making processes.
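
A toy version of what those tools do under the hood - repeatedly sampling the branch according to its probabilities and averaging the payoffs - can be sketched with the standard library (the outcome numbers are the same illustrative assumptions as above, not output from any of the named tools):

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def simulate_branch(outcomes, trials=100_000):
    """Monte Carlo estimate of a branch's expected payoff.

    outcomes: list of (probability, payoff) pairs summing to 1.0.
    """
    probs, payoffs = zip(*outcomes)
    total = 0.0
    for _ in range(trials):
        # Draw one outcome per trial, weighted by its probability.
        total += random.choices(payoffs, weights=probs)[0]
    return total / trials

# Illustrative numbers only - tune these to your own decision.
ev = simulate_branch([(0.6, 100_000), (0.4, -50_000)])
print(f"simulated expected value: {ev:,.0f}")
```

The simulated average converges on the analytic expected value; the advantage of simulation appears when branches nest and outcomes are themselves distributions rather than point values.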

In the project management paradigm of the projects we work on, there are networks of activities. Each activity has some dependency on prior work, and each activity produces dependencies for follow-on work. These networks can be modeled with Monte Carlo Simulation as well.

The *Schedule Risk Analysis* (SRA) of the network of work activities is mandated on a monthly basis in many of the programs we work on.

In Kanban and Scrum systems, Monte Carlo Simulation is a powerful tool to reveal the expected performance of the development activity. *Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects Using Monte Carlo Simulation* by Troy Magennis is a good place to start for this approach.
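
The core of the Kanban/Scrum forecasting approach can be sketched simply: resample past weekly throughput until the backlog is consumed, repeat many times, and read off a confidence level. The throughput history and backlog size below are hypothetical (this is an illustration of the technique, not Magennis's tool):

```python
import random

random.seed(1)

# Hypothetical history: stories completed per week on past boards.
weekly_throughput = [3, 5, 2, 6, 4, 4, 7, 3, 5, 4]
backlog = 60  # remaining stories

def weeks_to_finish():
    """One Monte Carlo trial: resample past weekly throughput until done."""
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(weekly_throughput)
        weeks += 1
    return weeks

trials = sorted(weeks_to_finish() for _ in range(5_000))
p85 = trials[int(0.85 * len(trials))]
print(f"85% confident the backlog completes within {p85} weeks")
```

The resulting "done within N weeks at 85% confidence" statement is an *estimate* - the whole point of the exercise - and it silently assumes future throughput resembles the sampled past, which is exactly the assumption examined below.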

Each of these approaches, and others, is designed to provide *actionable information to the decision makers*. This information requires a minimum understanding of what is happening to the system being managed:

- What are the naturally occurring variances of the work activities that we have no control over - aleatory uncertainty?
- What are the event-based probabilities of some occurrence - epistemic uncertainty?
- What are the consequences of each outcome - decision, probabilistic event, or naturally occurring variance - on the desired behavior of the system?
- What choices can be made that will influence these outcomes?
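
The first two questions - aleatory versus epistemic uncertainty - can be made concrete in one small simulation. The task, its duration range, and the risk probability below are invented for illustration:

```python
import random
import statistics

random.seed(3)

def task_duration():
    """One sampled task duration (days), carrying both kinds of uncertainty."""
    # Aleatory uncertainty: natural variation we cannot reduce, only model.
    base = random.triangular(8, 16, 10)
    # Epistemic uncertainty: a discrete risk event with an estimated
    # probability - here, a 20% chance a rework cycle is needed.
    if random.random() < 0.2:
        base += random.triangular(3, 8, 5)
    return base

samples = [task_duration() for _ in range(20_000)]
print(f"mean duration with both uncertainties: {statistics.mean(samples):.1f} days")
```

The distinction matters to the last question in the list: epistemic risk can be bought down (retire the rework risk with a prototype), while aleatory variance can only be protected against with margin.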

In many cases, the information needed to make these choices lies in the future. Some is in the past. But information from the past needs careful assessment.

Past data is only useful if you can be assured the future is like the past. If not, making decisions using past data without adjusting that data for possible changes in the future takes you straight into the ditch - see *The Flaw of Averages*.
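
The Flaw of Averages is easy to demonstrate: a plan evaluated at the *average* input can look nothing like the average of the outcomes. A minimal sketch with invented numbers - a capacity of 10 units and a demand that averages exactly that capacity:

```python
import random
import statistics

random.seed(0)

capacity = 10

def overrun_cost(demand):
    """Penalty is nonlinear: it bites only when demand exceeds capacity."""
    return max(0.0, demand - capacity) * 1_000

# Demand varies around the capacity; on average the two match.
demands = [random.gauss(10, 3) for _ in range(50_000)]

cost_at_average = overrun_cost(statistics.mean(demands))          # ~0: looks safe
average_cost = statistics.mean(overrun_cost(d) for d in demands)  # decidedly not 0

print(f"cost at average demand: {cost_at_average:,.0f}")
print(f"average cost:           {average_cost:,.0f}")
```

Plugging the average demand into the cost function says the plan is nearly free; averaging the cost over the actual variability shows a substantial expected overrun. Plans built on single-point averages fail on average.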

In order to have any credible assessment of the impact of a decision on future outcomes - *where will the system be going in the future?* - it is mandatory to **ESTIMATE**.

It is simply not possible to make decisions about future outcomes in the presence of uncertainty in that future without making estimates.

Anyone who says you can is incorrect. And if they insist it can be done, ask for testable evidence of their conjecture, based on the mathematics of probabilistic systems. No credible, testable data? Then it's pure speculation. Move on.

**The False Conjecture of Deciding in Presence of Uncertainty without Estimates**

- Slicing the work into similar-sized chunks, performing work on those chunks, and using that information to produce information about the future makes the huge assumption that the future is like the past.
- Recording past performance, making nice plots, and running static analysis for *mean*, *mode*, *standard deviation*, and *variance* is naive at best. The time series variances are *rolled up*, hiding the latent variances that will emerge in the future. Time series analysis (ARIMA) is required to reveal the possible values in the past dataset that will emerge in the future, assuming the system under observation remains the same.
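
To make the distinction concrete, here is a minimal time-series sketch: fitting an AR(1) model *y[t] = c + φ·y[t-1] + noise* to synthetic past data by least squares, then producing a one-step forecast. A real analysis would use a full ARIMA toolkit; this only illustrates how a time-series model, unlike a static mean, carries the serial structure forward:

```python
import random
import statistics

random.seed(5)

# Synthetic "past data" generated from a known AR(1) process so the
# fit can be checked: y[t] = c + phi * y[t-1] + N(0, 1) noise.
true_phi, c = 0.7, 3.0
y = [10.0]  # start at the stationary mean c / (1 - phi)
for _ in range(300):
    y.append(c + true_phi * y[-1] + random.gauss(0, 1))

# Least-squares fit of phi and the intercept from lagged pairs.
x, z = y[:-1], y[1:]
mx, mz = statistics.mean(x), statistics.mean(z)
phi = sum((a - mx) * (b - mz) for a, b in zip(x, z)) / sum((a - mx) ** 2 for a in x)
intercept = mz - phi * mx

forecast = intercept + phi * y[-1]  # one-step-ahead forecast
print(f"fitted phi={phi:.2f}, next-step forecast={forecast:.1f}")
```

A static summary (mean, variance) of the same series would forecast the long-run average regardless of where the series currently sits; the AR(1) fit uses the most recent observation, which is the point of time-series analysis - and it still only holds if the generating system remains the same.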

Time series analysis is a fundamental tool for forecasting future outcomes from past data. Weather forecasting - plus complex compressible fluid flow models - is based on time series analysis. Stock market forecasting uses time series analysis. Cost and schedule modeling uses time series analysis. Adaptive process control algorithms, like the speed control and fuel management in your modern car, use time series analysis.

One of the originators of time series analysis, George E. P. Box, in his seminal book *Time Series Analysis, Forecasting and Control*, is often seriously misquoted when he said *All models are wrong, some are useful.* Anyone misusing that quote to try to convince you that you can't model the future didn't (or can't) do the math in Box's book and likely got a D in their high school probability and statistics class.

So do the math, read the proper books, gather past data, model the future with dependency networks and Kanban and Scrum backlogs, measure current production, forecast future production based on Monte Carlo models - and don't believe for a moment that you can make decisions about future outcomes in the presence of uncertainty without estimating that future.
