All the work we do in the project domain is driven by uncertainty: uncertainty about some probabilistic future event impacting our project, and uncertainty in the work activities performed while developing a product or service.

Decision making in the presence of these uncertainties is a natural process in all of business.

The decision maker is asked to express her beliefs by assigning probabilities to certain possible states of the system in the future and the resulting outcomes of those states.

*What's the chance we'll have this puppy ready for VMWorld in August? What's the probability that when we go live and 300,000 users log on we'll be able to handle the load? What's our test coverage for the upcoming release, given we've added 14 new enhancements to the code base this quarter?* Questions like that are normal everyday business questions, along with *what's the expected delivery date, what's the expected total sunk cost, and what's the expected bookable value measured in Dead Presidents for the system when it goes live?*

To answer these and the unlimited number of other business, technical, operational, performance, security, and financial questions, we need to know something about probability and statistics. This knowledge is an essential tool for decision making no matter the domain.
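A question like *what's the chance we'll be ready for August?* has a numeric answer once the uncertainty is modeled. Here is a minimal sketch, using only Python's standard library and entirely made-up task durations (the three-point ranges below are illustrative assumptions, not real project data), of estimating that chance by simulation:

```python
import random

random.seed(0)  # reproducible runs

# Hypothetical remaining work: (min, most likely, max) days per task.
tasks = [(5, 8, 14), (3, 5, 9), (10, 15, 25)]
deadline_days = 35
trials = 100_000

hits = 0
for _ in range(trials):
    # Sample each task from a triangular distribution and sum the durations.
    total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    if total <= deadline_days:
        hits += 1

print(f"P(ready by the deadline) ≈ {hits / trials:.2f}")
```

The point is not the particular distribution - it is that the business question becomes a testable number instead of a gut feel.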

Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write - H.G. Wells

If we accept the notion that all project work is probabilistic - driven by the underlying statistical processes of time, cost, and technical outcomes, including Effectiveness, Performance, Capabilities, and all the *...ilities* that manifest and determine value after a system is put into initial use - then these conditions are the source of *uncertainty*, which comes in two types:

- **Reducible** - event based, with a probability of occurrence within a specified time period.
- **Irreducible** - naturally occurring, described by a Probability Distribution Function of the variances produced by the underlying process.
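The two types can be sketched in a few lines of simulation. The numbers below are illustrative assumptions: irreducible uncertainty shows up as natural variance in the duration of the work itself, reducible uncertainty as a discrete risk event that may or may not occur:

```python
import random

random.seed(1)
trials = 100_000
durations = []
for _ in range(trials):
    # Irreducible: natural variance of the underlying process, modeled
    # here (illustratively) as a normal distribution of days.
    days = random.gauss(20, 3)
    # Reducible: an event-based risk (say, a vendor slip) with a 25%
    # chance of occurring and a 10-day impact. Buying down that 25%
    # is what makes this uncertainty "reducible".
    if random.random() < 0.25:
        days += 10
    durations.append(days)

mean = sum(durations) / trials
print(f"expected duration ≈ {mean:.1f} days")  # ≈ 20 + 0.25 × 10
```

Handling the risk lowers the 0.25; no amount of risk handling removes the natural variance - it can only be accounted for with margin.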

If you don't accept this - *that all project work is probabilistic in nature* - stop reading; this blog is not for you.

If you do accept that all project work is *uncertain*, then there are a few more assumptions we need in order to make sense of the decision-making process. The term *statistic* has two definitions - an older one and a current one. The older one means a *fact*, referring to numerical facts. A numerical *fact* is a measurement, a count, or a rank. This number can represent a total, an average, or a percentage of several such measures. The term also applied to the broad discipline of statistical manipulation, in the same way *accounting* applies to entering and balancing accounts.

*Statistics* in the second sense is a set of methods for obtaining, organizing, and summarizing numerical facts. These facts usually represent *partial* rather than complete knowledge about a situation - for example, a sample of the population rather than a count of the entire population, as in a census.

These numbers - *statistics* - are usually subjected to formal statistical analysis to help our decision making in the presence of uncertainty.
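A minimal sketch of that *partial knowledge* idea, with a made-up population of work-item cycle times: the sample statistic estimates a population value we could never afford to measure in full:

```python
import random
import statistics

random.seed(42)
# Hypothetical population: cycle time (days) of every work item ever done.
population = [random.gauss(10, 2) for _ in range(100_000)]

# Partial knowledge: a sample of 100 items, not a full census.
sample = random.sample(population, 100)
sample_mean = statistics.mean(sample)
standard_error = statistics.stdev(sample) / len(sample) ** 0.5

print(f"population mean: {statistics.mean(population):.2f}")
print(f"sample estimate: {sample_mean:.2f} ± {2 * standard_error:.2f}")
```

The sample mean, together with its standard error, is the kind of numerical fact formal statistical analysis is then applied to.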

In our software project world uncertainty is an inherent fact. Software uncertainty is likely much higher than in construction, since the requirements in software development are *soft*, unlike the requirements in interstate highway development. But while domains may differ in their level of uncertainty, estimates are still needed to make decisions in the presence of those uncertainties. Highway development has many uncertainties as well - not the least of which are weather and weather delays.

When you measure what you are speaking about and express it in numbers you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind - Lord Kelvin

Decisions are made on data. Otherwise those decisions are just gut feel, intuition, and at their core *guesses*. When you are guessing with other people's money, you have a low probability of keeping your job or of the business staying in business.

... a tale told by an idiot, full of sound and fury, signifying nothing - Shakespeare

When we hear personal anecdotes about how to correct a problem, and the conjecture that those anecdotes apply beyond the individual telling them - beware. Without a test, any conjecture is just a conjecture.

He uses statistics as a drunken man uses lampposts - for support rather than illumination - Andrew Lang

We often confuse a symptom with the cause. When reading about all the failures in IT projects - the probability of failure, the number of failures versus successes - there is rarely, in those naive posts on the topic, any assessment of the *cause* of the failure. The Root Cause Analysis is simply not present. The *Chaos Report* is the most egregious example.

There is no merit where there is no trial; and till experience stamps the mark of strength, cowards may pass for heroes, and faith for falsehood - A. Hill

Tossing out anecdotes, platitudes, and misquoted quotes does not make for a credible argument for anything. *I knew a person who did X successfully, therefore you should have the same experience* is common. So is *just try it, you may find it works for you just like it worked for me*.

It seems there are no Principles or tested Practices in this approach to improving project success - just platitudes and anecdotes, masking *chatter* as process improvement advice.

I started to write a detailed exposition using this material for the #NoEstimates conjecture that decisions can be made without an estimate. But Steve McConnell's post is much better than anything I could have done. So here's the wrap-up...

When it is conjectured that decisions - any decisions, some decisions, self-selected decisions - can be made in the presence of uncertainty *without* also making an estimate of the outcome of that decision, the cost of that decision, and the impact of that decision - then let's hear how, so we can test it outside personal opinion and anecdote.

**References**

It's time for #NoEstimates advocates to provide some *principle based* examples of how to make decisions in the presence of uncertainty without estimating. The references below are popular books (books without the heavy math), but they are still capable of conveying the principles of the topic and can be a source of learning.

- Flaws and Fallacies in Statistical Thinking, Stephen K. Campbell, Prentice Hall, 1974.
- The Economics of Iterative Software Development: Steering Toward Better Business Results, Walker Royce, Kurt Bittner, and Mike Perrow, Addison Wesley, 2009.
- How Not to be Wrong: The Power of Mathematical Thinking, Jordan Ellenberg, Penguin Press, 2014.
- Hard Facts, Dangerous Half-Truths & Total Nonsense: Profiting from Evidence Based Management, Jeffery Pfeffer and Robert I. Sutton, Harvard Business School Press, 2006.
- How to Measure Anything: Finding the Value of *Intangibles* in Business, 3rd Edition, Douglas W. Hubbard, John Wiley & Sons, 2014.
- Standard Deviations: Flawed Assumptions, Tortured Data, and Other Ways to Lie With Statistics, Gary Smith.
- Center for Informed Decision Making
- Decision Making for the Professional, Peter McNamee and John Celona

Some actual math books on the *estimating* problem

- Probability Methods for Cost Uncertainty Analysis, Paul R. Garvey.
- Making Hard Decisions: An Introduction to Decision Analysis, 2nd Edition, Robert T. Clemen, Duxbury Press, 1996.
- Estimating Software Intensive Systems, Richard D. Stutzke, Addison Wesley, 2005.
- Probabilities as Similarly Weighted Frequencies, Antoine Billot, Itzhak Gilboa, Dov Samet, David Schmeidler.