Statistics rule our lives. The traffic patterns of our commute to work. The probabilistic weather wherever you live. The 401(k) earnings report that arrives every month. On projects, every element is statistically driven: the productivity of the engineers and developers; the partial testing coverage of software, hardware, and integrated systems; the performance of any product; the efficacy of that product in use; the forecasted cost to develop the products or provide the services; the total duration to produce the needed outcomes of the project. It's all statistics, all the time.

To work on or manage projects with any hope of success, we need to know about statistics and the probabilistic outcomes that result. Let's start with a simple picture. All projects - at least those beyond a single team working off a single set of work activities - look like this: a collection of activities, connected to each other in a network. The durations are random numbers. The completed outcomes are randomly compliant with the needed quality, functionality, or capability. Even with high-coverage testing, when we assemble parts into a whole, new things happen - things we didn't think would happen. The new system has a *probability* of working the first time, and that probability is not 100%. (We'll get to the quote on this idea.)
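A network of activities with random durations can be explored with a quick Monte Carlo simulation. Here's a minimal sketch: the three-activity network, the triangular distributions, and the deadline are all invented for illustration, not taken from any real project.

```python
import random

random.seed(1)

def simulate_once():
    # Hypothetical network: A and B run in parallel, C starts only
    # when both finish. Durations in days drawn from triangular
    # distributions (optimistic, pessimistic, most likely).
    a = random.triangular(8, 15, 10)
    b = random.triangular(9, 14, 11)
    c = random.triangular(4, 9, 5)
    return max(a, b) + c  # C waits for the slower of A and B

trials = [simulate_once() for _ in range(20_000)]
deadline = 18
p_on_time = sum(t <= deadline for t in trials) / len(trials)
print(f"P(finish within {deadline} days) ≈ {p_on_time:.0%}")
```

Note that the project's completion date is itself a random variable: no single-number answer exists, only a probability of meeting any given deadline.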

Many in the agile community - especially the Kanban community - assume work can be divided into *same-sized* chunks and that the people doing the work can process those chunks in *same-sized* time frames. Where I come from, that would be considered naive at best. Steady arrivals, steady processing, and steady exit of finished products don't even happen on the Toyota assembly line, let alone in the product development business.
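The cost of non-steady arrivals and processing can be seen in a toy single-server queue. The sketch below uses Lindley's recursion for waiting time to compare a perfectly steady process with a random one at the *same* average rates; all rates are illustrative numbers, not measurements.

```python
import random

random.seed(2)

def avg_wait(interarrival, service, n=50_000):
    # Lindley's recursion: next wait = max(0, wait + service - gap).
    wait, total = 0.0, 0.0
    for _ in range(n):
        total += wait
        wait = max(0.0, wait + service() - interarrival())
    return total / n

# Same average rates in both cases: one arrival every 10 time units,
# each item takes 9 on average.
steady = avg_wait(lambda: 10.0, lambda: 9.0)           # no variability
noisy = avg_wait(lambda: random.expovariate(1 / 10),   # random arrivals
                 lambda: random.expovariate(1 / 9))    # random service
print(f"average wait, steady: {steady:.1f}  noisy: {noisy:.1f}")
```

With zero variability nothing ever waits; with random arrivals and service at the same utilization, queues and waiting times grow large. Variability, not just capacity, drives delay.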

**So Now For The Quote**

*One of the great myths of science is that it is rigorous. More to the point is that the scientific method chops up any problem into small pieces that can be comprehended by the human mind, and an above-average mind at that. Some of the pieces are more rigorous than others, and the reassembly of the whole always requires letting something fall through the cracks.*

- *The Art of Modeling Dynamic Systems*, Foster Morrison, Wiley, 1991.

So now let's think about the parts that have been decomposed and reassembled. Are they correlated in some way? Does a change in one actually cause a change in the other - causation? How can we tell? Start by assuming there is no causation between the parts, even when there is correlation.
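Correlation without causation is easy to manufacture. In the sketch below, two invented project metrics are both driven by a hidden third factor (say, growing team size over time); they correlate strongly even though neither causes the other. The metric names and coefficients are assumptions for illustration only.

```python
import random

random.seed(3)

# A hidden driver both metrics depend on - e.g. team size over time.
n = 200
driver = [i + random.gauss(0, 5) for i in range(n)]
lines_of_code = [2.0 * d + random.gauss(0, 20) for d in driver]
defects_found = [0.5 * d + random.gauss(0, 10) for d in driver]

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"correlation ≈ {pearson(lines_of_code, defects_found):.2f}")
```

The two series track each other closely, yet writing more code does not cause defects here and defects do not cause code: both simply follow the shared driver. That is why "assume no causation" is the right starting point.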

So let's look at one more thing about the probability and statistics of projects. The picture below is critical to sorting out many of the misconceptions about how projects behave. Statistical processes drive the behavior of projects, and all the elements of a project are subject to these stochastic processes.

If we know something about a process - perhaps by observing it - we can state things about its statistical behavior. Once we know the underlying processes, we can make probabilistic forecasts of future behavior. For both the statistical measurements and the probabilistic forecasts, we also need to know the variance on those numbers. This lets us estimate past, current, and future behavior, and make forecasts, each with uncertainty bounds.
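As one small illustration of forecasting with uncertainty bounds: given a handful of observed cycle times (the numbers below are invented), we can form both a point estimate and an uncertainty band around it.

```python
import statistics

# Hypothetical observed cycle times (days) for completed work items.
observed = [4.1, 6.3, 5.0, 7.8, 4.9, 6.1, 5.5, 8.2, 4.4, 6.7]

mean = statistics.mean(observed)
sd = statistics.stdev(observed)

# A crude forecast for the next item: point estimate plus a band of
# ±2 standard deviations. This assumes the process is stable and
# roughly normal - both assumptions that must be checked, not taken
# on faith.
low, high = mean - 2 * sd, mean + 2 * sd
print(f"forecast: {mean:.1f} days, range {low:.1f} to {high:.1f}")
```

The point estimate alone is nearly useless for decision making; it is the band around it - the variance - that tells us how much confidence the forecast deserves.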