This is from an article about the application of Bayesian Statistics to a civil suit in the UK over the source of a building fire.
The idea that you can assign probabilities to events that have already occurred, but where we are ignorant of the result, forms the basis for the Bayesian view of probability. Put very broadly, the 'classical' view of probability is in terms of genuine unpredictability about future events, popularly known as 'chance' or 'aleatory uncertainty'.
The Bayesian interpretation allows probability also to be used to express our uncertainty due to our ignorance, known as 'epistemic uncertainty', and popularly expressed as betting odds. Of course there are all gradations, from pure chance (think radioactive decay) to processes assumed to be pure chance (lottery draws), to future events whose odds depend on a mixture of genuine unpredictability and ignorance of the facts (whether Oscar Pistorius will be convicted of murder), to pure epistemic uncertainty (whether Oscar Pistorius knowingly shot his girlfriend).
When we build probabilistic models of project performance - cost, schedule, and technical - we assume we understand the underlying statistical processes that generate those probabilistic behaviours. These are the aleatory uncertainties in duration, cost, and performance. We define a Probability Density Function (PDF) for each in the Monte Carlo Simulator, then apply those to the network of work activities (the Integrated Master Schedule) to produce the confidence of completing on or before a planned date and at or below a planned cost. This is all fine and dandy. But we really don't know the underlying drivers that create coupling, correlation, and cross-correlation between the work activities, cost, and technical performance. These can be modeled by discovering the drivers in the network.
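Here's a minimal sketch of that Monte Carlo process in Python, using a toy three-activity network with triangular duration distributions. The activity names, durations, and the 30-day target are hypothetical, chosen only to show the mechanics of sampling durations, rolling them up through the network, and reporting a confidence of on-time completion.

```python
# Toy Monte Carlo schedule simulation: hypothetical activities and durations.
import random

# Each activity: (optimistic, most likely, pessimistic) duration in days.
activities = {
    "design":    (5, 8, 14),
    "build":     (10, 12, 20),
    "integrate": (4, 6, 10),
}
# Simple precedence network: each activity lists its predecessors.
predecessors = {
    "design":    [],
    "build":     ["design"],
    "integrate": ["build"],
}

def simulate_completion():
    """One forward pass through the network with sampled durations."""
    finish = {}
    for name in ["design", "build", "integrate"]:  # topological order
        start = max((finish[p] for p in predecessors[name]), default=0.0)
        low, mode, high = activities[name]
        finish[name] = start + random.triangular(low, high, mode)
    return max(finish.values())

trials = 20_000
target = 30.0  # planned completion date, in days from start
hits = sum(simulate_completion() <= target for _ in range(trials))
print(f"P(complete on or before day {target}): {hits / trials:.2f}")
```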
For the epistemic uncertainties we need another modeling tool. The current tools don't actually use Bayesian statistics; rather, they use Monte Carlo Simulation and treat the probability of an event as an aleatory process, integrated with the other PDFs, their ranges, and their shapes (kurtosis and skew).
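To see the difference, here's a small sketch of the Bayesian treatment of an epistemic question: instead of sampling the event as if it were a random draw, we update a prior belief with evidence using Bayes' rule. The prior, the likelihoods, and the "failed qualification test" evidence below are hypothetical numbers, chosen only to show the update.

```python
# Bayes' rule update for an epistemic question: all numbers are illustrative.
def bayes_update(prior, p_evidence_given_true, p_evidence_given_false):
    """Posterior P(hypothesis | evidence) from Bayes' rule."""
    numerator = p_evidence_given_true * prior
    marginal = numerator + p_evidence_given_false * (1.0 - prior)
    return numerator / marginal

# Epistemic question: does the design contain a latent defect?
prior = 0.10                      # belief before testing
posterior = bayes_update(
    prior,
    p_evidence_given_true=0.70,   # P(failed test | defect present)
    p_evidence_given_false=0.05,  # P(failed test | no defect)
)
print(f"Posterior P(defect | failed test) = {posterior:.2f}")  # ~0.61
```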
We're missing the tools needed to construct a credible epistemic model of how the program works. Such a model would use the Integrated Master Schedule (IMS) as the topology for the work, the probabilistic behaviours of the work elements at each node - cost, schedule, and technical performance compliance of the products - and the coupling and cohesion between the nodes. With this information - assuming it is credible, which is a HUGE assumption - we could model the behaviour of the program and ask what-if questions, as in the sketch below.
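As a sketch of what such a tool might look like, here's a tiny, hand-rolled Bayesian network over three hypothetical IMS nodes - a late supplier coupled to an integration slip, coupled to on-time delivery. The topology and the conditional probability tables are assumptions for illustration only; a real model would derive the topology from the IMS and calibrate the tables from program data. With it we can ask a what-if: how does the confidence of on-time delivery change if we learn the supplier has slipped?

```python
# Toy Bayesian network over three hypothetical IMS nodes; inference by enumeration.
from itertools import product

p_supplier_late = 0.20
# P(integration slips | supplier late?) captures the coupling between nodes.
p_integration_slip = {True: 0.70, False: 0.10}
# P(delivery on time | integration slipped?)
p_delivery_on_time = {True: 0.30, False: 0.95}

def joint(supplier_late, integration_slip, delivery_on_time):
    """Joint probability of one complete assignment of the three nodes."""
    p = p_supplier_late if supplier_late else 1 - p_supplier_late
    p *= (p_integration_slip[supplier_late] if integration_slip
          else 1 - p_integration_slip[supplier_late])
    p *= (p_delivery_on_time[integration_slip] if delivery_on_time
          else 1 - p_delivery_on_time[integration_slip])
    return p

def prob(query, evidence=None):
    """P(query | evidence) by enumerating all assignments."""
    evidence = evidence or {}
    num = den = 0.0
    for s, i, d in product([True, False], repeat=3):
        state = {"supplier_late": s, "integration_slip": i, "delivery_on_time": d}
        if any(state[k] != v for k, v in evidence.items()):
            continue
        p = joint(s, i, d)
        den += p
        if state[query[0]] == query[1]:
            num += p
    return num / den

# Baseline confidence vs. the what-if once we learn the supplier is late.
print(prob(("delivery_on_time", True)))                           # ~0.81
print(prob(("delivery_on_time", True), {"supplier_late": True}))  # ~0.50
```

The point of the sketch is the shape of the question, not the numbers: once the couplings between nodes are expressed as conditional probabilities, new evidence at one node propagates to the confidence at every other node, which is exactly the behaviour the Monte Carlo treatment of epistemic events can't give us.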