Our current work on forecasting Estimate At Completion (EAC), and on connecting the dots between Earned Value Management, Technical Performance Measures, and their sources in Measures of Performance and Measures of Effectiveness, is proceeding. Conference and journal papers are coming in May and June. Here are some books that have informed that effort.
Making Multiple-Objective Decisions, Mansooreh Mollaghasemi and Julia Pet-Edwards, is a good starting point when faced with deciding anything on a project.
The book is a handbook for decision making, with examples and step-by-step processes for multi-criteria decision making. Almost all decisions involve multiple objectives that often conflict: cost, technical capabilities, deadlines, safety, appearance, efficiency, and so on. To decide, information is needed about the tradeoffs involved in the selection process. This, of course, is why estimating many of the parameters is mandatory for any credible decision-making process, and why the suggestion that we can make decisions without estimates is essentially nonsense. A companion book is Making Hard Decisions: An Introduction to Decision Analysis, 2nd Edition, Robert Clemen. This book is about value-focused thinking and decision making. So when we hear about value and spending other people's money, this is a good place to look.
Forecasting and Simulating Software Development Projects, Troy Magennis. This is a book about forecasting software cost and schedule for Kanban and Scrum projects. Troy shows how to estimate cost and schedule using a what-if paradigm and his Monte Carlo simulation tool.
Since the variables on all projects are random variables, Monte Carlo is one approach to simulating the outcomes. The Method of Moments is another, but Monte Carlo simulation is a straightforward approach.
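To give a flavor of the idea - this is a minimal sketch in Python, not Troy's tool, and the task durations and triangular distributions are invented for illustration - a Monte Carlo simulation samples each random variable many times and reads confidence levels off the resulting distribution:

```python
import random

# Hypothetical three-point estimates (min, most likely, max) in days.
# These numbers are illustrative, not from any real project.
tasks = [(3, 5, 9), (2, 4, 8), (5, 8, 14), (1, 2, 4)]

def simulate_once():
    # Each task's duration is a random variable; model it here with a
    # triangular distribution over the three-point estimate.
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

def forecast(trials=10_000):
    totals = sorted(simulate_once() for _ in range(trials))
    # Report percentiles of the simulated totals as confidence levels.
    return {p: totals[int(trials * p / 100)] for p in (50, 80, 95)}

if __name__ == "__main__":
    for p, days in forecast().items():
        print(f"{p}% confidence: finish within {days:.1f} days")
```

The point of the exercise is not the single-number answer but the spread: the 50% and 95% figures bound the conversation about how much schedule margin is needed.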
Modeling is the basis of decision making as well. With a model we can ask questions about the future and generate confidence intervals on the answers. The George Box quote - nearly universally misused - "all models are wrong, some are useful" is in play here. A model is an approximation of a process - in this case, writing software for money. All models are useful to the extent we understand the processes by which the model was developed and applied.
This is a core process of all estimating and replaces guessing with modeling.
Towards Improved Project Management Practices: Uncovering the Evidence of Effective Practices Through Empirical Research, Terence John Cooke-Davies.
We hear the term empirical all the time, but as Inigo Montoya says, "You keep using that word. I do not think it means what you think it means." Empirical data is gathered from observation. But in the management of projects, that data must be used to create error signals: the observed performance - the empirical data - compared to the target data for the project's desired outcomes.
Failing to have a target to steer toward is called Open Loop Control, and it is a very good way to drive straight into the ditch. So once again, estimates of future desired performance, compared to past statistical performance (a mean computed from a few samples, with a ±2 standard deviation bound on that mean, is not credible, by the way), must be in place to forecast future performance.
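To see why a mean taken from a few samples is not a credible basis for forecasting, here is a small illustration - the throughput numbers are invented, and the ±2-standard-error bound is only a rough ~95% interval - showing how much wider the uncertainty is at n=4 than at n=100:

```python
import statistics

def mean_interval(samples):
    """Sample mean with a +/-2-standard-error bound (a rough ~95% interval)."""
    m = statistics.mean(samples)
    se = statistics.stdev(samples) / len(samples) ** 0.5
    return m, 2 * se

# Invented weekly throughput observations, for illustration only.
few = [18, 25, 14, 23]   # a few samples
many = few * 25          # the same spread of values, but 100 observations

for label, data in (("n=4", few), ("n=100", many)):
    m, half = mean_interval(data)
    print(f"{label:>6}: mean {m:.1f}, +/- {half:.1f}")
```

The standard error shrinks with the square root of the sample size, so the interval around the small-sample mean is several times wider - exactly the error signal precision we give up when we forecast from a handful of data points.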
Troy's book shows how to deal with all this.
Forecasting: Methods and Applications, 3rd Edition, Spyros Makridakis, Steven Wheelwright, and Rob Hyndman. Hyndman's site has everything you need to start forecasting the future using your collected empirical data and the R programming language.
Forecasting methods have been around since the 1950s, with George Box's work - the same George Box whose "all models are wrong" line people misquote. Forecasting, again, is all about decision making.
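The book (and Hyndman's R tools) covers far richer methods - exponential smoothing families, ARIMA, and more - but the simplest member of that lineage fits in a few lines. This sketch, in Python rather than R and with invented data, shows simple exponential smoothing, where older observations are discounted geometrically:

```python
def ses_forecast(history, alpha=0.3):
    """Simple exponential smoothing: the forecast is a weighted average
    of past observations, discounted geometrically by (1 - alpha)."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # a flat forecast for future periods

# Invented monthly throughput observations, for illustration only.
history = [31, 28, 35, 30, 33, 29, 34]
print(f"next-period forecast: {ses_forecast(history):.1f}")
```

Real forecasting work would fit alpha to the data and report prediction intervals rather than a point value, which is exactly what the book walks through.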
The distinction between external - uncontrollable - events and controllable events is often not made. This creates not only confusion, it lays the groundwork for bad decision making. Taleb's much-quoted Black Swans are uncontrolled external events - externalities in the financial marketplace. Projects are rarely impacted by externalities if proper risk management processes are in place. When they are not in place and the project is not managed, those Black Swans will appear more often. But this is simply bad project management - don't do that.
This book shows how to forecast the future given the past.
So What Can We Do With This Knowledge?
The first thing to do is realize that decision making is a probabilistic process based on the underlying statistics of the processes we are trying to make decisions about. Making decisions in the absence of some form of knowledge about cost, schedule, and technical outcomes is simply not possible. Saying so does not make it so.
Exploring how to make decisions in the absence of estimating - the kind of statistical estimating described in all these books - is unnecessary. These books are a start, and there is a nearly unlimited wealth of information on how to make informed decisions in the presence of uncertainty.
Re-posting Scott Adams's Dilbert cartoons of bad management practices is probably good for Scott Adams, but does ZERO to provide corrective actions for that bad management. We all know the problems; how about some solutions? Pointing out the problem is trivial. And since it's trivial, it's also intellectually lazy.
Read these books, read other books, read papers, explore how other people have addressed the problems of increasing the probability of project success, put in the effort needed to make that increase possible on your project. Stop reading Dilbert and start fixing the problems.