Principles, Practices, and Processes to Increase Probability of Project Success
To increase the probability of project success, many things have to happen at the same time. Here are five principles and practices that can increase that probability of success.
These suggestions many times end up being personal anecdotes from observations and local practices that may or may not be applicable outside those anecdotes and, more importantly, may not be statistically sound in principle, let alone in practice.
I work in the Software Intensive System of Systems domains in Aerospace, Defense, and Enterprise IT (both commercial and government), applying Agile, Earned Value Management, Probabilistic Statistical Estimating (both parametric and Monte Carlo), Risk Management, and Root Cause Analysis with a variety of capabilities. In this domain, we are guided by credible results using principles, processes, and procedures to increase the probability of program success. References below.
The growth of cost and schedule is not unique to commercial development. One of my work colleagues is the former NASA Cost Director. This is from one of our presentations at an International Cost Estimating and Analysis Association meeting on the same topic. And there are many other examples (see references).
One case of 12 projects from a large contractor of Software Intensive Systems (SIS) shows similar variances.
Some might say these are domains unrelated to our domain. I'd suggest the Principles for project success on non-trivial software efforts are universal. Your project may be different in practice, but the principles are the same. These principles are:
So let's look at an example
Here's a typical graph showing a core problem in the software development domain.
There is a collection of projects that started with an estimate at completion, and as these projects were executed they didn't turn out as planned - and most importantly, they didn't turn out as needed.
Figure 1 - Planned Estimates versus Actual Performance from 
Some critical unanswered questions about charts like Figure 1 are:
Each of these questions, and others, is needed to determine if samples like those in Figure 1 have root causes not identified by the author of the chart.
Without determining why the sampled value is what it is, the chart is missing half of the information needed to make a decision about corrective actions, and about the unfavorable answers to the questions above.
A Few References and Resources
All project work is random work. There are three core random variables on all projects, shown below. There are sub-variables as well as all the ...ilities involved in project work, but let's start with the major three.
Fixing 1, 2, or all 3 of these random variables does NOT make the randomness go away.
These variables are random, and all variables on projects are random, because of uncertainty. This uncertainty (as mentioned on many other blogs) comes from two sources. Aleatory uncertainty is the underlying natural randomness of all project activities. This is called irreducible uncertainty. It can't be reduced. Nothing you can do will reduce it. It's there and will always be there. This is a statistical process. The only way to work in the presence of irreducible uncertainty is to have margin. Cost margin, schedule margin, technical margin.
The second is epistemic uncertainty. This is uncertainty that is event based. It's there, but it can be handled in some ways. Those ways can include buying two of everything in case one breaks; having redundancy in other forms - a backup site for the data center; testing; prototypes; and other activities that provide a Plan B for when the probability that something will go wrong comes true, and that thing that went wrong is no longer a probability but has turned into an Issue.
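Here's a minimal sketch in R of what margin against aleatory uncertainty looks like. The numbers are hypothetical; the lognormal spread simply stands in for the natural variability of the work.

```r
# A sketch with hypothetical numbers: the lognormal spread stands in for
# the irreducible (aleatory) variability of a task's duration.
set.seed(42)
most_likely <- 20                                # planned duration, days
durations <- rlnorm(10000, meanlog = log(most_likely), sdlog = 0.25)

p80 <- quantile(durations, 0.80)                 # 80% confidence completion
margin <- p80 - most_likely                      # schedule margin needed

cat(sprintf("P80: %.1f days, margin needed: %.1f days\n", p80, margin))
```

No amount of planning removes that spread; the margin is what absorbs it.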
So Here's the Real Problem
When we hear we don't need to estimate, we can fix time and budget, that doesn't make the randomness go away. It just sets an upper bound on what you CAN spend and when you HAVE TO BE DONE. The uncertainties that create the randomness are still there. Fixed time and fixed budget plans leave open the technical randomness as well. The time and budget are still random inside the constraints set by the project.
There's no getting around this, no matter how often someone says you can. Those someones were asleep in the engineering probability and statistics class - the classic engineering course we were all forced to take as physics grad students: Probability and Statistics. †
This is basic probability and statistics of project work. The probability that something will turn out unfavorable is created by epistemic uncertainty. The statistical variances of everyday life are created by aleatory uncertainty.
Ignoring these uncertainties means it's going to turn out badly for those paying for your work.
You need margin to protect from irreducible uncertainty. You need specific actions to protect from reducible uncertainty. So you can in fact fix the cost and schedule IF AND ONLY IF (IFF) you have margin and risk buy down plans. When someone says we've fixed the duration and the budget, two things come to mind.
A third notion is the killer notion
When you fix time and cost, have sufficient risk buy down activities to reduce the epistemic uncertainty that creates the probability of something going wrong to an acceptable level, and have sufficient margin to cover the expected overruns in duration, you still have the technical reducible and irreducible uncertainties: the thing you're building won't work, won't be what the customer wants, or will cause other issues - these are called externalities in the economics of software development - and other unknowns, possibly unknowable at the beginning of the project.
When you fix time and/or budget, and don't have protections for reducible and irreducible uncertainty, you're going to be late and over budget, and you have willfully ignored those outcomes. Oh, and by the way, your little gadget is not likely to meet the needs of those paying you either.
These immutable conditions (aleatory and epistemic uncertainty) are completely ignored in agile development. Agile provides rapid feedback to the risk management processes of software development. But agile is NOT a risk management process in and of itself. That's a topic for another time.
If you think you have no uncertainties - reducible or irreducible - and have fixed the budget and duration, and maybe even the outcomes, you're likely on a de minimis project. Good luck with that.
† We had to take a few courses outside our major, and this was another: Classical Electrodynamics. This was an engineering course. We had a foundation of electrodynamics from the physics point of view. In that view everything can be solved through Maxwell's equations, a simple set of partial differential equations describing how electromagnetism works. When asked to give a talk on antenna theory in the engineering course, a friend (I was too afraid at that time) went to the chalkboard (yes, no whiteboards) and wrote down Maxwell's equations for the reciprocity theorem of antennas in free space. The Professor at the back of the room told him (Steve) to sit down: We're engineers, not physicists; we want to know HOW things work, not WHY things work.
A 2013 webinar at the Cyber Security & Information Systems Information Analysis Center presented some Immutable Laws of Software Development. These are worth repeating every time there is a suggestion that some method or another, or some new and untested idea, is put forth that will increase productivity by 10X, or increase your profitability by NOT doing core business processes.
Here's the list presented in the webinar, dedicated to Watts Humphrey, who said all these in the past. For each Immutable Law, I've made a suggestion on how to avoid the undesirable outcome.
Phillip Armour has a classic article in CACM titled "Ten Unmyths of Project Estimation," Communications of the ACM (CACM), November 2002, Vol 45, No 11. Several of these Unmyths are applicable to the current #NoEstimates concept. Much of the misinformation about how estimating is the smell of dysfunction can be traced to these unmyths.
Mythology is not a lie ... it is metaphorical. It has been well said that mythology is the penultimate truth - Joseph Campbell, The Power of Myth
Using Campbell's quote, myths are not untrue. They are an essential truth, but wrapped in anecdotes that are not literally true. In our software development domain a myth is a truth that seems to be untrue. This is Armour's origin of the unmyth.
The unmyth is something that seems to be true but is actually false.
Let's look at the three core conjectures of the #NoEstimates paradigm:
The Accuracy Myth
Estimates are not numeric values; they are probability distributions. If the probability distribution below represents the probability of the duration of a project, there is a finite minimum - some time below which the project cannot be completed.
There is the highest probability, or the Most Likely duration for the project. This is the Mode of the distribution. There is a mid point in the distribution, the Median - the value with half the possible completion times below it and half above. Then there is the Mean of the distribution. This is the average of all the possible completion times. And of course The Flaw of Averages is in effect for any decisions being made on this average value. †
“It is moronic to predict without first establishing an error rate for a prediction and keeping track of one’s past record of accuracy” — Nassim Nicholas Taleb, Fooled By Randomness
If we want to answer the question What is the probability of completing ON OR BEFORE a specific date, we can look at the Cumulative Distribution Function (CDF) of the Probability Distribution Function (PDF). In the chart below the PDF has the earliest finish in mid-September 2014 and the latest finish in early November 2014.
The 50% probability is 23 September 2014. In most of our work, we seek an 80% confidence level of completing ON OR BEFORE the need date.
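Here's a hedged sketch in R of reading these statistics off a distribution of finish dates. The dates and the gamma shape are illustrative, not the data behind the chart above.

```r
# A sketch of Mode, Median, Mean, and the 80% confidence date from a
# simulated right-skewed distribution of finish dates (illustrative only).
set.seed(1)
start <- as.Date("2014-09-01")
days <- rgamma(10000, shape = 4, rate = 0.25)      # right-skewed durations

dens <- density(days)
mode_date <- start + dens$x[which.max(dens$y)]     # Most Likely finish
median_date <- start + median(days)                # the 50/50 finish
mean_date <- start + mean(days)                    # beware the Flaw of Averages
p80_date <- start + quantile(days, 0.80)           # 80% confidence on-or-before

print(c(Mode = format(mode_date), Median = format(median_date),
        Mean = format(mean_date), P80 = format(p80_date)))
```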
The project then MUST have schedule, cost, and technical margin to protect that probabilistic date.
How much margin is another topic.
But projects without margin are late, over budget, and likely don't work on day one. You can't complain about poor project performance if you don't have margin, risk management, and a plan for managing both, as well as the technical processes.
So what we need is not Accurate estimates, we need Useful estimates. The usefulness of the estimate is the degree to which it helps make optimal business decisions. The process of estimating is Buying Information. The value of the estimate, like all value, is weighed against the cost to obtain that information. The value of the estimate is the opportunity cost: the difference between the business decision made with the estimate and the business decision made without it. ‡
Anyone suggesting that simple serial work streams can accurately forecast the completion time MUST read Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects using Monte Carlo Simulation, Troy Magennis.
In this book are the answers to all the questions those in the #NoEstimates camp say can't be answered.
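As a hedged illustration of the kind of Monte Carlo forecasting Magennis describes - with hypothetical throughput data, not his - here's a resampling sketch in R:

```r
# Resample historical weekly throughput (hypothetical data) to forecast
# how many weeks are needed to finish a backlog, with confidence levels.
set.seed(7)
throughput <- c(3, 5, 2, 6, 4, 3, 7, 2, 5, 4)   # stories done per week (sample)
backlog <- 60                                    # stories remaining

weeks_to_finish <- replicate(10000, {
  done <- 0; weeks <- 0
  while (done < backlog) {
    done <- done + sample(throughput, 1, replace = TRUE)
    weeks <- weeks + 1
  }
  weeks
})

quantile(weeks_to_finish, c(0.50, 0.80, 0.95))   # P50 / P80 / P95 weeks
```

Note the forecast is a distribution of weeks, not a single number; the variance in the throughput sample drives the spread.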
The Accuracy Answer
But remember, making estimates is how you make business decisions with opportunity costs. Those opportunity costs are the basis of Microeconomics and Managerial Finance.
Cone of Uncertainty and Accuracy of Estimating
There is a popular myth that the Cone of Uncertainty prevents us from making accurate estimates. We now know we need useful estimates, and those are not prevented by the Cone of Uncertainty. Here's the guidance we use on our Software Intensive Systems projects.
Finally in the estimate accuracy discussion comes the cost estimate. The chart below shows how cost is driven by the probabilistic elements of the project. Which brings us back to the fundamental principle that all project work is probabilistic. Modeling the cost, schedule, and probability of technical success is mandatory in any non-trivial project. By trivial I mean a de minimis project, one where, if we're off by a lot, it doesn't really matter to those paying.
The Commitment Unmyth
So now to the big bugaboo of #NoEstimates: Estimates are evil, because they are taken as commitments by management. They're taken as commitments by Bad Management - uninformed management, management that was asleep in the High School Probability and Statistics class, management that claims to have a Business degree but never took the Business Statistics class.
So let's clear something up:
Commitment is how Business Works
Here's an example taken directly from ‡
Estimation is a technical activity of assembling technical information about a specific situation to create hypothetical scenarios that (we hope) support a business decision. Making a commitment based on these scenarios is a business function.
The Technical “Estimation” decisions include:
This kind of information allows us to calculate the amount of time we should allow to get there.
The Business “Commitment” and Risk decisions include:
These are the business consequences that determine how much risk we can afford to take.
Along with these, of course, is the risk associated with the uncertainty in the decisions. So estimating is also Risk Management, and Risk Management is management in the presence of uncertainty. And the now familiar presentation from this blog.
Risk Management is how Adults manage projects - Tim Lister. Risk management is managing in the presence of uncertainty. All project work is probabilistic and creates uncertainty. Making decisions in the presence of uncertainty requires - mandates actually - making estimates (otherwise you're guessing, you're pulling numbers from the rectal database). So if we're going to have an Adult conversation about managing in the presence of uncertainty, it's going to be around estimating: making estimates, improving estimates, making estimates valuable to the decision makers.
Estimates are how business works - exploring alternatives to estimating means willfully ignoring the needs of the business. Proceed at your own risk.
† This average notion is common in the #NoEstimates community: take all the past stories or story points, find the average value, and use that for the future values. That is a serious error in statistical thinking, since without the variance being acceptable, that average can be wildly off from the actual future outcomes of the project.
‡ Unmythology and the Science of Estimation, Corvus International, Inc., Chicago Software Process Improvement Network (C-SPIN), October 23, 2013.
There are enough opinions to paper the side of a battleship. With all these opinions, nobody has a straightforward answer that is applicable to all projects. There are two fundamental understandings though: (1) everyone has a theory, and (2) there is no singular cause that is universally applicable.
In fact most of the suggestions on project failures have little in common. With that said, I'd suggest there is a better way to view the project failure problem.
What are the core principles, processes, and practices for project success?
I will suggest there are three common denominators consistently mentioned in the literature that are key to a project’s success:
Of the 155 defense project failures studied in "The Core Problem of Project Failure," T. Perkins, The Journal of Defense Software Engineering, Vol. 3, No. 11, p. 17, June 2006:
From this research, these numbers can be summarized into two larger classes:
So where do we start?
Let's start with some principles. But first a recap
Five Immutable Principles of Project Success
With these Principles, here are five Practices that can put them to work:
The integration of these five Practices is the foundation of Performance-Based Project Management®. Each Practice stands alone and at the same time is coupled with the other Practice areas. Each Practice contains specific steps for producing beneficial outcomes for the project, while establishing the basis for overall project success.
Each Practice can be developed to the level needed for specific projects. All five Practices are critical to the success of any project. If a Practice area is missing or poorly developed, the capability to manage the project will be jeopardized, possibly in ways not known until the project is too far along to be recovered.
Each Practice provides information needed to make decisions about the flow of the project. This actionable information is the feedback mechanism needed to keep a project under control. These control processes are not impediments to progress, but are the tools needed to increase the probability of success.
Why All This Formality? Why Not Just Start Coding and Let the Customer Tell Us When to Stop?
All business works on managing the flow of cost in exchange for value. All business has a fiduciary responsibility to spend wisely. Visibility to the obligated spend is part of Managerial Finance. Opportunity Cost is the basis of Microeconomics of decision making.
The 5 Principles and 5 Practices are the basis of good business management of the scarce resources of all businesses.
This is how adults manage projects
Constructing a credible Integrated Master Schedule (IMS) requires sufficient schedule margin be placed at specific locations to protect key deliverables. One approach to determining this margin is the use of a Monte Carlo simulation tool.
This probabilistic margin analysis starts with the construction of a “best estimate” Integrated Master Schedule with the work activities arranged in a “best path” network.
While there may be “slack” in some of the activities, the Critical Path exists through this network for each Key Deliverable. This network of activities must show how each deliverable will arrive on or before the contractual need date. This “best path” network is the Deterministic Schedule – the schedule with fixed activity durations.
By assigning a duration variance to each class of work activity, the Monte Carlo model shows at what confidence level the probabilistic delivery date occurs on or before the deterministic date. The needed schedule margin for each deliverable can be derived by the Monte Carlo simulation. This activity network is referred to as the Probabilistic Schedule - the schedule with activity durations as random variables.
With the schedule margin inserted in front of each deliverable, the Deterministic schedule becomes the basis of the Probabilistic schedule. Next is a cycle of adjusting the Deterministic schedule to assure the needed margin, producing the final Deterministic schedule to be placed on baseline. As the program proceeds, this schedule margin is managed through a "margin burn down" process. Assessing the sufficiency of this margin for the remaining work is then part of the monthly program performance report.
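Here's a minimal sketch in R of the margin derivation described above, with an illustrative serial "best path" network and a stand-in duration spread for each activity; a real IMS tool would use the actual network and calibrated distributions.

```r
# An illustrative serial "best path" network with Most Likely durations.
# The runif spread is a stand-in for each activity class's duration variance.
set.seed(99)
n <- 10000
most_likely <- c(design = 30, build = 45, integrate = 25, test = 20)  # days

samples <- sapply(most_likely, function(ml) runif(n, 0.9 * ml, 1.4 * ml))
total <- rowSums(samples)

deterministic <- sum(most_likely)                # fixed-duration schedule
p80 <- quantile(total, 0.80)                     # probabilistic completion
cat(sprintf("Deterministic: %.0f days, P80: %.0f days, margin: %.0f days\n",
            deterministic, p80, p80 - deterministic))
```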
Here's an example from an upcoming workshop on building and executing a credible Performance Measurement Baseline, based on the Wright Brothers' work.
For this to work we need several things:
Here's how to use a Monte Carlo tool to determine the likelihood of completing on or before a given date, when there is a schedule of the work with Most Likelies for the work durations and the variances in those durations.
One suggestion from the #NoEstimates community is the use of empirical data of past performance. This is many times called yesterday's weather. First let's make sure we're not using just the averages from yesterday's weather. Even adding the variance to that small sample of past performance can lead to very naive outcomes.
We need to do some actual statistics on that time series. A simple set of R commands will produce a chart like the one below from the time series of past performance data.
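Something like the following sketch - hypothetical sprint data, not the chart's source - shows the kind of statistics needed beyond the average:

```r
# Hypothetical sprint throughput series; the point is to look at the
# variance and autocorrelation, not just the average.
set.seed(3)
velocity <- ts(rpois(30, lambda = 20) + rnorm(30, 0, 3))

plot(velocity, main = "Past Performance", ylab = "Stories per Sprint")
acf(velocity, main = "Autocorrelation of Past Performance")
c(mean = mean(velocity), sd = sd(velocity),
  cv = sd(velocity) / mean(velocity))   # the coefficient of variation matters
```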
But that doesn't really help without some more work.
Getting Out of the Yesterday's Weather Dilemma
Let's use the chart below to speak about some sources of estimating NOT based on simple small samples of yesterday's weather. This is a Master Plan for a non-trivial project to integrate a half dozen or so legacy enterprise systems with a new health insurance ERP system for an integrated payer/provider solution:
This brings up a critical issue with all estimates. Did the numbers produced from past performance meet the expected values, or were they just the numbers we observed? This notion of taking the observed numbers and using them to forecast the future is an Open Loop control system. What SHOULD the numbers have been to meet our goals? What SHOULD the goal have been? If we don't know that, then there is no baseline to compare the past performance against to see if it will be able to meet the future goal.
I'll say this again - THIS IS OPEN LOOP control, NOT CLOSED LOOP. No amount of dancing around will get over this; it's a simple control systems principle, found here: Open and Closed Loop Project Controls.
KPPs (Key Performance Parameters) represent the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the program.
The connections between these measures are shown below.
With these measures, and tools for making estimates of the future - forecasts - using statistical methods, we can use yesterday's weather, tomorrow's models and related reference classes, desired MOEs, MOPs, KPPs, and TPMs, and construct a credible estimate of what needs to happen. Then we measure what is happening, close the loop with an error signal, and take corrective action to stay on track toward our goal.
This all sounds simple in principle, but in practice of course it's not. It's hard work. But when you assess the value at risk to be outside the tolerance range where the customer is unwilling to risk their investment, we need tools and processes to actually control the project.
Confidence intervals are the means to measure population parameters. A concern in inferential statistics (making a prediction from a sample of data or from a model of that data) is the estimation of the population parameter from the sample statistic.
The sample statistic is calculated from the sampled data and the population parameter is estimated from this sample statistic.
One issue in inference making - estimating - is sample size determination. How large a sample do we need to make an accurate estimate? This is why small sample sizes produce very unreliable inferences. For example, sampling 27 stories in an agile project and making an inference about how the remaining stories are going to behave is very sporty business.
To have a good estimator, that is, to make good estimates from sampled or simulated data, the estimator must be: unbiased (its expected value equals the population parameter), consistent (it converges on the parameter as the sample size grows), and relatively efficient (it has the smallest variance among comparable estimators).
The point estimate differs from the population parameter due to sampling error, and there is no way to know how close it is to the actual parameter. Because of this, statisticians give an interval estimate as a range of values used to estimate the parameter.
What's the cost of this project going to be when we're done with all our efforts, given we've done some work so far?
The confidence interval is an interval estimate with a specific level of confidence. A level of confidence is the probability that the interval estimate will contain the parameter. The level of confidence is 1 - α, where the 1 - α area lies within the confidence interval. The maximum error of the estimate, E, is ½ the width of the confidence interval.
For a symmetric distribution the confidence interval is: (point estimate - E) < population parameter < (point estimate + E).
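Here's a small R sketch of these definitions with hypothetical sample data, showing the point estimate, the interval, and E as half the interval's width:

```r
# Hypothetical sample of task durations (days); 95% confidence interval
# for the population mean, and E as half the interval width.
durations <- c(4.5, 6.2, 5.1, 7.8, 5.9, 6.4, 4.9, 8.1, 5.5, 6.7)

ci <- t.test(durations, conf.level = 0.95)$conf.int
E <- diff(ci) / 2                        # maximum error of the estimate

cat(sprintf("point estimate: %.2f, CI: [%.2f, %.2f], E: %.2f\n",
            mean(durations), ci[1], ci[2], E))
```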
An Example from Actual Observations
While staying at the Yellowstone Lodge during the Millennium (year 2000), our kids got sick with some type of flu going around the lodge. My wife lay in bed, tending them all night long and passed the time recording data about Old Faithful erupting outside our bedroom window.
The data looked something like this:
Eruptions is the duration of the eruption of Old Faithful and Waiting is the waiting time before the next eruption. There is a correlation between these pieces of data, due to the physical processes of expelling water at high temperature and the refilling of the caverns below the surface.
If we use R as our analysis tool, we can get a sense of what is happening statistically with Old Faithful. (R code below)
Then we create a new data frame that sets the waiting time value.
We now apply the predict function and set the predictor variable in the newdata argument. We also set the interval type as "confidence", and use the default 0.95 confidence level.
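A minimal version of those steps, using R's built-in faithful dataset (the waiting time value of 80 minutes is illustrative), looks like this:

```r
# R's built-in faithful dataset: eruption duration vs. waiting time (minutes).
eruption.lm <- lm(eruptions ~ waiting, data = faithful)

# A new data frame that sets the waiting time value.
newdata <- data.frame(waiting = 80)

# Predict the eruption duration, with interval type "confidence" at the
# default 0.95 confidence level.
predict(eruption.lm, newdata, interval = "confidence", level = 0.95)
```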
What Does This Mean?
It means two things:
What Does This Really Mean?
Anyone suggesting we can make decisions about future outcomes in the presence of uncertainty, and at the same time in the absence of estimating those outcomes, is pretty much clueless about basic probability and the statistics of random processes.
All project variables - the statistical parameters - are random variables, driven by underlying processes that we must estimate using the statistical processes available in R and our High School stats book.
When you hear I use Bayesian statistics, or I use Real Options, ask if they are using something like the R Tutorial Resource with Bayesian Statistics, and of course the source code for the statistical processes described above. Then ask to see their data. There seem to be a lot of people tossing around words like Bayesian, Real Options, Monte Carlo, and other buzzwords without actually being able to show their work, or results that can be tested outside their personal anecdotes. Sad but true.
From the History of Project Management, a very interesting comparison that might be applicable to our IT project failures.
We all have experiences like those two ships but in the software domain.
Here's one starting point for avoiding that sinking feeling that we're going to get wet. It's called Governance. Not just project management, not just development and testing, not just creative discovery of the requirements, but oversight of every process involved in spending other people's money.
So when we hear about some new and clever way of spending other people's money - in the absence of knowing how much money, when we'll be done, and what the probability is that what we are producing will actually show up when the business needs those mission critical capabilities - and there is no tangible answer, then ask if there is a governance process in place.
I'm reminded of an old joke when I hear about untested ideas in the absence of working examples. Or the best one: I'm not going to tell you how to do this, I'm just going to tell you my personal experiences.
What's the difference between our organization and the Boy Scouts?
The Boy Scouts have adult supervision
Writing software for money is a Closed Loop Control System.
Without the Desired State, the Current State, the comparison of the two, and the Error Signal, the project is running open loop. We'll arrive when we arrive, at the rate of progress we are performing at, for the cost we are consuming. There is no information available to show what the needed performance of cost, schedule, or value production must be to arrive on time, on budget, and on value (or near enough to call it close).
And when you hear about control systems that don't follow the picture at the top, they're not Closed Loop. They may be Open Loop, but they are not Closed Loop.
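A toy sketch in R - hypothetical numbers only - of what closing the loop means: compare the Desired State with the Current State, form the error signal, and derive the corrective action.

```r
# Desired State vs. Current State, the error signal, and the corrective
# action (the required future rate). All numbers are hypothetical.
desired_done <- 100    # features needed by the deadline
weeks_total <- 20
weeks_elapsed <- 8
current_done <- 30

planned_by_now <- desired_done * weeks_elapsed / weeks_total
error_signal <- planned_by_now - current_done            # how far behind plan

required_rate <- (desired_done - current_done) / (weeks_total - weeks_elapsed)
cat(sprintf("error: %.0f features behind; need %.1f features/week to close the loop\n",
            error_signal, required_rate))
```

Without the error signal there is no corrective action, and without corrective action there is no control, only observation.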
When we hear a conjecture about a topic that skips over principles of business, the economics of decision making, or the mathematics of probabilistic and statistical modeling, listen to what Kate said to Richard.
Putting This Skepticism To Work
There are three concerns for every project manager and those funding the work of the project †
Schedule - Will the project go over schedule? All projects are probabilistic endeavors. Uncertainty abounds, both reducible uncertainty and irreducible uncertainty. Work can address the reducible uncertainty, buying down the risk associated with it. Irreducible uncertainty can only be addressed with margin: schedule margin, cost margin, technical margin.
Cost - Will the project overrun its budget? Cost margin is needed to protect the project from an over budget condition. This is called Management Reserve. But MR can only do so much; the estimate of the cost, and the management of the work to that estimate, are also needed. With a credible estimate, MR and Contingency are still needed to avoid going over budget.
Performance - Will the deliverables satisfy the goal(s) of the project? The technical performance of the deliverables is founded on the Measures of Effectiveness and Measures of Performance of the capabilities provided by the projects. Capabilities Based Planning is the foundation of defining what DONE looks like in units of measure meaningful to the decision makers.
At the start and up until the end of a project, the answer to each of these questions is knowable to some degree of confidence - less in the beginning and more as the project progresses. A yes answer to any or all of the questions is taken to be an undesirable outcome. These are business questions as well as technical questions. But it is the business that is most interested in the answers and the confidence level of the answer - a simple Yes or No is not sufficient. Yes, we have an 80% confidence of completing on or before the need date.
In The End
To provide answers to these questions before arriving at the end of the project, we need estimates. So when we answer Yes to a question - which is unavoidable - we don't want to proceed in the absence of corrective actions to increase the probability of a desirable outcome. At the beginning of the project that confidence is low, since projects evolve. To provide credible answers about the confidence of arriving on time, on budget, with the needed capabilities, we must estimate not only the cost, schedule, and outcomes, but also the impact of our corrective actions.
If we fail to do this - whether by lack of knowledge or experience, or with intentional ignorance of the probabilistic processes of all projects - we've set the foundation for failure. Making decisions in the absence of estimating the cost of that decision and the resulting impact of that decision ignores - with intent - the principles of the Microeconomics of decision making: the opportunity cost of the decision. This opportunity cost must be estimated, since it will occur in the future and is usually beyond our ability to measure directly.
Ignoring opportunity cost and ignoring estimating the future is called Open Loop Control. To increase the probability of project success we need to apply the principles of Closed Loop Control. And when we manage projects with Open Loop processes, those providing us the money to produce value will be disappointed.
† Quantitative Risk Analysis for Project Management: A Critical Review, Lionel Galway, RAND Working Paper WR-112-RC, February 2004.
When we read on a blog post that estimates are not meaningful unless you are doing very trivial work, † I wonder if the poster has worked in any non-trivial software domain. Places like GPS OCX, SAP consolidation, Manned Space Flight avionics, or maybe Health Insurance Provider Networks. Without some hands-on experience in those non-trivial domains, it'd be hard to actually know what you're talking about when it comes to estimating the spending of other people's money.
Maybe some background on estimates for non-trivial work will shed light on this ill-informed notion that only trivial projects can be estimated.
These are a small sample of papers from one journal on software estimating for mission critical, sometimes National Asset, projects.
Go to CrossTalk, The Journal of Defense Software Engineering, and search for "estimating" to get 10 pages of 10 articles on this topic alone. Estimating in non-trivial domains is well developed and well documented, with many examples of tools, processes, and principles.
If you Do Your Homework, the Test is much easier.
It could be that the original poster has little experience in mission critical, national asset, enterprise class, software intensive systems. Or it could be the poster simply doesn't know what making estimates for projects that spend other people's money, many times significant amounts of money, is all about.
And of course most of the problems described as the basis for Not Estimating - the illogical notion that if we can't do something well, let's stop doing it - start with not knowing what Done looks like in any units of measure meaningful to the decision makers.
So start here with my favorite enterprise architect blog and his list of books when you follow the link at the bottom.
So when you have some sense of what DONE looks like in terms of capabilities, the estimating process is now on solid ground. From that solid ground you can ask: have we done anything like this before? Or better yet, can we find someone who has done something like this before? Or maybe, can we look around to see what looks like our problem and figure out how long it took them by simply asking them?
If the answer to all of those questions is NO and you're NOT working in a research and development domain, then don't start the project, because you're not qualified to do the work, you don't know what you're doing, and you're going to waste your customer's money.
† Scroll to the bottom of http://zuill.us/WoodyZuill/category/estimating/ and search for "A Thing I Can Estimate" to see the phrase, and remember the questions and the answers above. If you're not answering those in some positive way, you're on a death march project starting day one, because you don't know what done looks like for the needed capabilities. Not the requirements, not the code, not the testing - that's all straightforward. Without some notion of what the system is supposed to do, you'll never recognize it if it were ever to come into view. And since the customer doesn't know as well, all the money they're spending to find out has to be written off as IRAD or flushed down the toilet as a waste of time and effort in the end. And then you'll know why Standish (improperly) reports projects fail.
With the plethora of opinions on estimating - some informed, many uninformed - here's my list of books and papers that inform our software estimating activities for Software Intensive Systems. These books range from hard core engineering to populist texts.
That claim is not actually true after you have read the book. So please read the book and see how McConnell provides step-by-step actions for producing credible estimates.
Estimating software development starts with understanding what the software system is supposed to be doing and how we're able to measure that. This process is based on defining the needed capabilities, the Measures of Effectiveness, Measures of Performance, Key Performance Parameters, and Technical Performance Measures needed for the ultimate success of the project, along with a Plan showing the increasing maturity of the delivered capabilities. If we don't have these in some form, it's going to be a disappointment for those paying for our efforts when they get to the end and the outcomes are not what they were expecting.
Capabilities are not Requirements. Requirements implement Capabilities. Capabilities are pretty much fixed while the Requirements evolve. Capabilities Based Planning is the basis of project management in many Software Intensive Systems.
The project's capabilities must be defined to the level needed to start the project - failing to do this results in a Death March at worst, or spending the customer's money to discover what should have been discovered before starting. With the capabilities, the project needs to be managed in a way that will increase the probability of success.
So when you hear of some new approach to project management, ask if there is any connection to a domain and a context in that domain. There are many ideas about how to improve the probability of project success, but without a domain and context it'll be hard to assess if they are applicable to your specific situation. Here's one way to think about this domain dependency: from solo projects to national assets, the methods, processes, and tools are different, as is the value at risk.
This is one of those pictures tossed out at some conference that drives me crazy. It's uninformed, ignores the disciplines of developing software for money, and is meant to show how smart someone is, without actually understanding the core processes needed to be knowledgeable on the topic - in this case the statistical processes of project work. Then the picture gets circulated and re-posted, and becomes the basis of all kinds of other misunderstanding, just like the Dilbert cartoons that are caricatures of the problem, but have no corrective actions associated with them.
It is popular in some circles of agile development to construct charts showing the strawman of deterministic and waterfall approaches, then compare them to the stochastic approaches and point out how much better the latter is than the former. Here's an example.
These strawman approaches are not only misinformed, they're essentially nonsense in any domain where credible project management is established. The basis of the response to them is Don't Do Stupid Things on Purpose.
Let's look at each strawman statement for the Deterministic View in light of actual project management processes, either simply best practice or mandated practice.
The only explanation here is the intentional ignorance of basic science, math, engineering, and computer science.
In the Stochastic View there are equally egregious errors.
In the End
For some reason, using charts like this one, re-posting Dilbert cartoons, and making statements using buzzwords - we're using Real Options and Bayesian Statistics to manage our work are my favorites - seems to be more common the closer we get to the sole contributor point of view. Along with look at my 22 samples of self-selected data with a ±70% variance as how to forecast future performance.
It may be because sole contributors are becoming more prevalent. Sole contributors have certainly changed the world of software development in ways never possible by larger organizations. But without the foundation of good math and good systems engineering - and I don't mean "data center systems engineering," I mean INCOSE Systems Engineering - those sole contributor points of view simply don't scale.
Always ask when you hear a piece of advice - in what domain have you applied this advice with success?
From another source: The right principles trump practices every time - Dean Leffingwell.
This notion that practices and processes can be put forward in the absence of testing them against principles has become popular.
The most visible of course is the notion that decisions can be made in the absence of estimating the cost and impact of those decisions. The principle of the Microeconomics of software development was first stated by Dr. Boehm. Early in the #NoEstimates discussion was a comment that all those ideas are old and no longer applicable. Of course that ignores the principle of Microeconomics, along with most every other principle of managing projects while spending other people's money.
As well there are other principles of project success
Here's how to develop the answers to those Principles questions.
When it is mentioned that project management is a control system, many in the agile world wince. But in fact a project is a control system - a closed loop control system.
Here's how it works.
Each of these elements has some unit of measure:
Here's a small example of incremental delivery of value in an enterprise domain
The accomplishment of a mission or fulfillment of a business strategy can be called the value produced by the project. In the picture above, the value delivered to the business is incremental, but fully functional on delivery to accomplish the business goal. These goals are defined in Measures of Effectiveness and Measures of Performance, and these measures are derived from the business strategy or mission statement. So if I want a fleet of cars for my taxi service, producing a skateboard, then a bicycle, is not likely to accomplish the business goal.
The term value alone is nice, but not sufficient. Value needs to have some unit of measure: revenue, cost reduction, environmental cleanup, education of students, reduction of disease, the processing of sales orders at a lower cost, flying the 747 to its destination with minimal fuel. Something that can be assessed in tangible units of measure.
In exchange for this value, with its units of measure, we have the cost of producing this value.
To assess the value or the cost, we need to know the other item. We can't know the value of something without knowing its cost. We can't know if the cost is appropriate without knowing the value produced by the cost.
This is one principle of Microeconomics of software development
The process of deciding between choices about cost and value - the trade space between cost and value - starts with information about both. This information lives in the realm of uncertainty before and during the project's life-cycle. The cost side is only known after the project completes, and the value may never be known in the absence of some uncertainty as to its actual measure. This is also a principle of microeconomics: the measures we use to make decisions are random variables.
To determine the values of these random variables we need to estimate, since of course they are random. With these random variables - the cost of producing value and the value exchanged for the cost - the next step in projects is to define what we want the project to do:
The actual delivery of this value can be incremental, iterative, evolutionary, linear, big bang, or other ways. Software many times can be iterative or incremental; pouring concrete and welding pipe can be as well. Building the Interstate might be incremental; the high rise usually needs to wait for the occupancy permit before the value is delivered to the owners. There is no single approach.
For each of these, a control system is needed to assure progress to plan is being made. The two types of control systems are Open Loop and Closed Loop. The briefing below speaks to those and their use.
The book Integrated Project Management and Control: First Comes Theory, Then the Practice, Mario Vanhoucke, arrived while I was on a business trip.
Mario has spoken at several Earned Value Management conferences and written two other important books, Measuring Time and Dynamic Scheduling. The first is a book about research methods and computational experiments in project performance management. The second is a book about existing methods of dynamic scheduling and the integration of baseline scheduling, risk analysis, and project controls.
Integrated Project Management is a book about monitoring project progress using Earned Value and Earned Schedule, combined with Schedule Risk Analysis and monitoring and controlling processes for identifying potential problems in a timely manner (abstracted from the back cover).
The book is small - 141 pages - but packed with dense material. It's not an easy read, for all the right reasons. Any project management book that is an easy read will likely not contain much material that can be applied to the hard problems of increasing the probability of project success.
Chapter 5 is my favorite for many reasons. It is titled Forecasting, and starts with:
One of the primary tasks of a project manager is making decisions about the future, during project progress. ... Forecasting the total project cost and the time to completion is crucial to take corrective actions when problems or opportunities arise, and hence the performance measures will be mainly used as early warning signals to detect these project problems or opportunities.
There are several important concepts here:
Forecasting is the term used in the book. Forecasting is estimating about things in the future - weather forecasting, for example. General estimating includes forecasts of past, present, and future. These are mathematical and statistical terms that are sometimes unfamiliar to software developers.
So if you hear the future of project variables can't be estimated, suggest the person saying that buy and read this book, then think again about how this is done.
What Does This Mean in Practice?
The notion that business and technical decisions around cost and schedule can be made without estimating the variables that impact those decisions has no basis in theory or practice. I've been reminded by Mike Clayton that decisions can certainly be made without knowing this information - these are called Bad Decisions. And like Bad Management, they are made often in many domains.
So it's time to learn how to manage projects for success, rather than listing all the reasons things go wrong.
This book starts with the hypothesis put forth by Theory of Project Management: Explanation of Novel Methods, in which it is conjectured that traditional project management processes are now obsolete and new project management processes are needed. Agile of course is one of those suggested.
This theory is the basis of a product, Last Planner, used in the construction business.
There are some fundamental flaws starting on page 1 of the eBook, where it is asserted Project Management can be divided into two main components:
The first is nearly universal: projects are the basis of most business processes, other than production, and even then, projects are used to establish the production processes. The second part is domain dependent. In the eBook, it is conjectured that the view of projects rests on the assumptions listed in the original paper:
First let's look at these assumptions from a theory testing point of view. If these assumptions are found to be flawed, then what follows, in terms of seeking new ways, may be based on unfounded assertions.
So Now What?
With the basis for seeking a new and innovative project management process grounded in assumptions that are not actually correct, is there any reason to continue reading the eBook?
Yes, for one simple reason: to put into perspective the notion of chaos as the basis of any credible probability calculation for the success of a project.
Let's review the assumptions that are suggested as the reason to abandon the current project management approach and move to something else.
These assumptions - wrongly described - are then challenged by the Agile Community's approach.
Let's start with a framework, well developed in many domains: Capabilities Based Planning. In this framework, we don't start with the requirements. We start with the needed capabilities that will result from the project's outcomes. Let's look at the challenges above in light of Capabilities Based Planning.
Projects have lots of problems - symptoms actually - with root causes. But with the assumptions that are the basis of the paper and the eBook, one primary root cause is simple ...
BAD PROJECT MANAGEMENT
So do we move on to the next project management method before assessing the root causes of the current symptoms, fixing those root causes, and reassessing whether the current method has shortcomings? Let's hope not.
Here are the Principles, Practices, and Processes needed to increase the probability of project success. Apply these first, and see if they are found wanting for the domain you work in. If so, assess them before abandoning them for others that must first be tested to improve the probability of success before jumping on that bandwagon.
Four core components must be in place, have credible values, be used to make decisions, be connected to the actual performance of the project in a closed loop feedback system for the project to have any hope of success.
Without a sequence of the work to be performed, and some knowledge of how long it will take to perform each of those work elements, there is no way to know when the project will be complete.
There are many ways of discovering the work, its duration and the sequence of this work. The approach to answering these questions is dependent on the domain, value at risk, and the needs of the customer.
It is the needs of the customer that anchor the selection process, not the providers of services to those customers. This is a repeated theme here. Those providing the money to do the work have a vested interest in how the work is performed.
The value of the produced outcomes is dependent on knowing something about their cost. This is a fundamental principle of all business, and it is the same for project work.
If we have some notion of value, either through the customer's opinion or through an actual business modeling process, we need to know the cost of producing that value.
Since all variables on projects are random variables, our cost estimating processes must be applied to reveal the could cost, will cost, and should cost aspects of these deliverables.
One approach to cost estimating is Reference Class Forecasting. Other approaches can provide credible estimates including parametric modeling. Tools, processes, formal guidance are all available for estimating cost in nearly every business and technical domain.
Guessing is not estimating. Ignoring the cost of performing the work is negligence. Claiming costs can only be known at the end of the project is ignorance.
All risk comes from uncertainty
Risk management is how adults manage projects - Tim Lister
Unmanaged risk will not go away; it is always there. The management of risk starts with a list of risks: the risk register. This list states the probability of occurrence, the probability of impact, the cost to handle the risk, and the residual probability of the risk once it is handled.
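A minimal sketch of such a register in R, with hypothetical risks, makes the handling decision visible:

```r
# A risk register as a data frame (hypothetical risks): probability of
# occurrence, cost impact, handling cost, and residual probability once
# the risk is handled.
register <- data.frame(
  risk       = c("vendor slip", "interface rework", "staff turnover"),
  p_occur    = c(0.30, 0.20, 0.10),     # probability of occurrence
  impact     = c(120e3, 80e3, 50e3),    # cost impact if it occurs ($)
  handle     = c(15e3, 10e3, 5e3),      # cost to handle the risk ($)
  p_residual = c(0.10, 0.05, 0.05)      # residual probability once handled
)

register$exposure_before <- register$p_occur * register$impact
register$exposure_after <- register$p_residual * register$impact
register  # handling pays where handle < exposure_before - exposure_after
```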
The risk management process is explicit and follows a step-by-step approach. Some development processes like agile can increase visibility into the reduction of risk, but they are not risk management processes, just contributors to the risk reduction and handling processes.
All project variables are random variables, act accordingly.
Knowing the underlying probability distribution functions of the statistical processes driving these random variables is a critical success factor for all projects.
Anyone seeking or suggesting there is certainty on projects has failed to pay attention in their High School statistics class.
Every cost, schedule, and technical performance parameter on every project has a probability distribution assigned to it. Not knowing or understanding that distribution, or even that the distribution is present, will lead to unanticipated cost, schedule, and technical performance disappointment.
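A hedged sketch in R - illustrative cost elements and parameters only - of what acting accordingly looks like for a cost estimate:

```r
# Give each cost element a distribution instead of a point value, then
# report the percentiles decision makers need. Elements are illustrative.
set.seed(11)
n <- 10000

labor <- rlnorm(n, meanlog = log(500e3), sdlog = 0.20)  # continuous (aleatory) spread
materials <- rnorm(n, mean = 200e3, sd = 25e3)
risk_event <- rbinom(n, 1, 0.25) * 150e3                # event-based (epistemic) cost

total <- labor + materials + risk_event
quantile(total, c(0.50, 0.80))   # report the P50 and P80 cost, not a single number
```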
So Now What?