In a recent post there are 5 suggestions for how decisions about software development can be made in the absence of estimating the cost, duration, and impact of those decisions. Before looking at each in more detail, let's see what the basis of these suggestions is.
A decision-making strategy is a model, or an approach, that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However, I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve the business goals that you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.
Decision making in the presence of the allocation of limited resources is called Microeconomics. These decisions - made in the presence of limited resources - involve opportunity costs. That is, what is the cost of NOT choosing one of the alternatives - the allocations? To know this means we need to know something about the outcome of NOT choosing. We can't wait until the work is done; we need to know what happens - to some level of confidence - if we DON'T do something. How can we do this? We need to estimate what happens if we don't choose one of the possible allocations, since all the outcomes are in the future.
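To make the opportunity cost idea concrete, here is a minimal sketch with invented numbers: two hypothetical features with uncertain payoffs. The expected value of each - and therefore the opportunity cost of forgoing either - exists only as an estimate.

```python
# A minimal sketch of opportunity cost, with made-up numbers.
# Choosing one alternative forgoes the expected value of the other,
# and both expected values are estimates of future outcomes.

def expected_value(payoff_probabilities):
    """Expected value of an uncertain payoff: sum of payoff * probability."""
    return sum(payoff * prob for payoff, prob in payoff_probabilities)

# Hypothetical estimates: (payoff in $K, probability of that payoff)
feature_a = [(500, 0.6), (100, 0.4)]   # E[A] = 340
feature_b = [(400, 0.8), (50, 0.2)]    # E[B] = 330

ev_a, ev_b = expected_value(feature_a), expected_value(feature_b)
print(f"E[A] = {ev_a}, E[B] = {ev_b}")
print(f"Opportunity cost of choosing A = forgone E[B] = {ev_b}")
# Without estimates of these future payoffs, the opportunity
# cost of either choice is unknowable.
```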
But first, the post started by suggesting the five approaches are part of Strategy. I'm familiar with strategy making in the domain of software development, having been schooled by two Balanced Scorecard leaders while working as a program manager for a large Department of Energy site, where we pioneered the use of agile development in the presence of highly formal nuclear safety and safeguards applications.
What is Strategy?
Before proceeding with the 5 suggestions, let's look at what strategy is, since it is common to confuse strategy with tactics.
Strategy is creating fit among a firm's activities. The success of a strategy depends on doing many things well - not just a few. The things that are done well must operate within a close-knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the firm.
Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about competition that drives business processes (IT) away from strategic support and toward the tactical improvement of operational effectiveness.
Managers must be able to clearly distinguish operational effectiveness from strategy. Both are essential, but the two agendas are different. The operational effectiveness agenda involves the continual improvement of business processes that have no trade-offs associated with them. The operational effectiveness agenda is the proper place for constant change, flexibility, and relentless efforts to achieve best practices.
In contrast, the strategic agenda is the place for making clear trade-offs and tightening the fit between the participating business components. Strategy involves the continual search for ways to reinforce and extend the company's position in the marketplace.
“What is Strategy,” M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61–78.
Using Porter's notion of strategy in a business context, the post seems more about tactics. But ignoring that for the moment, let's look further into the ideas presented in the post.
I'm going to suggest that each of the five decision processes described in the post is a proper one - one with many approaches - but each has ignored the underlying principle of Microeconomics. This principle is that decisions about future outcomes are informed by the opportunity cost, and that cost requires - mandates actually, since the outcomes are in the future - an estimate. This is the basis of Real Options, forecasting, and the very core of business decision making in the presence of uncertainty.
The post then asks
The 1st question needs another question to be answered: what are our business goals, and what are the units of measure of these goals? In order to answer the 1st question we need a steering target to know how we are proceeding toward that goal.
The second question is about risk. All risk comes from uncertainty. Two types of uncertainty exist on projects:
Reducible (Epistemic) and Irreducible (Aleatory). Epistemic uncertainty comes from lack of knowledge. Epistemology is the study of the acquisition of knowledge. We can pay money to buy down this lack of knowledge. That is, Epistemic uncertainty can be reduced with work - risk reduction work. But this leaves open the question: how much time, budget, and performance margin is needed?
ANSWER: We need an Estimate of the Probability of the Risk Coming True. Estimating the Epistemic risk probability of occurrence, the cost and schedule for the reduction efforts, and the probability of the residual risk is done with a probabilistic model. There are several models and many tools. But estimating all four components - occurrence, impact, effort to mitigate, and residual risk - is required.
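As a minimal sketch of this arithmetic - all numbers hypothetical - the expected exposure of a risk is its probability of occurrence times its impact, and the mitigation decision trades the cost of the risk reduction work against the reduced residual exposure:

```python
# A hedged sketch of risk exposure arithmetic, with made-up numbers.
# Exposure = probability of occurrence x impact if it occurs.

p_occurrence = 0.30        # estimated probability the risk comes true
impact_cost = 400_000      # estimated cost impact ($) if it does
mitigation_cost = 50_000   # cost of the risk-reduction work
p_residual = 0.05          # estimated residual probability after mitigation

exposure_unmitigated = p_occurrence * impact_cost                 # 120,000
exposure_mitigated = mitigation_cost + p_residual * impact_cost   # 70,000

print(f"Unmitigated exposure: ${exposure_unmitigated:,.0f}")
print(f"Mitigated exposure:   ${exposure_mitigated:,.0f}")
# Mitigation pays off here because 70K < 120K - a conclusion
# unreachable without estimating all four quantities above.
```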
Aleatory uncertainty comes from the naturally occurring variances of the underlying processes. The only way to reduce the risk arising from Aleatory uncertainty is with margin: Cost Margin, Schedule Margin, Performance Margin. But this leaves open the question: how do we know how much margin?
ANSWER: We need to estimate the needed margin from the Probability Distribution Function of the Underlying Statistical Process. Estimating the needed aleatory margin (cost, schedule, and performance) can be done with Monte Carlo Simulation or Method of Moments.
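Here is a minimal Monte Carlo sketch of that margin calculation, assuming hypothetical task durations modeled as triangular distributions; the margin is the gap between the median outcome and the confidence level we need:

```python
# A minimal Monte Carlo sketch for schedule margin, assuming
# hypothetical task durations as (min, most likely, max) in days.
import random

tasks = [(5, 8, 15), (3, 5, 10), (10, 14, 25), (2, 4, 9)]

def one_trial():
    """Sample one possible total duration from the task distributions."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

trials = sorted(one_trial() for _ in range(10_000))
p50 = trials[len(trials) // 2]
p80 = trials[int(len(trials) * 0.80)]

print(f"P50 duration: {p50:.1f} days")
print(f"P80 duration: {p80:.1f} days")
print(f"Schedule margin for 80% confidence: {p80 - p50:.1f} days")
# The margin protects the deliverable date against irreducible
# (aleatory) variance that no amount of work can remove.
```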
All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?
All risk is probabilistic, based on underlying statistical processes: either the process of lack of knowledge (Epistemic) or the process of natural variability (Aleatory). In the consideration of risk we must incorporate these probabilistic and statistical behaviors in our decision making activities. Since the outcomes of these processes occur in the future, we need to estimate them based on knowledge - or lack of knowledge - of their probability of occurrence. For the naturally occurring variances that have occurred in the past, we need to know how they might occur in the future. To answer these questions, we need a probabilistic model, based on the underlying statistical processes. And since the underlying model is statistical, we need to estimate the impact of this behavior.
Let's Look At The Five Decision Making Processes
1. Do the Most Important Work First - If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is validating the new strategy. Note that the goal is not "implement new strategy," but rather "validate new strategy." The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run the short-term experiments and re-prioritize your backlog of experiments based on the results of each experiment.
2. Do the Highest Technical Risk First - When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase the scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step, in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.
3. Do the Easiest Work First - Suppose you just expanded your team and want to make sure they get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other, establish the processes they need to be effective, but still deliver concrete, valuable working software in a safe way.
4. Do the Legal Requirements First - In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later - if needed - certify the changes you may still need to make to the original implementation. This allows you to significantly improve the time-to-market for your product. A medical organization that successfully adopted agile used this project decision-making strategy to considerable business advantage, as they were able to start selling their product many months ahead of the scheduled release. They were able to go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - gaining a significant advantage over their direct competitors.
5. Liability Driven Investment - This approach is borrowed from a stock exchange investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.
General Principles of Software Validation; Final Guidance for Industry and FDA Staff, US Food and Drug Administration.
In a recent discussion (of sorts) about estimating - Not Estimating actually - I realized something that should have been obvious. I travel in a world not shared by the staunch advocates of #NoEstimates. They appear to be sole contributors. I came to this after reading Peter Kretzman's 3rd installment, where he re-quoted a statement by Ron Jeffries,
Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before.
This is a sole contributor or small team paradigm.
So let's pretend we work at PricewaterhouseCoopers (PWC) and we're playing our roles - Peter as CIO advisor, me as Program Performance Management adviser. We've been asked by our new customer to develop a product from scratch, estimate the cost and schedule, and provide some confidence level that the needed business capabilities will be available on or before a date and at or below a cost. Why, you ask? Because that's in the business plan for this new product, and if they're late or overrun the planned cost, that will be a balance sheet problem.
What would we do? Well, we'd start with the PWC resource management database - held by HR - and ask for "past performance experience" people in the business domain and the problem domain. Our new customer did not "invent" a new business domain, so it's likely we'll find people who know what our new customer does for money. We'd look to see where, among the 195,433 people in the database who work for PWC worldwide, there is someone, somewhere, who knows what the customer does for money and what kinds of business capabilities this new system needs to provide. If there is no one, then we'd look in our database of 10,000 or so partner relationships to find someone.
If we found no one who knows the business and the needed capabilities, we'd no-bid.
This notion of "I've been asked to do something that's never been done before, so how can I possibly estimate it" really means "I'm doing something I've never done before." And since "I'm a sole contributor, the population of experience in doing this new thing for the new customer is ONE - me," I don't know how the problem has been solved in the past, so I can't possibly know how to estimate the cost, schedule, and needed capabilities. And of course I'm absolutely correct to say new development with unknown requirements can't be estimated - because those unknown requirements are actually unknown to me, but may be known to another. In the population of 195,000 other people in our firm, I'm no longer alone in my quest to come up with an estimate.
So the answer to the question, “what if we encounter new and unknown needs, how can we estimate?” is actually a core problem for the sole contributor, or small team. It'd be rare that the sole contributor or small team would have encountered the broad spectrum of domains and technologies needed to establish the necessary Reference Classes to address this open ended question. This is not the fault of the sole contributor. It is simply the situation of small numbers versus large numbers.
This is the reason the PWCs of the world exist. They get asked to do things the sole contributors never have an opportunity to see.
From the Introduction of the book:
Good estimates are key to project (and product) success. Estimates provide information to make decisions, and to define feasible performance objectives and plans. Measurements provide data to gauge adherence to performance specifications and plans, make decisions, revise designs and plans, and improve future estimates and processes.
Engineers use estimates and measurements to evaluate the feasibility and affordability of proposed products, choose among alternative designs, assess risk, and support business decisions. Engineers and planners estimate the resources needed to develop, maintain, enhance, and deploy a product. Project planners use the estimated staffing level to identify needed facilities.
Planners and managers use the resource estimates to compute project cost and schedule, and to prepare budgets and plans. Estimates of product, project, and process characteristics provide "baselines" to assess progress during the execution of the project. Managers compare estimates and actual values to identify deviations from the project plan and to understand the causes of the variation.
For products, engineers compare estimates of the technical baseline to observed performance to decide if the product meets its functional and operational requirements. Process capability baselines establish norms for process performance. Managers use these norms to control the process and detect compliance problems. Process engineers use capability baselines to improve the production process.
Bad estimates affect everyone associated with the project - the engineers and managers, the customer who buys the product, and sometimes even the stockholders of the company responsible for delivering the software. Incomplete or inaccurate resource estimates for a project mean that the project may not have enough time and money to complete the required work.
If you work in a domain where none of these conditions are in place, then by all means don't estimate.
If you do recognize some or all of these conditions, then here's a summary of the reasons to estimate and measure, from the book.
The complex and unstable environments encountered in project work - especially software development project work - call for critical thinking by all participants. Complexity comes in part from technical uncertainties, starting with the requirements for software capabilities. If there is uncertainty in what capabilities are needed, the project is starting off on the path to failure on day one. Certainly the functional and operational requirements have emergent properties that create uncertainty. As do staffing, productivity, and the risks created by reducible and irreducible uncertainty.
To deal with these complexities, critical thinking is needed on behalf of the project leaders and project participants alike.
The first responsibility of the project staff as well as management is to think. To think about what is being asked of them. To consider that they are being paid to produce value by someone other than themselves. To think about why they are there, what they are being asked to do, and how they can go about being stewards of the money provided by those paying for their work. To be true professionals applying their education, training, and experience through analysis and creative, informed thought to make daily decisions.
Failure to apply critical thinking creates a disconnect between those providing the value and those paying for the value. This is best illustrated in the notion that business decisions can be made in the absence of knowing the cost and outcomes of those decisions. The first gap in critical thinking occurs when the decision making process ignores the principles of Microeconomics. This gap is further reinforced when the probabilistic nature of all project work is also ignored.
The three core elements of all projects are the delivered capabilities, the cost of producing those capabilities, and the time frame over which those capabilities are delivered for the cost. Each of these acts as a random variable, interacting with the other two in statistically complex ways.
To manage in the presence of these random variables means making decisions in the presence of the uncertainties - and resulting risks - created by the randomness of the variables. These uncertainties are either reducible - we can pay more to find out more information - or irreducible: we can only manage in their presence with margin for cost, schedule, and technical performance.
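A small sketch of why these interacting random variables matter, with invented numbers: when a common driver (say, scope growth) correlates cost and schedule, the probability of meeting both targets cannot be read off either one alone.

```python
# A minimal sketch, with invented numbers, of cost and schedule as
# interacting random variables: the joint probability of meeting
# BOTH targets differs from what either marginal suggests.
import random

def trial():
    # A shared driver (e.g., scope growth) correlates cost and schedule.
    driver = random.gauss(0, 1)
    cost = 1_000_000 * (1 + 0.10 * driver + 0.05 * random.gauss(0, 1))
    schedule = 12 * (1 + 0.08 * driver + 0.04 * random.gauss(0, 1))
    return cost, schedule

n = 20_000
results = [trial() for _ in range(n)]
p_cost = sum(c <= 1_050_000 for c, s in results) / n
p_sched = sum(s <= 12.5 for c, s in results) / n
p_both = sum(c <= 1_050_000 and s <= 12.5 for c, s in results) / n

print(f"P(cost on target)     = {p_cost:.2f}")
print(f"P(schedule on target) = {p_sched:.2f}")
print(f"P(both on target)     = {p_both:.2f}  # not simply the product")
```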
To make decisions in the presence of this paradigm we need to Estimate. Making decisions in the absence of estimating first violates the principles of microeconomics, and secondly ignores the underlying statistical and probabilistic nature of all project work.
When we hear that decisions can be made in the absence of estimates, ask how Microeconomics and statistical uncertainty are handled.
The notion that projects are somehow no longer needed fails to address how to replace the processes, governance, and stewardship of a business's assets while producing the needed value from those expenditures. Here's a framework for managing a firm's funds in producing value in exchange for those funds.
There is a popular notion that Projects are somehow not needed to develop software. Just fund the Team, and the business outcomes needed to fulfill a strategy and deliver value to the balance sheet will appear.
This may work in domains where the project team, the business and its processes, and the funding sources are in fact all the same - a small startup for example. When the business moves beyond this self-contained startup mode, other processes become needed to Govern the use of funds - the Stewardship of Funding.
Here's how to approach the process, once the firm moves beyond a collection of individuals personally interacting with their customer base. These processes, data, and outcomes belong to the realm of Governance. In IT, Governance †
From another source: The right principles trump practices every time - Dean Leffingwell.
This notion that practices and processes can be put forward in the absence of testing them against principles has become popular.
The most visible of course is that decisions can be made in the absence of estimating the cost and impact of that decision. The principle of the Microeconomics of software development was first stated by Dr. Boehm. Early in that #NoEstimates discussion was a comment that all those ideas are old and no longer applicable. Of course that ignores the principle of Microeconomics, along with most every other principle of managing projects while spending other people's money.
As well, there are other principles of project success.
Here's how to develop the answers to those principles' questions.
I came across a nice blog post from DelancyPlace about the navigation powers of birds.
This bird was taken from Wales to Venice, Italy, and released, and found its way home in 14 days - 930 miles, over mountains.
To be able to find their way home from an unfamiliar place, birds must carry a figurative map and compass in their brains.
The map tells them where they are, and the compass tells them which direction to fly, even when they are released with no frame of reference to their home loft.
Projects Are Not Birds
As project managers, what's our map and compass? How can we navigate from the start of the project to the end, even though we haven't been on this path before?
How can we find our way Home?
We have a map. It starts with a Capabilities Based Plan. The CBP states what Done looks like in units of measure meaningful to the decision makers. These units of measure are Measures of Effectiveness and Measures of Performance.
Measures of Effectiveness - are operational measures of success that are closely related to the achievements of the mission or operational objectives evaluated in the operational environment, under a specific set of conditions.
These measures speak to our home and the attributes of the home. The map that gets us to home is the Integrated Master Plan. This shows the increasing maturity of the deliverables that implement the Measures of Performance, and those Performance items that enable the project to produce the needed capabilities that effectively accomplish the mission or fulfill the business need.
This looks like a map of the increasing value delivery for an insurance company. The map shows the path - actually paths - to home. Home is the ability to generate value from the exchange of money to develop the software.
In the absence of a control system that provides feedback on progress to plan, the project is just wandering around looking for what Done looks like. The notion of emergent requirements is fine. The notion of emergent capabilities is not.
If we don't know what capabilities are needed to fulfill the business plan or fulfill the mission need, then we're on a Death March project. We don't know what value is being produced, when we will be done, or how much this will cost when we're done.
This is the role of the project control system. Without a control system there is no way to use feedback to steer the project toward success.
There is a suggestion that only the final target of a project's performance is needed to steer toward success. This target can be a budget, a finish date, or the number of stories or story points in an agile software project. But with the target and the measure of performance to date, collected from the measures at each sample point, there is still a missing piece needed to guide the project.
With the target and the samples, no error signal is available to make intermediate corrections to arrive on target. With the target alone, any variances in cost, schedule, or technical performance can only be discovered when the project arrives at the end. With the target alone, this is an Open Loop control system.
Irreducible Uncertainty can only be handled with Margin: cost margin, schedule margin, technical margin. This is the type of margin you use when you drive to work. The GPS navigation system says it's 23 minutes to the office. It's NEVER 23 minutes to the office. Something always interferes with our progress.
Reducible Uncertainty is handled in two ways: spending money to buy down the risk that results from this uncertainty, and Management Reserve (budget reserve and schedule contingency) to be used when something goes wrong, to pay for the fix when the uncertainty turns into reality.
The next figure (page 28) shows how to manage in the presence of these uncertainties, by measuring actual performance against the desired performance at each step along the way.
In this figure, we measure at each assessment point the progress of the project against the desired progress - the planned progress, the needed progress. This planned, desired, or needed progress is developed by looking at the future effort, duration, risk, and uncertainty - the stochastic processes that drive the project - and determining what the progress should be at this point in time to reach our target on or before the need date, at or below the needed cost, and with the needed confidence that the technical capabilities can be delivered along the way. This is closed loop control.
The planned performance, the needed performance, the desired performance is developed early in the project - maybe on day one, more likely after actual performance has been assessed to calibrate future performance. This is called Reference Class Forecasting. With this information, estimates of the needed performance can then be used to establish steering targets along the way to completing the project. These intermediate reference - or steering - points provide feedback along the way toward the goal. They provide the error signal needed to keep the project on track. They are the basis of Closed Loop control.
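A minimal sketch of such a steering signal, assuming a hypothetical planned-progress curve, periodic actual measurements, and an invented variance threshold:

```python
# A minimal sketch of a closed-loop steering signal, with
# hypothetical planned and actual cumulative progress (in %).
planned = [10, 25, 45, 70, 100]     # planned progress at each assessment point
actual  = [9, 21, 35, None, None]   # measured to date; future points unknown

THRESHOLD = 0.10  # variance beyond 10% of plan triggers corrective action

for i, (p, a) in enumerate(zip(planned, actual), start=1):
    if a is None:
        break  # no measurement yet for future points
    variance = (a - p) / p
    flag = "CORRECTIVE ACTION" if abs(variance) > THRESHOLD else "on track"
    print(f"Point {i}: planned {p}%, actual {a}%, variance {variance:+.0%} -> {flag}")
# Without the planned values there is no error signal - only a record
# of what happened, which is open-loop observation, not control.
```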
In the US, many highways have rumble strips cut into the asphalt to signal that you are nearing the edge of the road on the right. They make a loud noise that tells you - hey get back in the lane, otherwise you're going to end up in the ditch.
This is the purpose of the intermediate steering targets for the project. When the variance between planned and actual exceeds a defined threshold, this says hey, you're not going to make it to the end on time, on budget, or with your needed capabilities if you keep going like this.
Kent Beck's quote is...
Optimism is the disease of software development. Feedback is the cure.
This feedback must have a reference to compare against if it is to be of any value in steering the project to a successful completion. Knowing it's going to be late, over budget, and not working only when we arrive at late, over budget, and not working is of little help to the passengers of the project.
Much of the objection to SAFe comes from its seemingly Top Down paradigm. Many agile voices object that this approach is not agile, in the way they define agile - individual teams making their own decisions about what to do with their customer.
The domain of this bottom-up approach is usually not well defined, other than classic eXtreme Programming or the Agile Spectrum of Guy Strelitz, where Co-Hacking is on the left and the developers live by the pure agile manifesto.
But what happens when agile is applied to an enterprise development effort? One where the business needs define capabilities that are not emergent, but rather are needed to fulfill the business strategy or the mission of the organization. Then another paradigm emerges. One where higher order questions, frameworks, framing assumptions, governance, and other externalities trump the needs of the individual team.
Here's one approach that has served us well over time.
The statement is a bit off, since it's the Capabilities that are defined by the business. These capabilities are then turned into requirements, which may in fact emerge, which themselves are turned into working software. Starting with the capabilities, an enterprise software development effort means re-examining the agile manifesto statements.
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
Business people and developers must work together daily throughout the project.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development.
The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity - the art of maximizing the amount of work not done - is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
So What Does All This Mean?
Without a domain, it's hard to assess the applicability and appropriateness of much of anything.
What this really means is that the Scaled Agile Framework is the place to start for the enterprise.
When we hear about new and possibly innovative ways of improving the probability of success of project work, ask this simple question...
How Do I Make Your Suggestion Actionable on My Project?
Without the answer to that question in some unit of measure meaningful to the decision maker (you), the suggestion may be an interesting personal anecdote or observation, but it is not actionable. Let's start with the notion that we can determine what needs to be done to deliver value to the customer. Here's a typical agile view of what Done looks like. More formal descriptions can be found in Integrated Master Schedules or other planning tools, but the Scrum approach is a good start.
So there are a few questions:
If we don't have the answers to these questions and others, I'd suggest we're in the co-hacking mode according to Guy Strelitz, where we don't really care about the questions above. If that's your domain, what follows is of little value.
But if your customer is not in the co-hacking domain, there is likely a need to know the answers to the five immutable principles of project success:
The answers to each of these need to be in units of measure meaningful to the decision maker. Without these answers, the probability of success is going to be low.
Obviously not every decision we make is based on mathematics, but when we're spending money, especially other people's money, we'd better have some good reason to do so - some reason other than gut feel, for any significant value at risk. This is the principle of Microeconomics.
All Things Considered is running a series on how people interpret probability, from capturing a terrorist to the probability it will rain at your house today. The world lives on probabilistic outcomes. These probabilities are driven by underlying statistical processes. These statistical processes create uncertainties in our decision making processes.
Both Aleatory and Epistemic uncertainty exist on projects. These two uncertainties create risk. This risk impacts how we make decisions. Minimizing risk while maximizing reward is a project management process, as well as a microeconomics process. By applying statistical process control we can engage project participants in the decision making process. Making decisions in the presence of uncertainty is sporty business, and many examples of poor forecasts abound. The flaws of statistical thinking are well documented.
When we encounter the notion that decisions can be made in the absence of statistical thinking, there are some questions that need to be answered. Here's one set of questions and answers from the point of view of the mathematics of decision making using probability and statistics.
The book opens with a simple example.
Here's a question. We're designing airplanes - during WWII - in ways that will prevent them from getting shot down by enemy fighters, so we provide them with armor. But armor makes them heavier. Heavier planes are less maneuverable and use more fuel. Armoring planes too much is a problem. Too little is a problem. Somewhere in between is optimum.
When the planes came back from a mission, the number of bullet holes was recorded. The damage was not uniformly distributed, but followed this pattern:
- Engine - 1.11 bullet holes per square foot (BH/SF)
- Fuselage - 1.73 BH/SF
- Fuel System - 1.55 BH/SF
- Rest of plane - 1.8 BH/SF
The first thought was to provide armor where the need was the highest. But after some thought, the right answer was to provide armor where the bullet holes aren't - on the engines.
"where are the missing bullet holes?" The answer was onb the missing planes. The total number of planed leaving minus those returning were the number of planes that were hit in a location that caused them not to return - the engines.
The mathematics here is simple. Start by setting a variable to zero. This variable is the probability that a plane that takes a hit in the engine manages to stay in the air and return to base. The result of this analysis (pp. 5-7 of the book) can be applied to our project work.
This is an example of the thought processes needed for project management and the decision making processes needed for spending other people's money. The mathematician's approach is to ask: what assumptions are we making? Are they justified? The first assumption - the erroneous assumption - was that the planes returning represented a random sample of all the planes. Only if that were true could the conclusions be drawn.
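A hedged simulation of that erroneous assumption, with invented survival probabilities, shows how the biased sample produces exactly the observed pattern:

```python
# A sketch of the survivorship-bias logic, with invented numbers.
# Planes hit in the engine rarely return, so returning planes
# under-represent engine hits - exactly the observed pattern.
import random

SECTIONS = ["engine", "fuselage", "fuel system", "rest of plane"]
P_SURVIVE = {"engine": 0.3, "fuselage": 0.95,
             "fuel system": 0.8, "rest of plane": 0.95}

random.seed(1)
hits_all = {s: 0 for s in SECTIONS}
hits_returned = {s: 0 for s in SECTIONS}
for _ in range(100_000):
    section = random.choice(SECTIONS)  # hits uniformly distributed in truth
    hits_all[section] += 1
    if random.random() < P_SURVIVE[section]:
        hits_returned[section] += 1   # only survivors are observed

for s in SECTIONS:
    print(f"{s:>14}: {hits_all[s]:>6} hits taken, "
          f"{hits_returned[s]:>6} observed on returning planes")
# The sample of returning planes is biased: the engine looks safest
# precisely because engine hits are fatal.
```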
In The End
Show me the numbers. "Numbers talk, BS walks" is the crude phrase, but true. When we hear some conjecture about the latest fad, think about the numbers. But before that, read Beyond the Hype: Rediscovering the Essence of Management, Robert Eccles and Nitin Nohria. This is an important book that lays out the processes for sorting out the hype - the untested and likely untestable conjectures - from the testable processes.
The presentation Dealing with Estimation, Uncertainty, Risk, and Commitment: An Outside-In Look at Agility and Risk Management has become a popular message for those suggesting we can make decisions about software development in the absence of estimates.
The core issue starts with the first chart. It shows the actual completion of a self-selected set of projects versus the ideal estimate. This chart is now in use in the #NoEstimates paradigm as evidence of why estimating is flawed and should be eliminated. How to eliminate estimates while making decisions about spending other people's money is not actually clear. You'll have to pay €1,300 to find out.
But let's look at this first chart. It shows the self-selected projects, the vast majority completed above the initial estimate. What is this initial estimate? In the original paper, the initial estimate appears to be the estimate made by someone for how long the project would take. It's not clear how that estimate was arrived at - the basis of estimate - or how the estimate was derived. We all know that subject matter expertise alone is the least desirable basis, and past performance, calibrated for all the variables, is the best.
So Herein Lies the Rub - to Misquote Shakespeare's Hamlet
The ideal line is not calibrated. There is no assessment of whether the original estimate was credible or bogus. If it was credible, what was the confidence of that credibility, and what was the error band on that confidence?
This is a serious - some might say egregious - error in statistical analysis. We're comparing actuals to a baseline that is not calibrated. This means the initial estimate is meaningless in the analysis of the variances, without an assessment of its accuracy and precision. To then construct a probability distribution chart is nice, but measured against what? Against bogus data.
This is harsh, but the paper and the presentation provide no description of the credibility of the initial estimates. Without that, any statistical analysis is meaningless. Let's move to another example in the second chart.
The second chart - below - is from a calibrated baseline. The calibration comes from a parametric model, where the parameters of the initial estimate are derived from prior projects - the reference class forecasting paradigm. The tool used here is COCOMO. There are other tools based on COCOMO, Larry Putnam's methods, and other approaches that can be used for similar calibration of the initial estimates. A few we use are QSM, SEER, and Price.
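To make the parametric idea concrete, here is a minimal sketch using the published Basic COCOMO equations (Boehm, 1981). The coefficients come from Boehm's regression over a reference class of completed projects, which is what makes the baseline calibrated rather than guessed; the 32 KLOC input is an invented example:

```python
# Basic COCOMO (Boehm, 1981): effort and schedule as a function of size,
# with coefficients regressed from a reference class of past projects.
COEFFS = {  # mode: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b        # effort in person-months
    schedule = c * effort ** d    # development time in calendar months
    return effort, schedule

effort, months = basic_cocomo(32, "semi-detached")
print(f"32 KLOC, semi-detached: {effort:.0f} person-months over {months:.1f} months")
# The point is not these particular numbers, but that the estimate is
# anchored to calibrated past performance, not an uncalibrated guess.
```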
One place to start is Validation Method for Calibrating Software Effort Models. But this approach started long ago with An Empirical Validation of Software Cost Estimation Models. All the way to the current approaches of ARIMA and PCA forecasting for cost, schedule, and performance using past performance. And current approaches, derived from past research, of tuning those cost drivers using Bayesian Statistics.
Issues of software management - estimates of software cost, time, and performance - abound. We hear about them every day. Our firm works on programs that have gone Over Target Baseline. So we walk the walk every day.
But when bad statistics are used to sell solutions to complex problems, that's when it becomes a larger problem. To solve this nearly intractable problem of project cost and schedule overrun, we need to look to the root cause. Let's start with the book Facts and Fallacies of Estimating Software Cost and Schedule. From there, let's look to some more root causes of software project problems. Why Projects Fail is a good place to move to, with their 101 common causes. Like the RAND and IDA Root Cause Analysis reports, many are symptoms rather than root causes, but good information all the same.
So in the end, when it is suggested that the woes of project success can be addressed by applying
Ask a simple question - is there any tangible, verifiable, externally reviewed evidence for this? Or is this just another self-selected, self-reviewed, self-promoting idea that violates the principles of microeconomics as applied to software development, where:
Economics is the study of how people make decisions in resource-limited situations. This definition of economics fits the major branches of classical economics very well.
Macroeconomics is the study of how people make decisions in resource-limited situations on a national or global scale. It deals with the effects of decisions that national leaders make on such issues as tax rates, interest rates, and foreign and trade policy, in the presence of uncertainty.
Microeconomics is the study of how people make decisions in resource-limited situations on a personal scale. It deals with the decisions that individuals and organizations make on such issues as how much insurance to buy, which word processor to buy, what features to develop in what order, whether to make or buy a capability, or what prices to charge for their products or services, in the presence of uncertainty. Real Options is part of this decision making process as well.
Economic principles underlie the structure of the software development life cycle, and its primary refinements of prototyping, iterative and incremental development, and emerging requirements.
If we look at writing software for money, it falls into the microeconomics realm. We have limited resources, limited time, and we need to make decisions in the presence of uncertainty.
In order to decide about the future impact of any one decision - making a choice - we need to know something about the future, which is itself uncertain. The tool for making these decisions about the future in the presence of uncertainty is called estimating. Lots of ways to estimate. Lots of tools to help us. Lots of guidance - books, papers, classrooms, advisers.
But asserting we can in fact make decisions about the future in the presence of uncertainty without estimating is mathematically and practically nonsense.
So now is the time to learn how to estimate, using your favorite method, because to decide in the absence of knowing the impact of that decision is counter to the stewardship of our customers' money. And if we want to keep writing software for money, we need to be good stewards first.
Project Management is a control system, subject to the theory and practice of control systems. The Project Management Control System provides for the management of systems and processes - cost estimating, work scope structuring and authorization, scheduling, performance measurement, and reporting - for assessing the progress of spending other people's money.
The level of formality for these processes varies according to domain and context - from sticky notes on the wall for a 3-person internal warehouse locator website for a plastic shoe manufacturer, to a full DCMA ANSI-748C validated Earned Value Management System (EVMS) on a $1B software development project, and everything in between.
The key here is that if we're going to say we have a control system, it needs to be a Closed Loop control system, not an Open Loop control system. An Open Loop system is called train watching: we sit by the side of the tracks, count the trains going by, and report that number. How many trains should go by, could go by? We don't know. That's what's shown in the first picture. We sample the data, we apply that data to the process, and it generates an output. There is no corrective action; it's just a signal based on the past performance of the system. Some examples of Open Loop control implemented in the first picture:
The key attribute of Open Loop Control
The key disadvantage of an open-loop system is that it is poorly equipped to handle disturbances or changes in conditions that reduce its ability to complete the desired task.
A closed loop system behaves differently. Here are some examples of controllers used in the second picture:
The key attributes of Closed Loop Control, shown in the second picture:
Because the closed-loop system has knowledge of the output condition - in the case of projects, the desired cost, schedule, and technical performance - it is equipped to handle system disturbances or changes in the conditions that may reduce its ability to complete the desired task.
When we have a target cost - defined on day one by the target budget - a planned need date, and some technical performance target, closed loop control provides the feedback needed to make decisions along the way when the actual performance is not meeting our planned or needed performance.
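A minimal sketch of the difference, with invented numbers: the closed-loop version feeds the error between planned and actual progress back into a corrective action, while removing the feedback reproduces open-loop train watching:

```python
# A minimal sketch of closed-loop project control, invented numbers.
# The error between planned and actual progress feeds back into a
# corrective action (here, adjusting effort), unlike open-loop counting.

plan_rate = 10.0   # planned progress units per period
periods = 8
GAIN = 0.5         # proportional gain applied to the error signal

actual, effort = 0.0, 1.0
for t in range(1, periods + 1):
    planned = plan_rate * t
    actual += plan_rate * effort * 0.8          # assumed productivity shortfall
    error = planned - actual                    # the error signal
    effort = 1.0 + GAIN * (error / plan_rate)   # corrective action for next period
    print(f"period {t}: planned {planned:5.1f}, actual {actual:5.1f}, "
          f"error {error:+5.1f}, next effort x{effort:.2f}")
# Remove the feedback (effort = 1.0 forever) and the error grows
# without bound - that is open-loop "train watching".
```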
In the end it comes back to the immutable principle of microeconomics. When we are spending money to produce value, we need to make decisions about which is the best path to take, which of multiple options is the best to choose. In order to do this we need to know something about the cost, schedule, and performance forecasts for each of the choices. Then we need feedback from the actual performance to compare with our planned performance to create an error signal. With this error signal, we can then DECIDE what corrective actions to take.
Without this error signal, derived from the planned values compared with the actual values, there is none of the information needed to decide. Sure, we can measure what happened in the past and decide, just like we can count trains and make some decision. But that decision is not based on a planned outcome, a stated need, or an Estimated Arrival Time, for example.
Without that estimated arrival time, we can't tell if the train is late or early, just that it arrived. Same with the project measurements.
Open Loop provides no feedback, so you're essentially driving in the rear view mirror, when you should be looking out the windshield deciding where to go next to escape the problem.
In the estimating discussion there is a popular notion that we can't possibly estimate something we haven't done before, so we have to explore - using the customer's money, by the way - to discover what we don't know.
Then when we hear "we've never done this before and estimating is a waste of time," think about the title of the post.
Everything's a Remix
Other than inventing new physics all software development has been done in some form or another before. The only true original thing in the universe is the Big Bang. Everything else is derived from something that came before.
Now, we may not know about this thing in the past, but that's a different story. It was done before in some form, but we didn't realize it. There are endless examples of copying ideas from the past while thinking they are innovative, new, and breakthrough. The iPad and all laptops came from Alan Kay's 1972 paper, "A Personal Computer for Children of All Ages." Even how the touch screen on the iPhone works was done before Apple announced it as the biggest breakthrough in the history of computing.
In our formal defense acquisition paradigm there are many programs that are research. This looks like the following flow. Making estimates about the effort and duration is difficult, so blocks of money are provided to find out. But these are not product production or systems development processes. System Design and Development (SDD) is between MS-B and MS-C. We don't confuse exploring with developing. Want to explore? Work on a DARPA program. Want to develop? Work post-MS-B and know something about what came before.
The pre-Milestone A work is to identify what capabilities will be needed in the final product. The DARPA programs I work on are even further to the left of Milestone A.
On the other end of the spectrum from this formal process, a collection of sticky notes on the wall could have a similar flow of maturity. But the principles are still the same.
So How To Estimate in the Presence of We've Never Done This Before
Here's a critical concept - we can't introduce anything new until we're fluent in the language of our domain, and we do that through emulation.† This means for us to move forward we have to have done something like this in the past. So if we haven't done something like this in the past, don't know anyone who has, or can't find an example of it being done, we will have little success being innovative. As well, we will hopelessly fail in trying to estimate the cost, schedule, and probability of delivering capabilities. In other words, we'll fail, blame it on the estimating process, and assume that we'll be successful if we stop estimating.
So stop thinking about what we can't know and start thinking someone has done this before, and we just need to find that someone, somewhere, something. Nobody starts out being original; we need copying to get started. Once copied, transformation is the next step. With the copy we can estimate size and effort. We can now transform it into something that is better, and since we now know about the thing we copied, we have a reference class - yes, that famous Reference Class Forecasting used by all mature estimating shops. With the copy and its transformed item, we can then combine ideas into something new. The Alto from Xerox, and then the Xerox Star for executives, was the basis of the Lisa and the Mac.
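A minimal sketch of reference class forecasting under these assumptions - a hypothetical reference class of similar past projects, with the new estimate read off the empirical distribution rather than invented from scratch:

```python
# A sketch of reference class forecasting with invented data:
# actual effort (person-weeks) from similar completed projects.
reference_class = [38, 42, 47, 51, 55, 58, 63, 70, 82, 95]  # sorted actuals

def percentile(sorted_data, p):
    """Nearest-rank percentile of already-sorted data."""
    k = max(0, min(len(sorted_data) - 1, round(p * (len(sorted_data) - 1))))
    return sorted_data[k]

p50 = percentile(reference_class, 0.50)
p80 = percentile(reference_class, 0.80)
print(f"P50 estimate: {p50} person-weeks")
print(f"P80 estimate: {p80} person-weeks (a commitment with margin)")
# The copy we found gives us the class; the transformation we plan
# tells us where in the class our project likely sits.
```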
You can estimate almost anything, and every software system, if you do some homework and suspend the belief that it can't be done. WHY? Because it's not your money, and those providing you the money have an interest in several things about their money: what will it cost, when will you be done, and, using the revenue side of the balance sheet, when will they break even on the exchange of money for value? This is the principle of every for-profit business on the planet. The not-for-profits have to pay the electric bill as well, as do the non-profits. So everyone, everywhere needs to know the cost of the value they asked us to produce BEFORE we've spent all their money and run out of time to reach the target market for that pesky break even equation.
Anyone who tells you otherwise is not in the business of business, but just on the expense side, and that means not on the decision making side either - just labor doing what they're told to do, which is a noble profession, but unlikely to influence how decisions are made.
The notion of decision rights is the basis of governance. When you hear about doing or not doing something in the absence of knowing who needs this information, ask who needs this information, and whether it is your decision right to decide to fulfill or not fulfill the request for that information. As my colleague, a retired NASA Cost Director, says: follow the money, that's where you find the decider.
† Everything is a Remix, Part 3, Kirby Ferguson.
A common problem in our development of the Program Management Office is getting caught up in putting out fires. This is Covey's "addiction of the urgent." In this process we lose the big-picture perspective. This note is about the big-picture view of the project management process as it pertains to our collection of projects. These are very rudimentary principles, but they are important to keep in mind.
5 Basic Principles
1. Be conscious of what you're doing, don’t be an accidental manager. Learn PM theory and practice. Realize you don't often have direct control. Focus on being a professional and the PM's mantra:
"I am a project professional. I work on projects. Projects are undertakings that are goal-oriented, complex, finite, and unique. They pass through a life cycle, which begins with project selection and ends with project termination."
2. Invest in front-end work; get it right the first time. We often leap before we look, due to an over-focus on results-oriented processes and simple - many times simple-minded - platitudes about project management and the technical processes, and we ignore basic steps. Trailblazers often achieve breakthroughs, but projects need forethought. Projects are complex, and the planning, structure, and time spent with stakeholders are required for success. Doing things right takes time and effort, but this time and effort is much cheaper than rework.
3. Anticipate the problems that will inevitably arise. Most problems are predictable. Well-known examples are:
4. Go beneath surface illusions; dig deep to find the real situation. Don't accept things at face value. Don't treat the symptom; treat the root cause, and the symptoms will be corrected. Our customers usually understand their own needs, but further probing will bring out new needs. Robert Block suggests a series of steps:
5. Be as flexible as possible; don't get sucked into unnecessary rigidity and formality. Project Management is the reverse of the second law of thermodynamics: we're trying to create order out of chaos. But in this effort:
 The Politics of Projects, Robert Block, Yourdon Press, 1983.
In many discussions of process, a solution to a problem is proposed in the absence of domain and context. Overgeneralization is usually the result. This is so common in the agile development world that I built a short briefing to communicate the issues with making suggestions in the absence of domain and context.
I'm working on a proposal for a university client that is submitting a software, hardware, and operations research project to a large research organization. The buyer has a budget in the single-digit millions. The response to the RFP is to develop algorithms, build some hardware, and build a prototype flying machine to test out all these ideas. The needed capabilities are to show, in the first phase, that the provider has the right ideas to get hired for the second phase, worth mid-double-digit millions.
The project is to discover what will be needed to convince the customer that the winning provider should be hired for the next research phase and the likely follow-on contract to build some number of machines using all the software and hardware from the winning previous phases.
So now comes the estimating part
The buyer wants to use agile while also complying with the acquisition processes of the domain, which are more traditional. With the target winning cost - the price to win - the winning set of research outcomes and their formal deliverables (algorithms, software, hardware, reports, trade studies, recommendations, etc.), a set of technical and science capabilities, and some past performance algorithms, software, and hardware from previous research projects (this is a major university bidding on this), the big question is: what can we say in the proposal about why the customer should hire us? We can describe...
So how are we going to answer these questions, stay within the target contract award, and contractual delivery date?
We're going to have to make estimates of many of the project variables long before the project is awarded:
In this project - emergent, vague outcomes, an innovative effort for a technology suite that doesn't exist - we need to estimate cost, schedule, and performance if we're going to have a chance to get any follow-on work.
So when I hear "our customers don't know what they want," ask if they have any sense of what capabilities they need to earn back the money they are investing. NO? Then they're about to waste some money trying to find out. You can buy information, or you can have some of that information already in your hand. Reference Class Forecasting is one way. Parametric estimating is another.
When I hear "our developers don't like to work toward deadlines," ask the customer if they have any deadlines. NO? Then everyone starts spending money, and we'll stop when we run out of money or time. YES? Then we'd better have some sense of our capacity for work, what work is needed to deliver the needed capabilities, and how we're going to measure our physical progress toward that deadline while spending our customer's money, to provide the feedback needed to take corrective actions. Without an estimate of the goal and our future capacity for work, there can be no closed loop control of the project.
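As a hedged sketch of that closed-loop forecast - invented throughput samples and backlog - measured capacity for work can be resampled to estimate when the backlog completes, and that estimate compared to the deadline is the error signal:

```python
# A sketch of forecasting a completion date from measured capacity
# for work, with invented numbers: resample past throughput and
# simulate how many periods the remaining backlog needs.
import random

weekly_throughput = [6, 4, 7, 5, 3, 6, 5, 8]  # observed items/week to date
backlog_remaining = 60                        # items left to deliver

def weeks_to_finish():
    done, weeks = 0, 0
    while done < backlog_remaining:
        done += random.choice(weekly_throughput)  # resample past capacity
        weeks += 1
    return weeks

random.seed(7)
runs = sorted(weeks_to_finish() for _ in range(5_000))
print(f"P50 finish: {runs[len(runs) // 2]} weeks")
print(f"P85 finish: {runs[int(len(runs) * 0.85)]} weeks")
# Compare the P85 forecast against the deadline each week: that
# comparison is the error signal of the closed-loop control.
```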
When I hear "estimating is a waste," "decisions can be made without estimating," or "estimating is somehow an evil of software development," run down the hall to the finance and accounting department and see if they have a sense of the Basic Rules of Business. NO? You're home free; start coding and spending - someone will come along later and tell you what they need and when they need it, or maybe tell you to stop. YES? Then learning how to make credible estimates is straightforward. Tons of literature, books, and tools exist for most every domain, including software development.
Estimating is part of running any business that spends money - other people's money or your own - in exchange for produced value. Making a decision in the absence of knowing the cost, schedule impact, or technical impact is like driving in the dark with the lights off.
It can be done, but you don't know you've run off the road or run over something until it's too late. Those pesky estimating processes, which appear to be such a waste to the developers, are actually the lights that show what is coming down the road. They are the leading indicators of where you want to go, so you can steer toward the goal. The feedback you get from past performance provides the corrective actions needed to keep between the white lines, speed up, slow down, stop, reverse course, or reconsider your plans altogether. But you must know where those lines are, and how tightly you need to adhere to their boundaries.
In the end when it is conjectured that estimates are not needed in the software development business, ask that person if they are now or have ever been accountable for the P&L or Balance Sheet of a business. NO? Walk away, they don't have any notion of why estimating is a critical success factor to any business.