Principles, Practices, and Processes to Increase Probability of Project Success
The latest thread in agile is ...
the continued paradigm of deadline-driven development is killing the benefits that Agile Software Development can bring.
It is suggested by Neil Killick that ...
... using genuine time constraints as a factor in the prioritisation of work rather than as a focus for its execution, the odds of meeting those "deadlines" are actually improved.
I'm not sure what distinguishes a genuine time constraint from any other time constraint. But the conjecture that executives of software product and service companies always want stuff delivered faster, cheaper, and better, and that agile principles and methods are the way to achieve this, ignores several fundamental principles of all work, especially software development work.
All project work has uncertainty: reducible uncertainty and irreducible uncertainty.
Agile does not remove this uncertainty. Agile is NOT a risk management process. Genuine constraints don't remove this uncertainty. I've spoken many times in the past about how to Manage in the Presence of Uncertainty.
These principles are always in place. More so on Agile projects, where emergent requirements are encouraged, which in turn drive uncertainty further to the right in the Probability Distribution Function of the possible range of durations, costs, and technical performance shortfalls.
When those uncertainties are not considered and handled, any project is going to have an increased chance of being late, over budget, and having technical issues.
Setting genuine constraints may make this issue visible, but does not remove the risk to the project's probability of success. Only active risk management and the necessary margin can increase this probability.
The only protection for irreducible uncertainty is margin, and the only protection for reducible uncertainty is active risk management. Both of these activities require careful planning and execution of the plan, along with an estimate of the probability of occurrence of each reducible event, the statistical distribution of the naturally occurring variances, and their probabilistic impact on the success of the project.
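To make the distinction concrete, here's a minimal Monte Carlo sketch. All the distributions, probabilities, and impacts are invented for illustration: the irreducible (aleatory) variance is modeled as a triangular duration, and a reducible (epistemic) risk as a probabilistic event that active risk management can buy down.

```python
import random

def project_duration(risk_retired: bool) -> float:
    """One Monte Carlo trial of a project's duration in days."""
    # Irreducible (aleatory) uncertainty: natural variance of the work,
    # modeled as a triangular distribution (low, high, most likely).
    duration = random.triangular(40, 90, 55)
    # Reducible (epistemic) uncertainty: a risk event with a probability
    # of occurrence and an impact. Risk management buys the probability down.
    p_risk, impact = (0.05, 10) if risk_retired else (0.30, 30)
    if random.random() < p_risk:
        duration += impact
    return duration

def percentile(samples, p):
    return sorted(samples)[int(len(samples) * p)]

for retired in (False, True):
    runs = [project_duration(retired) for _ in range(20_000)]
    label = "risk retired" if retired else "risk unhandled"
    print(f"{label}: 80% confidence completion = {percentile(runs, 0.80):.0f} days")

# Risk management shifts the epistemic term; margin (the gap between the
# most likely estimate and the 80th percentile) covers what remains.
```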
This is the Reason for Planning
It's been suggested I work in a unique domain, where deadlines and need dates are themselves unique. This is False.
No credible business, no matter its size, lacks a need date for the Value produced by the software project. If there were no need date, the developers would show up whenever they wanted, after spending whatever they wanted, with whatever they thought the customer needed.
Ignoring the simple time cost of money and the time-phased Return on Investment of (Value - Cost)/Cost, any business that intends to stay in business is spending money on software - either developed or purchased - to provide some value to the business. Not having a need date for the production of that Value means the business is ignoring the core principle of retained earnings. Even non-profits and not-for-profit businesses (and I've worked there as well) have a time value of money economic model.
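A toy calculation of why the need date matters to that time-phased ROI. The numbers, and the 1%-per-month cost of money, are invented for illustration:

```python
def roi(value: float, cost: float) -> float:
    """Return on Investment = (Value - Cost) / Cost."""
    return (value - cost) / cost

def delayed_value(value: float, months_late: int, monthly_rate: float = 0.01) -> float:
    """Discount the delivered value for late delivery (assumed 1%/month)."""
    return value / (1 + monthly_rate) ** months_late

cost, value = 400_000, 600_000
print(f"on time:       ROI = {roi(value, cost):.0%}")                    # 50%
print(f"6 months late: ROI = {roi(delayed_value(value, 6), cost):.0%}")  # ~41%
```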
So if you're going to produce value for your customer, that value is almost always time sensitive; otherwise it's of de minimis value. If it's time sensitive, there is a deadline. If there's a deadline, reducible and irreducible uncertainty and the risk they produce must be handled.
Risk Management is How Adults Manage Projects - Tim Lister
† The naive notion that scrum teams are self-contained and need no external support is only the case when there is little at risk for the resulting code. Cyber security, database integrity, performance validation, and operational integrity are external surveillance roles on any Software Intensive System of Systems. This is called Governance, guided by documents like ISO 12207, ITIL V3.1, COBIT, DoDAF, and TOGAF.
All successful projects adhere to these five immutable principles during the lifecycle of the design, development, deployment, and operation. These principles are independent of any project or program domain or context in that domain. They are also independent of any project management method or product development method, including agile.
They ask five questions that must have credible answers that establish the foundation for success. Without credible answers to these five questions, the project has little hope of success.
So if you hear some unsubstantiated conjecture like ... decisions can be made without estimating, ask how any or all of the five immutable principles can be met.
The Project Breathalyzer provides questions for the Program Manager to assess if the project is fit to be on the road. If the Program Manager cannot answer these questions about the current status, or answers in the negative, then the project is subject to a critical review.
This concept comes from the Software Program Managers Network and the work of Norm Brown in 1997. The SPMN was a non-profit organization, with funding canceled in 2002, but it is now for-profit. The questions came from the Airlie Software Council of experts after the failure of the Ariane 5 launch vehicle, attributed to a software design flaw.
Jurgen Appelo used the double pendulum on page 42 of his Management 3.0 book to explain the differences between simple, complicated, ordered, complex, and chaotic. He defines the term Chaotic as “very unpredictable.” Like many of Jurgen’s definitions, they are localized to suit the needs of the story line.
There is no universally accepted definition of chaos. But almost everyone would agree on the following ingredients:
Chaos is an aperiodic long-term behavior in a deterministic system that exhibits sensitive dependence on initial conditions.
In this context, the phrase aperiodic long-term behavior means that the motion does not settle down to a fixed point or a periodic orbit. Since the double pendulum loses energy to the environment, after some time the motion does become periodic and it eventually stops at a stationary fixed point. In this sense it is only the theoretical double pendulum without energy losses that would really be a chaotic system.
Please read that last sentence again. It is critical. As well, the “sensitivity” to initial conditions is a parametric measure in itself. The starting angle of the pendulum is one parameter. Low starting angles result in different “sensitivities” than larger starting angles. This is an exercise for undergraduate students in an introductory classical mechanics class in physics.
A deterministic system means that the system has no random or noisy inputs. The irregular behavior is intrinsic and arises from the system’s non-linearity rather than from any noisy driving forces.
Please read this last sentence again. It is critical to understanding the definitions needed to describe the behavior of the double pendulum.
Sensitive dependence on initial conditions means that nearby trajectories separate exponentially fast, i.e., two identical systems set up in the same way, such that the initial conditions are arbitrarily close together, will have their trajectories rapidly diverge. To make this more concrete, consider two trajectories, where at some time t the trajectories are at positions x(t) and x(t) + d(t). The statement of chaos is then that d(t) ~ d(0) exp[Lt], where the average value of L is called the Lyapunov exponent; if L is positive, the two trajectories are quickly separating from each other.
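For the curious, here's a minimal numerical sketch of that sensitive dependence, using the standard equations of motion for the equal-mass, equal-length double pendulum. Two pendulums started 10^-9 radians apart diverge exponentially; the slope of log separation versus time approximates the largest Lyapunov exponent:

```python
import numpy as np
from scipy.integrate import solve_ivp

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0

def deriv(t, y):
    """Standard equations of motion for the frictionless double pendulum."""
    th1, w1, th2, w2 = y
    d = th1 - th2
    den = 2 * M1 + M2 - M2 * np.cos(2 * d)
    dw1 = (-G * (2 * M1 + M2) * np.sin(th1)
           - M2 * G * np.sin(th1 - 2 * th2)
           - 2 * np.sin(d) * M2 * (w2**2 * L2 + w1**2 * L1 * np.cos(d))) / (L1 * den)
    dw2 = (2 * np.sin(d) * (w1**2 * L1 * (M1 + M2)
           + G * (M1 + M2) * np.cos(th1)
           + w2**2 * L2 * M2 * np.cos(d))) / (L2 * den)
    return [w1, dw1, w2, dw2]

t = np.linspace(0, 10, 1001)
y0 = [np.pi / 2, 0.0, np.pi / 2, 0.0]        # large starting angle: chaotic regime
eps = 1e-9                                   # "arbitrarily close" initial conditions
a = solve_ivp(deriv, (0, 10), y0, t_eval=t, rtol=1e-10, atol=1e-12)
b = solve_ivp(deriv, (0, 10), [y0[0] + eps] + y0[1:], t_eval=t, rtol=1e-10, atol=1e-12)

sep = np.abs(a.y[0] - b.y[0])                # d(t): separation of the trajectories
lam = np.polyfit(t[50:400], np.log(sep[50:400]), 1)[0]
print(f"estimated largest Lyapunov exponent ~ {lam:.1f} 1/s (positive => chaos)")
```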
Why is this an issue in the management of agile software projects? Good question.
Management 3.0 - and now #NoEstimates advocates - proffers a solution to the complex problem of managing the development of software. The book, while providing advice to managers on how to manage, mixes pseudo-scientific references and concepts - like the double pendulum - in support of essentially sound staffing and personnel management. I came to the book through Jurgen himself. But on first reading I ran straight into what seemed like a collection of ideas that have no actual basis in fact. The double pendulum is just one example of this approach.
So here's the fix for these conjectures. There's a paper, "Distilling Free-Form Natural Laws from Experimental Data," Michael Schmidt and Hod Lipson, Science, Vol. 324, 3 April 2009, showing not only the equations of motion for the double pendulum, but a machine that can deduce these equations by observing the double pendulum in motion.
Here's the core problem. When we can't get the analogies right, what else isn't right in the foundational principles proposed by those suggesting we can't operate in the presence of uncertainty? If those analogies miss the mark on their underlying principles, are the other suggested approaches equally flawed? Maybe, maybe not, but for someone like me, trained and experienced in the application of approaches to solving complex problems, many of the fundamental approaches used in the book are simply muddled thinking. It's too bad. A good editor, with experience in the analogies Jurgen uses, could have established that they are just notional analogies, or possibly just anecdotal experiences. Instead, Jurgen states them as the foundations of the principles of Management 3.0. In the same way, the original posters of #NoEstimates state their case that decisions CAN be made without estimating, when in fact that violates microeconomics, managerial finance, and several other principles.
And of course, this plays directly into the #NoEstimates conjectures, based on even less credibility than Jurgen's management processes - minus the ill-formed analogies.
There is no principle stated to date by the advocates of #NoEstimates that supports the conjecture that decisions can be made in the presence of uncertainty without estimating the impact of those decisions on the business.
When we mean to build, we first survey the plot, then draw the model; and when we see the figure of the house, then must we rate the cost of the erection, which if we find outweighs ability, what do we then but draw anew the model in fewer offices, or at least desist to build at all?
- Lord Bardolph, Henry IV, Part 2, Act I, Scene 3
The goal of managing other people's money when building a product or providing a service is to plan and coordinate the needed work activities to deliver a satisfactory outcome, or complete an enterprise endeavor, within the constraints of schedule, budget, resources, infrastructure, and available technology.
The intellectual content of the disciplines of engineering, business and technical management, risk management, and program controls is oriented on the components and is value neutral. No matter the outcome, the processes are the same or similar. The value produced by these efforts is independent of the means to produce it. Once it is delivered, the consumer of this value cares little how it arrived. That consumer didn't buy the process used to produce that value; they bought the value. When these are confused, the notion of focusing on value is perverted into focusing on those spending the money rather than on those providing the money.
The underlying principles of these disciplines are focused inside the boundaries of that system. The resulting value is focused outside the system of its production.
Project success depends on the integration of the activities below. The primary role of the processes below is to guide the value-producing activities to …
Design the Programmatic Process to support the Technical Project Engineering activities to Increase the Probability of Project Success.
 Systems Engineering Body of Knowledge, Vol. 5
A 2013 webinar at the Cyber Security & Information Systems Information Analysis Center presented some Immutable Laws of Software Development. These are worth repeating every time there is a suggestion that some method or another, or some new and untested idea, is put forth that will increase productivity by 10X or increase your profitability by NOT doing core business processes.
Here's the list presented in the webinar, dedicated to Watts Humphrey, who said all these in the past. For each Immutable Law, I've made a suggestion on how to avoid the undesirable outcome.
It's common these days to re-purpose a quote or a platitude from one domain into another and assume it's applicable to the second domain. My favorite recent one is
"Layers of redundancy are the central risk management property of natural systems” - Taleb
Taleb is the author of The Black Swan, about long-tailed statistical processes in the financial domain. These Black Swans tend to bite you when you least expect it. Are there Black Swans in the software development domain? Only if you're not looking. Financial systems are rarely engineered to perform in specific ways. Software systems are - at least where I work, and I suspect everywhere someone is paying money for the system to be developed or acquired.
So let's look at the Taleb quote that is often re-quoted by agile people and especially those advocating no estimates.
First some full disclosure. One of my graduate degrees is in Systems Management, which is a combination of Systems Engineering and Finance. As well, I work with systems engineers and support systems engineering processes in the aerospace and defense domain. So I'm predisposed to view the work through the eyes of Systems Engineering. Everything is a System is a good starting point for what we do.
Now let's look at the Taleb quote through the eyes of Systems Engineering and the software systems that are engineered in the domain where we work. There are many kinds of redundancy found in our systems. To avoid falling victim to the platitudes that abound in the agile and #NoEstimates domains, let's start with a framing assumption.
Redundancy provides resiliency to the system to withstand disruption within acceptable degradation parameters and to recover within an acceptable time and composite costs and risks.
In Taleb's (financial trading systems) domain, resilience is desirable, as it is in software intensive systems: software systems that fly the airliner you ride on, manage the trains, process credit card transactions, control air traffic, and manage the moving parts of your car. Any system where software is the dominant component for the proper functioning of the product or service also requires resiliency.
But redundancy is not the only way to do this. And many times redundancy is very expensive and creates less resiliency. - Foolproof: Why Safety Can Be Dangerous and How Danger Makes Us Safe, Greg Ip, Little Brown, 2015
There are rules for assessing the resiliency that results from approaches beyond just redundancy. There are many other system design aspects that provide resiliency.
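As a back-of-the-envelope illustration (the reliability numbers and the simple beta-factor model are assumptions, not a design rule): independent redundancy raises reliability quickly, but any common-cause failure fraction caps the benefit, which is one reason redundancy can be expensive while buying less resiliency than expected.

```python
def parallel_reliability(r: float, n: int, beta: float = 0.0) -> float:
    """Reliability of n redundant channels, each with reliability r.

    beta is the fraction of channel failures that are common-cause
    (a simple beta-factor model); beta=0 means fully independent channels.
    """
    q = 1.0 - r
    independent = 1.0 - ((1.0 - beta) * q) ** n   # fails only if all n fail
    common_cause = 1.0 - beta * q                  # one shared failure mode
    return independent * common_cause

for n in (1, 2, 3):
    print(f"n={n}: independent={parallel_reliability(0.99, n):.6f}, "
          f"10% common-cause={parallel_reliability(0.99, n, beta=0.10):.6f}")
```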
This notion of margin is absent from Agile development. And the result is that when things go wrong, you're late, over budget, and the product doesn't work. To have margin we must be able to estimate how much margin. Too much margin is a waste. Too little margin will not protect the system from disruption.
So when we hear a platitude like Layers of redundancy are the central risk management property of natural systems, ask what kind of redundancy, and what kind of fault handling and response processes. In fact, ask first: is that quote, used as a platitude, even applicable in the domain of interest? Or is it just a phrase picked up and repeated with little or no understanding of the principles, practices, or processes to which it CAN be applied?
 The Theory and Practice of Reliable System Design, Daniel Siewiorek and Robert Swarz
Phillip Armour has a classic article in CACM titled "Ten Unmyths of Project Estimation," Communications of the ACM (CACM), November 2002, Vol 45, No 11. Several of these Unmyths are applicable to the current #NoEstimates concept. Much of the misinformation about how estimating is the smell of dysfunction can be traced to these unmyths.
Mythology is not a lie ... it is metaphorical. It has been well said that mythology is the penultimate truth - Joseph Campbell, The Power of Myth
Using Campbell's quote, myths are not untrue. They are an essential truth, but wrapped in anecdotes that are not literally true. In our software development domain a myth is a truth that seems to be untrue. This is Armour's origin of the unmyth.
The unmyth is something that seems to be true but is actually false.
Let's look at the three core conjectures of the #NoEstimates paradigm:
The Accuracy Myth
Estimates are not numeric values; they are probability distributions. If the Probability Distribution below represents the probability of the duration of a project, there is a finite minimum - a duration below which the project cannot be completed.
There is a highest-probability, or Most Likely, duration for the project. This is the Mode of the distribution. There is a midpoint in the distribution, the Median - the value with half the possible completion times below it and half above. Then there is the Mean of the distribution. This is the average of all the possible completion times. And of course The Flaw of Averages is in effect for any decisions being made on this average value †
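A short demonstration of the Flaw of Averages, with invented numbers: two parallel tasks each average 10 days, yet the project, which finishes when the slower of the two finishes, averages noticeably more than 10 days.

```python
import random

N = 100_000
# Two parallel tasks, each uniform between 5 and 15 days (mean = 10 days).
# The project completes when the LAST one finishes.
runs = [max(random.uniform(5, 15), random.uniform(5, 15)) for _ in range(N)]
print(f"average project duration = {sum(runs) / N:.1f} days")  # ~11.7, not 10
```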
“It is moronic to predict without first establishing an error rate for a prediction and keeping track of one’s past record of accuracy” — Nassim Nicholas Taleb, Fooled By Randomness
If we want to answer the question What is the probability of completing ON OR BEFORE a specific date, we can look at the Cumulative Distribution Function (CDF) of the Probability Distribution Function (PDF). In the chart below the PDF has the earliest finish in mid-September 2014 and the latest finish early November 2014.
The 50% probability is 23 September 2014. In most of our work, we seek an 80% confidence level of completing ON OR BEFORE the need date.
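Here's a sketch of reading those numbers off a simulated distribution. The lognormal parameters are invented to produce the right-skewed shape described above:

```python
import numpy as np

rng = np.random.default_rng(1)
# Right-skewed completion durations (days from project start): lognormal,
# so there is a finite minimum and a long tail to the right.
durations = rng.lognormal(mean=np.log(30), sigma=0.25, size=100_000)

p50, p80 = np.percentile(durations, [50, 80])
print(f"P50 (median) finish: day {p50:.0f}")
print(f"P80 (our confidence target): day {p80:.0f}")
print(f"P(finish ON OR BEFORE day 35) = {(durations <= 35).mean():.0%}")
print(f"schedule margin protecting the P80 date: {p80 - p50:.0f} days")
```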
The project then MUST have schedule, cost, and technical margin to protect that probabilistic date.
How much margin is another topic.
But projects without margin are late, over budget, and likely don't work on day one. You can't complain about poor project performance if you don't have margin, risk management, and a plan for managing both, as well as the technical processes.
So what we need is not Accurate estimates; we need Useful estimates. The usefulness of an estimate is the degree to which it helps make optimal business decisions. The process of estimating is Buying Information. The value of an estimate, like all value, is determined against the cost to obtain that information. The value of the estimate is the opportunity cost: the difference between the business decision made with the estimate and the business decision made without it. ‡
Anyone suggesting that simple serial work streams can be accurately forecast - an estimate of the completion time - MUST read Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects using Monte Carlo Simulation, Troy Magennis.
In this book are the answers to all the questions those in the #NoEstimates camp say can't be answered.
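In the spirit of Magennis's approach, here is a minimal throughput-sampling forecast. The historical weekly throughput and backlog size are invented; real use would draw on the team's actual data:

```python
import random

weekly_throughput = [3, 5, 2, 6, 4, 3, 7, 4]   # observed stories finished per week
backlog = 60                                    # stories remaining

def one_trial() -> int:
    """Resample history until the backlog is done; return weeks taken."""
    done = weeks = 0
    while done < backlog:
        done += random.choice(weekly_throughput)
        weeks += 1
    return weeks

runs = sorted(one_trial() for _ in range(10_000))
p50, p80 = runs[len(runs) // 2], runs[int(len(runs) * 0.8)]
print(f"P50 = {p50} weeks, P80 = {p80} weeks")
# The output is a probability distribution - an estimate - whatever it's called.
```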
The Accuracy Answer
But remember, making estimates is how you make business decisions with opportunity costs. Those opportunity costs are the basis of Microeconomics and Managerial Finance.
Cone of Uncertainty and Accuracy of Estimating
There is a popular myth that the Cone of Uncertainty prevents us from making accurate estimates. We now know we need useful estimates, and those are not prevented by the cone of uncertainty. Here's the guidance we use on our Software Intensive Systems projects.
Finally, in the estimate accuracy discussion comes the cost estimate. The chart below shows how cost is driven by the probabilistic elements of the project. Which brings us back to the fundamental principle that all project work is probabilistic. Modeling the cost, schedule, and probability of technical success is mandatory on any non-trivial project. By non-trivial I mean anything that is not a de minimis project - one where, if we're off by a lot, it doesn't really matter to those paying.
The Commitment Unmyth
So now to the big bugaboo of #NoEstimates. Estimates are evil, because they are taken as commitments by management. They're taken as commitments by Bad Management: uninformed management, management that was asleep in the high school probability and statistics class, management that claims to have a business degree but never took the business statistics class.
So let's clear something up:
Commitment is how Business Works
Here's an example taken directly from ‡
Estimation is a technical activity of assembling technical information about a specific situation to create hypothetical scenarios that (we hope) support a business decision. Making a commitment based on these scenarios is a business function.
The Technical “Estimation” decisions include:
This kind of information allows us to calculate the amount of time we should allow to get there.
The Business “Commitment” and Risk decisions include:
These are the business consequences that determine how much risk we can afford to take.
Along with these of course is the risk associated with the uncertainty in the decisions. So estimating is also Risk Management and Risk Management is management in the presence of uncertainty. And the now familiar presentation from this blog.
Risk Management is how Adults manage projects - Tim Lister. Risk management is managing in the presence of uncertainty. All project work is probabilistic and creates uncertainty. Making decisions in the presence of uncertainty requires - mandates actually - making estimates (otherwise you're guessing, pulling numbers from the rectal database). So if we're going to have an Adult conversation about managing in the presence of uncertainty, it's going to be around estimating: making estimates, improving estimates, making estimates valuable to the decision makers.
Estimates are how business works - exploring for alternatives means willfully ignoring the needs of business. Proceed at your own risk
† This average notion is common in the #NoEstimates community: take all the past stories or story points, find the average value, and use that for future values. That is a serious error in statistical thinking, since without the variance being acceptable, that average can be wildly off from the actual future outcomes of the project.
‡ Unmythology and the Science of Estimation, Corvus International, Inc., Chicago Software Process Improvement Network, C-Spin, October 23, 2013
As far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it. - Andreas Osiander's (editor) preface to De Revolutionibus, Copernicus, in To Explain the World: The Discovery of Modern Science, Steven Weinberg
In the realm of project, product, and business management we come across nearly endless ideas conjecturing to solve some problem or another.
Replace the word Astronomy with whatever word is used by those conjecturing that a solution will fix some unnamed problem.
From removing the smell of dysfunction, to increasing productivity by 10 times, to removing the need to have any governance frameworks, to making decisions in the presence of uncertainty without the need to know the impacts of those decisions.
In the absence of any hypothesis by which to test those conjectures, leaving a greater fool than when entering is the likely result. In the absence of a testable hypothesis, any conjecture is an unsubstantiated anecdotal opinion.
An anecdote is a sample of one from an unknown population
And that makes those conjectures doubly useless: not only can they not be tested, they are likely applicable only to those making the conjectures.
If we are ever to discover new and innovative ways to increase the probability of success for our project work, we need to move far away from conjecture, anecdote, and untestable ideas, and toward evidence-based assessment of the problem, the proposed solutions, and the evidence that the proposed correction will in fact result in improvement.
One Final Note
As a first-year grad student in Physics, I learned a critical concept that is missing from much of the conversation around process improvement. When an idea is put forward in the science and engineering world, the very first thing to do is a literature search.
Without some way to assess the credibility of an idea - through replication or assessment against a baseline (governance framework, accounting rules, regulations) - the idea is just an opinion. And as Daniel Patrick Moynihan said:
Everyone is entitled to his own opinion, but not his own facts.
and of course my favorite
Again and again and again — what are the facts? Shun wishful thinking, ignore divine revelation, forget what "the stars foretell," avoid opinion, care not what the neighbors think, never mind the unguessable "verdict of history" — what are the facts, and to how many decimal places? You pilot always into an unknown future; facts are your single clue. Get the facts! - Robert Heinlein (1978)
There's a common notion in some agile circles that projects aren't the right vehicle for developing products. This is usually expressed by Agile Coaches. As a business manager applying Agile to develop products, as well as delivering Operational Services based on those products, projects are how we account for the expenditures of those outcomes and coordinate the needed resources to produce products as planned.
In our software product business, we use both a Product Manager and a Project Manager. These roles are separate and at the same time overlapping.
Product Managers focus on Markets. What features are the market segments demanding? What features Must Ship, and what features can we drop? What are the sales impacts of any slipped dates?
Project Managers are inwardly focused on the resource allocation and management of the development teams. How can we get the work done to meet the market demand? When can we ship the product to maintain the sales forecast?
In very small companies and startups these roles are usually performed by the same person.
Once we move beyond the sole proprietor and his friends, separation of concerns takes over. These roles become distinct.
Products are about What and Why. Projects are about Who, How, When, and Where (from Rudyard Kipling's Six Trusted Friends).
Product Management focuses on the overall product vision - usually documented in a Product Roadmap, showing the release cycles of capabilities and features as a function of time. Project Management is about logistics, schedule, planning, staffing, and work management to produce products in accordance with the Road Map.
When agile says it's customer focused, this is true only when there is One customer for the Product, rather than a Market for the Product, and that customer is on site. That would not be a very robust product company if it had only one customer.
When we hear Products are not Projects, ask in what domain, business size, and value at risk is it possible not to separate these concerns between Products and Projects?
When confronted with making decisions on software projects in the presence of uncertainty, we can turn to an established and well tested set of principles found in Software Engineering Economics.
First a definition from Guide to the Systems Engineering Body of Knowledge (SEBoK)
Software Engineering Economics is concerned with making decisions within the business context to align technical decisions with the business goals of an organization. Topics covered include fundamentals of software engineering economics (proposals, cash flow, the time-value of money, planning horizons, inflation, depreciation, replacement and retirement decisions); not for-profit decision-making (cost-benefit analysis, optimization analysis); estimation, economic risk and uncertainty (estimation techniques, decisions under risk and uncertainty); and multiple attribute decision making (value and measurement scales, compensatory and non-compensatory techniques).
Engineering Economics is one of the Knowledge Areas for educational requirements in Software Engineering defined by INCOSE, along with Computing Foundations, Mathematical Foundations, and Engineering Foundations.
A critical success factor for all software development is to model the system under development as a holistic, value-providing entity - an approach that has been gaining recognition as a central process of systems engineering. The use of modeling and simulation during the early stages of the system design of complex systems and architectures can:
The process above can be performed in any lifecycle duration, from the formal top-down INCOSE VEE to Agile software development. The process rhythm is independent of the principles.
This is a critical communication factor - the separation of Principles, Practices, and Processes establishes the basis for comparing these Principles, Practices, and Processes across a broad spectrum of domains, governance models, methods, and experiences. Without a shared set of Principles, it's hard to have a conversation.
Developing products or services with other people's money means we need a paradigm to guide our activities. Since we are spending other people's money, the economics of that process is guided by Engineering Economics.
Engineering economic analysis concerns techniques and methods that estimate output and evaluate the worth of products and services relative to their costs. (We can't determine the value of our efforts, without knowing the cost to produce that value) Engineering economic analysis is used to evaluate system affordability. Fundamental to this knowledge area are value and utility, classification of cost, time value of money and depreciation. These are used to perform cash flow analysis, financial decision making, replacement analysis, break-even and minimum cost analysis, accounting and cost accounting. Additionally, this area involves decision making involving risk and uncertainty and estimating economic elements. [SEBoK, 2015]
The Microeconomic aspects of the decision-making process are guided by the principles of making decisions regarding the allocation of limited resources. In software development we always have limited resources - time, money, staff, facilities, and the performance limitations of software and hardware.
If we are going to increase the probability of success for software development projects we need to understand how to manage in the presence of the uncertainty surrounding time, money, staff, facilities, performance of products and services and all the other probabilistic attributes of our work.
To make decisions in the presence of these uncertainties, we need to make estimates about the impacts of those decisions. This is an unavoidable consequence of how the decision making process works.
The opportunity cost of any decision between two or more choices means there is a cost for NOT choosing one or more of the available choices. This is the basis of the microeconomics of decision making. What's the cost of NOT selecting an alternative?
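A minimal worked example, with invented cash flows and an assumed 10% discount rate: choosing alternative A means forgoing B's value, and neither side of that comparison exists without an estimate.

```python
def npv(cashflows, rate=0.10):
    """Net present value of yearly cash flows (year 0 first, assumed 10% rate)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# ESTIMATED cash flows for two competing uses of the same limited resources.
a = npv([-500_000, 200_000, 250_000, 300_000])
b = npv([-500_000, 300_000, 250_000, 150_000])

choice, forgone = ("A", b) if a >= b else ("B", a)
print(f"choose {choice}; opportunity cost = NPV of the forgone alternative "
      f"= ${forgone:,.0f}")
```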
So when it is conjectured we can make a decision in the presence of uncertainty without estimating the impact of that decision, it's simply NOT true.
That notion violates the principle of Microeconomics
On the way home last week from a program managers conference, I was listening to Bob Dylan's Idiot Wind.
Everything's a little upside down, matter of fact the wheels have stopped. What’s good is bad, what’s bad is good. Idiot Wind, Bob Dylan, Blood on the Tracks, Copyright 1978
Reminds me of the current discourse of #NoEstimates.
The more those in the #NoEstimates community try to convince others that Estimating is Bad, can't be done, and results in a smell of dysfunction, the more Bob Dylan resonates.
We’re idiots, babe
It’s a wonder we can even feed ourselves
We in the management-of-other-people's-money domain must be, since we must have missed the suspension of the Microeconomics of Software Development when making decisions. We must have missed the suspension of Managerial Finance when we're asked to be stewards of the money our customers have given us to provide value for the needed cost on the needed date. We must have missed the suspension of the need to know when and how much, so our Time-Phased Return on Investment doesn't get a divide-by-zero error.
There is no good way to perform a software cost‐benefit analysis, breakeven analysis, or make‐or‐buy analysis without some reasonably accurate method of estimating software costs and their sensitivity to various product, project, and environmental factors. ‐ Dr. Barry Boehm
The previous post on Source Lines of Code set off a firestorm from the proponents of #NoEstimates.
I'd rather not estimate than estimate with SLOC
or my favorite, since we work in the domains of flight avionics (command and data handling (C&DH) and guidance navigation and control (GN&C)), fire control systems, fault tolerant process control and the diagnostic coverage needed for process safety management, and ground data and business process systems for both aircraft and spacecraft.
I'm no longer going to fly with any company that counts LOC as (it) shows a lack of intelligence. †
So the question is: where and when is estimating the source lines of code useful for making business decisions?
Embedded Software Intensive Systems
In the embedded systems business, memory is fixed, and processor speed is hardwired and many times limited by the thermal control process. Aircraft and spacecraft avionics bays have limited cooling, so getting a faster processor has repercussions beyond its cost. In an aircraft, cooling must be added, increasing weight and possibly impacting the center of gravity. In a spacecraft, cooling is not done with fans and moving air. There is no air. Heat pipes and radiators are needed, again adding weight.
For those whose experience is the rapid development of small chunks of code that get released often to the customer for incremental use in the business process, which then provides feedback for the next sliced piece of functionality, being concerned about the center of gravity, thermal load, or the realtime critical path of the executing code - so it maintains the realtime closed-loop control algorithm and we don't crash into the end of the runway or onto the surface of a distant planet - is probably not in their vocabulary.
Business and Processing Systems
For terrestrial systems, even business processing systems, the number of lines of code has a direct impact on cost and schedule. Let's start with a source code security analyzer. Those whose skill is rapidly chunking out pieces of useful functionality aren't likely to be interested in running all their code through a security analyzer before even starting the compile and checkout process.
A source code security analyzer examines source code to detect and report weaknesses that can lead to security vulnerabilities.
They are one of the last lines of defense for eliminating software vulnerabilities during development or after deployment. Like all things mission critical, there is a specification: Source Code Security Analysis Tool Functional Specification Version 1.1, NIST Special Publication 500-268, February 2011, http://samate.nist.gov/docs/source_code_security_analysis_spec_SP500-268_v1.1.pdf
Development and Product Maintenance
A recent hands-on experience with the need to know the SLOC comes from a refactoring effort to remove all the reflection from a code base. For those not familiar with reflection: it provides objects that describe assemblies, modules, and types. Reflection dynamically creates an instance of a type, binds the type to an existing object, or gets the type from an existing object and invokes its methods or accesses its fields and properties. If you are using attributes in your code, reflection enables you to access them.
This is a clever way to build code in a rapidly changing requirements paradigm. A bit too clever in our high-performance transaction processing system.
In larger production transaction processing systems, it's a way to crater the performance of the code by searching for object types on every single call for the transaction.
Removing all the reflection code structures eliminated a huge percentage of the CPU time, memory requirements, and database performance impacts - along with separating all the DB logic into Stored Procedures - resulting in the decommissioning of large chunks of the server farm running a very large public health application.
How long is it going to take to refactor all this code? I know, let's make an estimate by counting the lines of code. Do a few conversions from the current design (reflection), count how long that took, divide the total lines of code (objects and their size) by that rate, and we have an Estimate to Complete. Add some margin, and we'll know approximately when the big pile of crappy code can get rid of the smell of running fat, slow, and error-prone.
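That estimate-to-complete arithmetic, as a sketch with hypothetical numbers:

```python
# Calibrate on a small pilot: refactor a few modules, record the effort.
pilot_sloc, pilot_hours = 4_200, 120            # hypothetical pilot numbers
total_sloc = 310_000                            # reflection-laden code base

rate = pilot_hours / pilot_sloc                 # hours per SLOC refactored
etc_hours = (total_sloc - pilot_sloc) * rate    # Estimate To Complete
etc_with_margin = etc_hours * 1.25              # assumed 25% margin for risk

print(f"ETC ~ {etc_hours:,.0f} hours; with margin ~ {etc_with_margin:,.0f} hours")
```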
High Performance Embedded Mission Systems
High Performance Embedded Systems are found everywhere. Current estimates show they outnumber desktop and server systems 100 to 1. Most of these systems have ZERO defect goals, as well as ZERO tolerance for performance shortfalls, processing disruptions, and other reset conditions.
How do we have any sense that the code base is capable of meeting these conditions? Testing, of course, is one way. But exhaustive testing is simply not possible. In a past life, verification and validation of the code was the method - and it is still the method. Along with that comes the cyclomatic complexity assessment of the code base - another activity not likely to be of much interest to those producing the small chunks of sliced code to rapidly satisfy the customer's emerging, and possibly unknowable, needs until they see it working.
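A rough sketch of the idea behind cyclomatic complexity - counting independent decision paths through the code. Real tools do this far more carefully; this approximation just counts Python decision points:

```python
import ast

# Node types that add an independent path through the code (approximate).
DECISIONS = (ast.If, ast.For, ast.While, ast.IfExp,
             ast.ExceptHandler, ast.BoolOp, ast.comprehension)

def cyclomatic_complexity(source: str) -> int:
    """McCabe-style complexity ~ number of decision points + 1."""
    tree = ast.parse(source)
    return sum(isinstance(n, DECISIONS) for n in ast.walk(tree)) + 1

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for d in range(10):
        if d == x and x % 2 == 0:
            return "small even"
    return "other"
"""
print(cyclomatic_complexity(sample))   # 5 = two ifs + one 'and' + one loop + 1
```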
So In The End
Unless we suspend the principles of Microeconomics and Managerial Finance when making management decisions in the presence of uncertainty, we're going to need to estimate the outcomes of our decisions.
This process is the basis of opportunity cost - that is, what is the cost of one decision over some others? If I make Decision A, what is the cost of NOT making Decision B or C? This LOST opportunity is the cost of choice.
Unless we suspend the principles of probability and statistics when applied to networks of interrelated work, we're not going to be able to make decisions without making estimates.
In the four examples above, from direct hands-on experience, Source Lines of Code are a good proxy for making estimates about cost and schedule, as well as about the complexity of the code base when computing the inherent reliability and fault tolerance of the applications embedded in the software on which our daily lives depend: from flight controls in aircraft, to process control loops in everything under computer control, including the computers themselves, to the assurance that the code we write is secure and will behave as needed.
If you hear some unsubstantiated claim that SLOC is of no use in estimating future outcomes, ask: when you were working on a system where failure is not an option, did those paying for that system tell you they didn't need to estimate the outcomes of their decisions? Haven't worked in that environment? You may want to do some exploring of your own to see some of the many ways estimates are made, and how SLOC is one of those, in Software Intensive Systems Cost and Schedule Estimating. (This document is an example of how SLOC is used in systems that are sensitive to size and performance based on the size of the code base. So take a read and possibly see something you may not have encountered before. It may not be your domain, but embedded systems outnumber desktop and server side systems 100 to 1.)
One final thought about Software Intensive Systems and their impact on larger software development processes is the introduction of Agile Development in these domains. Progress is being made in the integration of Agile with large systems acquisition processes. Here's a recent briefing in a domain where systems are engineered. Systems we depend on to work as specified every single time.
† It's going to be a long walk for the poster of that nonsense idea. Oh yeah, those building Positive Train Control are also realtime embedded systems developers, and they use SLOC to estimate timing, testing, complexity, and many other ...ilities. Same with auto manufacturers. Maybe the Nike shoe company doesn't. So enjoy the walk. And BTW, that OP deleted his post, but worry not, I got a screen capture.
In our domain, Jon Katzenbach's definition of a team informs how we interact with our project members. A Team is defined as ...
A group of qualified individuals who hold each other mutually accountable for a shared outcome - Katzenbach, Wisdom of Teams
It has been suggested that ...
The Estimate-Commitment relationship stands in opposition to collaboration. It works against collaboration. It supports conflict, not teamwork.
This position is counter to our Katzenbach-based teaming processes. The conjecture that estimates work against collaboration, rather than for it, removes the mutual accountability condition for team success.
This is like speaking with our builder about the bedroom remodel project and him saying...
Oh here's my estimate to complete your bedroom remodel, but I have no intention of meeting that estimate.
Where we work, Estimates provide clarity and understanding of the mutual accountability for the shared outcome between the group of qualified individuals.
Where we work, and apply Agile software development processes, we've adopted Seven Pillars of Program Success. We work hard, every day, to: †
Your domain of course will be different. You or your team may not work on projects that must succeed on or before the needed date, at or below the needed budget, with the needed capabilities. That is, you can show up late, over budget, and with missing capabilities, and the customer will consider that OK. And just to be clear, the value of incremental delivery is defined by the receiver of those capabilities, not the producer. Ask the customer if the partial outcomes can actually be put to productive use in the business environment. Capabilities Based Planning defines which capabilities are needed in what order to provide business value.
We show up late, over budget, and with missing capabilities many times, of course - so no need to point that out - but never without corrective actions attached. Any number of reports, including bogus reports, show this. But a critical understanding is that we know we're going to be late, we know we're going to be over budget, and most of the time we know the delivered capabilities will not meet the intended specifications - every reporting period - and we have a plan (maybe not the right plan) to fix it.
Risk Management is How Adults Manage Projects - Tim Lister
In our domain, being late, over budget, and delivering less than the required capabilities is never acceptable to the customer. Are we late, over budget, and having performance issues? Of course. It's called development. But we know it, have visibility into the root causes, and have corrective action plans. This visibility is part of the process. Without a steering target and actuals, no error signal can be generated to be used for course correction. One of our PMs was a Navy navigator on an aircraft carrier. The commanded heading was required for him to carry out his navigation processes. Without estimates of the impediments to be encountered along the course to the desired destination, and of the productivity of progress against the effort needed to make that progress, there is no way to know which path to take to that destination. By the way, measuring past performance and projecting it as future performance only works if the future conditions are like the past conditions. This is rarely the case on any sufficiently complex project.
Yogi Berra reminds us — If you don't know where you are going, you'll end up someplace else.
This poor performance is actually reported in a database for review every reporting period (minimally monthly) and used to adjust award fees and the assessment for the next job, which significantly impacts the selection process. This is called Closed Loop Control.
When there are no Estimates to Complete (ETC) or Estimates At Completion (EAC), there is an Open Loop Control condition, and the corrective actions needed (but not always effective) have no steering target, with variance to steer against, to move the project back to GREEN.
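The standard Earned Value arithmetic behind those steering targets, sketched with illustrative numbers:

```python
def evm_forecast(bac: float, ev: float, ac: float):
    """Standard CPI-based Earned Value forecasting formulas."""
    cpi = ev / ac          # Cost Performance Index: value earned per dollar spent
    eac = bac / cpi        # Estimate At Completion
    etc = eac - ac         # Estimate To Complete: the steering target
    return cpi, eac, etc

# Illustrative numbers: $1M budget, $300K of planned work earned, $400K spent.
cpi, eac, etc = evm_forecast(bac=1_000_000, ev=300_000, ac=400_000)
print(f"CPI = {cpi:.2f}, EAC = ${eac:,.0f}, ETC = ${etc:,.0f}")
# CPI < 1.0 is the error signal; without EV and AC estimates, the loop is open.
```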
So estimates don't stand in the way of cooperation; they are the foundation of the mutual accountability for the shared outcome that is based on cooperation.
† These seven pillars are derived from VADM Joseph Wendell Dyer, USN (Retired), the Navy's chief test pilot, F/A-18E/F Program Manager, and Commander, Naval Air Systems Command, plus ten years as an executive at iRobot Corporation. Many of our projects are not VADM Dyer's, but they are still mission critical, manifestly important to our customers' business success. If they were to fail - cost too much, show up beyond the business need date, or not provide the needed capabilities - the success of the business is in jeopardy. Again, your domain may be significantly different. Use as appropriate.
The Cone of Uncertainty chart comes from the original work of Barry Boehm, "Reducing Estimation Uncertainty with Continuous Estimation: Assessment Tracking with 'Cone of Uncertainty.'" In this paper Dr. Boehm speaks to the lack of continuous updating of the estimates made early in the program as the source of unfavorable cost and schedule outcomes.
As long as the projects are not re-assessed or the estimations not re-visited, the cones of uncertainty are not effectively reduced.
The Cone of Uncertainty is a notional example of how to increase the accuracy and precision of software development estimates with continuous reassessment. For programs in the federal space subject to FAR 34.2 and DFARS 34.201, reporting the Estimate to Complete (ETC) and Estimate at Completion (EAC) is mandatory on a monthly basis. This is rarely done in the commercial world, with the expected results shown in Todd's chart for his data and DeMarco's data.
The core issue, from current Root Cause Analysis research at PARCA (http://www.acq.osd.mil/parca), where I have worked as a support contractor, is that many of the problems are poor estimates made when the program was baselined, and failure to update the ETC and EAC with credible information about risks and physical percent complete.
The data reported in Todd's original chart are the results of projects based on estimates that may or may not have been credible. So the analysis of the outcomes of the completed projects is Open Loop ...
... that is, the target estimates measured against the actual outcomes may or may not have been credible estimates. So showing project overages doesn't actually provide the information needed to correct this problem. The estimate may have been credible, but the execution failed to perform as planned.
With this Open Loop assessment it is difficult to determine any corrective actions. Todd's complete presentation, "Uncertainty Surrounding the Cone of Uncertainty," speaks to some of the possible root causes of the mismatch between Estimates and Actuals. As Todd mentions in his response, this was not the purpose of his chart. Rather, I'd suspect, it was just to show the existence of this gap.
The difficulty, however, is that pointing out observations of problems, while useful to confirm there is a problem, does little to correct the underlying cause of the problem.
At a recent ICEAA conference in San Diego, Dr. Boehm and several others spoke about this estimating problem. Several books and papers were presented addressing this issue.
Software Cost Estimation Metrics Manual, Bradford Clark and Raymond Madachy (Eds.)
The 2nd Edition of Probability Methods for Cost Uncertainty Analysis: A Systems Engineering Perspective, Paul R. Garvey, CRC Press.
Both these resources, and many more, speak to the Root Causes of both the estimating problem and the programmatic issues of staying on plan.
This is the Core Problem That Has To Be Addressed
We need both good estimates and good execution to arrive as planned. There is plenty of evidence that we have an estimating problem. Conferences (ICEAA and AACE) speak to these issues, as do government and FFRDC organizations (search for Root Cause Analysis at PARCA, IDA, MITRE, RAND, and SEI).
But the execution side is also a Root Cause. Much research has been done on procedures and processes for Keeping the Program Green - for example, the work presented at ICEAA in "The Cure for Cost and Schedule Growth," where more possible Root Causes are addressed from our research.
While Todd's chart shows the problem, the community - the cost and schedule community - is still struggling with the corrective action. The chart is ½ the story. The other ½ is the poor performance on the execution side, even IF we had a credible baseline to execute against.
To date, both sides of the problem are unsolved, and therefore we have Open Loop Control, with neither the proper steering target nor the proper control of the system to steer toward that target. Without corrections to estimating, planning, scheduling, and execution, there is little hope of improving the probability of success in the software development domain.
Using Todd's chart from the full presentation, the core question that remains unanswered in many domains is
How can we increase the credibility of the estimate to complete earlier in the program?
By the way, the term possible error comes from historical data. And like all How to Lie With Statistics charts, that historical data is self-selected: a specific domain, a classification of projects, and, most importantly, the maturity of the organization making the estimates and executing the program.
Much research has shown that the maturity of the acquirer influences the accuracy and precision of the estimates. Our poster child is Stardust, with on-time, on-budget, working outcomes due to both the government and contractor Program Managers' maturity in managing in the presence of uncertainty. Which is one of the sources of this material:
 Boehm, B., Software Engineering Economics, Prentice-Hall, 1981.
When there is a discussion around making improvements to anything, trouble starts when we don't have a shared understanding of the outcomes. For example, speculating that something can be done or that something should be stopped in pursuit of improvement has difficulty maintaining traction in the absence of a framework for that discussion.
The discussion falls into a he said, she said style, or an I'll tell you a story (anecdote) of how this worked for me and it'll work for you.
Over the years I've been trained to work on proposals, provide training materials, write guidance documents, and produce other outlets - podcasts, conference presentations - all designed to convey a new and sometimes controversial topic. Connecting agile and earned value management is the latest.
There are several guides that have formed the basis of my work. The critical success factor for this work is to move away from personal anecdotes - although those are many times used inside a broader context to make the message more personal. Rather, start with a framework for the message.
A good place to start is Cliff Atkinson's Beyond Bullet Points. It's not so much about making PowerPoint briefings as about the process of sorting through what you are trying to say. Version 1 of the book is my favorite, because it was simple and actually changed how we thought about communication. Here's a framework from Cliff's 1st edition.
So when we hear we're exploring or all we want is a conversation, and at the same time the suggestion - conjecture actually - that what we're talking about is a desire to change an existing paradigm, make some dysfunction go away, or take some corrective action, ask some important questions:
We've been doing this for 20 years and therefore you can as well
is a common phrase heard when asking: in what domain does your approach work? Of course, without a test of that idea outside the domain in which the anecdotal example is used, it's going to be hard to know if that idea is actually credible beyond those examples.
So if we hear we've been successful in our domain doing something or better yet NOT doing something, like say NOT estimating, ask in what domain have you been successful? Then the critical question, is there any evidence that the success in that domain is transferable to another domain? This briefing provides a framework - from my domain of aircraft development - illustrating that domains vary widely in their needs, constraints, governance processes and applicable and effective approaches to delivering value.
When we hear words about any topic - my favorite of course is all things project management - hearing them doesn't make them true.
So when we hear some phrase, idea, or conjecture - ask for evidence. Ask for the domain. Ask for examples. If you hear we're just exploring, ask who's paying for that exploration. Because it is likely those words are unsubstantiated conjecture from personal experience, and not likely very useful outside that personal experience.
I've been working all week on a release of a critical set of project capabilities and need a break from that. This post will be somewhat scattered, as I'm writing it in the lobby to get some fresh air.
Here's the post asking for a conversation about estimates. Here's a long response to that request.
Let's ignore the term FACT for the moment as untestable and see how to arrive at some answers for each statement. These answers are from a paradigm of Software Intensive Systems, where Microeconomics of decision-making is the core paradigm used to make decisions, based on Risk and Opportunity Costs from those decisions.
So if the OP is actually interested in moving away from the known problem of using estimates in a dysfunctional way, let's stop speaking about how to make decisions without estimates, and learn how to make the good estimates needed for good decisions.
This issue of Harvard Business Review is dedicated to Make Better Decisions. Start with reading how to make good decisions. There is a wealth of guidance on how to do that. Why use Dilbert-style management examples? We all know about those. How about some actionable corrective actions to the root causes of bad management, all backed up with data beyond personal anecdotes? Reminds me of early XP, where just try it was pretty much the approach. So if the OP is really interested in...
Let’s use our collective influence and intelligence to take the discussion forward to how we can cure the horrible cancer in our industry of Estimate = Date = Commitment.
Then there are nearly unlimited resources for doing that. The first is to call BS on the notion that decisions can be made without estimates without first stating where this is applicable. Acknowledge unequivocally that estimates are needed when the value at risk reaches a level deemed important by the owners of the money, and start acting like the professionals we want to be, gaining a seat at the table to improve the probability of success of our projects with credible estimates of cost, effort, risk, productivity, production of value, and all the other attributes of that success.
For those interested in exploring further how to provide value to those paying your salary, here are some posts on Estimating Books