Principles, Practices, and Processes to Increase Probability of Success
The Holy Grail of all Agile discussions goes like this ...
We focus on value over cost
This is a mantra repeated by agilists and vendors of agile tools as well. The big question is ...
What are the units of measure of Value?
The units of measure of Cost are dollars. When we hear We Focus on Value over Cost, in what units of measure are these two variables being compared to one another? A better question is: where is this Value being defined? And another: how is this Value being measured?
If we are going to move beyond the platitude of value over cost - which by the way is simply bad economics, since the ...
Value of something can't be determined unless you know the cost to acquire that value
but let's ignore this naïve concept for the moment. How can Value be defined and measured?
In our Software Intensive System of Systems world, Systems Engineering is the dominant paradigm for increasing the probability of program success. This is not the systems engineering of IT server administration. This is INCOSE Systems Engineering, defined as:
Systems Engineering is an interdisciplinary approach and means to enable the realization of successful systems.
Systems Engineering focuses on defining customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem:
Systems Engineering integrates all the disciplines and specialty groups into a team effort forming a development process that proceeds from concept to production to operation. Systems Engineering considers both the business and the technical needs of all customers with the goal of providing a quality product that meets the user needs.
In this paradigm there are two primary measures confirming that the product or service being produced satisfies the needs of those paying for the work:
So Now What?
We've got some high level definitions, but are no closer to the units of measure needed to compare Value with Cost.
The Measures of Effectiveness (MOE) are defined by the customer or user point of view. These are the customer's key indicators that the mission has been achieved in terms of performance, suitability, and affordability across the lifecycle of the product or service.
MOEs focus on the system's capabilities to achieve mission success within the total operational environment. MOEs represent the customer's most important evaluation and acceptance criteria.
If the customer doesn't know the Measures of Effectiveness in some form, to some level of confidence, the software project is on a Death March and no software development method is going to fix this problem.
The Measures of Performance state the attributes considered important to ensure that the system has the capability to achieve the operational objectives. MOPs are used to assess whether the system meets design or performance requirements that are necessary to satisfy the Measures of Effectiveness. MOPs are derived from or provide insight to the MOEs or other user needs.
If the customer doesn't know the Measures of Performance in some form, to some level of confidence, the software project is on a Death March and no software development method is going to fix that problem.
In the End
When we hear value over cost and don't have a unit of measure for Value it's just a platitude. When we hear value over cost and don't know the cost to achieve that Value, it's just a platitude.
So don't fall for the platitude approach to spending other people's money in the presence of uncertainty. Define the MOEs, MOPs, and the cost to achieve them.
With the MOEs and MOPs there is a third measure for the products or services that must also be connected with the cost of achieving them: Technical Performance Measures (TPM).
With all three (MOE, MOP, TPM), those paying for the work can monetize these measures to establish a common basis of comparison with the Cost to produce the value.
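To make the idea of monetizing these measures concrete, here is a minimal sketch; every measure name, dollar figure, and achievement fraction below is invented for illustration, not taken from any actual program:

```python
# Hypothetical sketch: monetize MOEs, MOPs, and TPMs so Value and Cost
# share a common unit of measure (dollars). All figures are invented.

measures = {
    # name: (dollar value to the business if achieved, fraction achieved)
    "MOE: orders processed per hour >= 10,000": (500_000, 0.90),
    "MOP: page response time <= 200 ms":        (150_000, 1.00),
    "TPM: throughput of 2,500 msgs/sec":        (100_000, 0.80),
}

cost_to_date = 400_000  # dollars spent so far

# Value and Cost are now in the same units and can be compared directly
monetized_value = sum(v * achieved for v, achieved in measures.values())
print(f"Monetized value: ${monetized_value:,.0f}")
print(f"Cost to date:    ${cost_to_date:,.0f}")
print(f"Value / Cost:    {monetized_value / cost_to_date:.2f}")
```

Only once both sides are in dollars does "value over cost" become a testable statement rather than a platitude.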
Until those conjecturing you should focus on Value over Cost can produce units of measure for that comparison, consider their statements as just platitudes with no actionable outcomes.
In the business of estimating in the presence of uncertainty, a useful tool is Bayesian analysis of what we know today to make forecasts or estimates of the future. The Bayesian approach to inference, as well as decision-making and forecasting, involves conditioning on what is known to make statements about what is not known.
Bayesian estimating considers the probability of some future outcome as a belief or an opinion. This is different from the frequentist approach of estimating, where it is assumed there is a long-run frequency of events. These events could be a cost, an expected completion date, or some performance parameter - a probability that some value will occur. The frequentist approach is useful when there are long-term frequencies of an occurrence. When that is not the case - for example in project work, which may be a unique undertaking - a Bayesian approach to estimating is called for.
Conditioning our decisions on what is known means making use of prior knowledge. In the project domain, this knowledge comes from past performance of the parameters of the project. These include cost, schedule, work capacity, technical performance, and other variables involved in the planning and execution of the work of the project.
This prior information is a distinguishing feature of the Bayesian approach to estimating the future. To use it, we first need to fully specify what is known and what is unknown about the past and the future. Then we use what is known to make probabilistic statements about what is unknown.
The Bayesian approach to estimating differs from the traditional - frequentist - approach in that it interprets probability as a measure of believability in an event. That is, how confident are we that the event will occur?
For project work Bayesian estimating asks what's the believability that this project will cost some amount or less. Or what's the believability that this project will complete on some date or before. This belief is based on prior information about the question. The assessment of the question is then a probability based on this prior condition.
This is stated as Bayes Theorem
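In standard notation the theorem reads:

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```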
Where P(A) and P(B) are the probabilities of A and B without regard to each other. P(A|B) is the conditional probability of observing event A, given that B is true. And P(B|A) is the conditional probability of observing event B, given that A is true.
For project work this can be very useful, given we have prior knowledge of some parameter's behavior and would like to know the probability of that parameter's behavior in the future.
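As a sketch of that usefulness - with every probability invented for illustration - here is Bayes' Theorem updating our belief that a project will finish on time after observing a missed sprint goal:

```python
# Hedged sketch with invented numbers: update the belief that a project
# will finish on time (event A) after observing a missed sprint goal
# (event B), using Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B).

p_on_time = 0.70             # prior P(A), from past project performance
p_miss_given_on_time = 0.20  # P(B|A): on-time projects still miss some sprints
p_miss_given_late = 0.60     # P(B|not A): late projects miss sprints more often

# Total probability of observing a missed sprint: P(B)
p_miss = (p_miss_given_on_time * p_on_time
          + p_miss_given_late * (1 - p_on_time))

# Posterior belief: P(A|B)
p_on_time_given_miss = p_miss_given_on_time * p_on_time / p_miss
print(f"Posterior P(on time | missed sprint) = {p_on_time_given_miss:.2f}")
```

With these invented priors, one missed sprint drops the believability of an on-time finish from 70% to about 44% - the prior is conditioned on the new evidence, not discarded.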
This is distinctly different from averaging past behavior and projecting the future from it. It is also distinctly different from assuming the future behavior will simply be like the past behavior. These two assumptions are of course seriously flawed, but at the same time often used in naive estimating or forecasting.
This Bayesian approach to forecasting or estimating future outcomes is also the basis of Markov Chain Monte Carlo simulation used in machine learning.
When faced with questions like when will we be done or how much will it cost when we are done - and these are normal everyday questions asked by any business that expects to stay in business - then Bayesian modeling can be useful. Along with frequentist modeling and standard Monte Carlo Simulation of the processes that drive the project.
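A minimal sketch of that standard Monte Carlo Simulation, using invented triangular duration ranges for three serial tasks - the kind of (min, most likely, max) ranges past performance provides:

```python
# Hedged sketch: a standard Monte Carlo simulation of project duration.
# Task durations in days are invented triangular distributions
# given as (min, most likely, max); the tasks run in series.

import random

random.seed(42)  # repeatable runs for illustration

tasks = [(5, 8, 15), (3, 5, 10), (8, 12, 20)]  # hypothetical tasks

def simulate_once():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

trials = sorted(simulate_once() for _ in range(10_000))

p50 = trials[len(trials) // 2]        # median completion
p80 = trials[int(len(trials) * 0.80)] # 80% confidence completion
print(f"P50 completion: {p50:.1f} days, P80 completion: {p80:.1f} days")
```

The answer to "when will we be done" is then a confidence level - P50, P80 - not a single date.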
A good starting place for the whole topic of estimating software development is ...
I've started writing more book reviews for ACM Computing Reviews this year. I'll start putting references to the reviews here. The current review is for
I started my career in Fortran doing signal processing in graduate school, for Mie scattering and processing of particle accelerator pictures from digitized film. This is a heavy-duty book for science and engineering developers.
“… software economics has often been misconceived as the means of estimating the cost of programming projects. But economics is primarily a science of choice, and software economics should provide methods and models for analyzing the choices that software projects must make” - Taming the Tiger: Software Engineering and Software Economics, Leon Levy
The estimates are needed to make those decisions. This is the basis of the Microeconomics of decision making in the presence of uncertainty. These estimates support all 5 Immutable Principles of Project Success.
Estimating is an informed assessment of an uncertain event. All project work operates in the presence of Uncertainty. Estimating means developing probabilistic confidence intervals for the possible outcomes of that work in the presence of uncertainty.
The numbers we use on projects come in two types
Story Points are Ordinal numbers; Stories are Cardinal numbers. Story Points are relative measures of effort. They are not duration or cost.
Story Points are arbitrary measures used by Scrum teams to determine the Relative (Ordinal) effort of the work. They tell the team how hard a story is, from its perceived complexity, risk, and unknowns – each related to effort. These Relative (Ordinal) measures are the antithesis of Business Management measures of work planning and accomplishment, which are in hours and their rated dollars for the direct labor needed to produce the outcome (assuming no material cost).
Story Points don't tell us the duration or cost of this relative effort. Story Points don't tell us the absolute effort to perform the work. They aren't normalized across work efforts, across teams, or across the program. Story Point effort estimates are not calibrated across the project, but rather are developed for the work at hand. The calibrated units of measure for Story Points can and will change as the program progresses.
Business operates in units of dollars and duration for the work needed to produce the needed capabilities in exchange for those dollars and time. Business does not operate in units of Story Points.
The killer question is what is a Story Point Worth to those paying for the work? Agile teams rarely produce comparable calibrated Story Points for dissimilar or even similar work. This is a key difference between Business estimates and Agile estimating. Most businesses have an external Basis of Estimate process to calibrate the cost and duration of planned work. Business teams working on different parts of the project, with different assessments of Effort, different story point values, and different project costs result in dissimilar units of measure for a Story Point.
When Agile teams have different approaches to applying Story Points, the physical effort can still be calculated for each team and rolled up to the Total Story Point count for the project, supporting a Physical Percent Complete for each Feature.
The program-level budget can be flowed down to the planned Work in the Product Backlog and connected with the Total Story Point count built bottom-up from the Agile planning process.
From there, all Physical Percent Complete calculations remain the same - units of Story Points and Dollars
With the proper application of Story Points at the agile estimating level, the Business can produce a Cardinal estimate of the cost of the work with some simple rules:
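A hedged sketch of one such rule set - all budget and Story Point figures invented - showing how the budget flow-down produces a Cardinal dollar estimate and a Physical Percent Complete:

```python
# Hedged sketch (all numbers invented): connect Story Points to dollars by
# flowing the budget down to the Product Backlog, then compute Physical
# Percent Complete from completed vs. planned Story Points.

budget = 1_200_000        # dollars planned for the backlog work
planned_points = 600      # Total Story Points, built bottom-up
completed_points = 150    # Story Points for work accepted as done

# Calibration holds for THIS project only - Story Points aren't
# normalized across teams or programs.
dollars_per_point = budget / planned_points

physical_pct_complete = completed_points / planned_points
earned_dollars = completed_points * dollars_per_point

print(f"Physical % complete: {physical_pct_complete:.0%}")
print(f"Earned value:        ${earned_dollars:,.0f}")
```

The calibration (dollars per point) must be re-derived as the backlog and the teams' point scales change over the program.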
Measuring The Project with Stories and Their Completion Rate
There is a conjecture that measuring Stories is better than measuring Story Points. Here's the simple answer to that conjecture.
This can only be valid if the Stories are statistically similar enough that their individual variances (range of actual effort versus estimated effort) across the collection of Stories are de minimis. That is, the Stories are statistically identical.
If this is not the case, then using Stories rather than Story Points as the measure of effort, and converting those measures into units meaningful to the business, is a fool's errand. It violates the principles of statistical process control, since the unit of measure of plan and progress itself has unaccounted-for statistical variance, and the data received is bogus.
If you don't have statistically identical relative efforts for all Stories, never use Stories; only use Story Points.
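A quick way to test "statistically identical" is the coefficient of variation of actual story efforts. This sketch uses invented effort data and a hypothetical 10% threshold:

```python
# Hedged sketch: before counting Stories instead of Story Points, check
# whether the Stories' actual efforts are statistically similar enough.
# The sample efforts (hours) and the 10% threshold are invented.

from statistics import mean, stdev

story_efforts = [14, 16, 15, 13, 17, 15, 14, 16]   # hypothetical actuals

cv = stdev(story_efforts) / mean(story_efforts)    # coefficient of variation
print(f"Coefficient of variation: {cv:.2f}")

# Hypothetical rule of thumb: only if variance is de minimis (say CV
# under ~10%) are the Stories close enough to "statistically identical"
# that a raw Story count is a meaningful measure of progress.
if cv < 0.10:
    print("Stories are similar enough to count Stories directly.")
else:
    print("Use Story Points; raw Story counts will mislead.")
```

The exact threshold is a judgment call for the project, but without some test like this, counting Stories is an unverified assumption.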
As well, a second caution is the false assumption that the future is always like the past. I got a book for Christmas - Foolproof, by Greg Ip - about our false belief that the future will be like the past. On any non-trivial project this is never the case, so making the assumption that all Stories are of the same size turns us into the fool in Foolproof.
Scrum + Engineering Practices: Experiences of Three Microsoft Teams, Laurie Williams, North Carolina State University; Gabe Brown, Adam Meltzer, Nachiappan Nagappan, Microsoft Corporation
I'm working on the edits to a National Defense Industrial Association guide to Earned Value Management on Agile programs, and also working two other programs where Agile is being applied to Earned Value Management programs, with flow-down for both agile (Scrum in all cases) and FAR 34.2 Earned Value Management Systems that will be validated by the Defense Contract Management Agency (DCMA) or the Cognizant Federal Agency (CFA) for compliance with EIA-748-C.
There are huge fights on the internet about what it means when we say X. Typically these come from solo contributors who have adopted their own localizations of terms - many times from the training they took or a book they read.
The trouble is, when writing guidance that will be referenced in policy documents, local definitions are not very useful. Add to that that the applicability of Agile on EVM programs means those programs are $20M or greater, and it will rarely be the case that there is a single Scrum team. Scrum-of-Scrums, Agile at Scale, or Large Scale Agile is the norm. So a standard set of definitions is needed to avoid chaos.
"When I use a word," Humpty Dumpty said in a rather scornful tone, "it means just what I choose it to mean - neither more nor less." "The question is," said Alice, "whether you can make words mean so many different things." "The question is," said Humpty Dumpty, "which is to be master." - Through the Looking Glass, Chapter 6
In the Agile paradigm, having structure and governance is considered not agile. Working programs with Earned Value Management for sovereigns requires structure and governance for the program controls processes and data, and for the words used in that program controls process. Why? Because it's public money, and you don't get to make up the definitions of things.
Here are some sample glossaries that provide candidate terms for agile:
When individuals or organizations define or, worse, redefine terms, it creates confusion at best and strong disagreement at worst.
There are misinformed notions floating around the agile community that products are better than projects. That somehow - unspecified and unsubstantiated as usual - projects are undesirable and we should focus on products and their value - again with no specification of how to measure that value, and no substantiation that value measurement isn't what projects do as well.
A Project is a temporary endeavor undertaken to create a unique product or service. There is a definition (possibly emerging) of what needs to be delivered and a target date (with a probabilistic confidence) when it needs to be delivered. A project is unique in that it is not a routine operation (Server Ops is not a project, it's operations). A Project is a specific set of operations designed to accomplish a stated goal. The project team often includes people who don't usually work together – sometimes from different organizations and across multiple geographies. Projects, for example:
Project management is the application of the knowledge, skills, tools, and techniques to the Project's activities to meet the project requirements.
A Product is anything that can be offered to a market to solve a problem or satisfy a want or need. It has a life cycle with multiple stages. A product is conceived, developed, introduced and managed in the market, and retired when the need for it diminishes. A product can only be developed within the context of a project, and multiple projects can occur within a product’s life cycle.
There are multiple roles in Product development:
Product managers and project managers work closely together in high-performance organizations. And both work with the broader product team and executive leaders.
When I hear ...
Project: laced with untested assumptions, based on industrial thinking. Product: cont. tests assumptions, based on lean/agile good practice.
I'm taken aback at the serious lack of understanding of the roles, principles, processes, procedures, and governance frameworks for spending other people's money to produce products.
A simple summary: a Product is an entity provided to a user. It can be anything - a physical product that you hold in your hands, a software application, or a service that you are delivering. A Project is the series of activities to produce the defined outcome that turns into the Product.
† I used to work in the Product Lifecycle Management business. PLM integrates people, data, processes and business systems.
Complexity and chaos are not the same. Complexity requires a higher degree of order and works against chaos. To construct a complex system, work is required - resisting or decreasing entropy. Chaos increases entropy and is a natural process of the universe. - Alexander B. Alleman, PhD student, Montana State University
Managing projects in the presence of uncertainty requires that energy be put into the system to maintain its equilibrium and stability. How much energy, for how long, in what order, and of what type is the role of management. To know the answers to these questions of how much, when, and where means making estimates of those quantities and of the resulting outcomes that reduce chaos.
In other domains, this principle is called the 2nd Law of Thermodynamics. In project work, this principle is also applicable, since the system of work is a dynamic coupled collection of random processes, interacting with each other in non-linear, non-stationary ways.
The Law of Entropy is expressed in this 2nd Law. The law predicts that the natural state of all things - from the tiniest atoms to the largest of galaxies - is that of disorder. This means, without appropriate systems or balances in place, everything wants to fall into chaos. The management of this naturally occurring (statistical) - as well as the probabilistic - set of processes is the role of Risk Management. If there were no uncertainties on your project, the 2nd Law would not be applicable. Since all project work is uncertain, there is ALWAYS risk associated with those uncertainties. And of course managing in the presence of these uncertainties mandates making estimates of the impact of managerial actions on the outcomes of those decisions.
A nice background on the 2nd Law of Thermodynamics of project work comes from Shim Marom. This topic is little understood in some SW and IT domains. In Engineering and Construction and in Software Intensive Systems it is a well-known process.
The entire Universe and everything in it is a collection of 2nd order non-linear partial differential equations, all obeying the 2nd Law of Thermodynamics.
"Can you do addition?" the White Queen asked. "What's one and one and one and one and one and one and one and one and one and one?"
"I don't know," said Alice. "I lost count."
"She can't do addition," the Red Queen interrupted.
Lewis Carroll, Through the Looking Glass and What Alice Found There
Our daily business as project managers or program planning and controls managers is to produce value for those paying for that value. In this role it's very easy to see what one wants to see or expects to find in the performance of the project. There is temptation to seek simplifying processes in the pursuit of this goal. Turns out managing projects in the presence of uncertainty is actually hard work.
We live in an empirical world, where data is always available to support our decision making if we choose to use it. As well, in this empirical world, uncertainty is always present. It never goes away. It can be reduced and it can be handled with margin, but it is always present. We must act in our role appropriately.
There is a temptation to look for simple answers to complex questions. Sometimes even to stop doing activities required to manage in the presence of uncertainty, simply because it's too hard, we're bad at it, or those activities are perceived as waste.
Like value, waste is rarely defined by those performing the work. It's defined by those paying for the work. These self-serving notions have no place in the rigorous, rational, implacably empirical world of spending other people's money to produce value from that money.
When we assume that those spending the money have an equal say about what value is produced compared to those providing the money, we've broken the relationship between payer and provider. There has to be a tight relationship between the provider and payer, but that relationship is not one of equals. When this is forgotten, those spending the money are disconnected from the governance process of those providing the money.
This has become more common lately with the notion of No Estimates and the other NO paradigms associated with it. This is not a sustainable concept, since in the end those providing the money have a fiduciary obligation to manage in the presence of uncertainty and make informed decisions based on estimating the outcomes of those decisions. Those spending the money don't share this fiduciary obligation.
Until this is understood by the advocates of No estimates, No Projects, No Management, it will be an argument without end.
General notions are generally wrong - Lady Mary Wortley Montagu, letter to Mr. Wortley Montagu, 28th March 1710
This reminds me of many suggestions that new and supposed innovative ideas can be applied outside the personal anecdotal experience of the person making the suggestion - conjecture actually.
In Agile development the quest for Value has become an obsession in the absence of other factors. The first missing attribute is what it costs to produce that mythical Value. "We focus on value over cost" is a nonsense statement in any Managerial Finance paradigm.
We cannot assess the value of a product or service until we know the cost to acquire that product or service
In the value-definition parlance, we need to use units of measure that are meaningful to the decision makers.
These are connected as
The next level of detail of these measures can be described below from "Review of Value and Lean in Complex Product Development," Ghadir I. Siyam, David C. Wynn, and P. John Clarkson in Systems Engineering, Vol. 18, No. 2, 2015.
In the end, the definition of Value is a Systems Engineering process.
So when we hear "we focus on value," ask: how much does that value cost? What are the units of measure of that value? Are those units of measure meaningful to the decision makers?
No answers? Then value is a meaningless phrase, just a platitude.
All project work is random work. There are three core random variables on all projects, shown below. There are sub-variables as well as all the ...ilities involved in project work, but let's start with the major three.
Fixing 1, 2, or all 3 of these random variables does NOT make the randomness go away.
These variables are random, and all variables on projects are random, because of uncertainty. This uncertainty (as mentioned on many other blogs) comes from two sources. Aleatory uncertainty is the underlying natural randomness of all project activities. This is called irreducible uncertainty. It can't be reduced. Nothing you can do will reduce it. It's there and will always be there. This is a statistical process. The only way to work in the presence of irreducible uncertainty is to have margin: cost margin, schedule margin, technical margin.
The second is epistemic uncertainty. This is uncertainty that is event based. It's there, but it can be handled in some ways. Those ways can include buying two of everything in case one breaks, having redundancy in other forms - a backup site for the data center, testing, prototypes - and other activities that provide a Plan B for when something goes wrong, when that risk is no longer a probability but has turned into an Issue.
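Margin for the aleatory part can be sized from simulation: the gap between the P50 and, say, the P80 completion date. A sketch with invented numbers:

```python
# Hedged sketch: sizing schedule margin for aleatory (irreducible)
# uncertainty. Margin is the gap between the median plan date and a
# chosen confidence-level date from simulated durations. The activity
# duration distribution below is invented.

import random

random.seed(7)  # repeatable runs for illustration

# Aleatory variability of a nominally 40-day activity, as a triangular
# distribution (min=35, mode=40, max=60) from past performance.
durations = sorted(random.triangular(35, 60, 40) for _ in range(10_000))

p50 = durations[len(durations) // 2]
p80 = durations[int(len(durations) * 0.80)]
margin = p80 - p50

print(f"P50 = {p50:.1f} days, P80 = {p80:.1f} days")
print(f"Schedule margin to protect to 80% confidence: {margin:.1f} days")
```

The margin does not remove the randomness; it absorbs it, so a commitment at the P80 date can survive the natural variability of the work.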
So Here's the Real Problem
When we hear "we don't need to estimate, I can fix time and budget," that doesn't make the randomness go away. It just sets an upper bound on what you CAN spend and when you HAVE TO BE DONE. The uncertainties that create the randomness are still there. And fixed-time, fixed-budget plans leave the technical randomness open as well. The time and budget are still random inside the constraints set by the project.
There's no getting around this, no matter how often someone says you can. Those someones were asleep in the engineering probability and statistics class. Here's the classic engineering course we were all forced to take as physics grad students: Probability and Statistics. †
This is basic probability and statistics of project work. The probability that something will turn out unfavorable is created by epistemic uncertainty. The statistical variances of everyday life are created by aleatory uncertainty.
Ignoring these uncertainties means it's going to turn out badly for those paying for your work.
You need margin to protect from irreducible uncertainty. You need specific actions to protect from reducible uncertainty. So you can in fact fix the cost and schedule IF AND ONLY IF (IFF) you have margin and risk buy-down plans. When someone says we've fixed the duration and the budget, two things come to mind.
A third notion is the killer notion
When you fix time and cost, have sufficient risk buy-down activities to reduce the epistemic uncertainty that creates the probability of something going wrong to an acceptable level, and have sufficient margin to cover the expected overruns in duration, you still have the technical reducible and irreducible uncertainties: the thing you're building won't work, won't be what the customer wants, or will cause other issues - these are called externalities in the economics of software development - and other unknowns, possibly unknowable at the beginning of the project.
When you fix time and/or budget and don't have protections for reducible and irreducible uncertainty, you're going to be late and over budget, and you will have willfully ignored those outcomes. Oh, and by the way, there is a probability your little gadget won't meet the needs of those paying you either.
These immutable conditions (aleatory and epistemic uncertainty) are completely ignored in agile development. Agile provides rapid feedback to the risk management processes of software development. But agile is NOT a risk management process in and of itself. That's a topic for another time.
If you think you have no uncertainties - reducible or irreducible - and have fixed the budget and duration, and maybe even the outcomes, you're likely on a de minimis project. Good luck with that.
† We had to take a few courses outside our major, and this was another: Classical Electrodynamics. This was an engineering course. We had a foundation of electrodynamics from the physics point of view. In that view everything can be solved through Maxwell's equations - a simple set of partial differential equations describing how electromagnetism works. When asked to give a talk on antenna theory in the engineering course, a friend (I was too afraid at that time) went to the chalkboard (yes, no whiteboards) and wrote down Maxwell's equations for the reciprocity theorem of antennas in free space. The Professor at the back of the room told him (Steve) to sit down: "We're engineers, not physicists. We want to know HOW things work, not WHY things work."
There is Nothing so Practical as a Good Theory
This quote comes from the German-American social psychologist Kurt Lewin (1890-1947), who was pointing out that psychology was thin on good theories, but benefited greatly when there was one. A good theory simplifies explanations and makes them more coherent, robust, objective, and even allows better predictions of behavior.
So what's the theory - the Principle - of making decisions in the presence of uncertainty without estimating the impact of those decisions? So far I haven't heard one that could be tested outside of personal anecdotes of "it works for me, so it must work for you."
The real problem that was brought to light by Woody Zuill's original post way back when is quite simple.
Estimates are misused by bad management to hold people accountable for things they can never be accountable for. But it's not the estimate or the estimating process that is the root cause. It's the bad management.
But conjecturing - and it's pure conjecture - that Not Estimating is the corrective action for that Root Cause of dysfunctional management is essentially nonsense. The dysfunction is behavioral - Bad Managers - not mathematical - estimates made in the presence of underlying uncertainty.
Yes, estimating is sometimes hard. Much too hard for those who have no experience making estimates. Even harder when the customer is clueless about what she actually wants to spend her money on. So as Capers Jones says below, if our customer can't come up with some form of needed capabilities in exchange for the money being spent, we don't know how much it will cost or when we'll be done.
And if the customer puts an upper bound - a Not To Exceed contract we call that - on the spend, it doesn't remove the other two random variables from the work effort - Time and Capabilities. If the customer doesn't have some notion of What Done Looks Like - and not the lame definition of done found in the agile literature - but the real definition of done in units of measure of effectiveness, measures of performance, and all the ...ilities associated with the work outcomes - then you're on a DEATH MARCH project and estimating or not estimating isn't going to add one iota of increased probability of success.
Here's an example of a VERY software intensive system of systems: A consistent multi-user, multi-goal framework for assessing system performance with application to a sonar system. This is not likely a system you will have worked on. But it is similar to most other Software Intensive Systems of Systems found in Enterprise IT.
But Jones's quote again fits a very broad set of domains - all domains I'd suggest.
When we mean to build, we first survey the plot, then draw the model; and when we see the figure of the house, then must we rate the cost of the erection; which if we find outweighs ability, what do we then but draw anew the model in fewer offices, or at least desist to build at all?
- Lord Bardolph, Henry IV, Part II, Act I
The goal of managing other people's money when building a product or providing a service is to plan and coordinate the needed work activities to deliver a satisfactory outcome, or complete an enterprise endeavor, within the constraints of schedule, budget, resources, infrastructure, and available technology.
The intellectual content of the disciplines of engineering, business and technical management, risk management, and program controls is oriented on the components and is value neutral. No matter the outcome, the processes are the same or similar. The value produced by these efforts is independent of the means to produce them. Once the value is delivered, its consumer cares little how it arrived. That consumer didn't buy the process used to produce the value; they bought the value. When these are confused, the notion of focusing on value is perverted into focusing on those spending the money rather than on those providing the money.
The underlying principles of these disciplines are focused inside the boundaries of that system. The resulting value is focused outside the system of its production.
Project success depends on the integration of the activities below. The primary role of the processes below guides the value producing activities to …
Design the Programmatic Process to support the Technical Project Engineering activities to Increase the Probability of Project Success.
Systems Engineering Body of Knowledge, Vol. 5
Dream or nightmare, we have to live our experience as it is, and we have to live it awake. We live in a world which is penetrated through and through by science and which is both whole and real. We cannot turn it into a game simply by taking sides - Jacob Bronowski
This quote is from the preface of A Universe from Nothing, Lawrence M. Krauss. It reminds me of the #NoEstimates discussion, in which the core principles of business management are ignored. These principles are the microeconomics of decision making in the presence of uncertainty, and the managerial finance processes around making those decisions to maximize the performance of the firm paying for the development of the software. These principles are ignored by conjecturing that decisions can be made without estimating the impact of those decisions on the success of the projects funded by the firm.
It is a capital mistake to theorize before one has data - Sir Arthur Conan Doyle
When we hear about a new idea that will undo all the established principles and practices, ask first: got any data to show your cockamamie idea has any hope of being correct?