Ascertaining the success and applicability of any claim that falls outside the accepted practices of business, engineering, or governance requires careful testing of the idea through tangible evidence that it will actually do what it is conjectured to do.
The structure of this checklist is taken directly from Scientific American's essay on detecting scientific baloney, but it sure feels right for many of the outrageous claims found in today's software development community about approaches to estimating the cost, schedule, and likely outcomes.
How reliable is the source of the claim?
Self-pronounced experts often appear credible at first glance, but when examined more closely, the facts and figures they cite are distorted, taken out of context, long out of date, mathematically wrong, missing the critical domain and context basis, or occasionally even fabricated.
In many instances the data used to support the claims are weak or poorly formed, relying on surveys of friends, hearsay, small population samples, classroom experiments, or, worse, anecdotal evidence in which the expert extends personal experience to a larger population.
Does this source often make similar claims?
Self-pronounced experts have a habit of going well beyond the facts and generalizing their claims to a larger population of problems or domains. Many proponents of ideas make claims that cannot be substantiated within a testable framework. This is the nature of early development in the engineering world. Of course, some great thinkers do frequently go beyond the data in their creative speculations.
But when those creative thinkers are invoked to support the new claims, it's more likely that the hard work of testing the claim outside of personal experience hasn't been performed.
They said agile wouldn't work, so my conjecture is getting the same criticism and I'll be considered just like those guys when I'm proven right.
Have the claims been verified by another source?
Typically self-pronounced experts make statements that are unverified, or verified only by a source within their own private circle, or whose conclusions are based primarily on anecdotal information.
We must ask who is checking the claims, and even who is checking the checkers. Outside verification is as crucial to good business decisions as it is to good methodology development.
How does the claim fit with what we know about how the world works?
Any specific claim must be placed into a larger context to see how it fits. When people claim that a specific method, approach, or technique results in significant benefits, dramatic changes in an outcome, etc. they are usually not presenting the specific context for the application of their idea.
Such a claim is typically not supported by quantitative statistics either. There may be qualitative data, but this is likely to be biased by the experimental method as well as by the underlying population of the sample statistics.
In most cases to date, the sample size is minuscule compared to that needed to draw correlations and causations to the conjectured outcomes.
Has anyone gone out of their way to disprove the claim, or has only confirmatory evidence been sought?
This is the confirmation bias: the tendency to seek confirmatory evidence and to reject or ignore disconfirmatory evidence. The confirmation bias is powerful, pervasive, and almost impossible to avoid.
It is why the methods that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are critical.
When self-selected communities see external criticism as harassment, or respond with "you're simply not getting it" or "those people are like talking to a box of rocks," the confirmation bias is in full force.
Does the preponderance of evidence point to the claimant's conclusion or to a different one?
Evidence is the basis of all confirmation processes. The problem is having evidence alone is necessary but not sufficient. The evidence must somehow be "predicted" by the process, fit the process model, or somehow participate in the process in a supportive manner.
Is the claimant employing the accepted rules of reason and tools of research, or have these been abandoned in favor of others that lead to the desired conclusion?
Unique and innovative ways of conducting research, processing data, and "conjecturing" about the results are not statistically sound. In almost every discipline there are accepted mechanisms for conducting research. One of the first courses taken in graduate school is quantitative methods for experiments. This course sets the ground rules for conducting research in the field.
Is the claimant providing an explanation for the observed phenomena or merely denying the existing explanation?
This is a classic debate strategy—criticize your opponent and never affirm what you believe to avoid criticism.
"Show us your data" is the starting point for engaging in a conversation about a speculative idea.
If the claimant proffers a new explanation, does it account for as many phenomena as the old explanation did?
This concept is usually lost on "innovative" claims. The need to explain previous results is mandatory. Without this bridge to past results, a new suggested approach has no foundation for acceptance.
Do the claimant's personal beliefs and biases drive the conclusions, or vice versa?
All claimants hold social, political and ideological beliefs that could potentially slant their interpretations of the data, but how do those biases and beliefs affect their research in practice?
Usually, during some peer-review process, such biases and beliefs are rooted out, or the paper or book is rejected.
In the absence of peer review - self publishing is popular these days - there is no external assessment of the ideas, and therefore the author reinforces the confirmation bias.
So the next time you hear a suggestion that appears to violate the principles of business, economics, or even physics, think of these questions. So let's move to the #NoEstimates suggestion that we can make decisions in the absence of estimates; that is, that we can make decisions about a future outcome without estimating the cost to achieve that outcome and the impact of that outcome.
The core question is how this conjecture can be tested beyond the personal anecdotes of those proffering the notion that decisions can be made in the absence of estimates. Certainly those making the claim have no interest in performing that test. It's incumbent on those attempting to apply the notion to first test it for validity, applicability, and simple credibility.
A final recommendation is Ken Schwaber's talk and slides on evidence-based discussions around improving the business of software development, and the book he gave away at the end of the talk, Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management.
The Lean Aerospace Initiative and the Lean Aerospace Initiative Consortium define processes applicable in many domains for applying Lean. At first glance there is no natural connection between Lean and Systems Engineering. The ideas below are from a paper I gave at a Lean conference.
Core Concepts of Systems Engineering
Typical System Engineering Activities
Steps to Lean Thinking 
Differences and Similarities between Lean and Systems Engineering
Despite these differences and similarities both Lean and Systems Engineering are focused on the same objectives – delivering products or lifecycle value to the stakeholders.
It is the lifecycle value that drives both paradigms, and it must drive any other process paradigm associated with Lean and Systems Engineering - paradigms like software development, the management of any form of project, and the very notion of agile. A critical understanding often missed is that Lifecycle Value includes the cost of delivering that value.
Value can't be determined in the absence of knowing the cost. ROI and Microeconomics of decision making require both variables to be used to make decisions.
What do we mean by lifecycle?
Generally, lifecycle value is a combination of product performance, quality, cost, and fulfillment of the buyer's needed capabilities.
Lean and Systems Engineering share this common goal. The more complex the system, the more contribution there is from Lean and SE.
Putting Lean and Systems Engineering Together on Real Projects
First some success factors on complex projects 
This last success factor is core to any complex environment, no matter what the process is called. In the absence of stability of requirements and funding, improvements to the flow of work are constrained.
The notion of adapting to changing requirements is not the same as having the requirements – and the associated funding – be unstable.
Mapping the Value Stream to the work process requires some level of stability. It is the search for this stability where Systems Engineering – as a paradigm – adds measurable value to any Lean initiative.
The standardization and commonality of processes across complex systems is the basis for this value. 
“The Lean Enterprise – A Management Philosophy at Lockheed Martin,” Joyce and Schechter, Defense Acquisition Review Journal, 2004.
Lean Thinking, Womack and Jones, Simon and Schuster, 1996.
Lean Enterprise Value: Insights from MIT’s Lean Aerospace Initiative, Murman, et al., Palgrave, 2002.
“Lean Systems Engineering: Research Initiatives in Support of a New Paradigm,” Rebentisch, Rhodes, and Murman, Conference on Systems Engineering, April 2004.
LM21 Best Practices, Jack Hugus, National Security Studies, Louis A. Bantle Symposium, Syracuse University Maxwell School, October 1999.
“Enterprise Transition to Lean Roadmap,” MIT Lean Aerospace Initiative, 2004 Plenary Conference.
Unless you're building software as a hobby, someone is paying you to do that work. Those paying aren't likely doing it as a hobby either. They have some expectation of getting their money back sometime in the future. Somewhere in the discussion of writing software for money, the notion of writing software for money was lost.
Those with money pay those with software writing capabilities to produce products that can be sold or put to use to create value in return. Along the way a disconnect arose: that software is an end in itself; that the needs of developers trump the needs of those providing the money for the developers; that those spending the money get to say what they'll do, how they'll do it, or what they won't do with that money.
Writing software for money as practiced in a sole contributor paradigm provides nearly infinite flexibility on requirements, cost and schedule forecasting, and the current notion of making business, programmatic, and technical decisions in the absence of estimating the cost and impact of those decisions.
When that paradigm leaks into the larger domain of producing a return on the investment from that cost, there are two variables that must enter every conversation: the Value generated and the Cost expended to produce it. An assessment of both of those variables is needed.
ROI = (Value - Cost) / Cost
Value at Risk is one approach to assessing what processes should be in place when spending other people's money. The larger the Value at Risk, the greater the discipline needed in managing both the Cost and the Value. There are many paradigms of Agile, and the domain and context of software development - or any project for that matter - are important to assess before stating that any method is applicable outside the anecdotal domain of the speaker.
The first assessment is always Value at Risk. That is, what is the cost of making a wrong decision? This is the basis of Microeconomics. This is the opportunity cost assessment of decision making.
Microeconomics studies the behavior of individuals and small impacting organizations in making decisions on the allocation of limited resources. Cost, schedule, and technical capabilities are certainly a limited resource.
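As a minimal sketch of how these two variables drive a choice, here is the ROI relation above applied to two hypothetical candidate investments (the names and numbers are invented for illustration only):

```python
# Hypothetical numbers, for illustration only.
def roi(value: float, cost: float) -> float:
    """ROI = (Value - Cost) / Cost."""
    return (value - cost) / cost

# Two candidate uses of the same limited budget.
option_a = {"name": "Feature set A", "cost": 400_000, "value": 650_000}
option_b = {"name": "Feature set B", "cost": 400_000, "value": 520_000}

roi_a = roi(option_a["value"], option_a["cost"])   # 0.625
roi_b = roi(option_b["value"], option_b["cost"])   # 0.30

# Opportunity cost of choosing B is the forgone net value of A.
opportunity_cost_of_b = (option_a["value"] - option_a["cost"]) - (option_b["value"] - option_b["cost"])

print(f"ROI A = {roi_a:.1%}, ROI B = {roi_b:.1%}")
print(f"Opportunity cost of choosing B over A = ${opportunity_cost_of_b:,.0f}")
```

Neither number can be produced without an estimate of both the Cost and the Value of each option.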
Those conjecturing decisions can be made in the absence of estimating the cost and impact have yet to show the viability of those ideas in practice, at least outside small projects with low Value at Risk.
The book The Incremental Commitment Spiral Model: Principles and Practices for Successful Systems and Software, by Barry Boehm and Jo Ann Lane, is a good bridge between small, low-value-at-risk agile, Scaled Agile for the Enterprise, and the full-up formal DOD 5000.02 acquisition processes that are trying very hard to move into the agile domain.
The book starts with four principles:
There are extensions to these principles:
With all the speculation on what went wrong with the ACA site and all the agile pundits making statements about how agile could have saved the site, here are some actual facts beyond all the opinions - the kind of facts Daniel Patrick Moynihan would remind us of...
Every man is entitled to his own opinion, but not his own facts
The Key Findings are
So when we hear
Think about what domain, what value at risk, what complexity of project, and what business process these could possibly be applicable in. In fact this goes back to the core of the agile manifesto. And when we hear "pure agile," "Scrum Masters produce Scrum Slaves," "Mob Programming," "we all want a seat at the table with equal voices," and many other "opinions," remember Moynihan and ask for facts, domain, past performance, experience, and examples of success.
As agile starts to scale to larger domains and the government seeks better ways to develop software beyond the failed processes described above - what parts of this manifesto are applicable outside of a small group of people in the same room with the customer directing the work of the developers?
As my colleague (a former NASA Cost Director) reminds our team, if you see something that doesn't make sense - follow the money. That applies in the case of ACA and in the case of the workshop outcomes above.
Developers: We're spending money, consuming resources, producing outputs that the customer likes.
Project Manager: I'm more interested in our performance against our planned spend, planned resource consumption, and planned outputs of value to the customer.
Developers: What do you mean? We didn't estimate any of that; we're managing this project with #NoEstimates. You know, that new alternative to estimates for making decisions in software development. That is, ways to make decisions with "No Estimates" of the impacts of our work on the future cost, schedule, or technical performance. You know, where we can use decision making frameworks for projects that do not require estimates, apply investment models for software projects that do not require estimates, and have our project management methods for risk management, scope management, and progress reporting not require any of those annoying estimates. 'Cause we kinda suck at them anyway, so we just decided that instead of learning how to estimate, we'll just not estimate and get back to coding.
Project Manager: Oh, you mean that approach of managing other people's money that violates the principles of software microeconomics with Open Loop Control - where our organization makes business decisions on the allocation of our limited resources without examining how those decisions affect the supply and demand of those resources. You do know about those resources? Like money, people, and time?
Developers: Yeah, we don't need any of that mumbo jumbo microeconomics that we all learned in school, since we didn't pay attention in that boring statistics and probability class that tried to teach us that all variables on a project are actually random variables, and that we should know something about their behaviour in the future if we're going to have a hope in hell of ever managing this project in the presence of uncertainty about those values.
Project Manager: What's that smell? Maybe we'd better start rearranging the deck chairs on our ship here real soon, 'cause I smell an iceberg getting closer.
No project can be managed to successful closure in the absence of steering targets defined at periodic intervals for the expenditure of cost, schedule, and technical performance. Knowing what those steering targets should be requires estimating their values, then measuring the actual values to develop the needed steering signal - the variance between plan and actual.
The only way out of the need to estimate those intermediate steering targets is to straight line the budget, schedule, and needed technical performance - from start to end, then measure the actual performance.
Like the intended route of the Titanic, our project does not proceed in a straight line, so that idea is a non-starter. And like the Titanic, our project cannot confuse the intended speed with the actual speed, just like we can't confuse the budget - the total planned crossing time - with the actual cost - the actual total crossing time.
Without those pesky intermediate targets to steer toward - targets created by estimating the needed cost, the needed arrival date, and the needed capabilities on the needed date for the needed cost - we're managing the project Open Loop, driving in a straight line, never knowing what will pop up in front of our path.
Say goodbye to Kate, Leonardo, you're gonna get wet.
† Full attribution for the inspiration for this post comes from the very useful blog by Gene Hughson
SEI is focused on helping the DOD improve the development of software.
Here are podcasts of the principles of agile development of software in the DOD.
For most projects showing up on or near the planned need date, at or near the planned cost, and more or less with the planned capabilities is a good measure of success. Delivering capabilities late and over budget is usually not acceptable to those paying for our work.
So how do we do this? Simple actually.
We start with a Plan. Here's the approach to Planning and the resulting Plan.
The Plan tells us when we need the capabilities to produce the needed business value or accomplish the mission. The Plan is a strategy. This strategy involves setting goals, determining actions to achieve the goals, and mobilizing resources to execute the actions. The strategy describes how the ends (goals) will be achieved by the means (resources) in units of measure meaningful to the decision makers.
Strategy creates fit among a firm's activities. For Enterprise IT, this fit is defined by the relationships between the needed capabilities delivered by the project. The success of a strategy depends on doing many things well — not just a few.
The things that are done well must operate within a close-knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions.
When this occurs, operational effectiveness determines the relative performance of the organization. ["What is Strategy?", M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61–78.]
A successful Plan describes the order of delivery of value in exchange for cost, the inter-dependencies between these value-producing items, and the synergistic outcomes from these value-producing items working together to meet the strategy.
With the Plan in hand, we can ask and answer the following questions:
This Post Answers the Last Question
The example below is from our cycling group. The principles are the same for projects. We have a desired outcome in terms of date, cost, and technical performance. These desired outcomes have some end goal. A budget, a go live date, a set of features or capabilities needed to fulfill the business case.
Along the way we need to take corrective actions when we see we are falling behind.
How did we know we were falling behind? Because we have a desired performance at points along the way that we compare our actual performance to. The difference between our actual performance and the desired performance creates an "error signal" we can use to make adjustments.
Our thermostat does this, our speed control on our car does this, and the Closed Loop Control system used for managing our project does this. So replace the cycling example with writing software for money, and the Peloton with the desired performance of our work. In the presentation below, ignore the guy in the Yellow Jersey at the end. Turns out he's a doper and an all-around bad person to his fellow riders and fans.
This example can be related to a project.
This is Closed Loop Control
Your cruise control does this about every 10 milliseconds. Your Nest thermostat does this more slowly, but still less than once a minute. To know how often you need to sample your progress against plan, answer this question:
How long are you willing to wait before you find out you're late? Sample at ½ that time.
This is called the Nyquist Rate, one of the starting points for all the process control software I wrote in my younger days for flying and swimming machines. But it's a good question to ask on all projects as well.
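Here is a minimal sketch of that sampling rule and the resulting steering signal, using invented planned and actual progress values:

```python
# Invented planned vs. actual cumulative progress, for illustration only.

# "How long are you willing to wait before you find out you're late?"
acceptable_lateness_discovery_days = 10
sample_interval_days = acceptable_lateness_discovery_days / 2  # sample at half that time

# Planned cumulative progress (percent complete) at each sample point,
# and the actual measured progress at the same points.
planned = [10, 20, 30, 40, 50]
actual  = [ 9, 17, 24, 30, 35]

for i, (p, a) in enumerate(zip(planned, actual), start=1):
    variance = a - p                    # the error (steering) signal
    day = i * sample_interval_days
    print(f"day {day:>4.0f}: planned {p:>3}%, actual {a:>3}%, variance {variance:+}%")
    if variance < -5:                   # hypothetical corrective-action threshold
        print("          variance exceeds threshold, take corrective action")
```

Without the planned values there is nothing to subtract the actuals from, and therefore no error signal.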
There is a suggestion that only the final target of a project's performance is needed to steer toward success. This target can be budget, a finish date, the number of stories or story points in an agile software project. With the target and the measure of performance to date, collected from the measures at each sample point, there is still a missing piece needed to guide the project.
With the target and the samples, no error signal is available to make intermediate corrections to arrive on target. With the target alone, any variances in cost, schedule, or technical performance can only be discovered when the project arrives at the end. With the target alone, this is an Open Loop control system.
Irreducible Uncertainty can only be handled with Margin. Cost margin, schedule margin, technical margin. This is the type of margin you use when you drive to work. The GPS navigation system says it's 23 minutes to the office. It's NEVER 23 minutes to the office. Something always interferes with our progress.
Reducible Uncertainty is handled in two ways: spending money to buy down the risk that results from this uncertainty, and Management Reserve (budget reserve and schedule contingency) to be used when something goes wrong, to pay for the fix when the uncertainty turns into reality.
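A minimal sketch of sizing schedule margin for irreducible uncertainty, using the commute example above with an assumed (invented) spread of drive times:

```python
import random

random.seed(1)

# The GPS says 23 minutes; the actual drive time is never exactly that.
# Assume, purely for illustration, a lognormal-ish spread around the nominal time.
nominal_minutes = 23
trials = sorted(random.lognormvariate(mu=0.0, sigma=0.25) * nominal_minutes
                for _ in range(10_000))

p80 = trials[int(0.80 * len(trials))]   # the time you'd make 80% of the time
margin = p80 - nominal_minutes          # schedule margin carried for irreducible uncertainty

print(f"nominal {nominal_minutes} min, 80th percentile {p80:.1f} min, margin {margin:.1f} min")
```

The same arithmetic applies to cost margin and technical margin; only the units change.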
The next figure (page 28) shows how to manage in the presence of these uncertainties, by measuring actual performance against the desired performance at each step along the way.
In this figure, we measure at each assessment point the progress of the project against the desired progress - the planned progress, the needed progress. This planned, desired, or needed progress is developed by looking at the future effort, duration, risk, and uncertainty - the stochastic processes that drive the project - and determining what the progress should be at this point in time to reach our target on or before the need date, at or below the needed cost, and with the needed confidence that the technical capabilities can be delivered along the way. This is closed loop control.
The planned performance, the needed performance, the desired performance is developed early in the project. Maybe on day one, more likely after actual performance has been assessed to calibrate future performance. This is called Reference Class Forecasting. With this information estimates of the needed performance can then be used to establish steering targets along the way to completing the project. These intermediate references - or steering - points provide feedback along the way toward the goal. They provide the error signal needed to keep the project on track. They are the basis of Closed Loop control.
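A minimal sketch of that calibration step, with an invented reference class of past projects:

```python
# Reference Class Forecasting, sketched with invented history.
# Ratio of actual to originally estimated duration for similar completed projects.
reference_class_ratios = [1.10, 1.25, 0.95, 1.40, 1.20, 1.15]

# Calibration factor: how much similar projects typically overran their estimates.
calibration = sum(reference_class_ratios) / len(reference_class_ratios)

raw_estimate_weeks = 40                       # the team's uncalibrated estimate
calibrated_estimate_weeks = raw_estimate_weeks * calibration

# Intermediate steering targets derived from the calibrated estimate.
quarter_points = [0.25, 0.50, 0.75, 1.00]
targets = [round(calibrated_estimate_weeks * q, 1) for q in quarter_points]

print(f"calibrated estimate: {calibrated_estimate_weeks:.1f} weeks")
print(f"steering targets at quarter points (weeks): {targets}")
```

The actual reference class, the calibration statistic, and the number of steering points would come from the project's own domain; the values here are placeholders.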
In the US, many highways have rumble strips cut into the asphalt to signal that you are nearing the edge of the road on the right. They make a loud noise that tells you - hey get back in the lane, otherwise you're going to end up in the ditch.
This is the purpose of the intermediate steering targets for the project. When the variance between planned and actual exceeds a defined threshold, this says hey, you're not going to make it to the end on time, on budget, or with your needed capabilities if you keep going like this.
Kent Beck's quote is...
Optimism is the disease of software development. Feedback is the cure.
This feedback must have a reference to compare against if it is to be of any value in steering the project to a successful completion. Knowing it's going to be late, over budget, and doesn't work when we arrive at late, over budget, and not working is of little help to the passengers of the project.
The notion of self-directed development teams has a range of applicable domains. Much of the rancor around agile development these days is about how to apply the core principles of agile software development. Do we need estimates? What's the role of business process in the development life cycle? How are capabilities and requirements elicited? Who has what decision rights for what decisions? How can we make these decisions, and what information is needed in order to make them?
Guy Strelitz's post has got me thinking about the spectrum of the world called agile. Here's my take on his diagram. Working in a domain where we're spending other people's money - lots of other people's money - the Winging It approach is simply not accepted. The Kanban approach doesn't work either, because the inter-dependencies between the backlog items are tight, so picking the next thing off the backlog based on business value may not be possible, since some pre-condition may need to be fulfilled before a higher value item can be started. Software development is not production in our domain. Kanban is a production flow management system, no matter how twisted the logic of the Kanban software advocates make it out to be.
Scrum is a powerful approach to emergent requirements, with those requirements anchored to Needed Capabilities - capabilities that all have to be in place for the system to be called ready for Go Live. Some more formality is needed as governance and regulatory paradigms are encountered.
Finally we arrive at the Enterprise model of software development. The firm depends on the software system for revenue. PayPal depends on the system for revenue, but not in the way an insurance company does. Or a gas pipeline process control system does.
But at the same time, agile can contribute not only to increasing the probability of project success, but also to dealing with the emerging requirements traceable to the needed capabilities.
Process is King
So now to the point. In that agile enterprise paradigm, the mission critical aspects of the software system demand assurance that the released software is not only Fit for Purpose but also Fit for Use. That is, the software does what it is supposed to do, in the way it is supposed to work.
One of the critically important processes of any enterprise software system is the Change Control (CC) and Release Management (RM) process. The software system is a corporate asset and must be treated as such. This asset is carried on the General Ledger as an asset - a capital investment, governed by the rules of accounting for capital assets. That's essentially the definition of enterprise.
In this enterprise paradigm, the control of these assets starts with CC and RM. Here's a high level flow of how this corporate asset is managed. In this example development occurs in the lower left. The CC and RM process is post development. This development business rhythm can be weekly, monthly, possibly even daily. But once the software is ready for release to production, this is a possible process.
The key here is separation of concerns. The developers of enterprise software are not the approvers of the release of that software, nor are they involved in the QA, UAT, and Performance Assessment of that software.
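As an illustrative sketch only (not any particular organization's CC/RM process), the separation-of-concerns rule can be written down as a simple check in a release workflow; the role names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ReleaseRequest:
    """A candidate production release moving through Change Control / Release Management."""
    change_id: str
    developed_by: set     # people who wrote the code
    qa_signoff: str       # who performed QA / UAT / performance assessment
    approved_by: str      # who authorized the release to production

def release_allowed(req: ReleaseRequest) -> bool:
    # Separation of concerns: developers neither test-accept nor approve their own release.
    if req.qa_signoff in req.developed_by:
        return False
    if req.approved_by in req.developed_by:
        return False
    return True

req = ReleaseRequest("CHG-1042", developed_by={"dev_a", "dev_b"},
                     qa_signoff="qa_lead", approved_by="release_board")
print(release_allowed(req))   # True: the concerns are separated
```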
So this is the separation from the activities on the left of the top diagram to those on the right. When someone suggests a new idea, or has read a book about a new idea and wants to discuss it, ask where on the spectrum of the top diagram they work and where they think their idea would fit.
In the End
Without first starting with a domain, a context, and framing assumptions around governance and established decision rights, any suggested process has no basis for being tested against reality.
In a recent Skype conversation around agile, estimating, Little's Law and the #NoEstimate hashtag, the term agile community was used. My first reaction was whose agile community? The community of sole contributors? The community of $1B weapons systems and all in between?
My thoughts go back to the presentation below. There is a spectrum of project management processes built around agile. My experience starts with 5. Literally 5, since I have time in that aircraft. My software development management experience goes all the way to the end. And aircraft experience to 25.
And guidance for deploying agile.
So Now Back To The Core Issues
If you're a sole contributor and have a customer sitting nearby, estimating your cost, schedule, and technology outcomes is likely of little value. If you're at the other end, say the flight avionics systems for the 777, then the level of rigor, formality, and reporting is different. Both use agile. Not all in the same way, but both write code using the principles of agile development.
No credible management process would or should object to these principles and practices. To do so means Doing Stupid Things on Purpose. So many of the motivators for not doing something are actually bad management. "Let's not estimate, because estimates are misused" is my favorite DDSTOP example.
Here's an example of how to connect the dots between these principles and practices in a more formal business management process - in this case Earned Value Management.
So when we hear about the agile community and those representing the agile community, which community is that?
There is a crass American term used in our domain.
When you see dysfunction, see something you don't understand, or see something that is counter to your paradigm - Follow the Money.
This is the basis of microeconomics of writing software for money. What is considered a waste or even evil in one domain is a critical success factor in another domain.
Ask some simple questions to establish this domain:
In The End
Can we have any meaningful discussion about any topic in the absence of a domain and context? Especially when that topic is driven by Value at Risk, governance, and business processes?
I'd say it is incumbent on those making a suggestion for example
To show in what domain this statement is applicable, how we would recognize its applicability outside the domain of those making the suggestion, how we could test the suggestion to see if it is applicable, and most importantly, what conditions allow the suggestion to work in those domains.
These discussions usually start by quoting something from a summary of Little's Law or the Central Limit Theorem.
A critical element of both Little's Law and the CLT is the notion of Independent and Identically Distributed (IID) random variables. These variables describe the arrivals to a service - stories selected from the backlog for development, or someone arriving in line at the bank to deposit a check.
Let's start with some math. In probability theory, the central limit theorem (CLT) says that, given certain conditions, the mean of a sufficiently large number of independent, identically distributed random variables, each with a finite expected value and variance, will be approximately normally distributed.
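A minimal simulation of that statement, using a skewed exponential distribution so the effect is visible:

```python
import random
import statistics

random.seed(7)

def mean_of_sample(n: int) -> float:
    # Draw n IID exponential service times (mean 1.0) and return their average.
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

# Means of many independent samples; the CLT says these cluster around the true mean (1.0)
# and look increasingly normal as the sample size grows.
sample_means = [mean_of_sample(50) for _ in range(2_000)]

print(f"mean of sample means:    {statistics.fmean(sample_means):.3f}")   # close to 1.0
print(f"std dev of sample means: {statistics.stdev(sample_means):.3f}")   # close to 1/sqrt(50) = 0.141
```

The IID condition is doing the work here; without it the clean convergence is no longer guaranteed.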
Using a grocery store check-out line or bank teller window example, Little's law gives the relation between the mean number of customers in the system, E(L), the mean transit time through the system, E(S), and the average number of customers entering the system per unit time, λ, as E(L) = λE(S).
An Actual Example In Preparation for Developing Software From The Story Queue
Let's pretend we're in line at the grocery store. We'll call the check-out line the resource and the people lining up at the check-out line the customers. If the clerk manning the check-out station is busy checking out customers, a queue will form in the line waiting to check out.
The population of customers that can use the store is usually finite, but this makes the problem harder, so let's assume for the moment the population of customers is infinite. The number of check-out lines can be one or many, but for the moment we'll also assume they are identical in their service. Let's define the capacity of the store as the number of people that can wait in line, plus the person being served by the clerk. In most stores there will be a finite number of people in the queue at check-out, but again, assuming this number is infinite makes the analysis easier.
We need another simplifying assumption. The distribution of the amount of time each customer stays at check-out (once they arrive) is Independent and Identically Distributed (IID). As well, the probability that a customer will arrive at check-out is also an IID variable. This distribution is usually taken to be exponential. This means the longer you wait, the more likely it is someone will show up at the check-out stand ready to check out.
These are critical assumptions for what follows about Little's Law. If the above conditions are not met, Little's Law is not applicable to the problem being described.
So let's have a quick summary of Little's Law:
Mean number of people in line at check-out = Arrival Rate of customers × Check-Out Time
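A minimal sketch of that relation, simulating a single check-out line with IID exponential arrivals and service times (the rates are invented):

```python
import random

random.seed(42)

arrival_rate = 0.5      # lambda, customers per minute
service_rate = 0.8      # mu, customers per minute (> lambda so the line stays stable)
n_customers = 50_000

# Generate arrival and departure times for a FIFO single-server queue.
t, server_free_at = 0.0, 0.0
arrivals, departures = [], []
for _ in range(n_customers):
    t += random.expovariate(arrival_rate)
    start = max(t, server_free_at)
    server_free_at = start + random.expovariate(service_rate)
    arrivals.append(t)
    departures.append(server_free_at)

horizon = departures[-1]
lam = n_customers / horizon                                   # observed throughput
mean_time_in_system = sum(d - a for a, d in zip(arrivals, departures)) / n_customers   # E(S)

# Time-average number of customers in the system, E(L), by sweeping the event timeline.
events = sorted([(a, +1) for a in arrivals] + [(d, -1) for d in departures])
area, count, last = 0.0, 0, 0.0
for time, step in events:
    area += count * (time - last)
    count += step
    last = time
mean_in_system = area / horizon

print(f"E(L)        = {mean_in_system:.3f}")
print(f"lambda*E(S) = {lam * mean_time_in_system:.3f}")   # Little's Law: the two agree
```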
This law can be applied if some other conditions exist:
Now for Software Development
Instead of assuming if Little's Law can be applied to software development, let's first ask are the conditions right to apply the law:
This means: do the jobs - stories - arriving at the service - development (or some other process) - behave like IID variables? That is, they have no knowledge of each other, are indistinguishable from each other, and when serviced they cannot be distinguished from the other work serviced (meaning developed, tested, installed, etc.).
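One rough screen for that precondition on real data is to check whether consecutive cycle times are correlated. This is a sketch with invented cycle times, not a statistical proof:

```python
import statistics

# Invented cycle times (days) for stories completed in sequence.
cycle_times = [3, 5, 4, 9, 8, 10, 2, 3, 4, 12, 11, 13, 3, 4, 5]

def lag1_autocorrelation(xs):
    """Lag-1 autocorrelation; values near 0 are consistent with independence."""
    mean = statistics.fmean(xs)
    numerator = sum((a - mean) * (b - mean) for a, b in zip(xs, xs[1:]))
    denominator = sum((x - mean) ** 2 for x in xs)
    return numerator / denominator

r1 = lag1_autocorrelation(cycle_times)
print(f"lag-1 autocorrelation = {r1:.2f}")
# A value well away from zero suggests the durations depend on one another
# (clustered by feature, by dependency, or by shared staff), so the IID assumption is suspect.
```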
Let's look at an actual project, a simple one. We want to fly to the moon for the first time, land, and come home.
Doing work on a development project is not the same as work arriving in a queue of a service. It is a network of dependencies, with interconnections, and most importantly - most critically actually - the durations of the work, the time spent in the service, are not independent, identically distributed random variables. A network of work looks like this (notionally).
So does Little's Law apply? Nope!
These networks of work are called Stochastic Networks and are not subject to Little's Law. The processes in the Little's Law condition can be stochastic, but there has to be independence between the work elements and they have to be identically distributed probability distributions.
Production queues of parts going down an assembly line are. Cards being pulled from the tray in a Kanban furniture manufacturing system are. The notional Kanban system in agile development is - but If and Only If (IFF) the work pulled from the wall is independent from all other work, and the probability distribution of the duration of that work is independent of the other work as well.
If you can find a project where all the features are independent from each other and their work efforts are identical, independently distributed random variables, then you'll be able to apply Little's Law.
Little's Law applies to software development work that looks like production flow - like the assembly line at Toyota, or the office furniture production line we designed at a factory in Idaho.
But those types of software projects must be intentionally designed to have no dependencies between the work performed, and to have the duration of the work in the service cycle (development) have no dependency on the prior work or the work that follows.
This is the condition of Independent and Identical Distribution (IID) needed for Little's Law as well as the Central Limit Theorem. So before anyone says Little's Law applies to software development, they need to show these conditions exist.
One Final Observation
The slicing proposed by some in the agile community might create the conditions for Little's Law to work. But the effort to slice the stories into equal sized - or at least independent, identically distributed - work sizes for the entire project duration seems like a lot of work. Especially when there are much easier ways to estimate the total work, total duration, and total cost.
But since this slicing paradigm appears to be anecdotal and untested across a wide variety of projects, domains, and sizes, the population sample size condition is unlikely to be met as well.
More research, based on actual analysis, needs to be done - and that research reviewed and tested - before the notion of mathematically slicing has much use outside of anecdotal examples.
Resources (of many)
Much of the objection to SAFe comes from its seemingly Top Down paradigm. Many agile voices object that this approach is not agile, in the way they define agile - individual teams making their own decisions about what to do with their customer.
The domain of this bottom up approach is usually not well defined, other than the classic eXtreme Programming or the Agile Spectrum of Guy Strelitz where Co-Hacking is on the left, where the developers live by the pure agile manifesto.
But what happens when agile is applied to an enterprise development effort - one where the business needs define capabilities that are not emergent, but rather are needed to fulfill the business strategy or the mission of the organization? Then another paradigm emerges. One where higher order questions, frameworks, framing assumptions, governance, and other externalities trump the needs of the individual team.
Here's one approach that has served us well over time.
The statement is a bit off, since it's the Capabilities that are defined by the business. These capabilities are then turned into requirements, which may in fact emerge, which themselves are turned into working software. Starting with the capabilities, an enterprise software development effort means re-looking at the agile manifesto statements.
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
Business people and developers must work together daily throughout the project.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development.
The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity - the art of maximizing the amount of work not done - is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
So What Does All This Mean?
Without a domain, hard to assess the applicability and appropriateness of much of anything.
What this really means is that the Scaled Agile Framework is the place to start for the enterprise.
The question is two-fold: can the customer accept the release into use, and does the customer have the ability to make use of the incremental capabilities of these releases?
Let's start with the incremental release. I know the picture to the left is considered a metaphor by some. But as a metaphor it's weak. Let's look at a few previous posts: Another Bad Agile Analogy and Use, Misuse, and Danger of Metaphor. Each of these presents some issues with using metaphors.
But let's be crystal clear here. Incremental development in the style of the bottom picture may be a preferred method, once the customer agrees. Much of the rhetoric around agile assumes the customer can behave in this way, without looking outside the anecdotal and many times narrow experiences of those making that suggestion. For agile to succeed in the enterprise and mission critical product and project domain, testing the applicability of both pictures is needed.
Ask the customer if they are willing to use the skateboard while waiting for the car? Otherwise you have a solution looking for a problem to solve.
Now to the bigger issue. In the picture above, the top series is a linear development and the bottom an iterative or incremental one, depending on where you work. Iterating on the needed capabilities to arrive at the car, or incrementally delivering a mode of transportation.
The bottom increment shows 5 vehicles produced by the project. The core question that is unanswered is: does the customer want a skateboard, scooter, bicycle, motorcycle, and then a car for transportation? If yes, no problem. But what if the customer actually needs a car to conduct business, drive the kids to school, or arrive at the airport for a vacation trip?
The failure of this metaphor - and of most metaphors - is that they don't address the reason for writing software for money:
Provide capabilities for the business to accomplish something - Capabilities Based Planning
The customer didn't buy requirements, software, hardware, stories, features, or even the agile development process. They bought a capability to do something. Here's how to start that process.
Here's the outcome for an insurance provider network enrollment ERP system.
Producing skateboards, then scooters, then bicycles, and then finally the car isn't going to meet the needs of the customer if they want a car or a fleet of cars. In the figure above, the Minimal Viable Features aren't features, they are capabilities. For example, the statement this is a minimal viable product is likely a good description of a Beta Feature. It could be connected to a business capability, but without a Capabilities Based Plan as above, we can't really tell.
So How Did We Get In This Situation?
Here's a biased opinion, informed by my several decades of experience writing code and managing others who write code: we're missing the systems engineering paradigm in commercial software development. That is, for software development of mission critical systems - and Enterprise IT is an example of mission critical systems.
Here's some posts:
The paradigm of Systems Engineering fills thousands of pages and dozens of books, but it boils down to this.
You need to know what DONE looks like in units of measure meaningful to the decision makers. Those units start with Measures of Effectiveness and Measures of Performance.
Each of those measures is probabilistic, driven by the underlying statistical processes of the system. This means you must be able to estimate not only cost and schedule, but how that cost and schedule will deliver the needed system capabilities, measured in MOEs and MOPs.
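A minimal sketch of treating one such measure probabilistically, with invented distributions for the contributors to a hypothetical Measure of Performance:

```python
import random

random.seed(3)

# Hypothetical MOP: end-to-end transaction time must be at or below 2.0 seconds.
# Each contributor's latency is uncertain, so the MOP itself is a random variable.
def one_trial() -> float:
    db    = random.triangular(0.3, 1.0, 0.5)   # low, high, mode (seconds)
    logic = random.triangular(0.2, 0.8, 0.4)
    net   = random.triangular(0.1, 0.6, 0.2)
    return db + logic + net

trials = [one_trial() for _ in range(20_000)]
p_meet = sum(t <= 2.0 for t in trials) / len(trials)
print(f"Probability of meeting the 2.0 s MOP: {p_meet:.0%}")
```

Producing this number is an estimate; without it there is no statement of confidence that the needed capability will be delivered.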
Software development is microeconomics. Microeconomics is about making decisions - choices - based on knowing something about the cost, schedule, and technical impacts of that decision. In the microeconomics paradigm, this information is part of the normal business process.
This is why the conjecture that you can make decisions in the absence of estimating the impacts of those decisions ignores the principles of business. Or when we hear the notion that when numbers are flying around in an organization they lay the seeds for dysfunction, we need to stop and think about how business actually works. Money is used to produce value, which is then exchanged for more money. No business will survive for long in the absence of knowing about the numbers contained in the balance sheet and general ledger.
This book should be mandatory reading for anyone thinking about making improvements in what they see as dysfunctions in their work environment. No need to run off and start inventing new untested ideas; they're right here for the using. With this knowledge comes the understanding of why estimates are needed to make decisions. In the microeconomics paradigm, making a choice is about opportunity cost. What will it cost me to NOT do something? The set of choices that can be acted on given their economic behaviour. Value produced from the invested cost. Opportunities created from the cost of development. And other trade space discussions.
To make those decisions with any level of confidence, information is needed. This information is almost always about the future - return on investment, opportunity, risk reduction strategies. That information is almost always probabilistically driven by an underlying statistical process. This is the core motivation for learning to estimate - to make decisions about the future that are most advantageous for the invested cost.
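A minimal sketch of using that probabilistic information to compare two choices, with invented cost and value ranges:

```python
import random
import statistics

random.seed(11)

def expected_roi(cost_range, value_range, n=20_000):
    """Expected ROI when both cost and value are uncertain (uniform ranges, for illustration)."""
    rois = []
    for _ in range(n):
        cost = random.uniform(*cost_range)
        value = random.uniform(*value_range)
        rois.append((value - cost) / cost)
    return statistics.fmean(rois)

# Two candidate investments, each described only by estimated ranges.
build = expected_roi(cost_range=(300_000, 500_000), value_range=(450_000, 700_000))
buy   = expected_roi(cost_range=(200_000, 250_000), value_range=(260_000, 330_000))

print(f"expected ROI - build: {build:.0%}, buy: {buy:.0%}")
# The choice with the better expected ROI (and an acceptable downside) is the one to fund.
# Making that choice requires the estimates that feed this comparison.
```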
That's the purpose of estimates, to support business decisions.
This decision making process is the basis of Governance, which is the structure, oversight, and management process that ensures delivery of the needed benefits of IT in a controlled way to enhance the long-term sustainable success of the enterprise.
The agile notion of delivering early, delivering often is a wonderful platitude, but it ignores the underlying business rhythm for accepting the software features into productive use, driven by the dynamics of any business or market channel. Here are some examples of business rhythms I've worked in.
In many discussions of process there is a proposed solution to a problem in the absence of domain and context. Over generalization is usually the result. This is so common in the agile development world, that I built a short briefing to communicate the issues with making suggestions in the absence of domain and context.
The discussion around what processes are applicable in what domain is complex and convoluted. This is painfully obvious in the #NoEstimates discussion where domain and context are many times missing.
One place to start is to develop a taxonomy of project and business types. The presentation below is my attempt to do that, using the paradigm I know best - flying machines. I've been in Page 11 and page 25.
With the notion of governance as a starting point, here's some guidance on applying governance once the project moves beyond the solo or small self-contained team.
These participants play off each other, react to emergent streams of melody, contribute their own special talent to the music, and pretty much work in a self-directed manner over the course of the performance.
While I'm not a fan of analogies, this is a useful one for the purpose here. There are certainly domains where the Jazz analogy describes what is going on in the trio in the picture to the left.
But what about other music? Music that is just as creative, just as moving, just as impactful to the listener. Beethoven's Ninth Symphony's Ode to Joy, with the poem from Friedrich Schiller's 1785 work, is an example.
Words that have moved nations and populations. Following Beethoven's Ninth was a movie we saw over the weekend that described this result.
In the Ninth, each performer has a score to follow, led by the conductor, but also by the concert master and the senior players in each section. The vocals are also led by a senior performer.
In the software business there are likely similar domains and projects. Ones that can be improvised and ones that require conductors, a concert master, and players who follow the score.
In both cases - and this is where the analogy falls apart - the players are highly skilled, experienced in the art, having played the basic themes thousands of times before improvising or following the score.
The jazz performance is not made up as it goes. OK, fusion is, but that crap makes my head hurt. Melodies and rules for chords are practiced for 10,000 hours (Gladwell); relationships between the players have magical connections not available to mere mortals. The Jazz Trio and the Berlin Philharmonic are populated with highly skilled and experienced professionals. We've all heard our children play in the school band and know what that sounds like. All the platitudes in the world about agile axioms are of no worth without the necessary capabilities to actually get the work done properly.
Applying the notion that agile software development is like jazz makes as much sense as saying I can sit in the 3rd chair of the trombone section (my high school band position) and play my contribution to Beethoven's Ninth without a Curtis Institute degree in performance and 10 years experience (my aunt was a professional pianist in the late 1950's from Curtis).
It's not gonna happen - in both analogies - without the prerequisites of professional performance capabilities. Otherwise the result sounds like we're back in High School Music class with Mr. Meach (my teacher).
So how long will it take us to become capable of performing at the level needed to not sound like high school kids in the marching band? I don't know; let's make an estimate.
These 5 questions need credible answers in units of measure meaningful to the decision makers.
What Does All This Mean?
With these top level questions answered, many approaches are available, no matter what the domain or technology. But in the end, if we don't have answers, the probability of success will be reduced.