And the Principles, Practices, and Processes to Increase Probability of Success
Let's start with the simple idea that Requirements are required items from the project: Capabilities that the project must produce in order to earn back its cost. User Stories are a narrative of what the User would like the resulting software to do in some way. It's a story of how a problem or a need could be solved; it's not a requirement that it be solved in some specific manner. Requirements state the specific outcome.
In the software development world...
Capabilities describe how the produced system will enable the value to be delivered as planned. Requirements describe how the Capabilities will be delivered. Stories are narrative as to how those Capabilities will be used and provide the raw data as to how the Requirements will emerge.
Now For The Punch Line
Using User Stories to predict project performance is also NOT a role of User Stories. User Stories are vague narratives of a customer's desire. They can change, and likely will change as the project moves forward. They are ersatz models of what the software will do when it is done. The naive notion that ...
You can easily predict the release date of a project by just counting the number of Stories
... can only be the case If and Only If the User Stories never change in their implementation detail, are near-exact representations of what the user wants in terms of Capabilities and the Requirements needed to provide those Capabilities, and, most importantly, the future is nearly exactly like the past - so the performance that happened in the past will also happen in the future.
As well, there can be no emergent risk, no change in the effectiveness of labor; nothing can change. This is not only naive, it ignores - some would say willfully ignores - the fact that all project work is uncertain. In the presence of this uncertainty, the probability of meeting the planned need date for the planned cost (remember, ROI is a core business assessment of the project's performance), and the probability that the needed Capabilities will in fact be delivered as needed, cannot be assessed using a measure that is itself vague, emerging, variable in undefined ways, and not actually representative of Physical Percent Complete for the work being performed.
I'm working on two programs where Agile at Scale is the development paradigm. When we start an engagement using other people's money, in this case the money of a sovereign, we make sure everyone is on the same page. When Agile at Scale is applied, it is usually applied on programs that have tripped the FAR 34.2/DFARS 234.2 thresholds for Earned Value Management. This means $20M programs are self-assessed and $100M-and-above programs are validated by the DCMA (Defense Contract Management Agency).
While these programs are applying Agile, usually Scrum, they are also subject to EIA-748-C compliance and a list of DIDs (Data Item Descriptions) and other procurement, development, testing, and operational guidelines. This means there are multiple constraints on how progress to plan is reported to the customer - the sovereign.
These programs are not 5 guys at the same table as their customer exploring what will be needed for mission success when they're done. These programs are not everyone's cup of tea, but agile is a powerful tool, in the right hands, for Software Intensive System of Systems on Mission Critical programs. Programs that MUST deliver the needed Capabilities, at the Needed Time, for the Planned Cost, within the planned Margins for cost, schedule, and technical performance.
One place to start to improve the probability that we're all on the same page is this reading list. This is not an exhaustive list, and it is ever growing. But it's a start. It's hoped this list is the basis of a shared understanding that while Agile is a near universal principle, there are practices that must be tailored to specific domains. And one's experience in one domain may or may not be applicable to other domains.
Like it says in the Scrum Guide.
Scrum (n): A framework within which people can address complex adaptive problems, while productively and creatively delivering products of the highest possible value.
And since Scrum is an agile software development framework, Scrum is a framework, not a methodology. Scrum of Scrums and Agile At Scale - especially Agile at Scale inside EIA-748-C programs - have much different needs than 5 people sitting at the same table with their customer with an emerging set of requirements, where the needed capabilities are vague until they appear.
One of the classes every aspiring grad student has to take is research methods. This class teaches the PhD hopefuls (I didn't make the cut and got a consolation prize of an MS) all about doing research and preparing to be a real scientist. A topic in this class is the literature search. This makes sure you find out whether your clever idea for a research topic, in case your advisor hasn't gotten around to actually talking to you, has already been taken, researched, and solved. This is one problem in the physics world: you need an original idea. Replicating old ideas doesn't get you very far.
Here's a start of a literature search on merging Agile at Scale with Earned Value Management. I haven't gotten to the European and Far East journals yet. Instead of a list, I'll just type this once and repurpose the resources here. This PDF is the Resources section of a briefing being used with our clients who are integrating Agile into EVM programs. Go to the LinkedIn Slideshare site - the LI logo in the lower right - to open the PDF and follow the links.
A popular Agile phrase goes like this.
...it's important to know what Not to build
This raises the question: who's building things that other people don't want? Where's the process for assessing what people want? It seems pretty obvious that if you're building things that you shouldn't be building, you're not doing a very good job of requirements management.
Software requirements are a weak link in the chain of software engineering technologies. Requirements are usually incomplete and change at rates in excess of 2% per calendar month. For many years one common definition of quality has been “conformance to requirements.” However this definition ignores the fact that some requirements are hazardous or “toxic” and should not be included in software applications. Since clients themselves may not realize the dangers of toxic requirements, software engineers have a professional and ethical responsibility to point out the hazards of dangerous requirements and ensure that they are safely eliminated. An example of a “toxic requirement” is the famous Y2K problem which did not originate as a coding bug but rather as an explicit but dangerous user requirement.
Capers Jones, Vice President and CTO, Namcook Analytics LLC
So how is it decided whether a requirement is toxic or not? Well, it's pretty straightforward:
This is the role of a requirements elicitation and management process. One based on an actual process, not some made-up, cockamamie approach of writing things down on sticky notes and asking the next passing person what they think. An actual, true-to-life requirements engineering process.
This starts with a simple concept:
A requirement is a statement that identifies a capability, characteristic, or quality factor for a system for it to have value and utility to a customer or user.
If we don't know what these capabilities are, or what their attributes, characteristics, and other factors are, we won't recognize them before the money runs out. Can we over-specify the requirements? Perhaps we can. Can we under-specify the requirements? It's done all the time, to the detriment of the project. We need to find a way to specify the requirements in a sweet spot. Not too much, not too little.
Research shows the average investment in requirements elicitation and management is 2% to 3% of the total project cost. Research also shows this is wholly inadequate for success and is one of the root causes of failure on any non-trivial project. This research shows that projects that expended 2% to 3% on requirements experienced an 80% to 200% cost overrun. Those projects that expended 8% to 14% on requirements elicitation and management experienced 0% to 50% cost overruns.
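The research figures above can be restated as a simple lookup. This is only an illustration of the cited ranges; the function name and the behavior outside the two reported bands are my own assumptions, not part of the research:

```python
# Restating the cited research as a lookup table. The two (low, high)
# overrun ranges come from the research summarized in the text; the
# function name and the handling of out-of-range inputs are illustrative.
def expected_overrun_range(req_investment_pct: float) -> tuple[float, float]:
    """Return the (low, high) cost-overrun percentages observed for a
    given requirements investment, as a percent of total project cost."""
    if 2.0 <= req_investment_pct <= 3.0:
        return (80.0, 200.0)   # typical, underfunded requirements effort
    if 8.0 <= req_investment_pct <= 14.0:
        return (0.0, 50.0)     # adequately funded requirements effort
    raise ValueError("outside the ranges reported by the research")

print(expected_overrun_range(2.5))   # underfunded: 80% to 200% overrun
print(expected_overrun_range(10.0))  # well funded: 0% to 50% overrun
```

The spread between the two bands is the point: a few percent more spent on requirements up front trades away a potential doubling or tripling of total cost.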
We must be crystal clear here. Requirements may emerge, but the needed capabilities at the needed time are a critical success factor for any project, no matter the domain. As Yogi reminds us: If you don't know where you're going, you might not get there.
I'm not going to tell you how to develop requirements. There are many ways, starting with the guidance listed below and endless other books, articles, and papers. But if there is a notion that requirements are not needed - let's let them emerge and we'll start coding to see what comes out - it's going to be a rough ride before the money runs out.
 The Requirements Engineering Handbook, Ralph R. Young, Artech House, 2004.
 Requirements Engineering: A Good Practice Guide, Ian Sommerville and Pete Sawyer, John Wiley & Sons, 1997.
 Succeeding with Agile: Software Development Using Scrum, Mike Cohn, Addison-Wesley, 2010.
 Project Management the Agile Way: Making it Work in the Enterprise, John C. Goodpasture, J. Ross, 2010.
 Agile Estimating and Planning, Mike Cohn, Prentice Hall, 2006.
 Agile Project Management for Government, Brian Wernham, Maitland & Strong, 2012.
There are misinformed notions floating around the agile community that products are better than projects. That somehow - unspecified and unsubstantiated as usual - projects are undesirable and we should focus instead on products and their value - again, with no specification of how to measure that value, and no substantiation that value measurement isn't what projects do as well.
A Project is a temporary endeavor undertaken to create a unique product or service. There is a definition (possibly emerging) of what needs to be delivered and a target date (with a probabilistic confidence) when it needs to be delivered. A project is unique in that it is not a routine operation (Server Ops is not a project, it's operations). A Project is a specific set of operations designed to accomplish a stated goal. The project team often includes people who don't usually work together - sometimes from different organizations and across multiple geographies. Projects, for example:
Project management is the application of the knowledge, skills, tools, and techniques to the Project's activities to meet the project requirements.
A Product is anything that can be offered to a market to solve a problem or satisfy a want or need. It has a life cycle with multiple stages. A product is conceived, developed, introduced and managed in the market, and retired when the need for it diminishes. A product can only be developed within the context of a project, and multiple projects can occur within a product’s life cycle.
There are multiple roles in Product development.
Product managers and project managers work closely together in high-performance organizations. And both work with the broader product team and executive leaders.
When I hear ...
Project: laced with untested assumptions, based on industrial thinking. Product: cont. tests assumptions, based on lean/agile good practice.
I'm taken aback at the serious lack of understanding of the roles, principles, processes, procedures, and governance frameworks for spending other people's money to produce products.
A simple summary. A Product is an entity provided to a user. It can be anything: a physical product that you hold in your hands, a software application, or a service that you are delivering. A Project is the series of activities to produce the defined outcome that turns into the Product.
† I used to work in the Product Lifecycle Management business. PLM integrates people, data, processes and business systems.
All project work is random work. There are three core random variables on all projects, shown below. There are sub-variables as well as all the ...ilities involved in project work, but let's start with the major three.
Fixing 1, 2, or all 3 of these random variables does NOT make the randomness go away.
These variables are random, and all variables on projects are random, because of uncertainty. This uncertainty (as mentioned in many other posts) comes from two sources. The first is aleatory uncertainty: the underlying natural randomness of all project activities. This is called irreducible uncertainty. It can't be reduced. Nothing you can do will reduce it. It's there and will always be there. This is a statistical process. The only way to work in the presence of irreducible uncertainty is to have margin. Cost margin, schedule margin, technical margin.
The second is epistemic uncertainty. This is uncertainty that is event based. It's there but can be handled in some ways. Those ways can include buying two of everything in case one breaks, having redundancy in other forms - a backup site for the data center, testing, prototypes, and other activities that provide a Plan B when the probability that something will go wrong becomes true and that thing that went wrong is no longer a probability but has turned into an Issue.
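To make the aleatory side concrete, here's a minimal Monte Carlo sketch. Every number is hypothetical (ten tasks, a right-skewed triangular duration distribution); the point it illustrates is why a schedule built from most-likely durations needs margin:

```python
import random

random.seed(42)

# Hypothetical project: 10 tasks, each "most likely" 10 days, with
# aleatory (irreducible) variability modeled as a right-skewed triangular
# distribution: overruns can be larger than underruns.
TASKS = 10
MOST_LIKELY = 10.0     # days per task
LOW, HIGH = 8.0, 16.0  # plausible range per task

def simulate_duration() -> float:
    """One possible project outcome: sum of 10 random task durations."""
    return sum(random.triangular(LOW, HIGH, MOST_LIKELY) for _ in range(TASKS))

trials = sorted(simulate_duration() for _ in range(20_000))

deterministic_plan = TASKS * MOST_LIKELY  # the naive "fixed" 100-day schedule
p_meet_plan = sum(t <= deterministic_plan for t in trials) / len(trials)
p80 = trials[int(0.80 * len(trials))]     # completion date at 80% confidence

print(f"Probability of meeting the 100-day plan: {p_meet_plan:.1%}")
print(f"80th-percentile completion: {p80:.1f} days")
print(f"Schedule margin needed for 80% confidence: {p80 - deterministic_plan:.1f} days")
```

Because each task's mean duration exceeds its most-likely duration, the sum of most-likely values is almost never achieved; the gap between the deterministic plan and the 80th percentile is the schedule margin the text is talking about.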
So Here's the Real Problem
When we hear we don't need to estimate, I can fix time and budget, that doesn't make the randomness go away. It just sets an upper bound on what you CAN spend and when you HAVE TO BE DONE. The uncertainties that create the randomness are still there. These fixed-time and fixed-budget plans leave open the technical randomness as well. The time and budget are still random inside the constraints set by the project.
There's no getting around this, no matter how often someone says you can. Those someones were asleep in the engineering probability and statistics class. Here's the classic engineering course we were all forced to take as physics grad students: Probability and Statistics. †
This is basic probability and statistics of project work. The probability that something will turn out unfavorable is created by epistemic uncertainty. The statistical variances of everyday life are created by aleatory uncertainty.
Ignoring these uncertainties means it's going to turn out bad for those paying for your work
You need margin to protect from irreducible uncertainty. You need specific actions to protect from reducible uncertainty. So you can in fact fix the cost and schedule IF AND ONLY IF (IFF) you have margin and risk buy-down plans. When someone says we've fixed the duration and the budget, two things come to mind.
A third notion is the killer notion
When you fix time and cost, have sufficient risk buy-down activities to reduce the epistemic uncertainty that creates the probability of something going wrong to an acceptable level, and have sufficient margin to cover the expected overruns in duration, you still have the technical reducible and irreducible uncertainties: that the thing you're building won't work, won't be what the customer wants, or will cause other issues (these are called externalities in the economics of software development), plus other unknowns, possibly unknowable at the beginning of the project.
When you fix time and/or budget, and don't have protections for reducible and irreducible uncertainty, you're going to be late and over budget, and you have willfully ignored those outcomes. Oh, and by the way, your little gadget is not likely to meet the needs of those paying you either.
These immutable conditions (aleatory and epistemic uncertainty) are completely ignored in agile development. Agile provides rapid feedback to the risk management processes of software development. But agile is NOT a risk management process in and of itself. That's a topic for another time.
If you think you have no uncertainties, reducible or irreducible, and have fixed the budget and duration and maybe even the outcomes, you're likely on a de minimis project. Good luck with that.
† We had to take a few courses outside our major, and this was another: Classical Electrodynamics. This was an engineering course. We had a foundation of electrodynamics from the physics point of view. In that view everything can be solved through Maxwell's equations, a simple set of partial differential equations describing how electromagnetism works. When asked to give a talk on antenna theory in the engineering course, a friend (I was too afraid at the time) went to the chalkboard (yes, no whiteboards) and wrote down Maxwell's equations for the reciprocity theorem of antennas in free space. The Professor at the back of the room told him (Steve) to sit down. We're engineers, not physicists; we want to know HOW things work, not WHY things work.
There are enough opinions to paper the side of a battleship. With all these opinions, nobody has a straightforward answer that is applicable to all projects. There are two fundamental understandings, though: (1) everyone has a theory, and (2) there is no singular cause that is universally applicable.
In fact, most of the suggested causes of project failure have little in common. With that said, I'd suggest there is a better way to view the project failure problem.
What are the core principles, processes, and practices for project success?
I will suggest there are three common denominators consistently mentioned in the literature that are key to a project’s success:
Of the 155 defense project failures studied in “The core problem of project failure,” T. Perkins, The Journal of Defense Software Engineering, Vol 3. 11, pp 17, June 2006.
From this research these numbers can be summarized into two larger classes
So where do we start?
Let's start with some principles. But first a recap
Five Immutable Principles of Project Success
With these Principles, here are five Practices that can put them to work
The integration of these five Practices is the foundation of Performance-Based Project Management®. Each Practice stands alone and at the same time is coupled with the other Practice areas. Each Practice contains specific steps for producing beneficial outcomes for the project, while establishing the basis for overall project success.
Each Practice can be developed to the level needed for specific projects. All five Practices are critical to the success of any project. If a Practice area is missing or poorly developed, the capability to manage the project will be jeopardized, possibly in ways not known until the project is too far along to be recovered.
Each Practice provides information needed to make decisions about the flow of the project. This actionable information is the feedback mechanism needed to keep a project under control. These control processes are not impediments to progress; they are the tools needed to increase the probability of success.
Why All This Formality? Why Not Just Start Coding and Let the Customer Tell Us When to Stop?
All business works on managing the flow of cost in exchange for value. All business has a fiduciary responsibility to spend wisely. Visibility to the obligated spend is part of Managerial Finance. Opportunity Cost is the basis of Microeconomics of decision making.
The 5 Principles and 5 Practices are the basis of good business management of the scarce resources of all businesses.
This is how adults manage projects
In our domain, Jon Katzenbach's definition of a team informs how we interact with our project members. A Team is defined as ...
A group of qualified individuals who hold each other mutually accountable for a shared outcome - Katzenbach, Wisdom of Teams
It has been suggested that ...
The Estimate-Commitment relationship stands in opposition to collaboration. It works against collaboration. It supports conflict, not teamwork.
This position is counter to our Katzenbach-based teaming processes. The conjecture that estimates work against collaboration, rather than for it, removes the mutual accountability condition for team success.
This is like speaking with our builder about the bedroom remodel project and him saying...
Oh here's my estimate to complete your bedroom remodel, but I have no intention of meeting that estimate.
Where we work, Estimates provide clarity and understanding of the mutual accountability for the shared outcome between the group of qualified individuals.
Where we work, and apply Agile software development processes, we've adopted Seven Pillars of Program Success. We work hard, every day, to: †
Your domain of course will be different. You or your team may not work on projects that must succeed on or before the needed date, at or below the needed budget, with the needed capabilities. That is, you can show up late, over budget, and with missing capabilities, and the customer will consider that OK. And just to be clear, the value of incremental delivery is defined by the receiver of those capabilities, not the producer. Ask the customer if the partial outcomes can actually be put to productive use in the business environment. Capabilities Based Planning defines which capabilities are needed in what order to provide business value.
Of course we show up late, over budget, and with missing capabilities many times - so no need to point that out. Any number of reports, including bogus reports, show this. But a critical understanding is that we know we're going to be late, we know we're going to be over budget, and most of the time we know - every reporting period - that the delivered capabilities will not meet the intended specifications, and we have a plan (maybe not the right plan) to fix it.
Risk Management is How Adults Manage Projects - Tim Lister
In our domain, being late, over budget, and delivering less than the required capabilities is never acceptable to the customer. Are we late, over budget, and having performance issues? Of course. It's called development. But we know it, have visibility into the root causes, and have corrective action plans. This visibility is part of the process. Without a steering target and actuals, no error signal can be generated to be used for course correction. One of our PMs was a Navy navigator on an aircraft carrier. The commanded heading was required for him to carry out his navigation processes. Without estimates of the impediments to be encountered along the way, the course to the desired destination, and the productivity of progress, there is no way to know which path to take to that destination. By the way, measuring past performance and projecting that as future performance only works if the future conditions are like the past conditions. This is rarely the case on any sufficiently complex project.
Yogi Berra reminds us — If you don't know where you are going, you'll end up someplace else.
This poor performance is actually reported in a database for review every reporting period (minimally monthly) and used to adjust award fees and the assessment for the next job, which significantly impacts the selection process. This is called Closed Loop Control.
When there are no Estimates to Complete (ETC) or Estimates At Completion (EAC), there is an Open Loop Control condition, and the corrective actions needed (but not always effective) have no steering target with variance to steer toward to move the project back to GREEN.
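For readers unfamiliar with the acronyms, here's a minimal sketch of the standard ETC/EAC arithmetic. The formulas are the common EIA-748-style ones; the dollar figures are invented purely for illustration:

```python
# Standard Earned Value quantities (all numbers are hypothetical,
# not from any real program).
BAC  = 1_000_000.0  # Budget At Completion
BCWS =   400_000.0  # Budgeted Cost of Work Scheduled (planned value to date)
BCWP =   350_000.0  # Budgeted Cost of Work Performed (earned value to date)
ACWP =   420_000.0  # Actual Cost of Work Performed (actual spend to date)

CPI = BCWP / ACWP          # Cost Performance Index (< 1.0 means overrunning)
SPI = BCWP / BCWS          # Schedule Performance Index (< 1.0 means behind)
ETC = (BAC - BCWP) / CPI   # Estimate To Complete, adjusted by cost efficiency
EAC = ACWP + ETC           # Estimate At Completion: the steering target

print(f"CPI={CPI:.2f}  SPI={SPI:.2f}")
print(f"ETC=${ETC:,.0f}  EAC=${EAC:,.0f}  variance at completion=${BAC - EAC:,.0f}")
```

With these numbers, CPI is about 0.83, so the CPI-adjusted EAC lands at $1.2M against a $1.0M budget: the error signal the text describes, available every reporting period, telling management how far off plan the program will finish if nothing changes.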
So estimates don't stand in the way of cooperation, they are the foundation of mutual accountability for the shared outcome based on cooperation.
† These seven pillars are derived from VADM Joseph Wendell Dyer, USN (Retired), the Navy's chief test pilot, F/A-18E/F Program Manager, and Commander, Naval Air Systems Command, plus ten years as an executive at iRobot Corporation. Many of our projects are not VADM Dyer's, but they are still mission critical - manifestly important to our customers' business success. If they were to fail - cost too much, show up beyond the business need date, or not provide the needed capabilities - the success of the business is in jeopardy. Again, your domain may be significantly different. Use as appropriate.
Customers buy capabilities to accomplish a business goal or successfully complete a mission. Deliverables are the foundation of the ability to provide this capability. Here's how to manage a project focused on Deliverables.
When I hear about requirements churn, bad requirements management - which is really bad business management - and emergent requirements that turn over 20% a month for a complete turnover in 5 months, it's clear there is a serious problem in understanding how to manage the development of a non-trivial project.
Let's start here: what capabilities does this project need to produce when it is done? The order of the capabilities is dependent on not only the business's ability to absorb each capability, but also the value stream of those capabilities in support of the business strategy.
The picture at the bottom shows a value stream of capabilities for a health insurance provider network system. The notion of INVEST in agile has to be tested for any project. Dependencies exist and are actually required for enterprise projects. See the flow of capabilities chart below. Doing work in independent order simply would not work.
Writing software for money is a Closed Loop Control System.
Without the Desired State, the Current State, the comparison of the two, and the Error Signal, the project is running open loop. We'll arrive when we arrive, at the rate of progress we are performing at, for the cost we are consuming. There is no information available to show what the needed performance of cost, schedule, or value production must be to arrive on time, on budget, and on value (or near enough to call it close).
And when you hear about control systems that don't follow the picture at the top, they're not Closed Loop. They may be Open Loop, but they are not Closed Loop.
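The closed loop described above can be sketched in a few lines: compare the Desired State to the Current State each reporting period and generate the Error Signal that drives corrective action. The monthly percent-complete numbers below are hypothetical:

```python
def error_signal(desired: float, current: float) -> float:
    """The error signal of closed-loop control: Desired State minus
    Current State. Nonzero means a course correction is needed."""
    return desired - current

# Hypothetical plan vs. actuals: Physical Percent Complete by month.
plan   = [10, 25, 45, 70, 100]
actual = [ 8, 20, 38, 60,  85]

for month, (desired, current) in enumerate(zip(plan, actual), start=1):
    err = error_signal(desired, current)
    status = "corrective action needed" if err > 0 else "on or ahead of plan"
    print(f"Month {month}: planned {desired}%, actual {current}%, "
          f"error {err:+.0f}% -> {status}")
```

Remove either the plan (the Desired State) or the measurement of actuals (the Current State) and the subtraction cannot be performed; that is exactly the Open Loop condition: no error signal, so nothing to steer by.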
Orion launched today and was recovered after two orbits. The tests of the launch system, Pad Abort system, and Heat Shield were the main purposes of the flight.
I worked the proposal, after coming off the winning proposal for the Hubble Robotic Service Mission. The Crew Exploration Vehicle was the original name of the flight vehicle. The Integrated Master Plan and Integrated Master Schedule described the increasing maturity of the deliverables for the spacecraft and its flight support systems. After the contract win, I moved to the flight avionics firm and defined the IMP/IMS and project performance management processes for that major subcontractor. When you get to minute 21:17, the Tracking Data Relay Satellite is mentioned. I worked that project as a new graduate student many decades ago.
Starting back on TDRSS, agile - meaning emerging requirements, test driven development, direct customer feedback on short iterations - was deployed in the development process with rolling waves, 44-day-rule Work Packages, and emergent technical requirements derived from Mission Capabilities.
Here's the long version of the launch to orbit.
After two orbits, Orion came home. The double boom is the sonic boom. Tests of the heat shield will confirm if it functioned properly.
Recently a statement was made about agile and complexity, conjecturing that if the project is too complex for a physical board - a place to put the stickies for the stories - then we've missed opportunities to simplify. Possibly not realizing that complexity, as well as complex systems, are the norm in many domains, and that complexity management processes using tools - rather than manual means - are also the norm.
If your Agile planning needs are too complex for a physical board, you've probably missed opportunities to simplify / improve.
When I suggested that agile and agile tools are used to deal with complex problems in these environments, without the need to reduce that complexity, there was a conversation of sorts that suggested...
I'd be surprised to hear Orion was using a COTS Agile project management tool in a significant way
Some Necessary Complexity
On the Hubble mission, there is a Service Mission Assurance Process that reveals some of the complexity of the System of Systems found in space flight. For example, the Interface Control processes for the payload on STS-125.
External knowledge of what tools were used, what processes were applied, and how the flight avionics software for Orion was converted from the 777 suite to the spacecraft suite, tested, altered to user needs, simulated, emulated, verified, and validated on rolling waves, on 44-day iteration cycles, could only have been obtained if you were actually in the building in the vendor's shop.
But there are other surprises in the business of space flight. A few good places to start include:
Beyond the outsiders' comments of surprise, inside space and defense firms agile tools from Rally, VersionOne, and JIRA are used in a wide variety of domains, from embedded systems to intelligence systems, where the requirements don't come from the users - they come from the enemy. Here's an example of agile in the INTEL business.
Maybe those surprised by the many different applications of the principles of agile - developed long before the Agile Manifesto - missed those processes in Building O6, Sepulveda Blvd, Redondo Beach, circa 1978.
In The End
There are numerous approaches to applying agile development in a wide variety of domains. I work in a domain where Systems Engineering and Earned Value Management are the starting point and Agile is used to develop code, guided by EIA-748-C and DID 81861.
In these environments, development of software is incremental and iterative, with emerging requirements and stable capabilities. These programs are complex, and tools are the basis of success for managing all the moving parts of the program. Rarely is everyone in the same room, since these are System of Systems programs. As well, Integration and Test are done by external sources - V&V for flight safety. So many of the processes found in small commercial projects are not applicable to programs in our domain.
To suggest there is but one way to reduce complexity - putting all the stories on cards on the wall - is a bit naive in the absence of establishing the external needs of the project first, then deciding what processes to apply.
Some background on applying agile in the DOD can be found at:
Domain first, Context second, Only then Process
When we read on a blog post that estimates are not meaningful unless you are doing very trivial work, † I wonder if the poster has worked in any non-trivial software domain. Places like GPS OCX, SAP consolidation, Manned Space Flight Avionics, or maybe Health Insurance Provider Networks. Because without some hands-on experience in those non-trivial domains, it would be hard to actually know what you're talking about when it comes to estimating the spend of other people's money.
Maybe some background on estimates for non-trivial work will shed light on this ill-informed notion that only trivial projects can be estimated.
These are a small sample of papers from one journal on software estimating for mission critical, sometimes National Asset, projects.
Go to CrossTalk, The Journal of Defense Software Engineering, and search for "estimating" to get 10 pages of 10 articles each on this topic alone. Estimating in non-trivial domains is well developed and well documented, with many examples of tools, processes, and principles.
If You Do Your Homework, the Test is much easier.
It could be that the original poster has little experience in mission critical, national asset, enterprise class, software intensive systems. Or it could be the poster simply doesn't know what making estimates for projects that spend other people's money, many times significant amounts of money, is all about.
And of course most of the problems described as the basis for Not Estimating - the illogical notion that if we can't do something well, let's stop doing it - start with not knowing what Done Looks Like in any units of measure meaningful to the decision makers.
So start here with my favorite enterprise architect blog and his list of books when you follow the link to the bottom.
So when you have some sense of what DONE looks like in terms of capabilities, the estimating process is now on solid ground. From that solid ground you can ask: have we done anything like this before? Or better yet, can we find someone who has done something like this before? Or maybe, can we look around to see what looks like our problem and figure out how long it took them by simply asking them?
If the answer to any of those questions is NO and you're NOT working in a research and development domain, then don't start the project, because you're not qualified to do the work, you don't know what you're doing, and you're going to waste your customer's money.
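Those "have we done this before" questions can be made quantitative. Here's a minimal sketch, assuming we have durations from a handful of analogous past projects (the numbers below are hypothetical), that treats them as a reference class and reports a range rather than a point:

```python
import statistics

# Hypothetical durations (weeks) of past projects judged analogous to ours.
reference_class = [28, 34, 41, 30, 37, 45, 33]

mean = statistics.mean(reference_class)
stdev = statistics.stdev(reference_class)

# Report a range, never a point value: roughly the 80% band
# under a normal assumption (z is about 1.28).
low = mean - 1.28 * stdev
high = mean + 1.28 * stdev

print(f"Estimate: {mean:.0f} weeks, 80% band roughly {low:.0f} to {high:.0f} weeks")
```

The point of the sketch is the band, not the mean - the observed variability of past work is the first honest statement about future uncertainty.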
† Scroll to the bottom of http://zuill.us/WoodyZuill/category/estimating/ and search for "A Thing I Can Estimate," to see the phrase, and remember the questions and the answers above. If you're not answering those in some positive way, you're on a death march project starting day one, because you don't know what done looks like for the needed capabilities. Not the requirements, not the code, not the testing - that's all straightforward. Without some notion of what the system is supposed to do, you'll never recognize it if it were ever to come into view. And since the customer doesn't know either, all the money they're spending to find out has to be written off as IRAD or flushed down the toilet as a waste of time and effort in the end. And then you'll know why Standish (improperly) reports projects fail.
If we were setting out to build a home, we would first lay out the floor plans, grouping each room by function and placing structural items within each room according to their best utility. This is not an arbitrary process – it is architecture. Moving from home design to IT system design does not change the process. Grouping data and processes into information systems creates the rooms of the system architecture. Arranging the data and processes for the best utility is the result of deploying an architecture. Many of the attributes of building architecture are applicable to system architecture. Form, function, best use of resources and materials, human interaction, reuse of design, longevity of the design decisions, and robustness of the resulting entities are all attributes of well-designed buildings and well-designed computer systems.
In general, an architecture is a set of rules that defines a unified and coherent structure consisting of constituent parts and connections that establish how those parts fit and work together. An architecture may be conceptualized from a specific perspective focusing on an aspect or view of its subject. These architectural perspectives themselves can become components in a higher-level architecture serving to integrate and unify them into a higher-level structure.
The architecture must define the rules, guidelines, or constraints for creating conformant implementations of the system. While this architecture does not specify the details on any implementation, it does establish guidelines that must be observed in making implementation choices. These conditions are particularly important for component architectures that embody extensibility features to allow additional capabilities to be added to previously specified parts.  This is the case where Data Management is the initial deployment activity followed by more complex system components.
By adopting a system architecture motivation as the basis for the IT Strategy, several benefits result:
Peter Kretzman has a nice post in his series on #NoEstimates. Peter and I share a skepticism of "making decisions in the absence of estimating the cost and impact" of those decisions. In Peter's current post there is a quote that is telling.
Let’s use Ron Jeffries’ statement as an example of this stance:
“Estimates are difficult. When requirements are vague — and it seems that they always are — then the best conceivable estimates would also be very vague. Accurate estimation becomes essentially impossible. Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before. “
One of my 3 half time jobs is working in the space and defense program performance management domain, both embedded systems and enterprise IT systems. DOD is the largest buyer of ERP on the planet. In this domain we have a formal process for determining what went wrong. The department looking after this is called Performance Assessment and Root Cause Analysis (PARCA). PARCA provides Root Cause Analysis for programs that have gone Nunn-McCurdy, as we would say.
When you read the reports from RAND and the Institute for Defense Analyses on N-M breaches, requirements instability is among the top 5 root causes.
It seems to me - in my narrow minded program performance management view of the world - that unstable requirements being the reason for vague estimates is so obvious a problem, yet it has been completely ignored by the #NoEstimates advocates. It's like the olde saw:
Doctor, Doctor it hurts when I do this (make estimates in the presence of vague requirements). Then stop doing that!
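The root-cause claim can be illustrated numerically. A deliberately simplified Monte Carlo sketch (both distributions are hypothetical, not from any real program) shows that the vagueness of the estimate is driven by the vagueness of the scope, not by the act of estimating:

```python
import random

random.seed(1)

def schedule_samples(scope_low, scope_high, n=10_000):
    """Sample total duration: uncertain story count times uncertain weeks per story."""
    samples = []
    for _ in range(n):
        stories = random.uniform(scope_low, scope_high)     # scope (requirements) uncertainty
        weeks_per_story = random.triangular(0.5, 2.0, 1.0)  # productivity uncertainty
        samples.append(stories * weeks_per_story)
    return sorted(samples)

def p10_p90_spread(samples):
    """Width of the 10th-to-90th percentile band of the schedule distribution."""
    n = len(samples)
    return samples[int(0.9 * n)] - samples[int(0.1 * n)]

vague = schedule_samples(40, 120)  # requirements still vague: 40 to 120 stories
firm = schedule_samples(70, 90)    # requirements pinned down: 70 to 90 stories

print("P10-P90 spread, vague scope:", round(p10_p90_spread(vague), 1), "weeks")
print("P10-P90 spread, firm scope: ", round(p10_p90_spread(firm), 1), "weeks")
```

Tightening the scope interval narrows the schedule distribution; abandoning the estimate does nothing to either.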
The notion of Capabilities Based Planning is missing in many software organizations. So having vague requirements is a natural outcome of not having a definitive understanding of what Capabilities the system must provide, in units of measure meaningful to the decision makers. These units are:
Without these, requirements have no home, are vague, and therefore create the root cause of bad estimates.
So what would a logical person do when working on a project that spends other people's money, sometimes lots of other people's money? Not Estimate? Does that sound like the corrective action to the root cause of the problems with software project success shortfall?
Not to me. It's the doctor, doctor this hurts paradigm. So until the root cause is determined, the corrective actions identified and applied, there can be no credible solution to the estimating problem. And there is a huge estimating problem in our domain - just read the N-M reports at RAND and IDA (Google Nunn-McCurdy RAND or IDA to find them). Similar assessments of root causes can be found for enterprise IT from many sources.
The #NoEstimates advocates are attempting to solve the wrong problem with the wrong approach. They've yet to connect with the core process of writing software for money - the microeconomics of software development. Here's a starting point to address the root cause rather than the symptom. Fixing the symptoms does nothing in the end. It just spends money, with no actionable outcomes. And that would be very counter to the principles of Agile.
This is one of those pictures tossed out at some conference that drives me crazy. It's uninformed, ignores the disciplines of developing software for money, and is meant to show how smart someone is, without actually understanding the core processes needed to be knowledgeable of the topic - in this case the statistical processes of project work. Then the picture gets circulated, re-posted, and becomes the basis of all kinds of other misunderstanding, just like the Dilbert cartoons that caricature the problem but have no corrective actions associated.
It is popular in some circles of agile development to construct charts showing the strawman of deterministic and waterfall approaches, then compare them to the stochastic approaches and point out how much better the latter is than the former. Here's an example.
These strawman approaches are of course not only misinformed, they're essentially nonsense in any domain where credible project management is established, and the basis of the response to them is Don't Do Stupid Things on Purpose.
Let's look at each strawman statement for the Deterministic View in light of actual project management processes, either simply best practice or mandated practice.
The only explanation here is the intentional ignorance of basic science, math, engineering, and computer science.
In the Stochastic View there are equally egregious errors.
In the End
For some reason using charts like this one, re-posting Dilbert cartoons, and making statements using buzzwords - we're using Real Options and Bayesian Statistics to manage our work are my favorites - seems to be more common the closer we get to the sole contributor point of view. Along with look at my 22 samples of self-selected data with a ±70% variance as the way to forecast future performance.
It may be because sole contributors are becoming more prevalent. Sole contributors have certainly changed the world of software development in ways never possible in larger organizations. But without the foundation of good math, good systems engineering - and I don't mean "data center systems engineering," I mean INCOSE Systems Engineering - those sole contributor points of view simply don't scale.
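The arithmetic behind that skepticism is straightforward. Assuming 22 samples whose standard deviation is about 70% of the mean (the figures quoted above), a short sketch shows how little such a sample can say about the mean, let alone about the next observation:

```python
import math

n = 22
mean = 1.0           # normalize throughput to 1.0
stdev = 0.70 * mean  # the +/-70% variance claimed for the sample

# 95% confidence interval on the mean (t is about 2.08 for 21 degrees of freedom)
t = 2.08
half_width = t * stdev / math.sqrt(n)
print(f"The mean itself is known only to about +/-{half_width:.0%}")

# A prediction interval for the NEXT observation is far wider still
pred_half = t * stdev * math.sqrt(1 + 1 / n)
print(f"The next observation is predictable only to about +/-{pred_half:.0%}")
```

With that much dispersion, 22 self-selected data points bound the mean to roughly a third either way, and say almost nothing about any individual future outcome.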
Always ask when you hear a piece of advice - in what domain have you applied this advice with success?
We're preparing for a Webinar on 25 September 2014, now titled Using Technical Performance to Inform Earned Value, which addresses the disconnect in EIA-748-C between two statements.
To reconcile these two statements, we need a process for informing Earned Value (BCWP) with the Technical Performance of the products being built. After the Webinar, we'll post the link.
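One hedged sketch of what "informing Earned Value with Technical Performance" might look like (the adjustment rule here is my illustration, not text from EIA-748-C): discount BCWP by the fraction of planned technical performance actually achieved, so work that is "done" but misses its technical measure earns less.

```python
def adjusted_bcwp(bcws_for_task, physical_percent_complete, tpm_achieved, tpm_planned):
    """Earned value for a task, informed by a Technical Performance Measure (TPM).

    bcws_for_task: budgeted cost of work scheduled for the task
    physical_percent_complete: 0..1, from objective completion criteria
    tpm_achieved / tpm_planned: e.g. achieved vs. required pointing accuracy
    """
    raw_bcwp = bcws_for_task * physical_percent_complete
    tpm_fraction = min(tpm_achieved / tpm_planned, 1.0)  # no extra credit above plan
    return raw_bcwp * tpm_fraction

# Task budgeted at $100k, 80% physically complete,
# but only 90% of the planned technical performance achieved.
print(adjusted_bcwp(100_000, 0.80, 0.90, 1.00))  # 72000.0
```

The design choice that matters is the coupling: Physical Percent Complete alone can report "done" while the product still misses its Measures of Performance; tying BCWP to the TPM closes that gap.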
In the meantime, here's a list of resources gathered to support this topic.
Much of the objection to SAFe comes from its seemingly Top Down paradigm. Many agile voices object that this approach is not agile, in the way they define agile - individual teams making their own decisions about what to do with their customer.
The domain of this bottom up approach is usually not well defined, other than the classic eXtreme Programming or the Agile Spectrum of Guy Strelitz, with Co-Hacking on the left, where the developers live by the pure agile manifesto.
But what happens when agile is applied to an enterprise development effort? One where the business needs define the capabilities, which are not emergent but rather needed to fulfill the business strategy or the mission of the organization. Then another paradigm emerges - one where higher order questions, frameworks, framing assumptions, governance, and other externalities trump the needs of the individual team.
Here's one approach that has served us well over time.
The statement is a bit off, since it's the Capabilities that are defined by the business. These capabilities are then turned into requirements, which may in fact emerge, which themselves are turned into working software. Starting with the capabilities, an enterprise software development effort means re-looking at the agile manifesto statements.
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
Business people and developers must work together daily throughout the project.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development.
The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity - the art of maximizing the amount of work not done - is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
So What Does All This Mean?
Without a domain, it's hard to assess the applicability and appropriateness of much of anything.
What this really means is that the Scaled Agile Framework is the place to start for the enterprise.
When it is mentioned that project management is a control system, many in the agile world wince. But in fact a project is a control system - a closed loop control system.
Here's how it works.
Each of these elements has some unit of measure:
Here's a small example of incremental delivery of value in an enterprise domain
The accomplishment of a mission or fulfillment of a business strategy can be called the value produced by the project. In the picture above the value delivered to the business is incremental, but fully functional on delivery to accomplish the business goal. These goals are defined in Measures of Effectiveness and Measures of Performance, and these measures are derived from the business strategy or mission statement. So if I want a fleet of cars for my taxi service, producing a skateboard, then a bicycle, is not likely to accomplish the business goal.
But the term value alone is not sufficient. Value needs to have some unit of measure. Revenue, cost reduction, environmental cleanup, education of students, reduction of disease, the processing of sales orders at a lower cost, flying the 747 to its destination with minimal fuel. Something that can be assessed in tangible units of measure.
In exchange for this value, with its units of measure, we have the cost of producing this value.
To assess the value or the cost, we need to know the other item. We can't know the value of something without knowing its cost. We can't know if the cost is appropriate without knowing the value produced by the cost.
This is one principle of the microeconomics of software development.
The process of deciding between choices about cost and value - the trade space between cost and value - starts with information about both cost and value. This information lives in the realm of uncertainty before and during the project's life-cycle. On the cost side it is only known after the project completes. And the value may never be known without some uncertainty as to the actual measure. This is also a principle of microeconomics - the measures we use to make decisions are random variables.
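Because both cost and value are random variables, any decision quantity built from them - say ROI = (value - cost) / cost - is a random variable too, and the honest output of an estimate is a probability, not a single number. A sketch with hypothetical triangular distributions:

```python
import random

random.seed(7)

def roi_samples(n=20_000):
    """Sample ROI = (value - cost) / cost from hypothetical distributions."""
    samples = []
    for _ in range(n):
        cost = random.triangular(0.8, 1.6, 1.0)   # $M: low, high, most likely
        value = random.triangular(0.9, 2.5, 1.5)  # $M: low, high, most likely
        samples.append((value - cost) / cost)
    return samples

rois = roi_samples()
p_positive = sum(r > 0 for r in rois) / len(rois)
print(f"Probability ROI > 0: {p_positive:.0%}")
```

The output is a confidence statement - "there is an X% chance this project earns back its cost" - which is exactly the form a decision maker spending other people's money needs.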
To determine the value of these random variables we need to estimate, since of course they are random. With these random variables - the cost of producing value and the value exchanged for the cost - the next step in projects is to define what we want the project to do:
The actual delivery of this value can be incremental, iterative, evolutionary, linear, big bang, or other ways. Software many times can be iterative or incremental; pouring concrete and welding pipe can be as well. Building the Interstate might be incremental; the high rise usually needs to wait for the occupancy permit before the value is delivered to the owners. There is no single approach.
For each of these a control system is needed to assure progress to plan is being made. The two types of control systems are Open Loop and Close Loop. The briefing below speaks to those and their use.
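The closed loop idea can be shown in a deliberately toy sketch: measure actual progress each period, compute the variance against the plan, and feed a corrective action back into the next period's target. The feedback is what makes it a control system.

```python
def closed_loop(planned_per_period, actual_progress, periods, capacity_boost=0.5):
    """Toy closed-loop project controller.

    Each period: measure actual progress, compare to the plan to date,
    and apply a corrective action proportional to the variance
    (e.g. descope, add capacity, replan).
    """
    target = planned_per_period
    done = 0.0
    for period in range(periods):
        done += actual_progress(period, target)
        planned_to_date = planned_per_period * (period + 1)
        variance = planned_to_date - done                        # the feedback signal
        target = planned_per_period + capacity_boost * variance  # corrective action
    return done

# Hypothetical team that delivers only 80% of whatever target is set each period.
with_feedback = closed_loop(10, lambda p, target: 0.8 * target, periods=10)
open_loop = closed_loop(10, lambda p, target: 0.8 * target, periods=10, capacity_boost=0.0)
print("Closed loop delivered:", round(with_feedback, 1), "of a planned 100")
print("Open loop delivered:  ", round(open_loop, 1), "of a planned 100")
```

The open loop run simply accumulates the shortfall; the closed loop run detects the variance and steers back toward the plan. Real projects use Earned Value or similar measures as the feedback signal rather than this toy proportional rule.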
Why is it hard to think beyond our short-term vision? Rapid delivery of incremental value is common sense; no one would object to that - within the ability of the business to absorb this value, of course. This is called the Business Rhythm.
But that rapid delivery of incremental value is only a means to an end. The end is a set of capabilities of the business that allows that business to accomplish its Mission. To do something as a whole with those incremental features. That is, turn the features into a capability.
Think about a voice over IP system whose feature set was incrementally delivered to 5,000 users at a nationwide firm. This week we can call people and receive calls from people, but we don't have the Hold feature yet. Are you really interested in taking that product and putting it to use?
How about an insurance enrollment system, where you can sign up, provide your financial and health background, choose between policies, but can't see which doctors in your town take the insurance, because the Provider Network piece isn't complete yet.
These are not notional examples; they're real projects I work on. For these types of projects - most projects in the enterprise IT world - an All In feature set is needed. Not the Minimum Viable Product (MVP), but the set of Required Capabilities to meet the business case goals of providing a service or product to customers. No half-baked release with missing market features.
You might say that incremental release of features could be a market strategy, but looking at actual products or integrated services, there seems to be little room for partial capabilities in anything, let alone Enterprise class products. Either the target market gets the set of needed capabilities to capture market share or provide the business service, or it doesn't and someone else does.
An internal system may have different behaviours; I can't say, since I don't work in that domain. But we've heard loud and strident voices telling us to deliver fast and deliver often with no consideration for the Business Rhythm of the market or user community for those incremental - which is a code word for partially working - capabilities.
Of course the big bang design, code, test paradigm was nonsense to start with. That's not what I'm talking about here. I'm talking about the lack of critical assessment of what the value flow of the business is, and only then applying a specific set of processes to deliver that value. Outcome first, then method.
So Now The Hard Part
The conversation around software delivery seems to be dominated by those writing software, rather than by those paying for the software to be written. Where are the critical thinking skills to ask those hard-nosed business questions:
Questions like that have been replaced with platitudes and simple, many times simple-minded, phrases.