When we hear about software development in the absence of a domain, it's difficult to have a discussion about the appropriate principles, processes, and practices of that work. Here's one paradigm that has served us well.
Principles, Practices, and Processes to Increase Probability of Success
There are several paradigms for Systems Thinking, ranging from psychobabble to hard-core Systems Engineering. A group of colleagues is starting a book with the working title Increasing The Probability of Project Success; several of its chapters are based on Systems Thinking.
But first, some background on Systems Theory, Systems Thinking, and Systems Engineering.
Systems Theory is the interdisciplinary study of systems in general, with the goal of elucidating principles that can be applied to all types of systems at all nesting levels in all fields of research. Systems Engineering is an interdisciplinary field of engineering that focuses on how to design and manage complex engineering systems over their life cycles.
Systems Management (MSSM, USC, 1980) is an umbrella discipline encompassing systems engineering, managerial finance, contract management, program management, human factors, and operations research in military, defense, space, and other complex systems disciplines.
Here are two book references that inform our thought processes.
This book is the basis of thinking about systems. It's a manufacturing and industrial engineering paradigm. Software-intensive systems fit here as well, since interfaces between system components define the complexity of any system of systems.
This book opens with an Einstein quote: In the brain, thinking is doing. As engineers - and yes, software engineering is alive and well in many domains - no matter how much we think, we have to do. We can plan, prepare, and predict, but action occurs through doing.
So when we hear any suggestion, ask: how can this be put to work in some measurable way to assess the effectiveness and performance of the outcomes?
This is the companion mapping-processes book. Systems Thinking is the process of understanding how systems influence one another within a world of systems, and has been defined as an approach to problem solving that views our "problems" as parts of an overall system, rather than reacting to a specific part or outcome.
There are many kinds of systems. Hard systems, software systems, evolutionary systems. It is popular to mix these, but that creates confusion and removes the ability to connect concepts with actionable outcomes.
Cynefin is one of those popular approaches that has no units of measure for complex, complicated, chaotic, and obvious - just soft, self-referencing words.
So in our engineering paradigm, this approach is not very useful.
Along with these approaches are some other seminal works.
In The End
Everything's a system. Interactions between components are where the action is and where the problems come from. Any non-trivial system has interactions that must be managed as system interactions. This means modeling these interactions, estimating their impacts, and defining their behaviors before, during, and after development.
This means recognizing the criteria for a mature and effective method of managing in the presence of uncertainty.
When we hear of a new and exciting approach to old and never-ending problems, we need to first ask: in what domain is your new and clever solution applicable?
No domain? Then it's not likely to have been tested outside your personal anecdotal experience.
Here are Mark's scaling factors. He uses Agile as the starting point, but I'm expanding it to any suggestion that has yet to be tested outside of personal anecdotes.
What's the Point?
When a new approach is being proposed, without a domain, it's a solution looking for a problem to solve.
In the agile community it is popular to use the terms complex, complexity, and complicated interchangeably, and many times wrongly. These terms are often overloaded with an agenda used to push a process or even a method.
First some definitions
One more item we need is the types of Complexity
And Now To The Point
When we hear complex, complexity, complex systems, or complex adaptive system, pause to ask: what kind of complex are you talking about? What type of complex system? In what system are you applying the term complex? Have you classified that system in a way that actually matches a real system?
It is common for the terms complex, complicated, and complexity to be interchanged, and for software development to be classified - or mis-classified - as one, two, or all three. It is also common to toss these terms around with no actual understanding of their meaning or application.
We need to move beyond buzz words. Words like Systems Thinking. Building software is part of a system. There are interacting parts that when assembled, produce an outcome. Hopefully a desired outcome. In the case of software the interacting parts are more than just the parts. Software has emergent properties. A Type 4 system, built from Type 1, 2, and 3 systems. With changes in time and uncertainty, modeling these systems requires stochastic processes. These processes depend on estimating behaviors as a starting point.
The understanding that software development is an uncertain (stochastic) process is well known, starting in the 1980s with COCOMO. Later models, like the Cone of Uncertainty, made it clear that these uncertainties themselves evolve with time. The current predictive models based on stochastic processes include Monte Carlo Simulation of networks of activities, Real Options, and Bayesian Networks. Each is directly applicable to modeling software development projects.
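To make this concrete, here is a minimal sketch of Monte Carlo Simulation applied to a network of activities. The activity names, durations, and dependencies are invented for illustration; a real model would be calibrated from past performance data and the project's actual schedule network. Note how the merge point pushes the simulated completion above the sum of most-likely durations - the merge bias a deterministic schedule never shows.

```python
import random
import statistics

# Hypothetical activities: (low, most-likely, high) duration in days.
ACTIVITIES = {
    "design":    (10, 15, 30),
    "build_api": (20, 25, 45),
    "build_ui":  (15, 20, 40),
    "integrate": (5, 10, 20),
}

def sample(rng, name):
    lo, mode, hi = ACTIVITIES[name]
    return rng.triangular(lo, hi, mode)  # note: signature is (low, high, mode)

def simulate_once(rng):
    """One trial: design precedes build_api and build_ui, which both
    precede integrate -- the merge point where merge bias appears."""
    design = sample(rng, "design")
    api = design + sample(rng, "build_api")
    ui = design + sample(rng, "build_ui")
    return max(api, ui) + sample(rng, "integrate")

def simulate(trials=10_000, seed=1):
    rng = random.Random(seed)
    durations = sorted(simulate_once(rng) for _ in range(trials))
    return {"mean": statistics.mean(durations),
            "p80": durations[int(0.80 * trials)]}

if __name__ == "__main__":
    r = simulate()
    print(f"mean completion: {r['mean']:.1f} days, 80% confidence: {r['p80']:.1f} days")
```

The deterministic most-likely path here is 50 days; the simulated mean lands above that because the longer of the two parallel branches - not their most-likely values - drives the date at the merge.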
 Software Engineering Economics, Barry Boehm, Prentice-Hall, 1981.
Our world is complex and becoming more complex all the time. We are connected and in turn driven by a complex web of interacting technology and processes. These interacting technologies and processes are implemented by information and communication technologies that are themselves complex. It is difficult to apply such a broad topic as complexity to the equally broad topic of developing software systems or even broader topic of engineered systems.
Measuring complexity in engineered systems is a highly varying concept.1
These complex systems many times create complexity. But care is needed in tossing around words like complex and complexity. If these systems are in fact engineered, rather than simply left to emerge on their own, we can apply principles to control the unwanted complexity of these complex systems.
First some definitions. These are not the touchy-feely definitions found in places like Cynefin, where the units of measure of complex, complexity, and chaos are nowhere to be found. Cynefin was developed in the context of management and organizational strategy by David Snowden. We're interested in the system complexity of things, the people who build them, and the environments where they are deployed. But this also means measuring complex and complexity in units meaningful to the decision makers. These units must somehow be connected to the cost, schedule, and probability of success for those paying for the work.
Complexity has turned out to be very difficult to define. The dozens of definitions that have been offered all fall short in one respect or another, classifying something as complex which we intuitively would see as simple, or denying an obviously complex phenomenon the label of complexity. Moreover, these definitions are either only applicable to a very restricted domain, such as computer algorithms or genomes, or so vague as to be almost meaningless. (From Principia Cybernetica)
Some more background about complex systems and their complexity4
What is complexity then?
The words chaos and complexity have been used to mean disorder and complications for centuries. Only in the last thirty years have they been used to refer to mathematical and scientific bodies of knowledge.6
Often something is called complex when we can't fully understand its structure or behavior. It is uncertain, unpredictable, complicated, or just difficult to understand. Complexity is often described as the inability of a human mind to grasp the whole of a complex problem and predict the outcome - this is Subjective Complexity.5
The complex and complexity I'm speaking about are for Engineered Systems - products and services used by organizations, but engineered for their use. Their use can be considered complex, can even create complexity, and is many times emergent. But the Cynefin approach to complex is ethereal, without the principled basis found in engineering and, more importantly, Systems Engineering.3 This appears to be why agilists toss around the terms found in Cynefin, since engineering of the software is not a core principle of agile development; rather, emergent design and architecture is the basis of the Agile Manifesto.
Here's an example of the engineering side of complex systems, from Dr. Sheard's presentation "Twelve Roles and Three Types of Systems Engineering," NASA Goddard Space Flight Center, Systems Engineering Seminar, Complexity and Systems Engineering, April 5, 2011,
Before applying these definitions to problems for developing software, there is more work to do.
In complex systems there are entities that participate in the system2
Cynefin does not make these distinctions, instead separates the system into Complex, Complicated, Chaotic, and Obvious, without distinguishing to which engineered portion of the system these are applicable.
So when we hear about Complex Adaptive Systems in the absence of a domain and the mathematics of such a system, care is needed. It is likely no actionable information will be available in units of measure meaningful to the decision makers to help them make decisions. Just a set of words.
1 "Complexity Types: from Science to Systems Engineering," Sarah Sheard and Ali Mostashari, Proceedings of the 21st Annual International Symposium of the International Council on Systems Engineering.
2 "Systems Engineering in Complexity Context," Sarah Sheard, Proceedings of the 23rd Annual International Symposium of the International Council on Systems Engineering.
3 Systems Engineering Principles and Practices, 2nd Edition, Alexander Kossiakoff, William N. Sweet, Samuel J. Seymour, and Steven M. Biemer, John Wiley & Sons.
4 The Challenge of Complex Systems, INCOSE Crossroads of America Chapter.
5 “On Systems architects and systems architecting: some thoughts on explaining the art and science of system architecting” H. G. Sillitto, Proceedings of INCOSE IS 2009, Singapore, 20-23 July.
6 Practical Applications of Complexity Theory for Systems Engineers, Sarah Sheard, Proceedings of the 15th Annual International Symposium of the International Council on Systems Engineering
Many in the Agile community like to use words like Self Organizing, Emergent, Complexity, and Complex Adaptive Systems without actually being able to do the mathematics behind these concepts. They've turned the words into platitudes. This is the definition of popularization - a core idea from science and mathematics (many times physics) without the math.
These popularizations spawned a small industry of using the words in ways no longer connected with the actual mathematical models of self-organization, complexity, complex adaptive systems, and the emergence of underlying simplicity into a complex outcome.
There is a pop-psychology approach to core mathematics and the physics of complex systems as well.
Self-organization requires several conditions to be in place for it to be observed:
The key is coordination across boundaries and the capacity for action. This implies - quite explicitly - a deterministic response to external stimulus. The self-organization properties require structured communication channels to be in place for the system to possess this property.
So next time you hear self-organizing teams are the best, ask to see what structures are in place to provide the channels for coordinated actions, what mechanisms are being used for system-wide feedback within that highly structured process framework, and what the means are of transforming small - potentially very small - stimuli into the collective actions of the whole.
In the broader sense, these concepts all live in a world governed in a deterministic manner through...
The complex part of complex systems is the subtle - and, without the mathematics, poorly understood - property that a deterministic system can have emergent and very different outcomes from sensitivity to its starting conditions.
The double pendulum is a nice example. The equations of the double pendulum are a classic problem for second-year physics students. My introduction to FORTRAN 77 was coding the solution to the double pendulum problem in the Dynamics course.
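For readers who want to see the sensitivity rather than take it on faith, here is a small numerical sketch of the double pendulum (equal unit masses and unit rod lengths assumed), integrated with a classical fourth-order Runge-Kutta step. Two runs that start a millionth of a radian apart end up with macroscopically different angles - deterministic equations, divergent outcomes.

```python
from math import sin, cos

G = 9.81  # gravity; m1 = m2 = 1, l1 = l2 = 1 assumed throughout

def deriv(state):
    """Equations of motion for the equal-mass, equal-length double pendulum."""
    t1, t2, w1, w2 = state
    d = t1 - t2
    den = 3.0 - cos(2.0 * d)
    dw1 = (-3.0 * G * sin(t1) - G * sin(t1 - 2.0 * t2)
           - 2.0 * sin(d) * (w2 * w2 + w1 * w1 * cos(d))) / den
    dw2 = (2.0 * sin(d) * (2.0 * w1 * w1 + 2.0 * G * cos(t1)
           + w2 * w2 * cos(d))) / den
    return (w1, w2, dw1, dw2)

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + e)
                 for s, a, b, c, e in zip(state, k1, k2, k3, k4))

def run(theta2_offset, t_end=20.0, dt=0.002):
    # Released from rest with both arms horizontal -- an energetic,
    # chaotic regime for the double pendulum.
    state = (1.5707963, 1.5707963 + theta2_offset, 0.0, 0.0)
    for _ in range(int(t_end / dt)):
        state = rk4_step(state, dt)
    return state

if __name__ == "__main__":
    a, b = run(0.0), run(1e-6)
    sep = abs(a[0] - b[0]) + abs(a[1] - b[1])
    print(f"angle separation after 20 s from a 1e-6 rad difference: {sep:.4f} rad")
```

The same code run with small release angles shows almost no divergence - the chaos lives in the energetic regime, which is exactly the point about sensitivity to starting conditions.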
Or maybe that person is just fond of using words they don't actually know the meaning of - at the mathematical level. As a classic example, self-organizing is defined (first used by W. Ross Ashby in 1947) as the ability of a system to autonomously (without being guided or managed by an outside source) increase its complexity. So when we hear self-organizing is good, the system we're applying it to is getting MORE complex without any external guidance. Wonder if that's what we actually wanted.
I found this picture on the web. The OP didn't know where it came from, so I have no attribution. It speaks volumes about the gap between knowing and doing. The notion of knowing and doing is at the heart of engineered solutions to complex problems, versus created solutions. Of course engineered solutions are created - creativity is mandatory for an engineered solution to provide the capabilities the customer ordered. But the engineered aspects are the framework in which that creativity is performed.
But the inverse may not be true. Engineering is a broad term.
Engineering (from Latin ingenium, meaning "cleverness" and ingeniare, meaning "to contrive, devise") is the application of scientific, economic, social, and practical knowledge in order to invent, design, build, maintain, research, and improve structures, machines, devices, systems, materials and processes.
Software Engineering is not always stand-alone; it might be considered part of Systems Engineering. Software in many cases is embedded in a physical object used by people or by other physical objects, or embedded in physical processes used by people. These objects are systems, and maybe even Systems of Systems.
Systems Engineering is an interdisciplinary field of engineering that focuses on how to design and manage complex engineering projects over their life cycles. Issues such as reliability, logistics, coordination of different teams (requirements management), evaluation measurements, and other disciplines become more difficult when dealing with large or complex projects. Systems engineering deals with work-processes, optimization methods, and risk management tools in such projects. It overlaps technical and human-centered disciplines such as control engineering, industrial engineering, organizational studies, and project management. Systems engineering ensures that all likely aspects of a project or system are considered, and integrated into a whole.
Coding software in the absence of Computer Science or Software Engineering is not likely part of any engineering discipline, but is a skill that turns ideas into code.
Software Engineering is
The study and application of engineering to the design, development, and maintenance of software. A typical formal definition of software engineering is: the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.
So If You're in the Engineered Solutions Business...
If you're in the engineered solutions business, systems engineering, software engineering, business process engineering, would the picture at the top be considered credible by your customers, your professors in college, your business development team, your board of directors?
The picture at the top says several things:
The picture at the bottom says several things:
Here's some background on this topic
Systems Engineering has two components
When we work in the Enterprise IT domain or any Software Intensive Systems ...
...systems engineering is focused on the system as a whole; it emphasizes its total operation. It looks at the system from the outside, that is, at its interactions with other systems and the environment, as well as from the inside. It is concerned not only with the engineering design of the system but also with external factors, which can significantly constrain the design. These include the identification of customer needs, the system operational environment, interfacing systems, logistics support requirements, the capabilities of operating personnel, and such other factors as must be correctly reflected in system requirements documents and accommodated in the system design. [Systems Engineering Principles and Practices, Alexander Kossiakoff, John Wiley & Sons]
So what does this mean in practice?
It means when we start without knowing what DONE looks like, no method, no technique, clever process is going to help us discover what DONE looks like, until we spend a pile of money and expend a lot of time trying out various ideas in our search for DONE.
What this means is that emergent requirements mean wandering around looking for what DONE looks like. We need to state DONE in units that connect with Capabilities to fulfill a mission or deliver success for a business case.
What this doesn't mean is that we need the requirements up front. In fact we may not actually want the requirements up front. If we don't know what DONE means, those requirements must change, and that change costs much more money than writing down what DONE looks like in units of measure meaningful to the decision makers.
So Here's Some Simple Examples of What A Capability Sounds like
Here's a more detailed example
Identifying System Capabilities is the starting point for any successful program. Systems Capabilities are not direct requirements, but statements of what the system should provide in terms of “abilities.”
For example there are three capabilities needed for the Hubble Robotic Service Mission:
How is this to be done, and what are the technical, operational, safety, and mission assurance requirements? We don't really know yet, but the Capabilities guide their development. The critical reason for starting with capabilities is to establish a home for all the requirements.
To answer the questions:
Capabilities statements can then be used to define the units of measure for program progress. Measuring progress with physical percent complete at each level is mandatory. But measuring how the Capabilities are being fulfilled is most meaningful to the customer. These "meaningful to the customer" units of measure are critical to the success of any program. Without these measures the program may be a cost, schedule, and technical success but fail to fulfill the mission.
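As a sketch of what the physical percent complete rollup looks like in practice, here are hypothetical work packages, each with a budget and an objectively assessed percent complete (tests passed, capability milestones met - not hours burned). The package names and numbers are invented for illustration.

```python
# Hypothetical work packages: (budget at completion, physical % complete).
# Physical percent complete comes from objective completion criteria,
# not from effort expended.
work_packages = {
    "enrollment_ui":   (120_000, 0.60),
    "eligibility_api": (80_000, 0.25),
    "provider_db":     (50_000, 1.00),
}

def earned_value(packages):
    """BCWP: budgeted cost of work performed, rolled up across packages."""
    return sum(bac * pct for bac, pct in packages.values())

def program_percent_complete(packages):
    total_bac = sum(bac for bac, _ in packages.values())
    return earned_value(packages) / total_bac

if __name__ == "__main__":
    print(f"BCWP = ${earned_value(work_packages):,.0f}")
    print(f"physical % complete = {program_percent_complete(work_packages):.1%}")
```

The rollup is budget-weighted on purpose: a finished $50K package moves the program number less than a half-finished $120K one, which is what keeps "percent complete" connected to the money.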
This is the difference between fit for purpose and Fit for Use.
The process flow below is the starting point for identifying the Needed Capabilities and determining their priorities. Starting with the Capabilities prevents the "Bottom Up" requirements gathering process from producing a "list" of requirements - all needed - that is missing a well-formed topology. This Requirements Architecture is no different from the Technical or Programmatic architecture of the system.
Capabilities Based Planning (CBP) focuses on “outputs” rather than “inputs.”
These “outputs” are the mission capabilities that are fulfilled by the program. Without the capabilities, it is never clear the mission will be a success, because there is no clear and concise description of what success means. Success means providing the needed capabilities, on or near schedule and cost. The concept of CBP recognizes the interdependence of systems, strategy, organization, and support in delivering the capability, and the need to examine options and trade‒offs in terms of performance, cost and risk to identify optimum development investments. CBP relies on Use Cases and scenarios to provide the context to measure the level of capability.
Here's One Approach For Capturing the Needed Capabilities
In Order To Capture These Needed Capabilities We Need To...
What Does All This Mean?
When we hear of all the failures of IT projects, and other projects for that matter, the first question that must be answered is
What was the root cause of the failure?
Research has shown that unclear, vague, and many times conflicting requirements are the source of confusion about what DONE looks like. In the absence of a definitive description of DONE in units of effectiveness and performance, those requirements have no home to be assessed for their appropriateness.
The book on the left is the latest addition to this topic in the domain of agile software development. The book is based on an Adaptive Complex Project Framework. The notion - a naive notion - that complexity can be reduced and complex systems should be avoided is just that: notional. In practice complex systems can't be avoided in any business or technical domain where mission critical systems exist. That is, non-trivial systems are complex.
The book emerged from a 2010 IBM report of 1,541 executives in 60 countries about the preparedness for complex systems work. Capitalizing on Complexity. From the report there are ten Critical Success Factors, in priority order:
In Twitter discussions and email exchanges there is a notion of populist books versus technical books used to address issues and problems encountered in our project management domains. My recent book Performance-Based Project Management® is a populist book. There are principles, practices, and processes in the book that can be put to use on real projects, but very few equations and numbers. It's mostly narrative about increasing the probability of project success. But the math to calculate that probability based on other numbers, processes, and systems is not there. That's the realm of technical books and journal papers.
The content of the book was developed with the help of editors at American Management Association, the publisher. The Acquisition Editor contacted me about writing a book for the customers of AMA. He explained up front AMA is in the money making business of selling books. And that although I may have many good ideas, even ideas that people might want to read about, it's an AMA book and I'll be getting lots of help developing those ideas into a book that will make money for AMA.
The distinction between a populist book and a technical book is the difference between a book that addresses a broad audience with a general approach to the topic and a deep-dive book focused on a narrow audience.
But one other distinction is that for most technical approaches, some form of calculation takes place to support the materials found in the populist material. One simple example is estimating. There are estimating articles and some books that lay out the principles of estimates. We have those in our domain in the form of guidelines and a few texts. But to calculate the Estimate To Complete in a statistically sound manner, technical knowledge and the underlying mathematics of non-linear, non-stationary, stochastic processes (Monte Carlo Simulation of the project's work structure) are needed.
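Here is a minimal sketch of that difference for the Estimate To Complete: a statistically sound ETC treats each remaining task's cost as a distribution and reports percentiles, rather than summing single-point numbers. The three-point task costs below are invented for illustration; a real model would also carry correlations between tasks.

```python
import random

def etc_distribution(remaining_tasks, trials=10_000, seed=7):
    """Monte Carlo the cost of the remaining work. Each task is a
    (low, most-likely, high) cost triangle -- invented numbers here."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.triangular(lo, hi, ml)  # signature: (low, high, mode)
                          for lo, ml, hi in remaining_tasks))
    totals.sort()
    return totals

if __name__ == "__main__":
    tasks = [(40, 55, 100), (20, 30, 70), (10, 15, 40)]  # cost in $K
    totals = etc_distribution(tasks)
    p50 = totals[len(totals) // 2]
    p80 = totals[int(0.8 * len(totals))]
    point = sum(ml for _, ml, _ in tasks)  # naive sum of most-likely costs
    print(f"most-likely sum: ${point}K, P50: ${p50:.0f}K, P80: ${p80:.0f}K")
```

Because cost distributions are right-skewed, even the simulated median lands well above the sum of most-likely values - the populist single-point number understates the ETC before any risk events are considered.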
Two examples of populist versus technical
Two from my past, two from my current work.
These two books are about the same topic. General relativity and its description of the shape of our universe. One is a best selling popularization of the topic, found in many home libraries of those interested in this fascinating topic. The one on the left is on my shelf from a graduate school course on General Relativity along with Misner, Thorne, and Wheeler's Gravity.
Dense is an understatement for the math and the results of the book on the left. So if you want to calculate something about a rapidly spinning Black Hole, you're going to need that book. The book on the right will talk about those Black Holes in non-mathematical terms, but no numbers come out from that description.
The book on the left is about probabilistic processes in everyday life that we misunderstand or are biased to misunderstand. The many cognitive biases we use to convince ourselves we are making the right decisions on projects are illustrated through nice charts and graphs.
We use the book on the left in our work with non-stationary stochastic processes of complex project cost and schedule modeling. Making these decisions is critical to quantifying how technical and economic risk may affect a system's cost. This book is a treatment of how probability methods are applied to model, measure, and manage risk, schedule, and cost engineering for advanced systems. Garvey shows how to construct models, do the calculations, and make decisions with these calculations.
Here's The Point - Finally
If you come across a suggestion that decisions can be made in the absence of knowing anything about the future numbers or about actually doing the math, put that suggestion in the class of populist descriptions of a complex topic.
If you can't calculate something, then you can't make a decision based on the evidence represented by numbers. If you can't decide based on the math, then the only way left is to decide on intuition, hunches, opinion, or some other seriously flawed non-analytical basis.
Just a reminder from Mr. Deming, stated in yesterday's post:
If it's not your money, there's likely an expectation that those providing the money are interested in the calculations needed to make those decisions.
In a sufficiently complex project we need measures of progress beyond burning down our list of same-sized stories - which, by the way, require non-trivial work to make same-sized and keep same-sized. And of course if this same-sizedness does not have a sufficiently small variance, all that effort is a waste.
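Checking the same-sized claim is a one-line calculation. Here is a sketch with invented actual-hours data for stories that were all estimated at the same size; a coefficient of variation this large means a burn-down of story counts carries little information about progress.

```python
import statistics

def coefficient_of_variation(samples):
    """CV = sample standard deviation / mean. A 'same-sized' claim is
    only useful when the CV of actual completion effort is small."""
    return statistics.stdev(samples) / statistics.mean(samples)

if __name__ == "__main__":
    # Hypothetical actual hours for ten stories all estimated as the same size.
    actual_hours = [8, 21, 5, 13, 34, 9, 16, 6, 27, 11]
    cv = coefficient_of_variation(actual_hours)
    print(f"CV = {cv:.0%} -- far from same-sized")
```

With a CV above 60%, "we finished 6 of 10 stories" says almost nothing about whether 60% of the work is done.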
But let's assume we're not working on a sufficiently small project where same-sized work efforts can be developed. Then we need measures of progress related to the Effectiveness of the deliverables and the Performance of those deliverables in producing that effectiveness for the customer.
Here's a recent webinar on this topic.
This is one of those pictures tossed out at some conference that drives me crazy. It's uninformed, ignores the disciplines of developing software for money, and is meant to show how smart someone is without actually understanding the core processes needed to be knowledgeable of the topic - in this case the statistical processes of project work. Then the picture gets circulated and re-posted, and becomes the basis of all kinds of other misunderstanding, just like the Dilbert cartoons that caricature the problem but have no corrective actions associated.
It is popular in some circles of agile development to construct charts showing the strawman of deterministic and waterfall approaches, then compare them to the stochastic approaches and point out how much better the latter is than the former. Here's an example.
These strawman approaches are not only misinformed, they're essentially nonsense in any domain where credible project management is established, and the basis of the response to them is Don't Do Stupid Things on Purpose.
Let's look at each strawman statement for the Deterministic View in light of actual project management processes, either simply best practice or mandated practice.
The only explanation here is the intentional ignorance of basic science, math, engineering, and computer science.
In the Stochastic View there are equally egregious errors.
In the End
For some reason, using charts like this one, re-posting Dilbert cartoons, and making statements using buzzwords - we're using Real Options and Bayesian Statistics to manage our work are my favorite ones - seems to be more common the closer we get to the sole-contributor point of view. Along with look at my 22 samples of self-selected data with a ±70% variance as how to forecast future performance.
It may be because sole contributors are becoming more prevalent. Sole contributors have certainly changed the world of software development in ways never possible by larger organizations. But without the foundation of good math and good systems engineering - and I don't mean "data center systems engineering," I mean INCOSE Systems Engineering - those sole-contributor points of view simply don't scale.
Always ask when you hear a piece of advice - in what domain have you applied this advice with success?
We're preparing for a Webinar on 25 September 2014, now titled Using Technical Performance to Inform Earned Value, which addresses the disconnect in EIA-748-C between two statements.
To reconcile these two statements, we need a process for informing Earned Value (BCWP) with the Technical Performance of the products being built. After the Webinar, we'll post the link.
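The core of that reconciliation can be sketched in a few lines: scale the earned value by the fraction of planned technical performance actually achieved, so a deliverable that is "on schedule" but misses its Technical Performance Measure earns less. The function and the numbers are illustrative assumptions, not a formula taken from the standard.

```python
def tpm_adjusted_bcwp(bcwp, achieved, planned):
    """Scale earned value (BCWP) by the fraction of the planned
    technical performance actually demonstrated at this point."""
    achievement = min(achieved / planned, 1.0)  # no extra credit above plan
    return bcwp * achievement

if __name__ == "__main__":
    # Hypothetical: $500K earned per the schedule, but the TPM plan
    # called for 50 units of performance and tests show only 40.
    adjusted = tpm_adjusted_bcwp(500_000, achieved=40, planned=50)
    print(f"TPM-adjusted BCWP = ${adjusted:,.0f}")
```

The point of the adjustment is that cost and schedule variances alone can look healthy while the product quietly falls short of its required performance.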
In the meantime, here's a list of resources gathered to support this topic.
The term Systems Thinking is many times used in vague and unspecified ways to mean think about the system and all will emerge in a way needed to produce value.
Systems Thinking is a term tossed around many times with little regard for what it actually means and its relationship to Systems Engineering and Engineered Systems.
But systems thinking is much more than that. Systems thinking provides a rigorous way of integrating people, purpose, process and performance and: †
† "How Systems Thinking Contributes to Systems Engineering," INCOSEUK, Z7, Issue 1.0, March 2010.
The question is twofold: can the customer accept the release into use, and does the customer have the ability to make use of the incremental capabilities of these releases?
Let's start with the incremental release. I know the picture to the left is considered a metaphor by some. But as a metaphor it's weak. Let's look at a few previous posts: Another Bad Agile Analogy and Use, Misuse, and Danger of Metaphor. Each of these presents some issues with using metaphors.
But let's be crystal clear here. Incremental development in the style of the bottom picture may be a preferred method, once the customer agrees. Much of the rhetoric around agile assumes the customer can behave in this way, without looking outside the anecdotal and many times narrow experiences of those making that suggestion. For agile to succeed in the enterprise and mission critical product and project domain, testing the applicability of both pictures is needed.
Ask the customer if they are willing to use the skateboard while waiting for the car. Otherwise you have a solution looking for a problem to solve.
Now to the bigger issue. In the picture above, the top series is a linear development and the bottom an iterative or incremental one, depending on where you work: iterating on the needed capabilities to arrive at the car, or incrementally delivering a mode of transportation.
The bottom increment shows 5 vehicles produced by the project. The core question that goes unanswered is: does the customer want a skateboard, scooter, bicycle, motorcycle, and then a car for transportation? If yes, no problem. But if the customer actually needs a car to conduct business, drive the kids to school, or get to the airport for a vacation trip, then those intermediate modes of transportation are actually worthless.
The failure of the metaphor and most metaphors is they don't address the reason for writing software for money.
Provide capabilities for the business to accomplish something - Capabilities Based Planning
The customer didn't buy requirements, software, hardware, stories, features, or even the agile development process. They bought a capability to do something. Here's how to start that process.
Here's the outcome for an insurance provider network enrollment ERP system.
Producing skateboards, then scooters, then bicycles, and then finally the car isn't going to meet the needs of the customer if they want a car or a fleet of cars. In the figure above, the Minimal Viable Features aren't features, they are capabilities. For example, the statement this is a minimal viable product is likely a good description of a Beta Feature. It could be connected to a business capability, but without a Capabilities Based Plan as above, we can't really tell.
So How Did We Get In This Situation?
Here's a biased opinion informed by my several decades of experience writing code and managing others who write code: we're missing the systems engineering paradigm in commercial software development. That is, for software development of mission critical systems - and Enterprise IT is an example of a mission critical system.
Here are some posts:
The paradigm of Systems Engineering fills thousands of pages and dozens of books, but it can be boiled down to this.
You need to know what DONE looks like in units of measure meaningful to the decision makers. Those units start with Measures of Effectiveness and Measures of Performance.
Each of those measures is probabilistic, driven by the underlying statistical processes of the system. This means you must be able to estimate not only cost and schedule, but how that cost and schedule will deliver the needed system capabilities, measured in MOEs and MOPs.
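Since those measures are probabilistic, the natural tool is simulation. Here's a minimal Monte Carlo sketch of the idea: all the distributions, targets, and dollar figures below are hypothetical stand-ins, not numbers from any real program.

```python
import random

def prob_of_success(simulate, target_cost, target_weeks, trials=10_000, seed=1):
    """Estimate the probability that a simulated project comes in at or
    below both a cost and a schedule target (a hypothetical MOE)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        cost, weeks = simulate(rng)
        if cost <= target_cost and weeks <= target_weeks:
            hits += 1
    return hits / trials

def simulate_project(rng):
    # Illustrative model only: cost and duration drawn from triangular
    # distributions (low, high, mode) standing in for real historical data.
    cost = rng.triangular(900_000, 1_500_000, 1_100_000)
    weeks = rng.triangular(40, 70, 52)
    return cost, weeks

p = prob_of_success(simulate_project, target_cost=1_200_000, target_weeks=60)
print(f"P(on-cost AND on-schedule) ~ {p:.2f}")
```

The point isn't the particular distributions - it's that "will we make the date and the budget?" has a probabilistic answer, not a point-number answer.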
In the estimating discussion there is a popular notion that we can't possibly estimate something we haven't done before. So we have to explore - using the customer's money, by the way - to discover what we don't know.
So when we hear we've never done this before and estimating is a waste of time, think about the title of the post.
Everything's a Remix
Other than inventing new physics all software development has been done in some form or another before. The only true original thing in the universe is the Big Bang. Everything else is derived from something that came before.
Now we may not know about this thing in the past, but that's a different story. It was done before in some form, but we didn't realize it. There are endless examples of copying ideas from the past while thinking they are innovative, new, and breakthrough. The iPad and all laptops came from Alan Kay's 1972 paper, "A Personal Computer for Children of All Ages." Even how the touch screen on the iPhone works was done before Apple announced it as the biggest breakthrough in the history of computing.
In our formal defense acquisition paradigm there are many programs that are research. The flow looks like this. Making estimates about the effort and duration is difficult, so blocks of money are provided to find out. But these are not product production or systems development processes. System Design and Development (SDD) is between MS-B and MS-C. We don't confuse exploring with developing. Want to explore? Work on a DARPA program. Want to develop? Work post-MS-B and know something about what came before.
The pre-Milestone A work is to identify what capabilities will be needed in the final product. The DARPA programs I work on are even further to the left of Milestone A.
On the other end of the spectrum from this formal process, a collection of sticky notes on the wall can have a similar flow of maturity. The principles are still the same.
So How To Estimate in the Presence of We've Never Done This Before
Here's a critical concept - we can't introduce anything new until we're fluent in the language of our domain, and we do that through emulation.† This means for us to move forward we have to have done something like this in the past. So if we haven't done something like this in the past, don't know anyone who has, or can't find an example of it being done, we will have little success being innovative. As well, we will hopelessly fail in trying to estimate the cost, schedule, and probability of delivering capabilities. In other words, we'll fail, blame it on the estimating process, and assume that we'll be successful if we stop estimating.
So stop thinking we can't know what we don't know and start thinking someone has done this before - we just need to find that someone, somewhere, or something. Nobody starts out being original; we need copying to get started. Once we've copied, transformation is the next step. With the copy we can estimate size and effort. We can then transform it into something better, and since we now know about the thing we copied, we have a reference class - yes, that famous Reference Class Forecasting used by all mature estimating shops. With the copy and its transformed item, we can then combine ideas into something new. The Alto from Xerox, and then the Xerox Star for executives, was the basis of the Lisa and Mac.
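Reference Class Forecasting can be sketched in a few lines: calibrate a naive estimate against the actual-to-estimated ratios of comparable past projects. The project data below is entirely made up for illustration.

```python
# Reference class forecasting sketch. The (estimated, actual) pairs are
# hypothetical comparable past projects, in weeks.
past = [
    (10, 14), (20, 22), (8, 13), (30, 36), (12, 15), (16, 18),
]
# Historical overrun ratios, sorted ascending.
ratios = sorted(actual / est for est, actual in past)

def calibrated(naive_estimate, percentile=0.8):
    """Scale a naive estimate by the chosen percentile of the historical
    overrun ratios - an 80th-percentile figure under these assumed data."""
    idx = min(int(percentile * len(ratios)), len(ratios) - 1)
    return naive_estimate * ratios[idx]

print(calibrated(25))  # a naive 25-week estimate, adjusted by the reference class
```

A real shop would use a much larger reference class and interpolate the percentile properly; the mechanism - anchor on what similar work actually took - is the same.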
You can estimate almost anything, and every software system, if you do some homework and suspend the belief that it can't be done. Why? Because it's not your money, and those providing you the money have an interest in several things about their money: what will it cost, when will you be done, and, using the revenue side of the balance sheet, when will they break even on the exchange of money for value? This is the principle of every for-profit business on the planet. The not-for-profits have to pay the electric bill as well, as do the non-profits. So everyone, everywhere needs to know the cost of the value they asked us to produce BEFORE we've spent all their money and run out of time to reach the target market for that pesky break-even equation.
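That pesky break-even equation is simple arithmetic - which is exactly why the people with the money expect an answer. A minimal sketch, with all figures hypothetical:

```python
# Break-even sketch: in which month does cumulative margin recover the
# development cost? All dollar figures are illustrative.
def break_even_month(dev_cost, monthly_revenue, monthly_run_cost):
    """Return the first month where cumulative margin covers the build cost,
    or None if the margin never turns positive."""
    margin = monthly_revenue - monthly_run_cost
    if margin <= 0:
        return None  # never breaks even
    month, cumulative = 0, -dev_cost
    while cumulative < 0:
        month += 1
        cumulative += margin
    return month

print(break_even_month(dev_cost=600_000,
                       monthly_revenue=80_000,
                       monthly_run_cost=30_000))  # prints 12
```

Of course dev_cost, revenue, and run cost are themselves random variables, so in practice this calculation belongs inside a probabilistic model too.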
Anyone who tells you otherwise is not in the business of business, but just on the expense side. That means they're not on the decision making side either, just labor doing what they're told to do - which is a noble profession, but unlikely to influence how decisions are made.
The notion of decision rights is the basis of governance. When you hear about doing or not doing something in the absence of who needs this information, ask: who needs this information, and is it your decision right to decide to fulfill or not fulfill the request for it? As my colleague, a retired NASA Cost Director, says: follow the money, that's where you find the decider.
† Everything is a Remix, Part 3, Kirby Ferguson.
I'll admit up front I'm hugely biased toward statistical thinking. As one trained in physics and the mathematics that goes with it, and in Systems Engineering and the math that goes with that, thinking about statistics is what we do in our firm. We work programs with cost and schedule development, do triage on programs for cost and schedule, guide the development of technology solutions using probabilistic models, and assess risk to cost, schedule, and technical performance using probability and statistics. We build business cases, performance models, Estimates To Complete, Estimates At Completion, the probability of program success, the probability of a proposal win, and the probability that the Go-Live date will occur on or before the need date, be at or below the planned cost, and have the mandatory needed capabilities there as well.
We use probability and statistics not because we want to, but because we have to. Many intelligent, trained, and educated people in our domain - software intensive systems and the management of projects - find themselves frozen in fear when confronted by any mathematical problem beyond basic arithmetic, especially in the software development domain. The algorithm writers on flight control systems we work with are not actually software developers in the common sense, but control system engineers who implement their algorithms in Handel-C - so they don't count, at least not in this sense.
We have to deal with probability and statistics for a simple reason: every variable on a project is a random variable. Only accountants deal with point numbers. The balance in your checking account is not subject to a statistical estimate. The price of General Electric stock in your 401(k) is a random variable. All the world is a non-stationary stochastic process, and many times a non-linear, non-stationary stochastic process.
Stochastic processes are everywhere. They are time series subject to random fluctuations: your heartbeat, the stock market, the productivity of your software team, the stability of technical requirements, the performance of the database server, the number of defects in the code you write.
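One of those processes - team throughput - is easy to sketch as a noisy time series. The mean and spread below are illustrative parameters, not measurements of any real team:

```python
import random

def simulate_throughput(weeks, mean=20.0, sd=5.0, seed=7):
    """Simulate weekly completed work as random fluctuation around a mean.
    This is a stationary approximation; real project throughput often
    drifts over time (non-stationary), which is exactly the hard part."""
    rng = random.Random(seed)
    return [max(0.0, rng.gauss(mean, sd)) for _ in range(weeks)]

series = simulate_throughput(52)
avg = sum(series) / len(series)
print(f"mean weekly throughput over a year ~ {avg:.1f}")
```

Even this toy model makes the point: any single week is a poor predictor, but the distribution over many weeks is something you can reason about.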
In our software development domain there is an overwhelming need to predict the future - not for the reason you may think, since at the same time there is a movement underway to Not Estimate the future. It turns out this is a need, not necessarily a desire. The need to predict - to have some sense of what is going to happen - is based on a few very simple principles of microeconomics.
Our natural tendencies are to focus on observation - empirical data - rather than the statistical data that drives the probabilistic aspects of our work.
This approach - statistical processes and probabilistic outcomes - requires we know something about our underlying processes: our capacity for work, the generated defect rate, the defect fix rate. Without that knowledge the probabilistic answers aren't forthcoming, and if they are forced out in the Dilbert style of management, they'll be bogus at best and downright lies at worst.
Let's stop here for some critically important points:
So let's look at the core problem of estimating
As humans we are poor at estimating. Fine, does that mean we should not estimate? Hardly. We need to become aware of our built-in problems and deal with them. The need for estimating in business is not going away; it is at the heart of business itself and core to all decision making.
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which of these is more probable?
1. Linda is a bank clerk.
2. Linda is a bank clerk and an active member of a feminist movement.
In Kahneman's studies, 90% of all respondents selected the latter option. Why? Because the description of Linda and the word feminism intuitively raise an idea of compatibility. Being outspoken and an activist are not usually associated with the job of a bank clerk. Thinking quickly, bank clerk seems the more plausible option only if she is at least a feminist. However, the second option is horribly wrong, because the probability of two conditions holding together can never exceed the probability of either one alone - the group of bank clerks is larger than the group of bank clerks who are also active feminists.
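The conjunction rule behind the Linda problem takes two lines of arithmetic. The specific probabilities below are made up for illustration; the inequality holds for any values:

```python
# Conjunction rule: P(A and B) = P(A) * P(B|A) <= P(A), always.
p_bank_clerk = 0.05            # P(A): Linda is a bank clerk (hypothetical)
p_feminist_given_clerk = 0.30  # P(B|A): feminist, given she is a clerk (hypothetical)

p_both = p_bank_clerk * p_feminist_given_clerk
# Since P(B|A) <= 1, the joint probability cannot exceed P(A),
# no matter how compatible the description "feels".
assert p_both <= p_bank_clerk
print(round(p_bank_clerk, 3), round(p_both, 3))  # prints 0.05 0.015
```

Intuition ranks option 2 higher; the arithmetic says it must be lower. That gap between intuition and calculation is the whole estimating problem in miniature.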
So Now What, We've Confirmed We're Bad At Making Estimates
How do we make it better? First, come to realize that good - meaning credible - estimates are part of good business. Knowing the cost of the value delivered is at the core of all business success. Second, look for the root causes of poor estimating outcomes. These come in many forms. But the Dilbert excuse is not an excuse - it's a cartoon of bad management. So let's drop that right away. If you work for a Dilbert boss or have a Dilbert boss for a customer, not much good estimating is going to do for you. So let's dispense with the charade of trying, too.
So what are some root causes of poor estimates:
Probability theory is nothing but common sense reduced to calculation — Pierre-Simon Laplace (1749-1827)
So when you hear we can't forecast the future, or estimates are evil, or we can't know what we need to do until we start doing, focus on the last part of the quote — if you don't apply probability theory and its partner statistics, they are correct and missing that basic common sense. If we apply basic statistical thinking to project management issues, we can calculate the probability of anything. The resulting probability may not have a sufficient confidence level - but we can calculate it nonetheless.
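Here's one concrete form that "calculate the probability of anything" can take: bootstrap historical weekly throughput to estimate the probability of finishing a backlog by a need date. The throughput history and backlog size below are hypothetical:

```python
import random

# Hypothetical record of stories completed per week on past work.
history = [18, 22, 15, 25, 20, 17, 23, 19]

def p_done_by(backlog, weeks, trials=20_000, seed=3):
    """Monte Carlo estimate of P(backlog completed within `weeks`), by
    resampling (bootstrapping) the historical weekly throughput."""
    rng = random.Random(seed)
    done = 0
    for _ in range(trials):
        completed = sum(rng.choice(history) for _ in range(weeks))
        if completed >= backlog:
            done += 1
    return done / trials

p = p_done_by(backlog=200, weeks=10)
print(f"P(200 stories done in 10 weeks) ~ {p:.2f}")
```

The confidence in that number is only as good as the underlying process data - which is exactly why knowing your capacity for work, defect rates, and fix rates comes first.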
When you hear we can make decisions without estimating the cost, schedule, or capability impacts of those decisions, consider Laplace and the nonsense of that notion. For a recent example of how to do the math for forecasting the future behaviour of a project in a specific domain, see Earned Value Meets Big Data and the annotated briefing of the same paper.
There's a continuing discussion on LinkedIn and Twitter about project success, the waste of certain activities on projects, and of course the argument without end on estimating the cost of producing the value from projects. It's really an argument without evidence, since some of the protagonists in the estimating discussion have yet to come up with alternatives.
I came to understand that Project Success is multidimensional a few years back, after reading Reinventing Project Management, Aaron Shenhar and Dov Dvir, Harvard Business School Press. The other book that changed my view of the world was IT Governance: How Top Performers Manage IT Decision Rights for Superior Results, Peter Weill and Jeanne W. Ross, Harvard Business School Press.
This last book should put a stake in the heart of #NoEstimates, since the decision rights for those needing and asking for the cost and schedule for the business capabilities belongs to those with the money, not those spending the money.
A summary of the book can be found in the paper, "Project Success: A Multidimensional Strategic Concept," Aaron Shenhar, Dov Dvir, Ofer Levy, and Alan Maltz, Long Range Planning 34, (2001) pp 699-725.
In many cases there is not a "product" per se, but a service. These are wrapped in a larger context in today's enterprise paradigm as "capabilities" - providing the capabilities to accomplish a goal, mission, or business outcome. This is done through products and processes. Both are used by people, other processes, and other products to accomplish other goals, missions, or outcomes. This is the System of Systems view of the "project" paradigm.
Shenhar and Dvir's research, along with Levy and Maltz in the paper, showed there are four success dimensions.
With this paradigm, principles, practices, and processes become the basis of "project management" and the resulting product or service. But the measures of success are better described by the Shenhar and Dvir model, since they are the direct consequences of all the enablers of that success.
So Here's the Killer Question(s)
In yesterday's post, the notion of Systems Engineering was suggested as one solution to project failure. Here's the next step. The agile notion started with a manifesto that turned into many interpretations and practices. In the standard project management paradigm, there is a set of principles, practices, and processes described in a variety of ways by several organizations: ITIL, PMI, APM, DOD, DOE, and other owners of project management activities.
When we take the Systems Engineering approach, we can put a wrapper around ALL project management, technical development, and deployment processes, which can be used to assess each practice and process to assure it is providing value. Here's a short overview of this paradigm.