Principles, Practices, and Processes to Increase Probability of Success
I found another paper, presented in a newspaper systems journal, on architecture in manufacturing and ERP.
One of the 12 Principles of Agile says "The best architectures, requirements, and designs emerge from self-organizing teams." This is a developer's point of view of architecture. The architect's point of view looks like this.
I was sorting through a desk drawer and came across a collection of papers, from book chapters and journals, done in the early 2000s when I was the architect of an early newspaper editorial system.
Here's one on Risk Management.
Risk management is how adults manage projects.
I hear all the time that estimating is the same as guessing. This is not true mathematically, nor is it true in business-process terms. Guessing is an approach used by many who don't understand that making decisions in the presence of uncertainty requires we understand the impact of that decision. When the future is uncertain, we need to know that impact in probabilistic terms. And with this comes the confidence, precision, and accuracy of the estimate.
What’s the difference between estimate and guess? The distinction between the two words is one of the degree of care taken in arriving at a conclusion.
The word estimate is derived from the Latin aestimare, meaning to value. The term is the origin of estimable, which means capable of being estimated or worthy of esteem, and of course esteem, which means regard, as in high regard.
To estimate means to judge the extent, nature, or value of something - connected to regard, as in he is held in high regard - with the implication that the result is based on expertise or familiarity. An estimate is the resulting calculation or judgment. A related term is approximation, meaning close or near.
In between a guess and an estimate is an educated guess, a more casual estimate. An idiomatic term for this type of middle-ground conclusion is ballpark figure. The origin of this American English idiom, which alludes to a baseball stadium, is not certain, but one conclusion is that it is related to in the ballpark, meaning close in the sense that one at such a location may not be in a precise location but is in the stadium.
To guess is to believe or suppose, to form an opinion based on little or no evidence, or to be correct by chance or conjecture. A guess is a thought or idea arrived at by one of these methods. Synonyms for guess include conjecture and surmise, which like guess can be employed both as verbs and as nouns.
We could have a hunch or an intuition, or we can engage in guesswork or speculation. Dead reckoning is now used as a synonym for guesswork, though it originally referred to a navigation process based on reliable information. Near synonyms describing thoughts or ideas developed with more rigor include hypothesis and supposition, as well as theory and thesis.
A guess is a casual, perhaps spontaneous conclusion. An estimate is based on intentional thought processes supported by data.
What Does This Mean For Projects?
If we're guessing, we're making uninformed conclusions, usually in the absence of data, experience, or any evidence of credibility. If we're estimating, we're making informed conclusions based on data, past performance, and models - including Monte Carlo models and parametric models.
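As a minimal sketch of the difference, here's what an estimate grounded in data might look like: a Monte Carlo model fed with three-point task estimates (the numbers below are illustrative, not from any real project), producing a probabilistic answer with a stated confidence level rather than a single guessed number.

```python
import random

random.seed(7)

# Hypothetical three-point estimates (min, most likely, max) in days,
# drawn from past performance of similar tasks -- illustrative only.
tasks = [(3, 5, 9), (2, 4, 8), (5, 8, 14)]

def simulate_total(tasks, trials=10_000):
    """Monte Carlo: sample each task from a triangular distribution
    and accumulate the project totals."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks))
    return sorted(totals)

totals = simulate_total(tasks)
mean = sum(totals) / len(totals)
p80 = totals[int(0.80 * len(totals))]  # 80th-percentile completion estimate
print(f"mean ≈ {mean:.1f} days, 80% confidence ≈ {p80:.1f} days")
```

The output is not a point value but a distribution, from which we can state "done in N days with 80% confidence" - the form an informed decision needs.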
When we hear that decisions can be made without estimates, or that all estimating is guessing, we now know - mathematically and in business-process terms - that neither of these is true.
This post is derived from Daily Writing Tips
I never met Carl Sagan. I've read his materials (both technical and popular) and listened to his talks. Dr. Sagan was a fierce questioner of many things. But in that questioning process is a framework by which answers can be discovered. Here are two nice quotes.
The conjecture that there is cause and effect, without confirming cause and effect, is another common naive thought process. We get that from the anti-vaccine crowd and global warming deniers, to name a few. We also get this from those who conjecture that estimates are the smell of dysfunction without ever stating the dysfunctions or discovering the cause-and-effect connections for the (unstated) dysfunction.
Dr. Sagan's primary message was, and still is:
I don't want to believe, I want to know.
If we seek to improve the probability of success for our software-intensive systems, we can't just believe the unsubstantiated conjecture of a group of unhappy developers tired of being abused by bad managers. We need tangible evidence that their conjectures are not only testable outside their personal anecdotes, but also that those conjectures do not violate the basis of all business decision making.
And just for the record.
A final thought about unsubstantiated opinions, masquerading as personal anecdotes (thanks for this, Peter).
There is a popular notion in the Agile world that with continuous deployment and frequent shipping, dates should cease to matter. The next big feature should be ‘just’ one release of many.
But the Capabilities provided by the software system many times have dependencies on other Capabilities. Here's an example from a health insurance provider network system. There is a minimum number of features needed to provide a single Capability that the business can put to work making money. Certainly continuous delivery of features is always a good idea. But the business is looking for Capabilities, not just features - the Capability to do something of value. This is the Value used - and many times misused - in Agile.
It's not about working software (which is necessary). It's about that working software being able to produce measurable value for the business. That can be revenue, services, or operational processes. In the enterprise these Capabilities need to be delivered in the right order, at the right time, for the right cost for the business to meet its business goals. Rarely are they independent in practice.
To discover what capabilities are needed, here's one approach taken from our Capabilities Based Planning paradigm
Here's a more detailed process description.
This of course goes for the #NoEstimates notion as well - ask those paying if they have no interest in knowing when those capabilities will be available and how much they will cost. You may get a different answer than the one provided by the developer, who does not own the money, the business accountability, or the balance sheet performance goals.
Many voices in the IT Project Failure domain reference the Standish Reports as the starting point.
These reports have serious flaws in their approach - not the least of which is the respondents are self-selected. Meaning the population of IT projects is not represented in the returned sample. Another popular misrepresentation is the software crisis. Using a 30 year old NATO Report, it is conjectured the crisis can only be fixed by applying a method, without determining the Root Cause - if there ever was one.
These approaches can be found in How to Lie With Statistics. That aside there is another serious flaw in this project failure discussion.
There are solutions looking for a problem to solve. Tools, processes, practices, vendors, consultants. But nearly always the needed Root Cause Analysis is not the starting point. Instead the symptom is used as the target for the solution. But first let's establish the framing assumptions for project success.
Successful execution of Enterprise IT, Aerospace, Defense, and Government Software Intensive Systems (SIS) requires management discipline to identify what “Done” looks like, provide measures of progress toward “Done,” identify and remove risks and impediments to reaching “Done,” and assure timely corrective actions to maintain the planned progress towards “Done.”
I work in a domain where Performance Assessment and Root Cause Analysis are standard functions of program management. Increasing the Probability of Program Success is a business strategy. There are many approaches to increasing the probability of program success. But first, what are some Root Causes of failure? Here are the top four from research:
There are dozens more from the Root Cause Analysis efforts in software intensive systems, but these four occur most often. Before suggesting any corrective action to any observed problem (undesirable effect), we need to know the Root Cause. Asking 5 Whys is a start, but without some framework for that process, it too becomes a cause of failure. A method we use is Reality Charting. It forces the conversation to cause and effect and prevents the storytelling approach, where Dilbert cartoons are descriptions of the cause - the SMELL - of the problem.
One common offender in this tell-me-a-story-and-I'll-tell-you-a-solution approach is the No Estimates paradigm. Estimates are conjectured to be the smell of dysfunction. No dysfunctions are named, but suggesting we can make decisions with No Estimates is the solution. Besides violating the principles of Microeconomics, not knowing the outcomes of our work in the presence of uncertainty means we have an open-loop control system. With open-loop control we don't know where we're going, we don't know if we're getting there, and we don't know when we're done. This in turn lays the groundwork for the top four Root Causes of project failure listed above.
So here's the punch line. Dealing directly with the Top 4 Root Causes of project failure starts with making estimates. Estimates of the probability of meeting the expected performance goals, when they are needed for project success.
Estimates of cost and schedule assure we have enough money, that the cost is not more than the revenue, and that the work done for the needed cost will show up at the needed time so our revenue stream can pay back that cost. Showing up late and over budget, even with a working product, is not project success.
Estimates of risk are the very basis of risk management - managing like an adult. Asking what could go wrong requires we estimate the probability of the risk occurring, or the probability distribution function of the natural variances, the probability of impact, the probability of the effectiveness of our mitigation, and the probability of any residual risk.
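The arithmetic behind this is simple enough to sketch. Assuming illustrative values for the probability of occurrence, the impact cost, and the mitigation effectiveness (none drawn from any real risk register), the expected exposure and residual risk look like:

```python
# Illustrative risk-register arithmetic: all probabilities and costs
# below are hypothetical placeholders, not values from a real project.
p_occur = 0.30             # probability the risk event occurs
impact_cost = 200_000      # cost impact if it occurs, in dollars
p_mitigation_works = 0.80  # probability the mitigation is effective
mitigation_cost = 20_000   # cost to buy down the risk

# Expected exposure with no handling at all
unmitigated_exposure = p_occur * impact_cost

# Residual exposure: the event occurs AND the mitigation fails
residual_exposure = p_occur * (1 - p_mitigation_works) * impact_cost

# Mitigation is worth buying when its cost is below the exposure it removes
worth_buying = mitigation_cost < (unmitigated_exposure - residual_exposure)

print(unmitigated_exposure, residual_exposure, worth_buying)
```

Every number in that calculation is an estimate; without them there is no basis for deciding whether the mitigation is worth its cost.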
Unanticipated technical issues are harder. But if we know anything about the technical domain, we can come up with some problems that can be solved before they become problems. This is called design. If we know nothing about the technical domain, nothing about how to deliver a solution for the customer, nothing about the cost to provide that solution - we're the wrong people for the project.
When there is a discussion around making improvements to anything, trouble starts when we don't have a shared understanding of the outcomes. For example, speculating that something can be done or that something should be stopped in pursuit of improvement has difficulty maintaining traction in the absence of a framework for that discussion.
The discussion falls into a he said, she said style, or an I'll tell you a story (anecdote) of how this worked for me and it'll work for you.
Over the years I've worked on proposals, training materials, guidance documents, and other outlets - podcasts, conference presentations - all designed to convey a new and sometimes controversial topic. Connecting agile and earned value management is the latest.
There are several guides that have formed the basis of my work. The critical success factor for this work is to move away from personal anecdotes - although those are many times used inside a broader context to make the message more personal. Rather, start with a framework for the message.
A good place to start is Cliff Atkinson's Beyond Bullet Points. It's not so much about the making of PowerPoint briefings, but the process of sorting through what you are trying to say. Version 1 of the book is my favorite, because it was simple and actually changed how we thought about communication. Here's a framework from Cliff's 1st edition.
So when we hear we're exploring or all we want is a conversation, and at the same time the suggestion - conjecture actually - that what we're talking about is a desire to change an existing paradigm, make some dysfunction go away, or take some corrective action - ask some important questions:
There is a nice post from Trent Hone on No Estimates. This triggered some more ideas about why we estimate, what root cause of the problem #NoEstimates is trying to solve, and a summary of the problem.
A Few Comments
All project work is probabilistic, driven by the underlying statistical uncertainties. These uncertainties are of two types: reducible and irreducible. Reducible uncertainty is driven by the lack of information. This information can be increased with direct work. We can "buy down" the uncertainty with testing, alternative designs, and redundancy. Reducible uncertainty is "event based" - a power outage, for example, or D-Day being pushed back one day by weather.
Irreducible uncertainty is just "part of the environment." It's the natural variability embedded in all project work - the "vibrations" of all the variables. This is handled by margin: schedule margin, cost margin, technical margin.
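A hedged sketch of sizing that margin: if we assume the natural variability is roughly normal, with a 15% spread around each planned duration (both the spread and the durations below are assumptions for illustration), a Monte Carlo run tells us how much schedule margin protects a given confidence level.

```python
import random

random.seed(11)

# Hypothetical planned durations (days); irreducible variability is
# modeled as a normal spread around each estimate -- illustrative only.
planned = [10, 15, 20]   # deterministic plan: 45 days
spread = 0.15            # assumed 15% natural variability

def simulate(trials=20_000):
    """Sample each task with its natural variability and sum the totals."""
    totals = []
    for _ in range(trials):
        totals.append(sum(max(0.0, random.gauss(d, d * spread))
                          for d in planned))
    return sorted(totals)

totals = simulate()
p80 = totals[int(0.80 * len(totals))]
margin = p80 - sum(planned)  # schedule margin protecting to 80% confidence
print(f"plan = {sum(planned)} days, P80 = {p80:.1f}, margin = {margin:.1f}")
```

The margin is not padding pulled from thin air; it is the estimated distance between the deterministic plan and the confidence level the business needs.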
Here's an approach to "managing in the presence of uncertainty"
From my experience in Software Intensive Systems in a variety of domains (ERP, realtime embedded systems, defense, space, nuclear power, pulp and paper, new drug development, heavy manufacturing, and more), #NE is a reaction to Bad Management. This inverts the cause-and-effect model of Root Cause Analysis. The conjecture that "estimates are the smell of dysfunction" - without stating the dysfunction, stating the corrective action for that dysfunction, applying that corrective action, then reassessing the conjecture - is a hollow statement. So the entire notion of #NE is a house built on sand.
Lastly, the Microeconomics of decision making in SWDev in the presence of uncertainty means estimating is needed to "decide" between alternatives - opportunity costs. This paradigm is the basis of any non-trivial business governance process.
No Estimates is a solution looking for a problem to solve.
On any project with significant Value at Risk, Economic Analysis provides visibility into the data needed for decision making. This Value at Risk paradigm is a critical starting point for applying all processes of decision making. The choice of decision process must be matched to the opportunity cost (actually the value of the loss for the alternative not chosen).
With these probabilistic outcomes driven by the underlying statistical process of all project work, we need to be able to estimate all the values of the random variables and their impact on the processes above.
Next is an example of applying this probabilistic decision making in the presence of uncertainty for cost and schedule assessment. This can be done for other probabilistic variables on the project: Technical Performance Measures, Measures of Effectiveness, Measures of Performance, Key Performance Parameters, and many other ...ilities (maintainability, supportability, survivability, etc.).
Project success starts with a simple principle. We have to Know What Done looks like before we start.
In Michelangelo's painting to the left, the two fingers are not touching. In the paradigm of a deity this may be sufficient to complete the job. In our mortal world, this is a nice example of almost done.
It's all too common that we are stuck at 90% complete for a long time after the planned completion date. There are several independent variables here that are the sources of this problem.
So What's a Project Manager to Do?
Here are six steps to creating a credible picture of what Done looks like and executing to that understanding.
These six steps can be applied to any project management or product development approach from agile to formal DOD acquisition.
The key here is to connect the Programmatic performance (cost and schedule) with the Technical performance of the project, measure the variances of actuals to plan and take corrective actions to get back on plan or better yet stay on plan.
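Connecting programmatic performance with technical performance can be sketched with earned-value-style variance arithmetic. The figures below are hypothetical; the point is that variances of actuals to plan are computable signals for corrective action.

```python
# Illustrative earned-value variance check; all dollar values are
# hypothetical placeholders, not data from any real program.
bcws = 100_000   # budgeted cost of work scheduled (planned value)
bcwp = 90_000    # budgeted cost of work performed (earned value)
acwp = 110_000   # actual cost of work performed

cost_variance = bcwp - acwp        # negative -> over cost
schedule_variance = bcwp - bcws    # negative -> behind schedule

cpi = bcwp / acwp                  # cost performance index
spi = bcwp / bcws                  # schedule performance index

# Either index below 1.0 signals the need for corrective action
needs_corrective_action = cpi < 1.0 or spi < 1.0
print(cost_variance, schedule_variance, round(cpi, 2), round(spi, 2),
      needs_corrective_action)
```

With these two indices in hand, "get back on plan" becomes a quantified target rather than a hopeful phrase.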
The #NoEstimates advocates have asked us to see estimates as a smell: an indication of possible decision making dysfunction. It might be useful to explore what's causing the smell.
In the normal business process world, when we encounter a dysfunction, Root Cause Analysis is an approach to discover the cause and effects of the dysfunction.
Since the Primary Effect is described as a dysfunction without stating what this dysfunction is, let's apply RCA, in the form of the Apollo Method, to the statement of the #NoEstimates advocates.
But first some background on RCA and the Apollo Method.
It's been suggested that asking 5 Whys is a place to start. It is well understood that simply asking may be necessary but far from sufficient to discover the Root Cause of any dysfunction. The source of the problem with the 5 Whys approach starts with our natural storytelling approach to problem solving.
Finding the source of any dysfunction is straightforward:
None of this is in place for the #NoEstimates conjecture that estimates are the smell of dysfunction.
So unless we have some understanding of the dysfunction, conjecturing that estimates are the smell and that Not Estimating will remove the dysfunction has little chance of actual success.
No Estimates is a solution looking for a problem to solve. And to date that problem has not been identified, and most importantly the conjecture that Not Estimating fixes the problem has no tangible evidence to confirm it will fix the problem.
It is common to confuse strategy with operational effectiveness. Strategy for Information Technology (IT) projects contains three major themes. These form the foundation of the IT Strategy as well as the tactical processes that will be deployed in support of these strategies.
What Is Strategy?
Strategy is creating fit among a company's activities. The success of a strategy depends on doing many things well - not just a few. The things that are done well must operate within a close-knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the organization.
Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about the business environment that drives the business processes (IT) away from the strategic support and toward the tactical improvement of operational effectiveness.
Managers must be able to clearly distinguish operational effectiveness from strategy. Both are essential, but the two agendas are different. The operational effectiveness agenda involves continual improvement of business processes that have no trade-offs associated with them. The operational effectiveness agenda is the proper place for constant change, flexibility, and relentless efforts to achieve best practices. In contrast, the strategic agenda is the place for making clear tradeoffs and tightening the fit between the participating business components. Strategy involves the continual search for ways to reinforce and extend the company's position in the marketplace.
The concept of fit among functional units is one of the oldest ideas in strategy. Gradually, however, it has been supplanted by new concepts of core competencies, critical resources, and key success factors. In fact, fit is far more critical to the success of IT systems than is realized. Strategic fit among the various systems components and the business processes they support is fundamental not only to competitive advantage but also to the sustainability of that advantage.
Fit among a company’s activities creates pressures and incentives to improve operational effectiveness. Fit means that poor performance in one activity will degrade performance in others, so that weaknesses are exposed drawing management’s attention. Conversely, with increasing fit, improvements of one activity will pay dividends in other areas.
The challenge now is to create fit among the IT components and their matching business components.
Building A Strategy
To define our Vision, Strategic Objectives, Performance Goals, Critical Success Factors in achieving those, and the measures of effectiveness and performance in pursuit of those strategic goals and objectives, we need a method that collects all of these in a single place.
If we are going to make tradeoffs in pursuit of strategy, we need to know what those tradeoffs are, how much the opportunity cost will be for each trade, and how each trade impacts our strategic decision making.
To dive into the details, to make those opportunity-cost tradeoffs about future outcomes in the presence of uncertainty, we must of course ESTIMATE. There can be no execution of the strategy without making estimates of the benefits of the outcomes of the project that delivers the capabilities that implement the strategy.
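For illustration, here's what that opportunity-cost estimate might look like in miniature. The alternatives, probabilities, costs, and benefits below are hypothetical placeholders; the point is that choosing between them requires estimates of each.

```python
# Hypothetical tradeoff between two strategic alternatives.
# All figures are illustrative assumptions, not real data.
alternatives = {
    "build": {"cost": 500_000, "p_success": 0.7, "benefit": 1_200_000},
    "buy":   {"cost": 650_000, "p_success": 0.9, "benefit": 1_000_000},
}

def expected_value(alt):
    """Probability-weighted benefit, net of cost."""
    return alt["p_success"] * alt["benefit"] - alt["cost"]

choice = max(alternatives, key=lambda k: expected_value(alternatives[k]))
foregone = min(alternatives, key=lambda k: expected_value(alternatives[k]))

# Opportunity cost: the value of the alternative not chosen
opportunity_cost = expected_value(alternatives[foregone])

print(choice, opportunity_cost)
```

Remove the estimates - the probabilities, costs, and benefits - and there is nothing left to compare; the tradeoff cannot be made.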
The Balanced Scorecard presentation below shows how to build the strategy. Pages 49-52 show how to connect the dots between strategy and project execution, where the work is done at or below the planned cost, on or before the needed time, and with the planned effectiveness and performance of the delivered capabilities. Showing up late, over budget, and with missing capabilities will not enable the strategy to fulfill its mission and vision. It's a closed loop system - all parts must work in combination for success.
“What is Strategy,” M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61–78.
Jack Welch Speaks: Wisdom from the World’s Greatest Business Leader, J. Welch and J. C. Lowe, John Wiley & Sons, 1998.
Control Your Destiny or Someone Else Will: Lessons in Mastering Change–From the Principles Jack Welch Used to Revolutionize GE, N. M. Tichy and S. Sherman, HarperBusiness, 1994.
With the principles of Capabilities Based Planning in place from the previous post, here's how to implement it.
The key here is to have a capabilities delivery map in place showing what capabilities need to be delivered in what sequence for what cost to enable the business to receive the planned value in exchange for the cost to produce those capabilities.
Here's an actual example of capabilities delivery. Each capability arrives with its dependent capabilities to provide the needed value to the business. This value enables the business to do something of value in support of the business strategy and the planned revenue that results from the cost to produce that value.
Over the years the success rate of traditional project management methods applied to software development projects has been underwhelming. Traditional project management methods are based on a retrospective approach, which measures variance against plan rather than providing a performance forecast that can be used to guide projects in a chaotic environment. There are a number of programmatic control issues associated with IT projects that suggest a better approach is needed.
In this linear project planning paradigm – sometimes referred to as waterfall, and often derived from PMBOK's linear planning processes – little attention is given to the forces that negatively impact the project. These project risks have no means of evaluation other than to acknowledge their presence, define mitigations, and track the results. The impact on the business value of the capabilities of the system is not part of the project management process.
Capabilities Based Planning is anchored on producing Enterprise and Software Intensive Systems focused on strategic outcomes. Progress is measured through assessment of the effectiveness and performance of the deliverables in meeting those strategic objectives. This approach assures business value is connected with the strategy not just measures of the passage or time and consumption of money and the production of technical features.
In the traditional approach, avoiding or controlling change becomes the primary activity of project management. In this model change is undesirable. In the reality of business systems development, change is not only natural, it is desirable. It is through change that the system can adapt to the needs of the business, which are themselves driven by external forces. These forces are rarely under the control of the project manager, let alone the senior management of the business.
One project failure mode is when the participants and leaders of the project fail to recognize the difference between managing in the presence of change and managing change. It is managing in the presence of change that is a critical success factor of any modern business systems development.
Definition of Capability-Based Planning
“… involves a functional analysis of operational requirements. Capabilities are identified based on the tasks required… Once the required capability inventory is defined, the most cost effective and efficient options to satisfy the requirements are sought.”
What Are Capabilities and Why Are They Better at Describing Maturity?
Measuring project and product maturity as a function of effort and time assures that project management adds value to the business. Simply controlling and measuring the expenditure of resources – score keeping – provides little value in the presence of change. We need measures in units meaningful to the decision makers. Physical Percent complete needs to be measured as increasing Effectiveness and Performance, with decreasing Risk to increase the Probability of Project Success.
Capabilities–based planning provides a defined outcome that is not a final conclusion but lays the groundwork for the continued delivery of value. Objectives are reached and the operational value delivered when a defined capability is available for use. Features and functions describe the static and dynamic behaviors of a system, but they are not directly connected to the business strategy. Milestones indicate that a position in a timeline has been reached, but do not forecast what value will be delivered to the business or how this value is traceable to the needs of the user community. Capabilities provide the answer to the following question: in order to achieve our objectives, what capabilities must we possess? 
Capabilities–based planning transforms the delivery of features and functions into the delivery of processes that support a business strategy. Capabilities–based planning is planning, under the conditions of uncertainty, to provide capabilities suitable for a wide range of business challenges and circumstances, while working within an economic framework. This approach emphasizes flexibility, adaptiveness, and robust capabilities, implying a modular building–block approach to the delivery of enterprise applications.
Capabilities are not the same as features and functions; they enable demands to be met without explicit specification of the solution. A capability is the ability to affect an outcome, react to an input, or change an effect.
A capability provides an outcome or an effect without an a priori specification. Features and functions require an a priori specification in order to test for their existence or conformance to the specification. Capabilities–based planning can be understood at the execution level, but it needs to be raised to the level of enterprise process analysis:
Identify a needed capability in operational terms; use the set of capability options to assess effectiveness in an operational paradigm; and make choices about requirements and the ways to achieve the capability, using an integrated portfolio framework to produce an output set of options based on these operational paradigms.
Putting capabilities–based planning to work requires a change in our approach to planning — a set of business process improvement activities focused on assessing increasing maturity of the capabilities needed to fulfill the strategic objectives. Emphasis is placed on operational capabilities rather than features and functions. These operational capabilities become the building blocks of change. The emphasis is also placed on evaluating capabilities under conditions of uncertainty, which requires the deployment of robust building blocks capable of adapting to these changes. In both cases, analysis illuminates the feasibility of alternatives.
Augmenting Our Strategy–Making with Capabilities
Strategy–making is the starting point for project management. It asks and answers the question why are we doing this? Strategy making activities can be augmented through a capabilities–based planning process by mapping strategies to the assessment of maturity evaluation points for each of the emerging capabilities. This approach connects the why of a project with the how. The result is the replacement of the measurement of progress as the passage of time with the measurement of progress as the delivery of capabilities. Capabilities–based planning focuses on assessing the increasing maturity of functionality defined by the strategy. Planning under uncertainty provides capabilities suitable for a wide range of challenges and circumstances while working within an economic framework that necessitates choice, where the focus is on “possible uses” rather than specified features and functions.
With a set of capabilities in mind, a plan for delivering the capabilities is needed. One approach to building this plan is an Event–Based Integrated Master Schedule. This has been discussed in the past, but the next article will describe the details of how to build such a schedule, derived from the capabilities.
“Uncertainty and Project Management: Beyond the Critical Path Mentality,” Arnoud de Meyer, INSEAD Working Paper, 2001.
“Analytical Architecture for Capabilities–Based Planning, Mission–System Analysis, and Transformation,” Paul K. Davis, RAND National Defense Research Institute, MR–1513–OSD, 2002.
There's a popular meme going around that asking for estimates and making estimates is the smell of dysfunction. We can assume it's management dysfunction. So what are the dysfunctions of management such that they ask for estimates from those spending their money to produce value in exchange?
Turns out there are a few obvious ones, when we consider Dilbert-style management.
But this is bad management - obvious to everyone who has ever attended a probability and statistics class in their engineering, computer science, or hard science education.
So maybe the first dysfunction is that those conjecturing estimating is the smell of dysfunction don't understand the underlying mathematics of making estimates in the presence of uncertainty. This includes both management and those spending the money provided by management.
Since there is no domain, context, framing assumptions, or principles stated by those conjecturing estimates are the smell of dysfunction, let's look at one set of principles of writing software for money - other people's money.
If we look for the root cause of projects going wrong, let's see how not following the 5 Principles can be a source of the dysfunction.
What Does Done Look Like?
Is there some notion of what capabilities the customer - those paying - needs when we're done spending their money? Are there units of measure of Done that are meaningful to those paying? If the answer is no, then we're likely to get little value from estimates, no matter the quality of the estimate.
The smell of dysfunction is proceeding to spend money without knowing what Done looks like in any units of measure meaningful to those providing the money.
What's the Path to Done?
Do we have any notion of the order of work to be performed? Let's assume there is some dependency in this work. The agile notion of INVEST must be tested first. Any non-trivial project has interdependencies. If there are none, then the work must be simple enough that all the pieces act independently of each other: no order of production, no order of operations, no order of use.
The smell of dysfunction is not having a strategy to reach done on of before the need date for the capabilities that will earn back the investment for the money provided by those paying for your work.
Do We Have Enough Resources to Get to Done?
We need time, money, and resources to produce business value in exchange for the money we've been given. How much money? How much time? What resources?
The smell of dysfunction is not knowing how much the work will cost in the end, to some level of confidence. Not knowing when we'll be done, to some level of confidence. Or not knowing whether what we've been asked to produce for that money and time will actually provide the capabilities those paying are expecting.
What Impediments Will We Encounter Along the Way?
All projects have uncertainty. Uncertainty produces risk. Managing in the presence of uncertainty means managing in the presence of risk.
Risk Management is How Adults Manage Projects - Tim Lister
Uncertainty comes in two forms: reducible (epistemic) and irreducible (aleatory). Reducible uncertainty and its associated risk can be bought down. How much risk? What is the cost to buy it down? That means estimating.
Irreducible uncertainty and its associated risk cannot be bought down. We need margin - cost margin, schedule margin, technical margin - to protect the project from unfavorable outcomes. How much margin? We need an estimate. For both reducible and irreducible uncertainty, answering that question comes easily with a Monte Carlo simulation.
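Here is a minimal sketch of that Monte Carlo approach, sizing schedule margin at a confidence level. The task durations and the use of a triangular distribution are hypothetical illustrations, not a prescription.

```python
import random

# Hypothetical tasks, each with (optimistic, most likely, pessimistic)
# durations in days. The triangular distribution is a common stand-in
# for the naturally occurring (aleatory) variance of work.
tasks = [(3, 5, 9), (2, 4, 8), (5, 8, 14), (1, 2, 4)]

def simulate_totals(trials=10_000):
    # Sample a total project duration many times, then sort so we can
    # read confidence levels straight off the percentiles.
    totals = []
    for _ in range(trials):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks))
    return sorted(totals)

totals = simulate_totals()
p50 = totals[len(totals) // 2]           # median outcome
p80 = totals[int(len(totals) * 0.80)]    # 80% confidence completion
margin = p80 - p50                       # schedule margin to carry

print(f"P50: {p50:.1f} days, P80: {p80:.1f} days, margin: {margin:.1f} days")
```

The point of the exercise is the margin number: the gap between the median outcome and the confidence level you commit to is the protection against irreducible uncertainty.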
The smell of dysfunction is not having a risk model for all the project work. Not having estimates of the probability of occurrence of event-based risks. Not having a probability distribution function for the naturally occurring variance of irreducible work - duration, cost, performance.
How Are We Going To Measure Progress to Plan?
To measure progress we need a plan. Then we need some assessment of physical percent complete. This measurement is an ideal paradigm for agile. Working products that meet the measures of effectiveness, measures of performance, technical performance measures, and key performance parameters are ways to assure what we produced actually does what is needed by those who are paying.
To have a target to steer toward, we need to estimate the possible outcomes of the project. That is, what are the achievable goals in measures of effectiveness and performance? With this starting point and measures of actual performance, we can create an error signal used in closed loop control to take corrective actions to steer toward our target. The target can change of course, and many times does.
Without the probabilistic target and the actual measurement we have an open loop control system, which provides no steering signal and results in we'll be done when we're done, we'll spend what we spend, and you'll get what you get. Could be better than planned, could be worse than planned - we don't know.
The smell of dysfunction is not having a probabilistic steering target developed from past performance and models of future performance. Without this model we are operating open loop. No steering target that can be corrected with actual performance information. But more importantly, no steering target telling us what our performance must be to meet the business goals of the project.
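A closed loop steering signal can be sketched with standard earned value measures: compare planned value against earned value (physical percent complete) and actual cost, and the variances drive the corrective action. All the figures below are hypothetical.

```python
# A minimal closed-loop steering sketch using earned-value style measures.
# The planned, earned, and actual figures below are hypothetical.
BAC = 1_000_000          # budget at completion ($)
planned_value = 400_000  # work scheduled to date ($)
earned_value  = 340_000  # physical percent complete x BAC ($)
actual_cost   = 380_000  # money spent to date ($)

spi = earned_value / planned_value   # schedule performance index
cpi = earned_value / actual_cost     # cost performance index
eac = BAC / cpi                      # estimate at completion, assuming
                                     # current cost efficiency continues

# The error signal: gaps between plan and performance that must be
# closed with corrective action - this is what closes the loop.
schedule_variance = earned_value - planned_value
cost_variance     = earned_value - actual_cost

print(f"SPI={spi:.2f} CPI={cpi:.2f} EAC=${eac:,.0f}")
```

With SPI and CPI below 1.0, the estimate at completion exceeds the budget at completion. That number, compared with the probabilistic target, tells us what our performance must become to meet the business goals.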
So Want To Talk About Smells?
Tangible evidence of dysfunction is needed. Variance analysis is needed. Tangible corrective actions are needed.
Exploring is none of these. Exploring is talking about fixing the smell. Talking and exploring don't fix the dysfunction. Looking for waste in the Muda sense - this is Muda. Do something tangible. Measure the result. Compare it to the plan. Make corrections to both action and plan - closed loop. Repeat until success.
Stop exploring - do something constructive. Correct the dysfunctions with actionable outcomes.
In The End
Conjecturing NOT to do something without first identifying the root cause of the smell is open loop decision making.
Conjecturing NOT to do something, without saying what that something is so it can be tested, is of little value to those paying money who need real help to increase the probability of success of their work efforts.
Conjecturing to explore has little value to those seeking actionable corrections to the problems. Exploring means no real commitment to improve. We're just wandering around looking under rocks for interesting ideas. Having someone pay for that is called pure research. Businesses that produce products and services in exchange for money are looking for value to result from their investment.
Populist books play an important role in the process of "thinking about things." They are simple, understandable in ways that resonate with those not familiar with a topic, and are hopefully gateway sources to the next level of understanding. Populist books have a downside as well. They are usually simplified versions of the underlying topic, devoid of the details, which unfortunately involve mathematics that may be beyond the casual reader.
I've written about the issues with populist books before. There is a new set of issues that needs to be addressed. Thinking, Fast and Slow is a recent example of a populist book. It has useful material, but leaves out all the ground work and heavy lifting needed to put these ideas to work.
In graduate school, there are several things you learn before starting your thesis work. Do a literature search. Your bright idea may have already been done. Or worse, your bright idea is a cockamamie idea on day one. If everyone tells you it's a cockamamie idea, you may be able to show the world they're wrong. To do that you need to get through a peer review and a test of your idea by strangers, using actual data that holds up to ruthless testing by others. There have been a few of those; most have gone on to win the Nobel Prize.
So if you hear an idea that doesn't quite make sense, ask for the data that supports that idea, so you can do independent testing. Better yet, if that idea is an obvious violation of basic principles - either of physics (cold fusion) or of economics (#NoEstimates) - ask those proposing the idea for direct evidence of its applicability that can also be independently tested.
Here's a list of supporting papers from my library needed to put the populist ideas to work. Google will find these for you:
These are a small sample of the background that needs to be examined after reading the populist book.
With this example, you can move beyond populist ideas - no matter how valid - to technical ideas and start putting them to work and testing the outcomes for their efficacy in your domain.
Here's a starting point for that effort in Populist versus Technical View of Problems
Making decisions in the presence of uncertainty of a future outcomes resulting from that decision is an important topic in the project management, product development, and engineering domains. The first question in this domain is...
If the future is not identical to the past, how can we make a decision in the presence of this future uncertainty?
The answer is we need some means of taking what we know about the past and the present and turning it into information about the future. This information can be measurements of actual activities - cost, duration of work, risks, dependencies, performance and effectiveness measures, models and simulation of past and future activities, reference classes, parametric models.
If the future is identical to the past and the present, then all this data can show us a simple straight line projection from the past to the future.
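As a sketch of that projection, a least squares line fit to past throughput extrapolates to the future, valid only to the degree the future resembles the past. The weekly observations and scope total below are hypothetical.

```python
# A straight-line (least squares) projection of past performance into
# the future. The weekly cumulative completions are hypothetical data.
weeks = [1, 2, 3, 4, 5, 6]
cum_done = [8, 19, 28, 41, 53, 68]   # cumulative scope units completed

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(cum_done) / n

# Ordinary least squares slope and intercept, computed directly.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, cum_done))
         / sum((x - mean_x) ** 2 for x in weeks))
intercept = mean_y - slope * mean_x

total_scope = 150                     # hypothetical total scope to deliver
forecast_week = (total_scope - intercept) / slope
print(f"Throughput ~{slope:.1f} units/week; "
      f"{total_scope} units projected around week {forecast_week:.1f}")
```

The straight line answers the easy case. The questions below are about what happens when the future is not identical to the past.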
But there are some questions:
The answers to these and many other questions can be found in the mathematics of probability and statistics. Here are some popular misconceptions of mathematical concepts.
Modeling is the Key to Decision Making
"All models are wrong, some are useful," George Box and Norman R. Draper (1987). Empirical Model-Building and Response Surfaces, p. 424, Wiley. ISBN 0471810339.
We can't possibly estimate activities in the future if we don't already know what they are
We actually do this all the time. But more importantly, there are simple step-by-step methods for making credible estimates about unknown - BUT KNOWABLE - outcomes.
This notion of unknown but knowable is critical. If we really can't know - if it is unknowable - then the work is not a project. It is pure research. So move on, unless you're a PhD researcher.
Here's a little dialog showing how to estimate most anything in the software development world.
With your knowledge and experience in the domain and a reasonable understanding of what the customer wants (no units of measure for reasonable by the way, sorry), let's ask some questions.
I have no pre-defined expectation of the duration. That is, I have no anchor to start from. If I did, and didn't have a credible estimate, I'd be a Dilbert manager - and I'm not.
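The kind of answer that dialog produces can be captured with a simple three-point estimate, the PERT form of the beta distribution. The optimistic, most likely, and pessimistic durations below are hypothetical inputs drawn from domain experience.

```python
# A three-point (PERT) estimate: expected value weights the most likely
# outcome, and the spread gives a rough standard deviation.
# The three duration inputs (days) are hypothetical.
optimistic, most_likely, pessimistic = 10, 18, 35

expected = (optimistic + 4 * most_likely + pessimistic) / 6
std_dev  = (pessimistic - optimistic) / 6

# One standard deviation above the mean is a rough ~84% confidence bound.
upper = expected + std_dev
print(f"Expected: {expected:.1f} days, "
      f"~84% confidence: {upper:.1f} days or less")
```

Notice the estimate is a probability statement - an expected value with a confidence bound - not a single-point guess.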
Microeconomics of Decision Making
Making decisions about the future in the presence of uncertainty can be addressed by microeconomics principles. Microeconomics is a branch of economics that studies the behavior of individuals and small impacting organizations in making decisions on the allocation of limited resources. Projects have limited resources, business has limited resources. All human endeavors have limited resources - time, money, talent, capacity for work, skills, and other unknowns.
The microeconomics of decision making involves several variables
Formally, defining this choice problem is simple: there is a state space S, whose elements are called states of nature and represent all the possible realizations of uncertainty; there is an outcome space X, whose elements represent the possible results of any conceivable decision; and there is a preference relation ⪸ over the mappings from S to X. †
This of course provides little in the way of making a decision on a project. But the point here is that making decisions in the presence of uncertainty is a well developed discipline. Conjecturing that it can't be done simply ignores this discipline.
The Valuation of Project Deliverables
It's been conjectured that focusing on value is the basis of good software development efforts. The suggestion that this value is independent of cost is misinformed. Valuation, and the resulting Value used to compare choices, is the process of determining the economic value of an asset, be it a created product, a service, or a process. Value is defined as the net worth: the difference between the benefits produced by the asset and the costs to develop or acquire it, all adjusted appropriately for probabilistic risk, at some point in time.
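That definition of value can be made concrete with a small sketch: discounted benefits minus discounted costs, with the benefits weighted by a probability of success. Every figure below - the cash flows, discount rate, and success probability - is hypothetical.

```python
# A minimal risk-adjusted valuation: value = probability-weighted
# discounted benefits minus discounted costs. All figures hypothetical.
discount_rate = 0.10      # time value of money per period
p_success     = 0.75      # probability the benefits materialize

costs    = [120_000, 80_000, 0, 0]      # spend per period ($)
benefits = [0, 0, 150_000, 180_000]     # returns per period ($)

def present_value(flows, rate):
    # Discount each period's cash flow back to today.
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

value = (p_success * present_value(benefits, discount_rate)
         - present_value(costs, discount_rate))
print(f"Risk-adjusted value: ${value:,.0f}")
```

Two things fall out immediately: value cannot be computed independently of cost, and both sides of the subtraction are estimates made in the presence of uncertainty.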
This valuation has several difficulties:
The valuation of the outcomes of software projects depends on the analysis of these underlying costs and benefits. A prerequisite for cost-benefit analysis is the identification of the relevant value drivers and the cost drivers needed to produce that value. Both cost and value are probabilistic, driven by uncertainty - both reducible and irreducible.
In addition to the measurable benefits and costs of the software project, the valuation process must consider uncertainty. Uncertainty arises from different sources. Natural uncertainty (aleatory) is irreducible. This uncertainty relates to variations in the environment's variables. Dealing with irreducible uncertainty requires margin for cost, schedule, and the performance of the outcomes - for both value and cost.
Event-based uncertainty (epistemic) is reducible. That is, we can buy down this uncertainty with our actions. We can pay money to find things out. We can pay money to improve the value delivered from the cost we invest to produce that value.
Parameter uncertainty relates to the estimation of parameters (e.g., the reliability of the average number of defects). Model uncertainty relates to the validity of specific models used (e.g., the suitability of a certain distribution to model the defects). There is a straightforward taxonomy of uncertainty for software engineering that includes additional sources such as scope error and assumption error. The standard approach of handling uncertainty is by defining probability distributions for the underlying quantities, allowing the application of a standard calculus. Other approaches based on fuzzy measures or Bayesian networks consider different types of prior knowledge. ‡
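Buying down epistemic (parameter) uncertainty with data can be sketched with a conjugate Bayesian update of a defect rate, the example mentioned above. The prior and the inspection counts are hypothetical.

```python
# Reducing parameter uncertainty with evidence: a Beta-Binomial update
# of a module defect rate. Prior and inspection counts are hypothetical.
alpha, beta = 2, 8        # prior belief: defect rate centered near 20%

# New evidence from inspection: 3 defective modules out of 30.
defective, inspected = 3, 30
alpha += defective
beta  += inspected - defective

# The posterior mean is the updated estimate of the defect rate;
# the larger alpha + beta is, the less uncertain that estimate.
posterior_mean = alpha / (alpha + beta)
print(f"Posterior mean defect rate: {posterior_mean:.3f}")
```

Each round of inspection is money spent to buy down uncertainty: the distribution over the parameter narrows, and the estimate built on it becomes more credible.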
The Final Point Once Again
The conjecture that informed decisions about choices in an uncertain future can be made in the absence of estimates of the impacts of those choices has no basis in the mathematics of decision making.
This conjecture is simply not true. Any attempt to show it can be done has yet to materialize in any testable manner. This is where basic math skills come into play. There is no math that supports this conjecture, and therefore no way to test it. It's personal opinion, uninformed by any mathematics.
Proceed with caution when you hear this.
† Decision Theory Under Uncertainty, Johanna Etner, Meglena Jeleva, Jean-Marc Tallon, Centre d’Economie de la Sorbonne 2009.64
‡ Estimates, Uncertainty and Risk. IEEE Software, 69-74 (May 1997), Kitchenham and Linkman and "Belief Functions in Business Decisions. In: Studies in Fuzziness and Soft Computing, Vol. 88, Srivastava and Mock
Decision making is hard. Decision making is easy when we know what to do. When we don't know what to do, there are conflicting choices that must be balanced in the presence of uncertainty for each of those choices. The bigger issue is that important choices are usually the ones where we know the least about the outcomes and about the cost and schedule to achieve those outcomes.
Decision science evolved to cope with decision making in the presence of uncertainty. This approach goes back to Bernoulli in the early 1700s, but remained an academic subject into the 20th century, because there was no satisfactory way to deal with the complexity of real life. Just after World War II, the fields of systems analysis and operations research began to develop. With the help of computers, it became possible to analyze problems of great complexity in the presence of uncertainty.
In 1938, Chester Barnard, author of The Functions of the Executive, imported the term "decision making" from the lexicon of public administration into the business world. This term replaced narrower descriptions such as "resource allocation" and "policy making."
Decision analysis functions at four different levels
Each level focuses on different aspects of the problem of making decisions. And it is decision making that we're after. The purpose of the analysis is not to obtain a set of numbers describing decision alternatives. It is to provide the decision-maker the insight needed to choose between alternatives. These insights typically have three elements:
Now To The Problem at Hand
It has been conjectured ...
The key here, and the critical unanswered question, is how can a decision about an outcome in the future, in the presence of that uncertain future, be made in the absence of estimates of the attributes going into that decision?
That is, if we have less than acceptable knowledge about a future outcome, how can we make a decision about the choices involved in that outcome?
Dealing with Uncertainty
All project work operates in the presence of uncertainty. The underlying statistical processes create probabilistic outcomes for future activities. These activities may be probabilistic events, or the naturally occurring variances of the processes that make up the project.
Clarity of discussion through the language of probability is one of the bases of decision analysis. The reality of uncertainty must be confronted and described, and the mathematics of probability is the natural language for describing uncertainty.
When we don't have that clarity of language - when redefining or misusing mathematical terms enters the conversation - agreeing on the ways, and there are many ways, of making decisions in the presence of an uncertain future becomes bogged down in approaches that can't be tested in any credible manner. What remains is personal opinion, small-sample anecdotes, and attempts to solve complex problems with simple, and simple-minded, approaches.
For every complex problem there is an answer that is clear, simple, and wrong. H. L. Mencken