A recent post offers five suggestions for how decisions about software development can be made in the absence of estimates of the cost, duration, and impact of those decisions. Before looking at each in more detail, let's see what basis the post provides for these suggestions.
A decision-making strategy is a model, or an approach, that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However, I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve the business goals you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.
Decision making in the presence of limited resources that must be allocated is Microeconomics. These decisions involve opportunity costs. That is, what is the cost of NOT choosing one of the alternative allocations? To know this we need to know something about the outcome of NOT choosing. We can't wait for the work to be done; we need to know, to some level of confidence, what happens if we DON'T do something. How can we do this? We need to estimate what happens if we don't choose each of the possible allocations, since all the outcomes are in the future.
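To make the opportunity-cost idea concrete, here is a minimal sketch, in Python, of estimating the value of the road not taken. Every number in it is an invented assumption for illustration, and the triangular distribution is just one stand-in for whatever model fits your actual data:

```python
import random

random.seed(1)

def simulate_value(mean, spread, trials=10_000):
    """Monte Carlo samples of an alternative's business value.
    A triangular distribution is an assumed stand-in for a real model."""
    return [random.triangular(mean - spread, mean + spread, mean)
            for _ in range(trials)]

# Two competing allocations of the same limited team (illustrative numbers).
option_a = simulate_value(mean=500_000, spread=300_000)  # new feature work
option_b = simulate_value(mean=350_000, spread=100_000)  # platform hardening

ev_a = sum(option_a) / len(option_a)
ev_b = sum(option_b) / len(option_b)

# The opportunity cost of choosing A is the forgone expected value of B.
print(f"E[value of A] = {ev_a:,.0f}")
print(f"E[value of B] = {ev_b:,.0f}")
print(f"Opportunity cost of choosing A = {ev_b:,.0f}")
```

Without estimates like these - however rough - the phrase opportunity cost has no operational meaning.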
But first: the post starts by suggesting the five approaches are part of Strategy. I'm familiar with strategy making in the domain of software development, having been schooled by the Balanced Scorecard leaders while working as a program manager for a large Department of Energy site, where we pioneered the use of agile development in the presence of highly formal nuclear safety and safeguards applications, and at the same time rapidly changing requirements emerging from the discovery of new opportunities and very real threats to our success.
What is Strategy?
Before proceeding with the 5 suggestions, let's look at what strategy is, since it is common to confuse strategy with tactics, which is the case in the post.
Strategy is creating fit among a firm's activities. The success of a strategy depends on doing many things well - not just a few. The things that are done well must operate within a close-knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the firm.
Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about competition that drives business processes (IT) away from strategic support and toward the tactical improvement of operational effectiveness.
Managers must be able to clearly distinguish operational effectiveness from strategy. Both are essential, but the two agendas are different. The operational effectiveness agenda involves the continual improvement of business processes that have no trade-offs associated with them. The operational effectiveness agenda is the proper place for constant change, flexibility, and relentless efforts to achieve best practices.
In contrast, the strategic agenda is the place for making clear trade-offs and tightening the fit between the participating business components. Strategy involves the continual search for ways to reinforce and extend the company's position in the marketplace.
“What is Strategy,” M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61–78.
Using Porter's notion of strategy in a business context, the post seems more about tactics. But ignoring that for the moment, let's look further into the ideas presented in the post.
I'm going to suggest that each of the five decision processes described in the post is a proper one - each with many possible approaches to its implementation. But each ignores an underlying principle of Microeconomics. This principle states that decisions about future outcomes are informed by opportunity cost, and that assessing opportunity cost requires - mandates, actually, since these costs are in the future - an estimate of the impact of each possible decision. The trade space paradigm assesses the impact of each decision in the presence of uncertainty about the opportunity cost. These probabilistic outcomes are always present in any real project with significant value at risk. Low value at risk means low impact from choosing the wrong opportunity.
This is the basis of Real Options, Bayesian statistics, and forecasting of every kind, and it is at the very core of business decision making in the presence of uncertainty.
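As a small illustration of the Bayesian piece of that claim, here is a sketch of updating a belief about an on-time ship as evidence arrives. The prior and the likelihoods are assumptions invented for this example, not figures from any real project:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# H: "the release ships on or before the target date"
# E: "the first integration build passed" (illustrative evidence)

p_h = 0.60              # prior belief, assumed from past release history
p_e_given_h = 0.90      # assumed chance the build passes when truly on track
p_e_given_not_h = 0.40  # assumed chance it passes even when not on track

# Total probability of the evidence, then the posterior.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"Updated P(on-time ship) = {p_h_given_e:.2f}")  # ~0.77
```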
The post then asks...
- How well does this decision proposal help us reach our business goals?
- Does the risk profile resulting from this decision fit our acceptable risk profile?
The 1st question needs another question answered first: what are our business goals, and what are the units of measure for those goals? To answer the 1st question we need a steering target, so we can know how we are proceeding toward that goal. This goal, of course, is not fixed in the normal sense; it has a probability range within which success can be defined. On or Before, At or Below, and Within this Range are terms used in decision making about future outcomes in the presence of uncertainty. Since all elements of all projects are probabilistic, there is no way to say anything with certainty about an outcome in the future without speaking in probabilistic units of measure. To determine those units we need data, and we need a model of how that data is going to behave in the future. We need to estimate those behaviors given the data and the model.
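Here is what an On or Before statement looks like when made operational. The lognormal duration model is an assumption - chosen only because it is a common shape for durations - and the parameters are invented:

```python
import random

random.seed(2)

# Assumed model: project duration in weeks is lognormally distributed.
# The parameters are illustrative, not derived from real project data.
samples = [random.lognormvariate(mu=3.0, sigma=0.25) for _ in range(20_000)]

target = 22.0  # the steering target, in weeks
p_on_or_before = sum(1 for d in samples if d <= target) / len(samples)

print(f"P(finish on or before {target} weeks) = {p_on_or_before:.0%}")
```

The output is a confidence number - exactly the kind of probabilistic unit of measure the paragraph above calls for, as opposed to a bare date.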
The 2nd question is about risk. All risk comes from uncertainty, for the same reason as in the 1st question. All projects have uncertainty; therefore all projects have risk.
Risk Management is how Adults Manage Projects - Tim Lister
Two types of uncertainty exist on all projects:
Reducible (Epistemic) and Irreducible (Aleatory). Epistemic uncertainty comes from lack of knowledge - Epistemology is the study of the acquisition of knowledge. We can spend money to buy down this lack of knowledge. That is, Epistemic uncertainty can be reduced with work: risk reduction work.
But this leaves open the question: how much time, budget, and performance margin is needed? Or, how much cost and schedule is needed to buy down the risk, given a probabilistic model of the uncertainty?
ANSWER: We need an Estimate of the Probability of the Risk Coming True. Estimating the Epistemic risk's probability of occurrence, the cost and schedule of the reduction efforts, and the probability of the residual risk is done with models. There are several tools for doing this. But estimating all four components - occurrence, impact, effort to mitigate, and residual risk - is required.
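A minimal risk-register calculation shows why all four estimates are needed. Every number below is an invented assumption for illustration:

```python
# One illustrative epistemic risk entry (all numbers are assumptions).
p_occur = 0.30            # estimated probability the risk comes true
impact = 400_000          # estimated cost impact if it does
mitigation_cost = 50_000  # estimated cost of the risk buy-down work
p_residual = 0.05         # estimated probability remaining after mitigation

# Expected exposure with and without the mitigation work.
exposure_unmitigated = p_occur * impact
exposure_mitigated = mitigation_cost + p_residual * impact

print(f"Exposure without mitigation: {exposure_unmitigated:,.0f}")  # 120,000
print(f"Exposure with mitigation:    {exposure_mitigated:,.0f}")    # 70,000
print("Mitigate" if exposure_mitigated < exposure_unmitigated else "Accept")
```

Remove any one of the four estimates and the mitigate-or-accept decision cannot be made.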
Aleatory uncertainty comes from the naturally occurring variances of the underlying processes. The only way to reduce the risk arising from Aleatory uncertainty is with margin: Cost Margin, Schedule Margin, Performance Margin. But this leaves open the question: how do we know how much margin?
ANSWER: We need to estimate the needed margin from the Probability Distribution Function of the underlying statistical process. Estimating the needed aleatory margin (cost, schedule, and performance) can be done with Monte Carlo Simulation or the Method of Moments.
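Here is a sketch of the Method of Moments version: sum the means and variances of the individual task estimates, then size the schedule margin from the combined distribution. The three-point estimates are hypothetical, and independence between tasks is assumed:

```python
import math

# Hypothetical three-point (low, most likely, high) durations in days.
tasks = [(4, 6, 12), (8, 10, 20), (3, 5, 9), (6, 9, 15)]

# PERT-style moments: mean = (low + 4*likely + high) / 6,
# standard deviation approximated as (high - low) / 6.
means = [(lo + 4 * ml + hi) / 6 for lo, ml, hi in tasks]
variances = [((hi - lo) / 6) ** 2 for lo, _, hi in tasks]

total_mean = sum(means)
total_sigma = math.sqrt(sum(variances))  # tasks assumed independent

# Margin for ~80% confidence: z(0.80) ~ 0.84 under a normal approximation.
margin = 0.84 * total_sigma
print(f"Expected duration: {total_mean:.1f} days")
print(f"Schedule margin for ~80% confidence: {margin:.1f} days")
```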
So let's take one more look at the post's framing before examining further the 5 ways of making decisions in the absence of estimates of their impacts and of the cost to achieve those impacts.
All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your current risk profile?
All risk is probabilistic, based on underlying statistical processes. Either the process is one of lack of knowledge (Epistemic) or one of natural variability (Aleatory). In considering risk, we must incorporate these probabilistic and statistical behaviors into our decision-making activities. Since the outcomes of these processes occur in the future, we need to estimate them based on knowledge - or lack of knowledge - of their probability of occurrence. For the naturally occurring variances seen in the past, we need to know how they might occur in the future. To answer these questions, we need a probabilistic model, based on the underlying statistical processes. And since the underlying model is statistical, we need to estimate the impact of this behavior.
Let's Look At The Five Decision Making Processes
1. Do the most important work first - If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine-tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is validating the new strategy. Note that the goal is not "implement the new strategy", but rather "validate the new strategy". The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run short-term experiments and re-prioritize your backlog of experiments based on the results of each one.
This is not strategy, it is operational effectiveness.
- Important work first is good operational effectiveness. But importance needs a unit of measure, and that unit of measure should be found in the strategy implemented by the operational activities. This is the purpose of the strategy. But the strategy needs units of measure as well. Simply saying do the important work first doesn't provide a way to make that decision (one candidate unit of measure is sketched after this list).
- The notion of validating versus implementing the strategy is artificial. A read of the strategy-making literature will clear this up. Strategy for business, and especially strategy for IT, is a very mature domain with a long history.
- One approach to generating the units of measure from the strategy is the Balanced Scorecard, where strategic objectives are mapped to Performance Goals, then to Critical Success Factors, then to Key Performance Indicators. This can be done with a Strategy Map (see Kaplan and Norton's Strategy Maps, referenced below).
- This is the use of strategy as Porter defines it.
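As one example of a unit of measure for importance - my assumption here, not something from the post - Cost of Delay divided by duration (the CD3 rule) turns "do the important work first" into an actual decision procedure. Note that both of its inputs are estimates:

```python
# Hypothetical backlog: (name, estimated cost of delay per week, est. weeks).
backlog = [
    ("enter new segment", 40_000, 8),
    ("valuable niche feature", 25_000, 3),
    ("reporting cleanup", 5_000, 2),
]

# CD3: cost of delay divided by duration; highest score goes first.
ranked = sorted(backlog, key=lambda item: item[1] / item[2], reverse=True)

for name, cod, weeks in ranked:
    print(f"{name:24s} CD3 = {cod / weeks:8,.0f} per week")
```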
2. Do the Highest Technical Risk First - When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.
- This is likely dependent on the technical and programmatic architecture of the project or product.
- We may want to establish a platform on which to build the riskier components - a platform that is known and trusted, stable, and bug free - before embarking on any high-risk development.
- High risk may mean high cost, so doing risky things first has consequences. What are those consequences? One is risking the budget before it's clear we have a stable platform on which to build follow-on capabilities. Knowing something is high risk may mean high cost, and this requires estimating things that will occur in the future - the cost to achieve and the cost of the consequences.
- So doing the highest technical risk first is itself a risk that needs to be assessed. Without this assessment, the suggestion has no way of being tested in practice (a sketch of such an assessment follows this list).
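Here is a hedged sketch of that assessment: compare the expected cost of two sequencing strategies. The probabilities and costs are invented; the point is that the comparison cannot be made without them:

```python
# Two sequencing strategies, with illustrative numbers only.
p_fail_unproven = 0.40   # assumed chance risky work fails on an unproven platform
p_fail_stable = 0.15     # assumed chance it fails on a stabilized platform
rework_cost = 300_000    # assumed cost of a failed attempt
platform_cost = 120_000  # assumed cost of stabilizing the platform first

ev_risky_first = p_fail_unproven * rework_cost
ev_platform_first = platform_cost + p_fail_stable * rework_cost

print(f"E[cost] risky-first:    {ev_risky_first:,.0f}")     # 120,000
print(f"E[cost] platform-first: {ev_platform_first:,.0f}")  # 165,000
# Which sequence is cheaper depends entirely on the estimates - which is the point.
```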
3. Do the Easiest Work First - Suppose you just expanded your team and want to make sure they get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other, establish the processes they need to be effective, but still deliver concrete, valuable working software in a safe way.
- This is also dependent on the technical and programmatic architecture of the project or product.
- It also contradicts #2, since the highest-risk work is not likely to be the easiest to do.
- These trades between risk and work sequence require a trade space analysis, and since the outcomes and their impacts are in the future, estimating them is part of the Analysis of Alternatives approach for any non-trivial project where Systems Engineering guides the work processes (a minimal sketch follows this list).
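A minimal trade-space sketch of the kind used in an Analysis of Alternatives: score each sequencing alternative against weighted criteria. The weights and the 1-5 scores below are hypothetical, and each score is itself an estimate:

```python
# Criteria weights (assumed): how much each factor matters to this project.
weights = {"risk_reduction": 0.5, "time_to_value": 0.3, "team_learning": 0.2}

# 1-5 scores for each alternative against each criterion (all estimates).
alternatives = {
    "riskiest first": {"risk_reduction": 5, "time_to_value": 2, "team_learning": 3},
    "easiest first":  {"risk_reduction": 1, "time_to_value": 4, "team_learning": 5},
    "platform first": {"risk_reduction": 4, "time_to_value": 3, "team_learning": 3},
}

for name, scores in alternatives.items():
    total = sum(weights[c] * s for c, s in scores.items())
    print(f"{name:15s} weighted score = {total:.2f}")
```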
4. Do the Legal Requirements First - In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later - if needed - certify the changes you may still need to make to the original implementation. This can significantly improve the time-to-market for your product. A medical organization that successfully adopted agile used this project decision-making strategy to considerable business advantage, as they were able to start selling their product many months ahead of the scheduled release. They were able to go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - gaining a significant advantage over their direct competitors.
- Medical devices are regulated under 21 CFR Parts 800-1299. The suggestion doesn't reference any regulations for medical software, which ranges from patient check-in at the front desk to surgical devices controlled by software.
- Developing 21 CFR software components first may not be possible until the foundation on which they are built is established, tested, and verified.
- This means - Quality Planning, Requirements, Design, Construction or Coding, Testing by the Software Developer, User Site Testing, and Maintenance and Software Changes.
- Once the plan - a required plan for validation - is in place, the order of the development will be visible.
- Deciding which components to develop just because they are impacted by legal requirements usually means ALL the components. So this approach - Do The Legal Requirements First - usually means do them all.
- The claim that "rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market" fails to describe how they knew when they would be ready to test these ideas - and most importantly, how they were able to go to market in the absence of the certification.
- As well, what type of testing - early trials, full 21 CFR release, human applications, animal testing, etc. - is not stated. With some experience in the medical device business, I find this suggestion uninformed by 21 CFR guidance.
5. Liability Driven Investment - This approach is borrowed from a stock exchange investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.
- It's not clear why this is called liability. A liability on the balance sheet is an obligation to pay. Deciding what work to do now to generate needed revenue is certainly a strategy - Value Stream Mapping or Impact Mapping is a way to define it - but liability seems to be the wrong term.
- It's also not clear how this connects with a securities exchange, or what problem is being solved using the term liabilities. Shorts are obligations to pay in the future when the short is called. Puts and calls are terms used in stock trading, but developing software products is not trading. The Real Options used by the poster in the past don't exercise the option, so the liability to pay doesn't seem to connect here. In finance, Liability Driven Investment means matching projected cash inflows to known future obligations - a sketch of that matching test follows this list.
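Here is what that matching test looks like, with hypothetical numbers throughout. Note that it still requires estimates: the revenue forecast is an estimate of future cash inflows:

```python
# Quarterly obligations (payroll, hosting, etc.) - the "liabilities" (assumed).
liabilities = [90_000, 90_000, 100_000, 100_000]

# Estimated revenue from the work we choose to do now (assumed forecasts).
revenue_forecast = [60_000, 110_000, 95_000, 130_000]

balance = 50_000  # starting cash position (assumed)
for quarter, (cash_in, cash_out) in enumerate(zip(revenue_forecast, liabilities), 1):
    balance += cash_in - cash_out
    status = "funded" if balance >= 0 else "SHORTFALL"
    print(f"Q{quarter}: balance = {balance:>9,.0f}  ({status})")
```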
References
- NASA Risk-Informed Decision Making Handbook, NASA/SP-2010-576, Version 1.0, April 2010.
- General Principles of Software Validation; Final Guidance for Industry and FDA Staff, US Food and Drug Administration.
- Strategy Maps: Converting Intangible Assets into Tangible Outcomes, Robert Kaplan and David Norton, Harvard Business Press.
- Estimating Optimal Decision Rules in Presence of Model Parameter Uncertainty, Christopher Joseph Bennett, Vanderbilt University, June 6, 2012.