The urge to save humanity is almost always a false front for the urge to rule. - H. L. Mencken
Principles, Practices, and Processes to Increase Probability of Project Success
There are lots of opinions going around about the source of the problems with the ACA. Some are informed by similar experience, some by simple observation of the facts in the press, and some not so well informed.
In general the problems started, as they do with any large project, with the absence of a single source of integrative management accountability. This is a long way of saying there was no single program manager accountable for delivering the capabilities needed to enroll in the health exchanges.
I don't want to dissect each suggestion about the source of the problems, but I would like to clear up many misconceptions about how projects like this get into trouble, starting with the framework of federal acquisition. The Federal Information Technology Acquisition Reform Act (FITARA) is the framework for how non-DoD agencies buy IT solutions. I'm a DoD person, but the framework for IT acquisition is shared across many agencies, including DoD. Let's start with some background on how to sort out fact from fiction.
Just for full disclosure, there are DoD programs that are monumental failures, so I'm not holding up the DoD as the poster child for how to do large IT projects. But there are shining examples of how to do it right, using evolutionary, incremental, iterative, and agile-like processes.
The WSJ article has a nice summary as well, and I'll make comparisons to our DoD, DOE, and NASA programs, which are also software intensive, to see what the ACA had or lacked.
So let's see how to address some common myths around the ACA site and large complex IT projects in general:
So What's the Root Cause of the ACA Site's Failure to Meet Expectations?
At the moment there is no way to know. So speculating that it was one thing or another - especially that estimating was the cause, or that not estimating would somehow have helped - is beyond speculation. It's simply an unsubstantiated suggestion. GAO will certainly wade in soon with their assessment. GAO had already done some initial assessment in June of this year. So we'll see what comes next.
In The End
In the absence of a domain and context, it's not possible to assess the applicability of any software development method, process, or tool. So when someone, anyone, suggests a better way, suggests they know what the problem is, or suggests that we stop doing one thing and start doing another - please have them answer these simple questions before continuing to discuss the options:
Learn to Estimate
Assuming you actually need to estimate, and actually want to learn how to estimate, there are many ways to do that. If you've decided you don't want to estimate, don't need to estimate - for all the right reasons - or simply refuse to estimate, then there is not much else to say about estimating. Now you may not need to estimate, because your work is short, of a value that doesn't have much at risk - unlike the ACA site - or can sustain the production of work in the absence of knowing what DONE looks like.
But if making estimates of cost, schedule, and technical performance is part of the project process while spending other people's money, Google will find you most everything you need to learn that the statements that this can't be done are actually too narrowly focused to be credible outside small, self-contained, low-risk projects.
There are endless discussions about what went wrong with the Affordable Care Act web site development and deployment. It'll be hard to tell at this early point in the project assessment. But what is clear is this was most likely a failure of project management.
Below is the acquisition life cycle for business systems in the DoD. Not that HHS is a DoD-style shop, but the paradigm of iterative and incremental development is in place. The release cycles shown here are way too long for something like the ACA site. But the topology of the process is sound.
Looking at this process there is an obvious starting point: the Business Capability Definition. What is the resulting system supposed to do in terms of capabilities? Not the technical and operational requirements, but what business capabilities will the system provide to the stakeholders when it is in full operation? This is called Initial Operating Capability (IOC).
In our domain we start defining the capabilities using the Defense Acquisition Guide. Here is where Measures of Effectiveness (MoE) are defined. The Measure of Effectiveness is assigned to a capability. If we want a capability, how effective does it have to be? This measure is not a technical performance or a requirement. It is an effectiveness measure.
An MoE for a UAV program we work on would be The UAV shall be transportable within a 3,000 mile radius via a C-17/C-141/C-5 package. From the MoE there is a Measure of Performance (MoP). For example, weight is a MoP that enables the MoE to be fulfilled. Lower down are Technical Performance Measures (TPM). For example, the weight of an Electro-Optical / Infrared sensor platform must be under 55 pounds for the UAV to operate properly. It can't be too light or it would disrupt the center of gravity, and it can't be too heavy or the UAV would burn too much fuel to accomplish its mission.
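This capability → MoE → MoP → TPM decomposition can be sketched as a simple data structure. A minimal sketch: only the 55 pound upper limit and the transportability statement come from the example above; the 40 pound floor and all class and variable names are hypothetical illustrations.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalPerformanceMeasure:
    name: str
    lower: float  # below this the center of gravity is disrupted
    upper: float  # above this fuel burn prevents mission completion

    def compliant(self, value: float) -> bool:
        return self.lower <= value <= self.upper

@dataclass
class MeasureOfPerformance:
    name: str
    tpms: list = field(default_factory=list)

@dataclass
class MeasureOfEffectiveness:
    capability: str
    statement: str
    mops: list = field(default_factory=list)

# The UAV example from the text: transportability MoE -> weight MoP -> sensor-weight TPM
sensor_weight = TechnicalPerformanceMeasure(
    "EO/IR sensor platform weight (lb)", 40.0, 55.0)  # 40 lb floor is an assumption
weight = MeasureOfPerformance("UAV all-up weight", [sensor_weight])
transportable = MeasureOfEffectiveness(
    "Transportability",
    "Transportable within a 3,000 mile radius via a C-17/C-141/C-5 package",
    [weight],
)

print(sensor_weight.compliant(52.0))  # a 52 lb sensor meets the TPM -> True
```

The point of the structure is traceability: every TPM rolls up to a MoP, every MoP to a MoE, and every MoE to a named capability, so DONE is defined in units of measure at every level.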
So for the ACA site, we'd need to know if there were MoEs, MoPs, and TPMs defined that enable the Capabilities to be delivered. Here's the Performance Reference Model for federal IT.
Since the ACA site is pretty much all software, I'm going to suggest that this approach of using Capabilities Based Planning, MoEs, MoPs, and TPMs has nothing to do with how the software is built. Either traditional or agile methods can be used. Agile is likely faster, but agile can only work in a domain like this if you know what DONE looks like in terms of MoEs, MoPs, and TPMs. This is a fixed launch date, a fixed set of requirements guided by all the insurance regulations, and hopefully some not-to-exceed budget.
It is a common myth that government acquisition is waterfall and big design up front. DoD 5000.02 prescribes an iterative process designed to assess the viability of technologies while simultaneously refining user requirements. (pg 16 of 5000.02).
One starting question of the ACA Site would be - did they apply the iterative acquisition process in some form, no matter the fidelity of the iterations?
Here are some other fundamental questions as well:
If the answer to any of these is no or we don't know, go find out, and get project managers who can do this. Otherwise the probability of project success is reduced. In fact, look at the Probability of Program Success literature for further guidance.
The final question is: did they have an Integrated Master Plan and Integrated Master Schedule for all the work, as described in the Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide? This paradigm has been shown to significantly increase the probability of success no matter the domain, context, development method, technology, or business process. It states in clear, concise, and unequivocal terms what DONE looks like at every point in the project, in units of measure meaningful to the decision makers.
The final - and killer - question is: did the project team ruthlessly manage the changes to the capabilities? This, I suspect, is the root cause of the failure. Late changes to complex projects are the kiss of death.
As repeated often here...
Don't do stupid things on purpose
So Now What?
We have to wait to see what the Root Cause Analysis (RCA) shows for the failure of the project. But I'd conjecture the program management processes found in large DoD or NASA programs were not applied in any meaningful way. The site is not large compared to most of the programs we work on ($400M is small), but the processes used to manage those programs can be scaled down with ease. The Principles are the same. The Practices are scalable and the Processes scalable as well.
Vasco Duarte asks an important question - what is the value of an estimate to the stakeholder of the project? This seems simple enough. But the question is couched in the context of #NoEstimates. So taken at face value, it seems to be a question about the value of estimates themselves, not the value to the stakeholder.
Many in that community start with the presumption that estimating has no value, without first confirming who actually needs estimates and for what reason these estimates might be needed. The assumption that business processes can be executed in the absence of estimating the cost of the work performed by that business doesn't seem to have been addressed yet.
First let's see who the stakeholders of a project are:
Now there are certainly situations where estimates provide little value. Short duration work - weeks, maybe days. Just do the work - assuming of course you actually understand what work to do. But those types of projects are simple and straightforward. I'm speaking of projects that are not straightforward. Since those suggesting that estimating is a waste fail to state in what domain and context they can spend other people's money without an estimate of how much money and when they will be done, it's difficult to see if any of their ideas have merit outside their personal experience. But it seems it is difficult to get that information, so we'll have to move on without it.
So What's the Value of Estimating to the Stakeholder?
First, the notion that stakeholders don't need estimates needs to be challenged. A fundamental axiom of all business is:
Return On Investment = (Gain from the Investment - Cost of the Investment) / Cost of the Investment
So if there is a claim that a provider of solutions is interested in providing value, then you've got to know the cost of that value before you can determine the worth of the delivered value. Let's say it again:
You can't determine the value of something unless you know its cost
Now some might argue this, but those doing so need to read Managerial Finance by Brigham and Weston. This is a core knowledge book for managing other people's money inside or outside your business. By other people's money I mean the stockholders, the investors - the STAKEHOLDERS.
So instead of asking what the value of an estimate is to the stakeholder, how about asking the stakeholder what information they need to manage their funding? By the way, funding and budget are not the same. Budget and price are not the same. Read Brigham and Weston to see the differences.
Starting with the ROI equation, the need for estimates - actually the demand for estimates - is to determine the ROI. This can be the end of the discussion, and we can switch to how and when to develop estimates. What kind of estimates. How much confidence we need in the estimate. How the estimate is built, assessed for credibility, applied to DECISION making, and all the other aspects of managing a business funded by other people's money. Spending your own money, no one really cares how you do that.
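The dependence of ROI on a cost estimate can be shown in a few lines. A minimal sketch - the dollar figures are made up for illustration:

```python
def roi(gain: float, cost: float) -> float:
    """Return On Investment = (Gain - Cost) / Cost."""
    if cost <= 0:
        # Cost appears in numerator and denominator: no cost estimate, no ROI
        raise ValueError("no credible cost estimate, no ROI")
    return (gain - cost) / cost

# A hypothetical project: $400K estimated cost, $600K expected gain
print(f"{roi(600_000, 400_000):.0%}")  # prints 50%
```

The guard clause is the whole argument: without an estimate of cost, the ROI calculation is undefined, not merely imprecise.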
But there are a few more aspects of making estimates to deliver value to the stakeholder.
If there are ZERO changes in the current project's work activities, the Estimate At Completion (EAC) and Estimate To Complete (ETC) can be developed from measurements of the current project's work. This seems an overly restrictive condition for an actual business management process.
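One common way to develop the ETC and EAC from current measurements is the CPI-based earned-value form. A sketch under the same no-changes assumption - the dollar values are invented for illustration:

```python
def etc_eac(bac: float, bcwp: float, acwp: float) -> tuple:
    """ETC and EAC from current performance, assuming no scope changes.

    bac  - Budget At Completion
    bcwp - Budgeted Cost of Work Performed (earned value)
    acwp - Actual Cost of Work Performed
    """
    cpi = bcwp / acwp         # cost performance index: cost efficiency to date
    etc = (bac - bcwp) / cpi  # remaining budgeted work at current efficiency
    eac = acwp + etc          # spent so far plus the remaining estimate
    return etc, eac

# Hypothetical: $10M budget, $4M of work earned, $5M actually spent
etc, eac = etc_eac(10e6, 4e6, 5e6)
print(f"ETC=${etc/1e6:.1f}M EAC=${eac/1e6:.1f}M")
```

With a CPI of 0.8 the remaining $6M of budgeted work is forecast to cost $7.5M, for an EAC of $12.5M - a measurement-based estimate, which is the point of the paragraph above.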
So What's the Point?
Don't ask those doing the work what the value of making estimates of that work is. Ask those spending money on those doing the work what the value of estimating is.
It's their money. They need to know how much will be needed to start the project, sustain the project, and complete the project.
It's that simple - follow the money not the work.
Discussion around managing projects and increasing the probability of success is many times focused on making things simpler. But simple only works on simple problems. Simple problems are not really that interesting if you adopt the Edwin Land view of the world.
Don't undertake a project unless it is manifestly important and nearly impossible.
But that quote has a preamble - Don't do anything that someone else can do. Working on easy projects means the work is easy, the problem is easy, and the solution is easy. The assumption that simple solutions are scalable to hard problems needs to be tested. Land suggests:
We can be dramatic, even theatrical; we can be persuasive; but the message we are telling must be true.
So how can we test the notion that complex and hard problems have simple solutions? Let's look at a program I am familiar with and have worked a small portion of - Future Combat Systems (FCS).
The program is mostly canceled now - for all the right reasons. It was a bad idea, the face of combat has changed, and the program's organization and execution were wickedly complex, complicated, and hard. But that doesn't mean there aren't hard projects still waiting for solutions. The Affordable Care Act web site is one. It is a hard problem to integrate 50 states (some opted out), with multiple providers, on a short schedule. Having never done that before, HHS was probably not prepared to handle the hardness of the project, let alone manage the work processes. Making changes late in the cycle was a fatal mistake, from my experience in enterprise ERP. No phased delivery - you don't even need agile - just releases across the lifecycle that worked.
These are likely going to be the root cause of the problem. The project was hard but not treated as hard.
When we encounter hard problems we need to act accordingly. Suggesting there are easy solutions to hard problems is not the first response. There are no easy solutions to hard problems unless you are about to invent a new paradigm. There are very few people who can invent a new paradigm. Many making those suggestions don't work on actual hard problems, like FCS, enterprise ERP with 50 sites around the world, or integrating 50 or so ERP systems into a seamless financial reporting system for monthly status to a government agency. Problems like that. Land has advice on searching for solutions to problems like this.
We use bull's eye empiricism. We try everything, but we try the right thing first!
So what are the right things? Good question. Who gets to say what is right and what can be improved? It depends on the domain and context. If you work in a domain guided by governance, then for the most part, right is already in place. If you work for yourself or with a small group of friends, right can be emergent to the needs. If the money you are spending belongs to someone else, they'll have something to say about how you spend it.
In the end it comes down to this.
For every complex problem there is an answer that is clear, simple, and wrong. - H. L. Mencken
There is a post related to the #NoEstimates discussion titled, Introducing Deliberate Discovery. It's one of those interesting topics that doesn't use the same words across most domains.
The word Plan in our domain is a Strategy for the successful completion of a program. The Integrated Master Plan (IMP) says what steps need to be taken to increase the maturity of the deliverables over the life cycle of the project. Here are the elements of project success, at a minimum. The IMP/IMS is one. But risk, packages of work, the deliverables, Measures of Performance, budget spreads for the work, and other lower level artifacts are used as well.
There is nearly an unlimited amount of discussion on the web about projects, project success factors, project management methods, including software development methods and paradigms, and of course project management tools that support them. All of these are solutions looking for a problem to solve. And there are lots of problems to solve, so these solutions can find an easy home.
Along with these come books, articles, papers, conferences, blogs, Twitter feeds, and Facebook pages. All trying - we'll assume with good intent - to improve the probability of success of projects.
So here's the big question, and by big I mean the cost of the project and the value it produces. How can we determine if we are prepared for this success? There are several competency assessments for project managers, from essentially commercial ones to work done in NASA and DoD. But readiness for success at the data and process level is not usually found in the literature.
Here's one approach that has served me well. There are five process areas for managing projects in the domains I work in. These include hardware development - things that fly, swim, and drive - and software development - things that process other people's money and control the behavior of things that fly, swim, or drive.
Now for the Domain and Context Connections
For projects that are more traditional in nature - construction, metal bending, system integration, capabilities and requirements driven, budget based, expected delivery dates, with multiple stakeholders - it should be obvious how to connect the process above to the activities of the project. I know it's not always obvious, but it should be. If it were, we'd have much better success rates on our projects.
For other projects, where capabilities are not as clearly and concisely stated - or agile projects, where the customer is paying the team to discover the capabilities and the requirements that implement those capabilities - these processes still have merit.
In the end though, these processes need to be found in ALL projects, or there simply isn't enough information to adequately assess where the project is going, when it is going to arrive at done, and how you're going to get there from here. No matter how often people try to avoid answering these questions, the questions just won't go away - for all the right reasons.
When we ask the question what software domain are you applying your improvement approach to? and the answer is software, it might be useful to put a framework around the answer to provide a basis for discussion. The figure below is a sample from several sources. An expansion of this picture is at the link below. This is the taxonomy used for the financial assessment of the software business. It's one way to look at the various domains, focused here on commercial products and services.
So when you hear of the next big idea in the development of software and ask what kind of software do you work on, this might be a guide. When the answer is silence, maybe those proffering the solution need to look as well, and test their ideas outside their personal anecdotal experience to see if anyone else shares their enthusiasm. Along with this, maybe ask if that wonderful idea has come in contact with any governance processes for spending other people's money.
The first response should be what do you mean by "estimate?" This sounds like a simple question but in fact without a shared understanding of the estimating process, it's going to be a bumpy road.
Let's start with a quote - expert judgement is usually the crux of cost and schedule estimates, but in the spectrum of risk management processes (this is what estimating is all about - reducing the risk), quantification of expert judgement is the weakest area. Fred Raymond in Quantifying Risk to Manage Cost and Schedule, Acquisition Quarterly Review, Spring 1999.
This is not a new problem. The literature is full of estimating failures provided by experts. It is easy to criticize how estimating has failed to meet the need of business and government to know what something will cost and when it will be done. This need is not a false need, it is how business and government works. To say otherwise, ignores the fundamental premise of a business - income must exceed cost to remain in business. Decision making is founded on microeconomics of Opportunity Cost. Both these require estimates to be made in the presence of uncertainty. To ignore this framework is to willfully ignore how business works.
The picture above is from a blog post on estimating the hiking distance along the coast of California. The map distance for a car is not the same as the foot path. It is much more on foot, if you follow trails, than if you drive on Highway 1 - Pacific Coast Highway. What this analogy says is we must know the domain, the context of that domain, the units of measure (the metric of the measurement), the known variances, the knowable variances, and possibly the unknowable variances.
These last variances are the Black Swans in some domains - finance being one. Projects, not so much. If you're spending other people's money building something, you'd better have some understanding of the unknowns and a plan to deal with them, otherwise you're going to get your project canceled and your reputation handed to you on a platter.
So Let's Look At The Spectrum of Estimating
When we say we need an estimate, when and where can we push back, and when and where had we better be able to produce the estimate with a credible confidence?
So In The End
And the Final Close
When you hear about someone's dedication to improving Return on Investment (ROI), think carefully about the formula for ROI.
Cost is in both the numerator and the denominator.
No estimate of cost, no way to calculate ROI. It's bogus to say we're always focused on ROI, but we don't want to estimate the cost. That's a laughable statement.
Vasco Duarte has a nice presentation about his notion of No Estimates. It's clear and concise and answers the mail for what the heck No Estimates is all about. The answer is - these are good ideas for flow-based projects where work chunks are similarly sized and arrival rates equal exit rates for the service provider (the development engine). This is the basis of Little's Law (link below). Can't go wrong here. This, by the way, is how many processes in many domains work: Work Packages with internal activities performed in the order needed to produce outcomes, planned and executed by the Work Package team, with the Work Packages on baseline for the Rolling Wave of 6-9 months, and Work Packages crossing only one accounting period. This of course is not the domain Vasco works in - not sure what that is - but much shorter cycles can be found in many places where agile processes are also found.
Everything in the talk is more or less viable in many domains - decomposing the work into same-sized chunks, putting this work in a queue, servicing the work load in a steady manner. Assuming the capacity for work is constant, AND the number of arrivals to the queue is the same as the number leaving the queue, then Little's Law holds and you can forecast how long it will take to empty the queue at any point in time.
Notice you can ESTIMATE how long it will take to finish the work in the queue, knowing the length of the queue, the arrival rate, and the exit rate. Some in the #NE community assert they only use Little's Law to forecast throughput, but they're leaving 2 of the 3 attributes off the table. Maybe because if they use Little's Law to forecast WIP, exit rates, and variances in service times, they'll be doing estimating, and that's not allowed in the No Estimates community. Smiley face goes here!
Little's Law says:
Using a queuing system consisting of discrete objects (stories), objects arrive at some rate to the system. The objects form into a queue (backlog), receive service (development of the stories), and exit (when the story is complete and 100% working).
Little's Law says that under steady state conditions, the average number of objects in a queuing system equals the average rate at which objects leave the queuing system multiplied by the average time each object spends in the system.
This is one of the obvious statements that can be used to estimate how long it takes to do something.
If we know the service rate, which Vasco shows later, and the arrival rate, which may or may not be under our control, we can know how much Work In Progress there is. And we can forecast how long it will take to complete all the objects waiting in the queue.
The Statistical Process Control notion mentioned in the talk is, by its name, statistical in nature, meaning the arrival rate and the exit rate (as a result of the work being done) each have a probability distribution. As well, SPC assumes - and this must be the case - that a conforming outcome is present. That is, no rework. Or any rework goes back on the queue, and since it is likely to be of a different size, it will spoil the uniformity. Agile does this well, since working software is one of the conditions for success. But of course agile doesn't speak to the impact of rework or failed quality, since that work simply goes back on the queue as another story. This dilutes the performance measures, and is one of the reasons pure agile is not well matched with Earned Value based performance management processes.
With Little's Law the structure is laid out - similar sized work, steady arrival, and steady departure. My Six Sigma coursework book says one of the uses is:
Estimating Waiting Times: If we are in a grocery queue behind 10 persons and estimate that the clerk is taking around 5 minutes per customer, we can calculate that it will take us 50 minutes (10 persons x 5 minutes/person) to start service. This is essentially Little's Law. We take the number of persons in the queue (10) as the "inventory". The inverse of the average time per customer (1/5 customers/minute) provides us the rate of service, or the throughput. Finally, we obtain the waiting time as equal to the number of persons in the queue divided by the processing rate (10/(1/5) = 50 minutes).
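The grocery-queue arithmetic above is a one-line application of Little's Law. A minimal sketch:

```python
def wait_time(items_in_queue: float, throughput: float) -> float:
    """Little's Law under steady state: time in system = items in system / throughput."""
    return items_in_queue / throughput

# The quoted example: 10 people ahead, clerk serves 1/5 of a customer per minute
print(wait_time(10, 1 / 5), "minutes")  # prints 50.0 minutes
```

The same function forecasts a backlog: 40 stories with a throughput of 4 stories per week is an estimated 10 weeks - an estimate, whatever the hashtag says.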
Notice the term estimating - again. The math Vasco is describing is used for estimating waiting times, throughputs, and departure rates. Here's the first example of redefining a term in common use and then saying we're not estimating in No Estimates.
The core concepts of the talk are sound, but there are many gaps at the second level:
So while these last few quotes and the story serve as a reminder of the failings of human thought processes, they are likely not actually true. Like David Anderson's (Kanban) statement, used in the same manner, about the frog sitting in slowly heating water. The frog will jump out when the water reaches a temperature too hot. It will NOT get boiled. I attribute these approaches to the well placed need to tell a story everyone can relate to. But the story rarely has any basis in fact.
This is harsh criticism and likely very annoying to those using this approach to make a point, but if you're going to give advice on how to spend other people's money, at least get the underlying basis of the reasons straight.
So what should we think of #NoEstimates as explained here?
WE NEED TO ESTIMATE THESE VARIABLES AND THEIR VARIANCES TO HAVE CONFIDENCE THE SYSTEM WILL BE STABLE
Historical data is fine for determining the variables, but then we need to control the arrival rate, size, and capacity for work.
So in the end Vasco's talk is informative, useful, and can be applied in several domains - with the assumptions that arrival rate, size, and work capacity can be defined and the variances known. Good work. But is that the same as Not Making Estimates? Anyone working in the process control business would say either Vasco is redefining the term estimate or pushing the estimating process back upstream to assure the work consumed by the processes meets the constraints of Little's Law.
Here's the Trouble with These Approaches
They're pseudo-mathematical - voodoo math some might say - they're based on weak if not wrong analogies, they ignore the conditions under which they must perform, and most critically they ignore the mandatory need to have some not-to-exceed estimate of the funding for the project before it starts. Little's Law based planning systems work very well on production lines or development processes that behave like production lines. PayPal for example. Maintenance systems for example.
Since there is never a domain, context, or discussion of project authorization processes based on committed funding in governance, we can't really make a determination of the applicability outside of personal anecdotes.
How to move beyond the limitation of No Domain, No Context
The problem of poor software development performance is for the most part a bad management problem. A Dilbert boss who has failed to understand the five core tenets of successful projects: (1) what does done look like, (2) do we know how to get to done, (3) do we have enough of what we need to reach done, (4) what impediments will we encounter along the way, and (5) how can we measure physical percent complete.
If we don't know what done looks like, we need to start with something we do know about. This is incremental and iterative development of anything. In software it can be called agile. In building spacecraft (which are software intensive) it is called increasing the maturity of the deliverables through progressive elaboration of those deliverables; in construction there is Lean Construction; in pharma there is progressive development of the efficacy of the drug; etc.
If we don't know what our past performance was, either through reference class forecasting or actual past performance, we can't calibrate the needed variables for the queuing system. If we don't have some notion of the underlying statistical processes and the resulting probability distributions, we can't have insight into the behavior of the system and will be surprised by the result. We'll call that a Black Swan, but in fact we were just too lazy to go do our homework.
But we must remember, for the approach suggested by Vasco, arrival rate, work size, and capacity for work must be stable, and the arrival rate cannot exceed the exit capacity of the service. When that is the case, the estimate at completion is available from the system.
So if #NoEstimates is Vasco's description in the talk, this is very understandable. But estimating the cost before starting is still a need in many domains, and estimating the cost at completion requires the stability of the queue and service system. Now we can start to find domains and contexts where this is applicable.
There are several quotes that resonate with me on a near daily basis in the project management domain. All of these come from direct projects or programs.
So whenever you hear a pithy platitude, see if you can put it into the following structure:
Governance is the mechanism that defines the decision rights for the project and determines how those decisions are made using the principles and practices guiding the execution of the project. Decision rights are the rules of how choices pertaining to planning, developing, and executing the project are made by the participants. Once the project is initiated, capabilities, requirements planning, budget, control, and management all create situations in which someone has to decide.
The decision rights process is the mechanism for making those decisions. In the absence of decision rights, the project participants have few mechanisms for resolving the conflicts that normally arise.
The governance of projects is concerned with 5 areas:
Governance tells us what processes, principles, and practices are needed and how they are implemented. Project governance tells us how to make decisions.
So when you hear that a proposed method, concept, or even a hash tag is about making decisions ask what is the mechanism applied, in what domain and context, that actually allows those decisions to be made? In the nuclear power software business NQA-1 is a governance process for developing systems. In the flight avionics business DO-178B is part of the governance process. In enterprise IT ITIL speaks to the governance of IT products and their deployment in the business.
Announcing something is about making decisions does not make it so in the absence of a framework, domain, and context in which to make those decisions.
When we hear about new and innovative methods for managing projects, let's ask some questions:
The checklist above is applicable in large, complex development programs, where requirements are emerging but the needed capabilities have some level of stability. This means the customer knows, to some level of confidence, what done looks like in meaningful units of measure.
The critical few in this picture are:
The others are specific to a project domain and context. For now the critical few are the show stoppers for any complex project, no matter the domain or context.
The notion of successfully managing a project starts - as always - with the 5 Immutable Principles of project success.
So What's the Point?
If these principles are not in place on your project, it'll be a disappointment. So in the end any successful project management process must answer the following:
By the way, the common objection by some in the agile community to the notion of on-budget, on-schedule as a measure of project performance is seriously misinformed. Those paying for the project not only want to know these numbers, they must know them. It's not the developer's money. It's the customer's money. They must know the final cost and the date when the project is done. They must also - of course - know that what is being produced delivers the needed capabilities. But having those capabilities late is not good. Spending much more than budgeted is not good. All three attributes are needed.
These numbers, by the way, are probabilistic. Every number on a project is drawn from a statistical process to produce a probabilistic outcome. The notion that forecasting the future performance of cost, schedule, and technical outcomes can't be done is simply nonsense. Of course it can be done. You just have to remember the things you learned in your high school statistics class, plus a few other things about probability distributions and confidence levels.
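Drawing project numbers from a statistical process is routinely done with Monte Carlo simulation. Here is a minimal sketch, using hypothetical three-point task estimates and a triangular distribution; the task names and numbers are illustrative, not from any real program:

```python
import random
import statistics

def simulate_schedule(task_estimates, trials=10_000):
    """Monte Carlo simulation of total project duration.

    Each task is a (low, most_likely, high) three-point estimate;
    task durations are drawn from a triangular distribution.
    Returns the sorted list of simulated totals.
    """
    totals = []
    for _ in range(trials):
        total = sum(random.triangular(low, high, mode)
                    for (low, mode, high) in task_estimates)
        totals.append(total)
    totals.sort()
    return totals

# Hypothetical three-point estimates (days) for three tasks
tasks = [(5, 8, 15), (10, 12, 20), (3, 5, 9)]
totals = simulate_schedule(tasks)
p80 = totals[int(0.80 * len(totals))]  # 80th-percentile completion time
print(f"Mean: {statistics.mean(totals):.1f} days, "
      f"80% confidence: {p80:.1f} days")
```

The point of the exercise is the last line: the answer is not a single number but a number at a confidence level, which is exactly what those paying for the project need to hear.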
Never calculate without first knowing the answer - John Archibald Wheeler
When we work using other people's money, we have an obligation to know what the all-in cost is, when we plan to be done, and whether we have confidence that the technical outcome will be acceptable to those who have paid for the outcomes of our project.
In order to answer these questions, we need to make estimates for cost, schedule, and technical performance before the project is complete, long before it is complete. These estimates are probabilistic forecasts of future outcomes based on past performance or our best assessment of what the outcomes could be.
We owe it to those providing the money for our work. No business can operate for long without knowing what its future obligations are for cash flow and payments to staff and vendors. To ignore this fundamental aspect of business is to ignore the creation of value for those providing the funding for that creation.
The ability to credibly estimate is a core competency of all engineering and product development activities.
It is common to call our favorite software development method risk management. Agile calls itself risk management. #NoEstimates calls itself risk management.
To be a risk management process, several things have to be in place.
With these uncertainties comes an associated risk. This risk can be of the same two types - reducible and irreducible. These two types of uncertainty go by the fancy terms epistemic and aleatory.
A risk management method has processes used to manage the risk. These are usually defined as:
Now when it is said an agile development method is a risk management process, ask which of these 6 processes are addressed by the software development method, and what artifacts are produced by it. Same when someone says #NoEstimates is a risk management process. What aspects of #NoEstimates are connected to the 6 processes of risk management? If not these 6 processes, what are the processes used to manage risk?
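Those 6 processes produce tangible artifacts, and the most familiar one is the risk register. Here is a minimal sketch of such an artifact, ranking risks by exposure; the risk names, probabilities, and dollar impacts are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a risk register (illustrative fields only)."""
    name: str
    probability: float   # 0.0 - 1.0, chance the risk is realized
    cost_impact: float   # cost consequence if realized, in dollars
    handling: str        # the handling plan (avoid, transfer, reduce, accept)

    @property
    def exposure(self) -> float:
        # Classic expected-value ranking: probability x impact
        return self.probability * self.cost_impact

register = [
    Risk("Vendor interface slips", 0.40, 250_000, "reduce: early integration test"),
    Risk("Peak-load capacity shortfall", 0.25, 600_000, "reduce: load test each release"),
    Risk("Key staff turnover", 0.10, 120_000, "accept: cross-train the team"),
]

# Track and communicate: report risks in descending order of exposure
for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{risk.name}: exposure ${risk.exposure:,.0f} -> {risk.handling}")
```

A development method that never produces an artifact like this - identified risks, analyzed exposure, a handling plan that is tracked and communicated - is not performing risk management, whatever it calls itself.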
The simple answer is that neither agile nor #NoEstimates is a risk management process, no matter how much specific advocates want them to be. They may be participants in the processes of managing risk. But they are not risk management processes.
So when you hear that phrase, listen carefully to how the term risk management has been redefined, and how easily those making that statement have skipped over the processes of managing the risk created by uncertainty. No matter how much they want their favorite development method - or NOT ESTIMATING method - to be a risk management method, it is not.
Simply start with the definition of Risk Management. Look to SEI, PMI, ISO, NIST, NASA, DOD, ITIL, and other project management sources for expansion on the notion of risk. In the software development domain look to SEI again for management processes to handle risk.
So Here's My Reason For Why We Confuse Development Methods With Risk Management
We don't know what actual risk management processes look like, so we redefine our favorite development method as risk management. It's that simple. We don't have a clear understanding of what risk management is, how it is defined by those performing risk management, and how projects that depend on risk management for their success actually operate in the presence of the uncertainty that creates risk.
This has been presented before, but never gets old. It's the introduction to how we manage our program in the presence of uncertainty.
Every systematic development of any subject ought to begin with a definition, so that everyone may understand what the discussion is about.
Marcus Tullius Cicero (106 BC ‒ 43 BC), De Officiis, Book 1, Moral Goodness
When an idea is put forward, it is difficult to engage in a conversation without knowing the domain and context in which the conversation is taking place.
We don't try to predict the future in order to control it; we try to predict the future to prepare a proper response to the possible emerging events.
Those who are not engaged in estimating the possible outcomes of future events and preparing to manage in the presence of this uncertainty will be managed by those events. Whether we are preparing for them or not, they are coming.
Before accepting the notion that estimating, forecasting, and preparation are not needed, ask what business can we work in where the future cost, schedule, and technical performance of our work is of no interest to those providing the money for our efforts?
Forecasting involves making projections about future performance on the basis of historical and current data - from "Time series Forecasting using Holt-Winters Exponential Smoothing,"
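Projecting future performance from historical and current data can be sketched in a few lines. Holt-Winters proper adds a seasonal component; this is the simpler trend-only (Holt linear) variant, and the velocity numbers are hypothetical:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    """Holt's double exponential smoothing (level + trend).

    Forecasts `horizon` periods ahead from historical observations.
    alpha smooths the level, beta smooths the trend.
    """
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    # h-step-ahead forecast: current level plus h increments of trend
    return [level + h * trend for h in range(1, horizon + 1)]

# Hypothetical story points completed per iteration (past performance)
velocity = [18, 21, 19, 24, 23, 27]
print(holt_forecast(velocity))  # projected throughput, next 3 iterations
```

Note that the forecast is built from both the current level and the trend extracted from past performance - current data alone would give you neither.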
Without current data and past performance data, no forecast of future data can be credible. Current data alone is of no use without knowing what this data should be representing at a performance level. This is the core paradigm of closed-loop control. Simply taking samples and assuming these samples represent information to the decision makers ignores the underlying system that generated those samples. Without the reference point (goal), the feedback has no way of generating the measurement error and therefore no way of providing input to the system to produce the desired output.
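The closed-loop paradigm fits in a handful of lines. A minimal sketch of a proportional controller, with an illustrative goal and gain (nothing here comes from a real program):

```python
def control_step(goal, measured, gain=0.5):
    """One cycle of a proportional closed-loop controller.

    The reference point (goal) turns a raw sample into a measurement
    error; the error drives the corrective input to the system.
    Without `goal` there is no error, and therefore no control.
    """
    error = goal - measured
    return gain * error  # corrective input for the next period

# Illustrative: steer completed scope toward a planned value of 100 units
output = 60.0
for _ in range(8):
    output += control_step(goal=100.0, measured=output)
print(round(output, 1))
```

Remove the goal and the loop has nothing to compute an error against - which is the point: samples without a reference point are just numbers, not control information.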
All sampled data from the system needs to be placed in the context of how those numbers came about, how they were generated, and what possible ranges those numbers could have taken on - not just the current set of numbers.
For example, writing down your running times for the past 6 weeks over a trail is useful, but without knowing your capacity for how fast you could run, they are of little use in improving your times over the same or similar courses. The only way to know is to go out and try to run faster.
This is fine. But if you earn your living by running, as a few do in our neighborhood, the goal-setting process is missing. Knowing how fast you need to run to win the Bolder Boulder in a Professional category is set before training starts in the winter. It's training to a goal. Amateurs don't usually do this in the Citizen category. But every A runner I know has a target goal for the 10K before the season starts, and a plan to reach that target goal by the May 30th race.
As rank amateurs we are lucky just to finish. But when we see Uta Pippig on the open space trail, she is always looking at her watch; she runs to a goal, we run for a different reason. Same for the pro cyclist on the road, always pedaling at the pace needed to maintain their goal.
As Yogi Berra says, If you don't know where you're going, you'll wind up somewhere else