The discussion (of sorts) on Twitter around "no estimates" - whatever that actually means, since there is no definitive description other than exploring - brings me back to my core experiences: program management, project management, writing software for money, designing algorithms for identifying moving targets in radar systems, and other software engineering work.
Let's start with a fundamental principle of all product or service development, internal or external. While leading a couple of handfuls of project managers at a large Department of Energy environmental cleanup site, where software development was a critical success factor - and by the way, we introduced eXtreme Programming into an ANSI-validated Earned Value Management System - our external consulting firm gave us some good advice. We were bidding our technology and services at another DOE site with similar cleanup problems. We were working on strategies, balanced scorecards, systems architectures, and the like.
That's all nice boys and girls, but here's some fundamental advice - our customer has money and we want it. What's it gonna cost and when will you be done providing the capabilities to close this site?
That's it, that's the winning strategy. The customer has a need, and we want to provide a solution to it. How much will it cost and when will we have it? If we can answer those three questions - cost, schedule, delivered capabilities, with attached unassailable beneficial outcomes - we will win. This is a business strategy. All the Balanced Scorecard presentations, examples of past performance, and deep references of success are for naught if the customer can't afford our solution. It comes down to this - and this is what I learned from the Managing Partner of the Big 6 (at that time) consulting firm.
You can't know the value of something until you know its cost
That's a fundamental principle of all business transactions. Value is always exchanged for cost. We do this when we buy a Venti Nonfat Latte at Starbucks. We do this when we pay the lawn care company to mow and trim weekly. We do this when we buy anything, including software or the services that produce the software.
This is an immutable principle of commerce
So when we hear there are alternative ways of writing software for money that don't involve estimating the cost of doing that work, think again. How did you get around the immutable principle of commerce? Notice I used the word estimate in the same post as know. Yes, estimating allows us to know something to some level of confidence.
I'll estimate that my 1 hour drive to work every day will be extended to 1 hour 20 minutes when the snow storm arrives tomorrow night.
I know I'd better have margin in my drive schedule, if I want to attend the 8:30 stand up.
I estimate that it will take 3 days to install and verify the database for the system, given the historical data from the last 3 times we did this.
This knowledge can then be used to plan the access to the server room, and to arrange for all the verification and validation data we'll need to certify the contents are ready for use by the customer.
We estimate to a degree of confidence, things (time, cost, performance) we'll need to know about to do our job.
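That database-install estimate above can be grounded in the simplest of reference-class tools: take the durations of past installs, compute a point estimate, and add a margin. This is a minimal sketch; the duration values are illustrative, not from the post.

```python
import statistics

# Hypothetical durations (days) from past database installs - a small
# reference class of "the last few times we did this."
past_durations = [2.5, 3.0, 3.5, 2.8, 3.2]

mean = statistics.mean(past_durations)
stdev = statistics.stdev(past_durations)

# A simple one-sided margin: mean plus one standard deviation covers
# roughly 84% of a normal distribution - "3 days, to some confidence."
estimate_with_margin = mean + stdev

print(f"point estimate: {mean:.1f} days")
print(f"estimate with margin: {estimate_with_margin:.1f} days")
```

Even five historical data points beat a bare guess: they give both a point estimate and a defensible margin to carry into the plan.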
So How Can We Learn To Estimate?
Here's where we start. We start with what has taken place in the past. We've never done this before, you say. I'd suggest - having worked literally in the rocket science world - there is very little in the commercial software world that hasn't been done by someone, somewhere, in the past. You may not know these people, but it's been done. And more importantly, it's the people issues that muck up the project most of the time, not the technical ones - unless of course it actually is rocket science, or stealth fighter science, or bioscience.
Then comes the second best basis-of-estimate approach - What is this like? We've done similar things in the past; how is this problem like those solutions? Next comes the 10 questions approach. The Planning Game. Then a parametric approach. Function Points, COCOMO, SEER, Price-S, SLIM, CoStar, and a long list of other basis-of-estimate tools, some free, can guide you. So when you hear software can't be estimated, change the phrase to I don't know how to estimate software projects, but I can sure look into learning how.
Finally the least desirable way to estimate is to ask the expert. This only works if the expert has been calibrated with a reference class, has her optimism bias in check, knows all about anchoring and adjustment, and has a track record for producing credible estimates. If not, you're going to be disappointed in the result.
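To make the parametric approach concrete, here is a sketch of the published Basic COCOMO model (organic mode), one of the tools named above. The coefficients are the standard published ones; the 32 KLOC input is a made-up example, and real estimating tools layer many cost drivers on top of this.

```python
def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Basic COCOMO, organic mode: effort in person-months and
    schedule in calendar months from estimated thousands of lines
    of code (KLOC). Coefficients are the published organic-mode values."""
    effort = a * kloc ** b          # person-months
    schedule = c * effort ** d      # calendar months
    return effort, schedule

# A hypothetical 32 KLOC project
effort, months = basic_cocomo(32)
print(f"{effort:.0f} person-months over {months:.0f} calendar months")
```

The point is not the specific numbers but that a parametric model turns "we can't estimate" into a calibrated starting position you can then adjust with domain knowledge.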
But our management uses our estimates against us. Our management doesn't understand the notion of probability and statistics. Our management behaves like Dilbert's boss. None of this has anything to do with the need to estimate the cost, schedule, and technical performance of the product and service needed by your customer. It has everything to do with managing up. And if that's not possible, produce a credible estimate with those risks baked in to protect yourself. And if that's not possible, start looking for a better manager or even a better job, because your company is going to be in the ditch before long.
So when we hear that estimating is the smell of dysfunction - without ever listing one single dysfunction - remember there are lots of dysfunctions in business. This is normal, because humans are involved. But that dysfunction is not caused by the need to estimate. The need to estimate is a core business process. Doing bad estimates, doing estimates for the wrong reasons, doing estimates wrong - those are dysfunctions, and they are universal.
In the end you need to either nut up or shut up as Woody says. Yes, that Woody. Learn to estimate for all the right reasons, then when there is an opportunity to have an enlightened manager at your current firm or a new firm, you'll be prepared to contribute value to the business process in ways that benefit the top line.
That top line, minus the costs to produce the goods or services, is the bottom line (in its simplest form) - and that is what writing software for money is all about. Knowing the middle line - cost of goods sold (COGS) - is critical to actually staying in business.
There is a post that references a concept I've come to use that puts uncertainty into three classes. That post is not exactly what I said, so let me clarify it a bit.
First, some background. I work on an engagement that provides advice to an office inside the Office of the Secretary of Defense (OSD). This office is responsible for determining the Root Cause of program performance for ACAT1 (Acquisition Category 1) programs.
These are large programs - larger than $5B. In most domains outside the ACAT1s this number is ridiculously large. But inside the circle of large defense programs, $5B is really not that much money. For the Joint Strike Fighter, Congressional Quarterly and the Government Accountability Office indicated a "Total estimated program cost now $400B," nearly twice the initial cost. DDG-1000 is $21,214 million - yes, that's $21,214,000,000.
No IT or software development project would come within a millionth of that. If you're interested, there are reports at RAND and IDA on the current issues. There are certainly multi-million dollar IT projects. The ACA web site is probably going to be in the range of $85M to several hundred million. The facts are still coming in. So anyone who says they know, and doesn't work directly on the program, probably doesn't know and is making up numbers. GAO will get to the real numbers soon, we hope.
Principles Rule, Practices Follow, Everything Else is BS
The principles of cost and schedule estimating, and of assessing the related technical and programmatic gaps, are the same in all domains at every scale, from small to multi-billion dollar. Why? Because it's the same problem no matter the scale.
The soliloquy in the movie makes a good point - handling the truth is actually very difficult for almost everyone outside the domain, in many instances.
We want the simple answer. We want it all to be fine. We really don't want to do the heavy lifting needed to come up with an answer. Many times we don't want an answer at all; we want to just do our job and ignore the fiduciary responsibility to tell others what the cost and schedule impacts are, or even to do our job of discovering what DONE looks like before we start spending other people's money.
So here's the way out of the trap of at least (1) and (2)
But the words used in the original post that referenced my post are not my intent, nor are they part of any process I work in.
Here's a list of other posts on this topic. It's a critically important topic, one that deserves detailed analysis - one we're obligated to know and use when it's not our money we're spending. It's called Governance.
Here's some more discussion on Estimating for fun and profit.
There are endless discussions about what went wrong with the Affordable Care Act web site development and deployment. It'll be hard to tell at this early point in the project assessment. But what is clear is this was most likely a failure of project management.
Below is the acquisition life cycle for business systems in the DoD. Not that HHS is a DoD-style shop, but the paradigm of iterative and incremental development is in place. The release cycles shown here are way too long for something like the ACA site, but the topology of the process is sound.
Looking at this process there is an obvious starting point: the Business Capability Definition. What is the resulting system supposed to do in terms of capabilities? Not the technical and operational requirements, but what business capabilities will the system provide to the stakeholder when it is in full operation? This is called Initial Operating Capability (IOC).
In our domain we start defining the capabilities using the Defense Acquisition Guide. Here is where Measures of Effectiveness (MoE) are defined. The Measure of Effectiveness is assigned to a capability. If we want a capability, how effective does it have to be? This measure is not a technical performance or a requirement. It is an effectiveness measure.
A MoE for a UAV program we work on would be The UAV shall be transportable within a 3,000 mile radius via a C-17/C-141/C-5 package. From the MoE there is a Measure of Performance (MoP). For example, weight is a MoP that enables the MoE to be fulfilled. Lower down are Technical Performance Measures (TPM). For example, the weight of an Electro-Optical / Infrared sensor platform must be under 55 pounds for the UAV to operate properly. It can't be too light, or it would disrupt the center of gravity, and it can't be too heavy, because the UAV would burn too much fuel to accomplish its mission.
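A TPM like the sensor-weight example is usually tracked as a band, not a single limit. Here is a minimal sketch of such a compliance check; the 55 lb upper limit comes from the example above, while the lower bound and names are illustrative assumptions.

```python
# Hypothetical TPM record for the EO/IR sensor weight example.
# The 55 lb upper limit is from the text; the 45 lb lower bound is assumed.
TPM_SENSOR_WEIGHT = {"name": "EO/IR sensor weight", "lower_lb": 45.0, "upper_lb": 55.0}

def tpm_compliant(measured_lb, tpm):
    """A TPM is met only inside its band - too light disrupts the
    center of gravity, too heavy burns too much fuel."""
    return tpm["lower_lb"] <= measured_lb <= tpm["upper_lb"]

print(tpm_compliant(52.0, TPM_SENSOR_WEIGHT))  # within the band
print(tpm_compliant(58.0, TPM_SENSOR_WEIGHT))  # over the limit
```

The value of framing a TPM this way is that compliance becomes a measurable yes/no at every assessment point, which is exactly what the MoE-to-MoP-to-TPM chain needs.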
So for the ACA site, we'd need to know if there were MoE's, MoP's, TPM's defined that enable the Capabilities to be delivered. Here's the Performance Reference Model for federal IT.
Since the ACA site is pretty much all software, I'm going to suggest that this approach of using Capabilities Based Planning, MoE's, MoP's, and TPM's has nothing to do with how the software is built. Either traditional or agile methods can be used. Agile is likely faster, but agile can only work in a domain like this if you know what DONE looks like in terms of MoE's, MoP's, and TPM's. This is a fixed launch date, a fixed set of requirements guided by all the insurance regulations, and hopefully some not-to-exceed budget.
It is a common myth that government acquisition is waterfall and big design up front. DoD 5000.02 prescribes an iterative process designed to assess the viability of technologies while simultaneously refining user requirements. (pg 16 of 5000.02).
One starting question of the ACA Site would be - did they apply the iterative acquisition process in some form, no matter the fidelity of the iterations?
Here are some other fundamental questions as well:
If the answer to any of these is no or we don't know, go find out, get project managers who can do this. Otherwise the probability of project success is reduced. In fact look at the Probability of Program Success literature for further guidance.
The final question is: did they have an Integrated Master Plan and Integrated Master Schedule for all the work, as described in the Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide? This paradigm has been shown to significantly increase the probability of success no matter the domain, context, development method, technology, or business process. It states in clear, concise, and unequivocal terms what DONE looks like at every point in the project, in units of measure meaningful to the decision makers.
The final - and killer - question is: did the project team ruthlessly manage the changes to the capabilities? This, I suspect, is the root cause of the failure. Late changes to complex projects are the kiss of death.
As repeated often here...
Don't do stupid things on purpose
So Now What?
We have to wait to see what the Root Cause Analysis (RCA) shows for the failure of the project. But I'd conjecture the program management processes found in large DoD or NASA programs were not applied in any meaningful way. The site is not large compared to most of the programs we work ($400M is small), but the processes used to manage those programs can be scaled down with ease. The Principles are the same. The Practices and the Processes are scalable as well.
This is from an article about the application of Bayesian Statistics to a civil suit in the UK over the source of a building fire.
The idea that you can assign probabilities to events that have already occurred, but where we are ignorant of the result, forms the basis for the Bayesian view of probability. Put very broadly, the 'classical' view of probability is in terms of genuine unpredictability about future events, popularly known as 'chance' or 'aleatory uncertainty'.
The Bayesian interpretation allows probability also to be used to express our uncertainty due to our ignorance, known as 'epistemic uncertainty', and popularly expressed as betting odds. Of course there are all gradations, from pure chance (think radioactive decay) to processes assumed to be pure chance (lottery draws), to future events whose odds depend on a mixture of genuine unpredictability and ignorance of the facts (whether Oscar Pistorius will be convicted of murder), to pure epistemic uncertainty (whether Oscar Pistorius knowingly shot his girlfriend).
When we build probabilistic models of project performance - cost, schedule, and technical - we assume we understand the underlying statistical processes that drive these probabilistic generating functions. These are the aleatory uncertainties in duration, cost, and performance. We define the Probability Density Function in the Monte Carlo Simulator. Then we apply that to the network of work activities (the Integrated Master Schedule) to produce confidence outcomes for completing on or before a planned date and at or below a planned cost. This is all fine and dandy. But we really don't know the underlying drivers that create coupling, correlation, and cross correlations between the work activities, cost, and technical performance. These can be modeled by discovering the drivers in the network.
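The Monte Carlo process described above can be sketched in a few lines. This toy model uses three serial activities with triangular duration PDFs; the activities and numbers are illustrative, not from any real IMS, and a real simulator would handle the full network topology, correlations, and cost as well.

```python
import random

random.seed(1)  # repeatable illustration

# A toy three-activity serial network. Each duration is aleatory -
# sampled from a triangular (min, most-likely, max) PDF, in days.
activities = [
    (8, 10, 16),   # design
    (15, 20, 35),  # build
    (4, 5, 10),    # verify
]

def simulate_once():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities)

trials = sorted(simulate_once() for _ in range(10_000))
p80 = trials[int(0.80 * len(trials))]  # 80th percentile completion
print(f"80% confidence of finishing at or before {p80:.1f} days")
```

Note that the 80% confidence date lands well past the sum of the most-likely durations, because the right-skewed PDFs push the tail out - exactly the effect a single-point estimate hides.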
For the epistemic uncertainties we need another modeling tool. The current tools don't actually use Bayesian statistics; rather, they use Monte Carlo Simulation and treat the Probability of an Event as an aleatory process integrated with the other PDFs, ranges, and their shapes (kurtosis and skew).
We're missing the tools needed to construct a credible epistemic model of how the program works: using the Integrated Master Schedule (IMS) as the topology for the work, the probabilistic behaviors of the work elements at each node - cost, schedule, and technical performance compliance of the products - and the coupling and cohesion of the nodes. With this information - assuming it is credible, which is a HUGE assumption - we could model the behavior of the program and ask what-if questions.
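To show what the Bayesian treatment of epistemic uncertainty looks like in miniature: a probability that expresses our ignorance gets updated as evidence arrives. The scenario and numbers below are hypothetical, purely for illustration of the mechanics.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Update an epistemic probability - our ignorance, not chance -
    with one piece of evidence, via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# Hypothetical: prior belief that the integration will pass on schedule
# is 0.5. A successful dry run is assumed 4x as likely if that's true
# (0.8) than if it isn't (0.2).
posterior = bayes_update(0.5, 0.8, 0.2)
print(f"posterior after one successful dry run: {posterior:.2f}")
```

Each new test result, vendor report, or dry run becomes another update, so the model's "betting odds" on the program sharpen as ignorance is reduced - something the aleatory-only Monte Carlo treatment cannot express.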
There is a class of responders on the forums that continues to seek simple and simple-minded solutions to complex problems, claiming the problems can be solved with simple solutions. Of course those solutions cannot be demonstrated with any credibility outside of a limited domain. Let's start with Mencken.
For every complex problem there is an answer that is clear, simple, and wrong. - H. L. Mencken
In the project domain, most of the problems start with poor estimates of cost and schedule. Hollmann starts it off http://lnkd.in/vjf4X4. Bent Flyvbjerg takes a more radical view, calling estimators liars, but I chalk that up to a language barrier.
The core problem starts with the politics of estimating. Rarely do the buyers of the project know what it should cost. So when they hear a number that is larger than their expectations, they reject the number. They signal to the sellers what they expect to pay for something. This expectation can come from poor knowledge or simply a naive understanding of the problem, or worse a public promise that simply can't be kept.
But a second problem exists: across domains, there is the wrong-headed belief that one domain has the solution to another domain's problems. We need to break the loop where we're stuck thinking that problems are only found in other domains, making the ill-informed suggestion that "if you just did it my way," there would be no problem. There is a fundamental issue with managing complexity and managing in the presence of uncertainty. We want it to be simple; it is not. But since it is hard, people don't want to actually work hard on the solution, instead seeking simple and many times "simple minded" solutions - buy this tool, use my Excel spreadsheet, dumb down the processes.
So What Is The Solution?
First is to recognize that ALL domains have similar problems. Cost and schedule estimates have been discussed for many decades. Lots of literature leads us back to the singular source - the political will to search for the credible estimate, even if that means not starting the project. Or worse yet, starting the project knowing the estimates are flawed.
One provocateur voice in our community suggests that PMs who take on projects with poor estimates should be held criminally liable for the results. This of course is laughable, but it demonstrates a complete misunderstanding of both the problem and the solution.
This approach reminds me of Pauli's quote when presented with a nonsensical submission from a student.
They were worse than wrong because they could not be proven wrong. Famously, he once said: Das ist nicht nur nicht richtig, es ist nicht einmal falsch! "Not only is it not right, it's not even wrong!"
This is where we are now. Not much in the form of research and questioning; lots of pontificating about how the other guy is all messed up, and if you'd only do it my way, you'd have your solution. Reminds me of the current US political problems. Do it my way. Of course that doesn't work, because we have complex - wicked - problems, and there is no single way. Mencken was right, and he was right long before any of the current problems.
We may be doomed to repeat history without first recognizing that the problem is us.
I was at a conference for government contractors and the software systems we use to manage those programs. In the federal government agile is moving forward. Greater Accountability and Faster Delivery Through Modular Contracting.
During one of the sessions there was a quote that says it all about Agile Development:
AGILE MEANS YOU'RE BEING PAID FOR DISCOVERING THE SCOPE OF THE PROJECT
This says it all. Agile is a powerful approach to software development in the commercial world. Agile in the government domain, encounters the Federal Acquisition Regulations (FAR) and the Defense Federal Acquisition Regulation Supplement (DFARS).
In these environments, Earned Value Management, the Six Business System Rules, and other procurement guidance define how work is performed. This is all being worked out, but the quote above did it for me!!
In the past, NAVAIR had a CD with all the SETR Program Event Checklists. Each had an Excel spreadsheet that you filled out to determine if you were ready for the Event, and a supporting document that describes all the details of the Event, both entry and exit criteria. The picture above has the link to the current spreadsheets.
This approach to the Integrated Master Plan spelled out explicitly what "done" looked like, and there was no way to skip around the checklist and call the work "done" without passing all the items with a GREEN. I've requested the latest from NAVAIR, but it looks like it has been expanded to the DoD level now.
This paradigm is at the heart of describing what "done" looks like in units of measure for both the buyer and the seller. Measures of Effectiveness for the buyer and Measures of Performance for the seller.
Jeff Sutherland has a post about agile in the Department of Defense. Good stuff, but there are some issues with "going agile" in the Federal IT space. The first is that contracts over $20M require Earned Value Management, per FAR/DFARS flow downs and the OMB Part 7 guides for "FedCiv."
The second is that the contract vehicles need to be adapted to how agile does things with requirements, changes, deliverables, progress payments, and fee calculations. Here's the "tip of the iceberg" for integrating Scrum with Federal Acquisition. Scrum is a wonderful concept, applicable to a large majority of projects, but it's not quite as simple as it might seem.
Here is a recent briefing on the topic of agile in the federal procurement world, the focus of many organizations in and out of the government. Management Concepts is one source of training for agile in the Fed space; there are others. But in the end this is an acquisition issue.
Changes are needed in the FAR/DFARS clause flow downs that "put on contract" the behaviors mandated by the regulations for how software is developed, acquired, and put into service. While the agile community has many power tools to improve the problem in the federal government, it needs to come at this from an acquisition point of view for those benefits to be realized.
Here's some prior discussion
Today is Veterans Day. As a veteran I have a special experience working with and for veterans. It is more than a shared experience, no matter the service, rank, or experience. It is a shared understanding of service, sacrifice, duty, and honor.
Let us remember those who served today. Not because it is a national holiday, but because their service allows us to celebrate this holiday.
I returned 14 August 1970. Many did not, let us remember them most of all.
Management Concepts is hosting a webinar where John Driessnack and I will be speaking about Agile in the Federal environment. This is the first in a series of discussions about integrating Scrum and Scrum-like software development processes into projects and programs subject to the Federal Acquisition Regulations and other guidance like Office of Management and Budget (OMB) Part 7.
If you work in the Federal Government or for a contractor that works for the Federal Government and are wondering when the Fed's are going to figure out "agile," please join us to hear more.
It rolls down with a change in administration. With a failure of a major project that used agile. With a change in the whim of the leaders. With an over promise under deliver approach of some vendors.
I'm working on an upcoming webinar on introducing agile into the federal IT space. This has been going on for a year, and we're nearing the outcomes from our efforts for a webinar, a one day course and a three day course.
There is a useful report that outlines both the problem and some solutions around this problem that is worth the investment. Visit the site under the cover picture to the right, read the report, follow the links, and see where this complex topic is headed.
If any domain is in need of agile approaches, it is Federal IT. But not just any approach will work. All the hand waving, Kumbaya talk of self-directed teams sitting in the same room with their customer, letting the requirements emerge as the money is being spent, probably isn't going to pass the smell test of Congressional oversight of spending the public's money.
Something else is needed. The integration of Agile with the contractual management of the Federal Acquisition Regulation.
As the webinar progresses, the courses develop, and the planned book emerges, along with the hands-on deployment, I'll post more.
Mary has a new essay titled Don't Separate Design from Development. It's one of those "looking back, to see forward" style of essays. I suspect Mary delivered software successfully because she was a subject matter expert and knew what the customer wanted.
I designed, developed, and programmed control systems - flight systems - and led the development of a fault tolerant process control computer (www.triconex.com), where we built an operating system, I/O board controls using the Intel 8051, and a set of triple redundant National Semiconductor processors (a CMOS version of the M68000). This was done in C with an embedded real-time operating system from WindRiver.
I can't remember having any formal specifications. We developed hardware, firmware, bent metal, developed the Control System OS, invented a programming language, and developed a programming workstation on the original Compaq computer. All by simply "knowing" what to do, because we had done it before, or we were literally inventing the future. We had SMEs from our customers (Exxon, Boise Cascade, Shell, Total, and Elf). We had process engineers (ChemEs), we had RTOS developers from our previous defense systems. We had white hot C/Unix developers (the real UNIX, the one that came on a 9-track tape from Bell Labs).
But those days are over for lots of reasons.
So Mary is right, lots of software can be built without stories and detailed requirements. But that takes place ONLY because all that "stuff" is in the heads of the engineers and developers who have done it before - successfully done it before.
But now we work much more complex programs, systems where the lowest level components are well defined from the subject matter experts, but the integration of those components is complex and evolving.
Mary's quoting of Gilb and Brooks is right on. In our Aerospace and Defense business this is RARELY the case. Embedded or software-intensive systems have tight connections between the "engineering" of the systems and the "design" of the systems. These systems are developed incrementally and iteratively. This is the fundamental principle of Systems Engineering, the V-Model, and the Integrated Master Plan / Integrated Master Schedule. As well, "process is King" in our domain, so anyone wanting to skip the process can look for work elsewhere.
At the notional level, this is how programs are run in the A&D and other Systems Engineering domains. This approach is rarely found in the enterprise IT world, except where enterprise IT is inside an A&D firm. When it says detailed design, this is context and domain sensitive. On some embedded systems, SysML is used with UML to define all the interfaces between the components. In other domains, detailed design goes down to the board level and the timing diagrams of the firmware-controlled system that will be sent to an ASIC fab shop that converts software (Handel-C) to hardware.
The appropriateness of the level of design is always considered, since the programs are on contract for a delivery date, a target budget, and a specific set of performance parameters.
This concept is also missing from enterprise IT - the idea that the project is a "business," with a profit margin, an expected delivery date, and a minimal set of capabilities. I'd suggest that if projects were run that way, better results would appear. Things like doing the "appropriate amount" of documentation. Since documentation doesn't fly to orbit, we do what is needed to get the spacecraft to orbit, and that's all. Documentation is needed, but documentation that doesn't support the capabilities - fly to LEO - is a waste. Why this is not well understood outside our domain, I don't know.
Bad Project Management is my guess.
The notion that large programs have heavy, unwieldy processes, and that agile methods - both software development and project management - are the solution, is many times just that: "notional."
Let's start with the top level principle of large program acquisition in the Federal Government
Many if not most large government acquisitions have unwieldy processes, poor track records, and lots of blame to go around. But the first chapter of the Federal Acquisition Regulation (FAR) provides some guidance that can be applied to any program, project, or development effort.
Notice the timely, best value, for the customer, maintaining trust. Add working together as a team, and being empowered to make decisions in their area of responsibility.
Is this guidance followed? Rarely. Why? Because guidance alone is not sufficient for success. If that were the case, we wouldn't need lighter weight, agile processes to pull us out of the messes we've created on large federal programs.
Agile needs to heed this cautionary example - as agile moves into the enterprise, it will be just the latest attempt to lighten the load of processes and formality.
Taking these words as stand alone and looking to agile development and project management methods may provide the mission and vision around improving project performance in smaller domains.
The NDIA Information Systems Summit II held in Baltimore was interesting in many ways:
The prime outcome of the conference is:
How to make Agile work in DoD
The basis for all this is anchored in several sources. The best one is of course Dan Ward's F.I.S.T. work. Dan's The FIST Handbook, The Simplicity Cycle, and Rogue Project Leader should be mandatory reading for any agile advocate working in the DoD world, as should the interview with Dan on POGO.
There will be much more on this topic in the coming months. But here's the core theme...
Agile software development can be used in formal and structured environments - like DoD - if the baseline conditions for the program (FAR/DFAR/OMB) are the starting point, then following the guidance from Dan, and then and only then deploying the agile principles.
I'm going to conjecture this approach will result in the broadest deployment of agile development processes. The IT budget of $80,000,000,000.00 completely overwhelms every other IT project collection on the planet. This is where the maximum impact will take place.
There is a well-worn myth in the agile community that large complex projects are developed using waterfall methods, where all the requirements are defined upfront (BDUF), rigid processes are used to execute the program, and the outcome is defined before the project starts. This of course is great fodder for replacing the "devil" waterfall process with agile processes.
For anyone interested in how a program actually works, here's a good book that can be found at the Air University Press book store: "The Development of the B-52 and Jet Propulsion: A Case Study in Organizational Innovation," Dr. Mark D. Mandeles, Air University Press, Maxwell Air Force Base, Alabama, March 1998. The Air University Press has many books on management, leadership, strategy, and history.
Quoted from the preface:
The B-52 and Jet Propulsion: A Case Study in Organizational Innovation is a coherent and nonpolemical discussion of the revolution in military affairs, a hot topic in the national security arena. Mark Mandeles examines an interesting topic, how can the military better understand, manage, and evaluate technological development programs. We see Murphy’s Law (anything that can go wrong, will go wrong) in operation. No matter how carefully the military designs, plans, and programs the process of technological development, inevitably, equipment, organizations, and people will challenge the desired expectations. Mandeles argues convincingly that recognizing the inevitability of error may be the single most important factor in the design of effective organizations and procedures to foster and enhance innovative technology and concepts.
The book focuses on the introduction of jet propulsion into the B-52. This case study illustrates the reality that surprises and failures are endemic to development programs where information and knowledge are indeterminate, ambiguous, and imperfect. Mandeles’ choice of the B-52 to illustrate this process is both intriguing and apt. The military had no coherent search process inevitably leading to the choice of a particular technology; nor was decision making concerning the B-52 development program coherent or orderly. Different mixtures of participants, problems, and solutions came together at various times to make decisions about funding or to review the status of performance projections and requirements.
Sounds like an agile project to me.