We gave a recent College of Performance Management webinar on using technical progress to inform Earned Value. Here are the annotated charts.
We're preparing for a webinar on 25 September 2014, now titled Using Technical Performance to Inform Earned Value, which addresses the disconnect in EIA-748-C between two statements.
To reconcile these two statements, we need a process for informing Earned Value (BCWP) with the Technical Performance of the products being built. After the webinar, we'll post the link.
In the meantime, here's a list of resources gathered to support this topic.
There is a popular notion that Projects are somehow not needed to develop software. Just fund the Team, and the business outcomes needed to fulfill a strategy and deliver value to the balance sheet will appear.
This may work in domains where the project team, the business and its processes and the funding sources are in fact all the same - a small startup for example. When the business moves beyond this self-contained startup mode, other processes become needed to Govern the use of funds - Stewardship of Funding.
Here's how to approach the process, once the firm moves beyond a collection of individuals personally interacting with their customer base. These processes, data, and outcomes belong to the realm of Governance.
From another source: "The right principles trump practices every time" - Dean Leffingwell.
This notion that practices and processes can be put forward in the absence of testing them against principles has become popular.
The most visible, of course, is the notion that decisions can be made in the absence of estimating the cost and impact of those decisions. The principle of the microeconomics of software development was first stated by Dr. Barry Boehm. Early in that #NoEstimates discussion was a comment that all those ideas are old and no longer applicable. Of course that ignores the principle of microeconomics, along with most every other principle of managing projects while spending other people's money.
There are other principles of project success as well.
Here's how to develop the answers to the questions those principles raise.
For most projects showing up on or near the planned need date, at or near the planned cost, and more or less with the planned capabilities is a good measure of success. Delivering capabilities late and over budget is usually not acceptable to those paying for our work.
So how do we do this? Simple actually.
We start with a Plan. Here's the approach to Planning and the resulting Plan.
The Plan tells us when we need the capabilities to produce the needed business value or accomplish the mission. The Plan is a strategy. This strategy involves setting goals, determining actions to achieve the goals, and mobilizing resources to execute the actions. The strategy describes how the ends (goals) will be achieved by the means (resources) in units of measure meaningful to the decision makers.
Strategy creates fit among a firm's activities. For Enterprise IT, this fit is defined by the relationships between the needed capabilities delivered by the project. The success of a strategy depends on doing many things well, not just a few.
The things that are done well must operate within a close-knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions.
When this occurs, operational effectiveness determines the relative performance of the organization. [M. E. Porter, "What is Strategy?", Harvard Business Review, Vol. 74, No. 6, pp. 61–78.]
A successful Plan describes the order of delivery of value in exchange for cost, the interdependencies between these value-producing items, and the synergistic outcomes from these value-producing items working together to meet the strategy.
With the Plan in hand, we can ask and answer the following questions:
This Post Answers the Last Question
The example below is from our cycling group. The principles are the same for projects. We have a desired outcome in terms of date, cost, and technical performance. These desired outcomes have some end goal. A budget, a go live date, a set of features or capabilities needed to fulfill the business case.
Along the way we need to take corrective actions when we see we are falling behind.
How did we know we were falling behind? Because we have a desired performance at points along the way that we compare our actual performance to. The difference between our actual performance and the desired performance creates an "error signal" we can use to make adjustments.
Our thermostat does this, the speed control on our car does this, and the Closed Loop Control systems used for managing our projects do this. So replace the cycling example with writing software for money, and the peloton with the desired performance of our work. In the presentation below, ignore the guy in the Yellow Jersey at the end. Turns out he's a doper and an all-around bad person to his fellow riders and fans.
This example can be related to a project.
This is Closed Loop Control
Your cruise control does this about every 10 milliseconds. Your Nest thermostat does this more slowly, but still in under a minute. To know how often you need to sample your progress against plan, answer this question:
How long are you willing to wait before you find out you're late? Sample at ½ that time.
This is called the Nyquist Rate, one of the starting points for all the process control software I wrote in my younger days for flying and swimming machines. But it's a good question to ask on all projects as well.
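As a sketch, the sampling rule above reduces to a one-liner (the function name and the 28-day example are mine, not a standard):

```python
def sample_interval(max_tolerable_delay: float) -> float:
    """Nyquist-style sampling rule for project progress: if you can
    tolerate being unaware of a slip for at most this long, assess
    progress at half that interval so no slip goes undetected."""
    return max_tolerable_delay / 2.0

# Willing to wait at most 4 weeks (28 days) to learn you're late?
# Then sample progress every 14 days.
print(sample_interval(28.0))
```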
There is a suggestion that only the final target of a project's performance is needed to steer toward success. This target can be budget, a finish date, the number of stories or story points in an agile software project. With the target and the measure of performance to date, collected from the measures at each sample point, there is still a missing piece needed to guide the project.
With the target and the samples alone, no error signal is available to make intermediate corrections to arrive on target. With the target alone, any variances in cost, schedule, or technical performance can only be discovered when the project arrives at the end. With the target alone, this is an Open Loop control system.
Irreducible Uncertainty can only be handled with Margin: cost margin, schedule margin, technical margin. This is the type of margin you use when you drive to work. The GPS navigation system says it's 23 minutes to the office. It's NEVER 23 minutes to the office. Something always interferes with our progress.
Reducible Uncertainty is handled in two ways: spending money to buy down the risk that results from this uncertainty, and holding Management Reserve (budget reserve and schedule contingency) to be used when something goes wrong, to pay for the fix when the uncertainty turns into reality.
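One hedged sketch of sizing schedule margin for irreducible uncertainty. The triangular distributions, the serial task durations, and the 80% confidence level are illustrative assumptions, not a prescription:

```python
import random

def schedule_margin(tasks, confidence=0.80, trials=10_000, seed=1):
    """Size schedule margin for irreducible uncertainty.
    tasks: (min, most_likely, max) duration triples for serial work.
    Margin = confidence-percentile simulated total duration minus the
    deterministic most-likely total."""
    random.seed(seed)
    most_likely_total = sum(ml for _, ml, _ in tasks)
    totals = sorted(
        sum(random.triangular(lo, hi, ml) for lo, ml, hi in tasks)
        for _ in range(trials)
    )
    return totals[int(confidence * trials)] - most_likely_total

# Three serial tasks, durations in days; the margin protects the end date
tasks = [(8, 10, 16), (4, 5, 9), (18, 20, 30)]
print(f"schedule margin: {schedule_margin(tasks):.1f} days")
```

The margin is what you add on top of the most-likely date, precisely because the distributions are skewed to the right.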
The next figure (page 28) shows how to manage in the presence of these uncertainties, by measuring actual performance against the desired performance at each step along the way.
In this figure, we measure at each assessment point the progress of the project against the desired progress: the planned progress, the needed progress. This planned, desired, or needed progress is developed by looking at the future effort, duration, risk, and uncertainty (the stochastic processes that drive the project) and determining what the progress should be at this point in time to reach our target on or before the need date, at or below the needed cost, and with the needed confidence that the technical capabilities can be delivered along the way. This is closed loop control.
The planned performance, the needed performance, the desired performance is developed early in the project. Maybe on day one, more likely after actual performance has been assessed to calibrate future performance. This is called Reference Class Forecasting. With this information estimates of the needed performance can then be used to establish steering targets along the way to completing the project. These intermediate references - or steering - points provide feedback along the way toward the goal. They provide the error signal needed to keep the project on track. They are the basis of Closed Loop control.
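A minimal sketch of those intermediate steering points as an error signal. The planned and actual numbers and the 10% threshold below are hypothetical:

```python
def error_signals(planned, actual, threshold=0.10):
    """Closed-loop steering: at each assessment point, compare actual
    progress to the planned (reference) progress. A variance beyond the
    threshold demands corrective action now, not at the end of the
    project when it is too late to steer."""
    signals = []
    for period, (p, a) in enumerate(zip(planned, actual), start=1):
        variance = (a - p) / p  # relative error signal at this point
        if abs(variance) > threshold:
            signals.append((period, round(variance, 2)))
    return signals

planned = [10, 25, 45, 70, 100]  # planned cumulative progress (e.g., BCWS)
actual = [9, 22, 36, 52, 78]     # measured cumulative progress (e.g., BCWP)
print(error_signals(planned, actual))
```

Without the `planned` reference series there is nothing to subtract from, which is exactly the open-loop problem described above.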
In the US, many highways have rumble strips cut into the asphalt to signal that you are nearing the edge of the road on the right. They make a loud noise that tells you - hey get back in the lane, otherwise you're going to end up in the ditch.
This is the purpose of the intermediate steering targets for the project. When the variance between planned and actual exceeds a defined threshold, this says hey, you're not going to make it to the end on time, on budget, or with your needed capabilities if you keep going like this.
Kent Beck's quote is...
Optimism is the disease of software development. Feedback is the cure.
This feedback must have a reference to compare against if it is to be of any value in steering the project to a successful completion. Knowing it's going to be late, over budget, and doesn't work when we arrive at late, over budget, and not working is of little help to the passengers of the project.
Much of the objection to SAFe comes from its seemingly Top Down paradigm. Many agile voices object that this approach is not agile, in the way they define agile - individual teams making their own decisions about what to do with their customer.
The domain of this bottom up approach is usually not well defined, other than the classic eXtreme Programming or the Agile Spectrum of Guy Strelitz where Co-Hacking is on the left, where the developers live by the pure agile manifesto.
But what happens when agile is applied to an enterprise development effort? One where the business needs define capabilities that are not emergent, but rather are needed to fulfill the business strategy or the mission of the organization. Then another paradigm emerges. One where higher-order questions, frameworks, framing assumptions, governance, and other externalities trump the needs of the individual team.
Here's one approach that has served us well over time.
The statement is a bit off, since it's the Capabilities that are defined by the business. These capabilities are then turned into requirements, which may in fact emerge, which themselves are turned into working software. Starting with the capabilities, an enterprise software development effort means re-looking at the agile manifesto statements.
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
Business people and developers must work together daily throughout the project.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development.
The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity - the art of maximizing the amount of work not done - is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
So What Does All This Mean?
Without a domain, it's hard to assess the applicability and appropriateness of much of anything.
What this really means is that the Scaled Agile Framework is the place to start for the enterprise.
In a previous post, How to Assure Your Project Will Fail, I noted the notion that current project management processes are obsolete. The phrase "dealing with complexity on projects" is a popular one in the software domain. By the way, that notion is untested, unreviewed, and missing comparable examples of it working outside the specific references in the original paper.
But here's the simplest approach to deal with project complexity...
Don't Let The Project Get Complex
Nice platitude. It's that simple, and it's that hard.
Before the gnashing of teeth is heard, here's a working example of not letting the project get complex.
So how is this possible? First let's make some assumptions:
Here are the steps to dealing with project complexity that have been shown to work in a variety of domains:
Let's pause here for a process check. If there is no narrative about what DONE looks like in units of measure meaningful to the decision makers (MOE, MOP, TPM, KPP) then the project participants have no way to recognize DONE other than when they run out of money and time.
This is the Yourdon definition of a Death March project. Many who use the terms complexity and complex projects are actually speaking about death march projects. We're back to the fundamental problem: we let the project become complex because we don't pay attention to the processes needed to manage the project and keep it from becoming complex. Read Yourdon and Making the Impossible Possible: Leading Extraordinary Performance - The Rocky Flats Story to see examples of how to stay out of this condition.
When you hear the notion that chaos is the basis of projects in the software world, run away as fast as you can. That is the formula for failure. When failure examples are presented in support of the notion that chaos reigns, and there are no actual, verifiable, tangible, correctable Root Causes listed, run away as fast as you can. Those proposing that idea have not done their homework.
But the question of dealing with complexity on projects is still open. The Black Swans that get misused in the project domain (the term comes from the economics and finance domain through Taleb) may still be there. They are there because the project management processes have chosen to ignore them, can't afford to seek them out, or don't have enough understanding to realize they are actually there.
So if Black Swans are the source of worry on projects, you're not finished with your project management planning, controlling, and corrective action duties as a manager. Using project complexity as the excuse for project difficulties is easy. Anyone can do that.
Taking corrective actions to eliminate all but the Unknowable uncertainties? Now that's much harder.
Four core components must be in place, have credible values, be used to make decisions, be connected to the actual performance of the project in a closed loop feedback system for the project to have any hope of success.
Without a sequence of the work to be performed, some knowledge of how long it will take to perform each of those work elements, there is no way to know when the project will be complete.
There are many ways of discovering the work, its duration and the sequence of this work. The approach to answering these questions is dependent on the domain, value at risk, and the needs of the customer.
It is the needs of the customer that anchor the selection process, not the providers of services to those customers. This is a repeated theme here. Those providing the money to do the work have a vested interest in how the work is performed.
The value of the produced outcomes is dependent on knowing something about their cost. This is a fundamental principle of all business and is the same for project work.
If we have some notion of value, either through the customer's opinion or through an actual business modeling process, we need to know the cost of producing that value.
Since all variables on projects are random variables, our cost estimating processes must be applied to reveal the could-cost, will-cost, and should-cost aspects of these deliverables.
One approach to cost estimating is Reference Class Forecasting. Other approaches can provide credible estimates including parametric modeling. Tools, processes, formal guidance are all available for estimating cost in nearly every business and technical domain.
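As an illustrative sketch of Reference Class Forecasting, a raw estimate is adjusted by the empirical distribution of overrun ratios observed on comparable past projects. The ratios and the percentile below are invented for the example:

```python
def reference_class_forecast(base_estimate, past_ratios, percentile=0.8):
    """Reference Class Forecasting (simplified): adjust a raw estimate by
    the empirical distribution of actual/estimated ratios observed on
    comparable past projects, taken at a chosen confidence percentile."""
    ratios = sorted(past_ratios)
    idx = min(int(percentile * len(ratios)), len(ratios) - 1)
    return base_estimate * ratios[idx]

# Hypothetical reference class: actual/estimated cost on 10 past projects
past = [0.95, 1.0, 1.05, 1.1, 1.15, 1.2, 1.25, 1.3, 1.5, 1.8]
print(reference_class_forecast(1_000_000, past))
```

The point of the outside view is that the adjustment comes from what comparable projects actually did, not from the team's optimism about this one.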
Guessing is not estimating; ignoring the cost of performing the work is negligence; claiming costs can only be known at the end of the project is ignorance.
All risk comes from uncertainty
Risk management is how adults manage projects - Tim Lister
Unmanaged risk will not go away; it is always there. The management of risk starts with a list of risks, the risk register. This list states the probability of occurrence, the probability of impact, the cost to handle the risk, and the residual probability of the risk once it is handled.
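A minimal sketch of such a register. The risks, probabilities, and dollar figures are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    p_occur: float        # probability the risk occurs
    impact_cost: float    # cost impact if it occurs, unhandled
    handling_cost: float  # cost of the handling plan
    p_residual: float     # residual probability after handling

    def exposure(self):
        """Expected monetary value of the unhandled risk."""
        return self.p_occur * self.impact_cost

    def residual_exposure(self):
        """Expected cost after buying the handling plan: the plan's cost
        plus the residual probability times the impact."""
        return self.handling_cost + self.p_residual * self.impact_cost

register = [
    Risk("Vendor API slips", 0.4, 200_000, 30_000, 0.1),
    Risk("Key staff turnover", 0.2, 150_000, 10_000, 0.1),
]
for r in register:
    print(r.name, r.exposure(), r.residual_exposure())
```

Comparing `exposure` to `residual_exposure` is one simple way to decide whether a handling plan is worth buying.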
The risk management process is explicit and follows a step-by-step approach. Some development processes, like agile, can increase visibility into the reduction of risk, but they are not risk management processes, just contributors to the risk reduction and handling processes.
All project variables are random variables, act accordingly.
Knowing the underlying probability distribution functions of the statistical processes driving these random variables is a critical success factor for all projects.
Anyone seeking or suggesting there is certainty on projects has failed to pay attention in their High School statistics class.
Every cost, schedule, and technical performance parameter on every project has a probability distribution assigned to it. Not knowing or understanding that distribution, or even that the distribution is present, will lead to unanticipated cost, schedule, and technical performance disappointment.
So Now What?
There's a continuing discussion on LinkedIn and Twitter about project success, the waste of certain activities on projects, and of course the argument without end on estimating the cost of producing the value from projects. It's really an argument without evidence, since some of the protagonists in the estimating discussion have yet to come up with alternatives.
I came to understand that Project Success is multidimensional a few years back after reading Reinventing Project Management, Aaron Shenhar and Dov Dvir, Harvard Business School Press. The other book that changed my view of the world was IT Governance: How Top Performers Manage IT Decision Rights for Superior Results, Peter Weill and Jeanne W. Ross, Harvard Business School Press.
This last book should put a stake in the heart of #NoEstimates, since the decision rights for those needing and asking for the cost and schedule for the business capabilities belongs to those with the money, not those spending the money.
A summary of the book can be found in the paper "Project Success: A Multidimensional Strategic Concept," Aaron Shenhar, Dov Dvir, Ofer Levy, and Alan Maltz, Long Range Planning 34 (2001), pp. 699–725.
In many cases there is not a "product" per se, but a service. These are wrapped in a larger context in today's enterprise paradigm as "capabilities": providing the capabilities to accomplish a goal, mission, or business outcome. This is done through products and processes. Both are used by people, other processes, and other products to accomplish other goals, missions, or outcomes. This is the System of Systems view of the "project" paradigm.
Shenhar and Dvir's research along with Levy and Maltz in the paper showed there are 4 success dimensions.
With this paradigm, principles, practices, and processes become the basis of "project management" and the resulting product or service. But the measures of success are better described by the Shenhar and Dvir model, since they are the direct consequences of all the enablers of that success.
So Here's the Killer Question(s)
The common practice of starting with requirements leads to the common complaint that requirements change, we don't know what we want yet, our users aren't very good at defining requirements so we'll let them emerge. While these are common, they are usually a symptom of a missing piece of information.
If we don't know what capabilities are needed and what Concept of Operations those capabilities will implement, the project has likely failed before it starts. If we do know the Capabilities and the Concept of Operations, we can then measure progress of our work effort, not in the passage of time, the consumption of resources (including money), or the production of stories or story points (which are unit-less and therefore pretty much meaningless to those paying for our work), but in Measures of Effectiveness, Measures of Performance, and Technical Performance Measures.
Concept of Operations
Let's start with a formal definition of the Concept of Operations.
What this tells us is that we need to start with what DONE looks like. DONE is not a set of features. DONE is not stories or story points. DONE is not modules, databases, bent metal. DONE is the ability, the capability, to do something of value in exchange for the money we've spent.
The assessment of a capability is its Measure of Effectiveness. These are operational measures of success that are closely related to the achievement of the mission or operational objectives evaluated in the operational environment, under a specific set of conditions. We need to define these up front. The Measures of Effectiveness:
They are not emergent. They are descriptions of success. When we treat them as emergent, our project is chasing a moving target and is headed to the ditch.
Next are Measures of Performance. They characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions. The Measures of Performance are:
Next comes the Technical Performance Measures. These are attributes that determine how well a system or system element is satisfying or expected to satisfy a technical requirement or goal. The Technical Performance Measures:
Notice we have not mentioned coding, development methods like Scrum or XP, teams, pair programming, or anything to do with building code. Without these items in place, all those activities have no reason for being and nothing to do with moving the project toward DONE, other than consuming money and passing time. Oh, you'll get a pile of stories implemented. Are they the right stories? How would you know? You'll perform lots of Test Driven Design. Is it the right design? How would you know?
Oh, your customer is going to prioritize those stories and features. How are they going to know, in the absence of knowing what DONE looks like?
Capabilities Based Planning
This has been presented before, but now it has a reason: the Concept of Operations.
We Know the Answer To That, Right?
I heard this phrase on a conference call yesterday with a DOD client and thought, how clever, I'll write a blog about this. Only to find out there is a Forbes article with the same name, and several other articles as well.
The Forbes article had a case study about doing it right around a business process. It was the perfect framework (repeated here) for applying Performance-Based Project Management®
In the Forbes article there are five steps:
In the end, project success is about knowing what done looks like, knowing how to get there, and knowing how to measure progress along the way. And of course knowing the impediments to progress and handling them. These concepts are instantiated in two papers from a colleague, Pat Barker: What is Your Estimate at Complete and Program Master Schedule Can Improve Results, on page 20.
There are three key elements of every project on the planet - Cost, Schedule, and the performance of the product or service produced by the project. Each of these has drivers. The connections between Cost, Schedule, and Technical Performance are not Iron as suggested in the Iron Triangle of a PMI view of the project. Instead the connections are elastic, springy, flexible. But they are connected.
Cost is driven by:
These costs are themselves variable, as a function of the project phase, external forces for labor, materials, and overhead. But the cost variable starts with these.
Schedule is driven by:
Technical Performance is driven by:
So What Does All This Mean?
But for project success we need to have several things in place. The random behavior has to be knowable. It can't be chaos. If it is chaos, the project will fail, because there is no corrective action.
The three elements need to be known to some degree of confidence.
If you don't know these to some degree of confidence, you don't know what DONE looks like. The only measure of progress becomes the passage of time and the consumption of money. It's unlikely any customer is going to be willing to pay you, at least for very long, to spend their money without some understanding of Cost, Schedule, and the resulting Technical Outcomes.
Matt Heusser's article brings up some interesting points. Let's look to see if there are any limitations from a domain or context point of view. By domain I mean, in what taxonomy are you writing software for money. By context I mean what are the constraints or governance guidelines in that domain.
1. Make the amount of money small
This is a version of time boxing. It limits the value at risk for the development process. This bounds the risk process. In exchange for the total loss of doing the wrong thing, information can be found. This is also called tuition cost.
Issue: We may not have all we need to forecast the total cost and schedule. Projects in many domains aren't made up of small chunks of themselves. So we'll need to confirm that the sum of the parts results in the whole. Integration, test, verification, architecture, interface management, and many other Systems Engineering aspects need to be involved in some way.
2. Fund a pilot that delivers working software and use this model to forecast schedule
This is buying a reference class. With the reference class (albeit a class of one), a forecast of future cost, schedule, and technical performance can be made. We need all three in the reference class.
Issue: this is a larger version of the issue in Number 1. With the pilot, can we be assured the work can be scaled? Verification of that will have to be part of the pilot or a follow-on. Then come the confidence intervals for how that scaling will interact with the other, yet to be developed, components of the system. Is the scaling linear or nonlinear? Are there stochastic interactions, and a whole raft of other discoverable processes? Each needs to be planned and budgeted. The ACA web site is a recent example. A UAV I worked on, where the engine didn't have enough thrust after final integration, is another. Etc., etc.
3. Move from contract negotiation to partnership
You've simply transferred the responsibility to estimate the cost and schedule to someone else. A single example, in the article, is not the basis of a syndicatable process. So this example, while interesting, probably isn't going to go too far without some means to address the Estimate at Complete needed in most non-trivial software development projects.
Issue: Still don't know how much the project will cost in the end.
4. Employ Start Stop Heuristics
Seems like just another version of time boxed budget and schedule.
Issue: still doesn't address the Estimate at Complete before actually reaching the end or near the end of the project.
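For reference, the standard Earned Value form of the Estimate at Complete that these issues keep pointing at is straightforward to compute. The status numbers below are hypothetical:

```python
def estimate_at_complete(bac, ev, ac):
    """Standard EVM Estimate at Complete: actual cost to date, plus the
    remaining work (BAC - EV) forecast at the cost efficiency achieved
    so far (CPI = EV / AC)."""
    cpi = ev / ac
    return ac + (bac - ev) / cpi

# Hypothetical status: $1M budget, $400k earned, $500k spent (CPI = 0.8)
print(estimate_at_complete(1_000_000, 400_000, 500_000))
```

The trouble with the time-boxed suggestions above is that, before much work is done, there is no credible EV or AC to feed this formula.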
5. Drop Estimation From Your Process Altogether
Another version of a time-boxed budget. Someone has to know how much money is needed to run the business on an annual basis, or how much money will be allocated to do some work on an annual basis. This is called Level of Effort: work until the money runs out, we give you more money, or we tell you to stop. PayPal works this way, sorta. The prioritization of the work is the responsibility of those asking for the outcomes. They have a budget. Give that budget (not funds, budget) to the development organization in exchange for delivered code.
Issue: The project will cost a known amount. We don't know what we'll get for that amount. But that might be OK. Once work has been going for a while, a Reference Class can be built to allow that question to be answered, assuming the requested software fits inside the reference class in some way.
So In The End
The 5 suggestions don't have a domain beyond the single examples. And the suggestions don't seem to have a way to forecast the bounds of the project with an Estimate at Completion beyond the use of the reference class of the project itself. This self referencing reference class seems a bit sporty.
So yes, there are some ways to develop software in the absence of formal estimating. Although 2 of the 5 are actually using reference classes to forecast.
Those paying for the work to be done still have to come up with some upper bound on cost, schedule, and technical capabilities for that cost and schedule. These 5 suggestions are a start. But we don't yet know where they can be applied, whether they have been applied outside of specific anecdotes, or whether they are scalable beyond the personal anecdotes.
No problem. This notion of not estimating is still evolving. At some point, answers need to be forthcoming and the Yoda-style conversation replaced by a business conversation: what did you do with my money?
In a meeting today on the difficulties of increasing the probability of project success in our domain (DOD/NASA), there were three sources of uncertainty around cost, schedule, and technical forecasting.
The program runs into things that cause cost and schedule overruns:
We really shouldn't be doing #1 and #3. #1 means we didn't look hard enough. #3 means we can't handle the truth. #2 is the definition of uncertainty. But is it uncertainty because the project didn't know, or didn't want to know?
Doing stupid things on purpose
There are endless discussions about what went wrong with the Affordable Care Act web site development and deployment. It'll be hard to tell at this early point in the project assessment. But what is clear is this was most likely a failure of project management.
Below is the acquisition life cycle for Business systems in the DOD. Not that HHS is a DOD-style shop, but the paradigm of iterative and incremental development is in place. The release cycles shown here are way too long for something like the ACA site, but the topology of the process is sound.
Looking at this process there is an obvious starting point. The Business Capability Definition. What is the resulting system supposed to do in terms of capabilities. Not the technical and operational requirements, but what business capabilities will the system provide to the stakeholder when it is in full operation? This is called Initial Operating Capability (IOC).
In our domain we start defining the capabilities using the Defense Acquisition Guide. Here is where Measures of Effectiveness (MoE) are defined. The Measure of Effectiveness is assigned to a capability. If we want a capability, how effective does it have to be? This measure is not a technical performance or a requirement. It is an effectiveness measure.
A MoE for a UAV program we worked on would be: The UAV shall be transportable within a 3,000 mile radius via a C-17/C-141/C-5 package. From the MoE there is a Measure of Performance (MoP). For example, weight is a MoP that enables the MoE to be fulfilled. Lower down are Technical Performance Measures (TPM). For example, the weight of an Electro-Optical / Infrared sensor platform must be under 55 pounds for the UAV to operate properly. It can't be too light, or it would disrupt the center of gravity, and it can't be too heavy, because the UAV would burn too much fuel to accomplish its mission.
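A TPM like the sensor weight can be checked against its band at every assessment point. The 55-pound upper bound comes from the UAV example; the 45-pound lower bound is a hypothetical center-of-gravity limit added for illustration:

```python
def tpm_status(actual, lower, upper):
    """Technical Performance Measure check: is the measured value inside
    its allowed band? Outside the band is an error signal demanding
    corrective action, just like a cost or schedule variance."""
    if actual < lower:
        return "below band: corrective action needed"
    if actual > upper:
        return "above band: corrective action needed"
    return "within band"

# Sensor platform weight in pounds, checked against its TPM band
print(tpm_status(53.0, 45.0, 55.0))
```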
So for the ACA site, we'd need to know if there were MoE's, MoP's, TPM's defined that enable the Capabilities to be delivered. Here's the Performance Reference Model for federal IT.
Since the ACA site is pretty much all software, I'm going to suggest that this approach of using Capabilities Based Planning, MoE's, MoP's, and TPM's has nothing to do with how the software is built. Either traditional or agile methods can be used. Agile is likely faster, but agile can only work in a domain like this if you know what DONE looks like in terms of MoE's, MoP's, and TPM's. This is a fixed launch date, a fixed set of requirements guided by all the insurance regulations, and hopefully some not-to-exceed budget.
It is a common myth that government acquisition is waterfall and big design up front. DoD 5000.02 prescribes an iterative process designed to assess the viability of technologies while simultaneously refining user requirements. (pg 16 of 5000.02).
One starting question of the ACA Site would be - did they apply the iterative acquisition process in some form, no matter the fidelity of the iterations?
Here are some other fundamental questions as well:
If the answer to any of these is no or we don't know, go find out - get project managers who can do this. Otherwise the probability of project success is reduced. In fact, look at the Probability of Program Success literature for further guidance.
The final question is did they have an Integrated Master Plan and Integrated Master Schedule for all the work as described in the Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide? This paradigm has been shown to significantly increase the probability of success no matter the domain, context, development method, technology, or business process. It states in clear, concise, and unequivocal terms what DONE looks like at every point in the project in units of measure meaningful to the decision makers.
The final - and killer - question is: did the project team ruthlessly manage the changes to the capabilities? This, I suspect, is the root cause of the failure. Late changes to complex projects are the kiss of death.
As repeated often here...
Don't do stupid things on purpose
So Now What?
We have to wait to see what the Root Cause Analysis (RCA) shows for the failure of the project. But I'd conjecture the program management processes found in large DoD or NASA programs were not applied in any meaningful way. The site is not large compared to most of the programs we work on ($400M is small), but the processes used to manage those programs can be scaled down with ease. The Principles are the same. The Practices and Processes are scalable as well.
There's a post on a Deltek implementation partner site about applying Earned Value. It has some good advice, but the premise of the starting point needs to be addressed.
Before going on to suggest things to do on a project using EVM, it's best to have one of those don't do stupid things on purpose discussions. If the customer or potential customer is doing stupid things on purpose, it's important to stop those first, before attempting to make any other improvements. Once that is done, the core paradigm of Earned Value Management is very simple and obvious ...
Turning the Subjective into the Objective
This phrase is not mine; it's a colleague's, who said it while we were walking down the hallway of a client site that is introducing EVM to a $1.9B weapons program that has never had EVM before, but is now being mandated to do so by Congress!
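Turning the subjective into the objective rests on the standard Earned Value formulas: three measured quantities (BCWS, BCWP, ACWP) produce the cost and schedule indices and a forecast at completion. A minimal sketch, with illustrative numbers that are not from any program mentioned here:

```python
def evm_metrics(bcws: float, bcwp: float, acwp: float, bac: float) -> dict:
    """Basic EVM indices from the three core measures (ANSI-748 terms).

    bcws: Budgeted Cost of Work Scheduled (Planned Value)
    bcwp: Budgeted Cost of Work Performed (Earned Value)
    acwp: Actual Cost of Work Performed (Actual Cost)
    bac:  Budget At Completion
    """
    cpi = bcwp / acwp            # Cost Performance Index
    spi = bcwp / bcws            # Schedule Performance Index
    return {
        "CPI": cpi,
        "SPI": spi,
        "CV": bcwp - acwp,       # Cost Variance
        "SV": bcwp - bcws,       # Schedule Variance
        "EAC": bac / cpi,        # Estimate At Completion (CPI method)
    }

# Illustrative status: $900K earned against $1.0M planned and $1.2M spent.
m = evm_metrics(bcws=1_000_000, bcwp=900_000, acwp=1_200_000, bac=10_000_000)
print(round(m["CPI"], 2), round(m["SPI"], 2))  # 0.75 0.9
```

With CPI below 1.0, the EAC projects the $10M budget growing to about $13.3M if performance to date continues - an objective number, not an opinion.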
So Here's the Introduction to What Not To Do and the Simple Fix
Let's start with a picture of where to start. There are 32 Guidelines in ANSI-748-C. 11 are needed for the success of any project, in any domain. If you aren't doing these 11 in some way, the probability of success for your project is low. This is the case for all project management and product development processes. These are like the 5 Immutable Principles - they have to be there in some way. Here are the 32 from ANSI-748-C; the 11 are colored and listed below.
So Let's See How the 11 Can Address Each Of These Issues?
Here's their list of reasons why Earned Value Management is not readily adopted by contractors.
So What Does All This Mean?
It means applying the 11 Guidelines shown above to your program is the same as being a credible program manager. If you're not doing these 11, you're probably not doing your job as a Program Manager on an EVM program. Think about that. You've been assigned to look after $20,000,000.00 of the government's money. Would you actually use some of the phrases above when asked why you're doing a poor job of managing the program? Not for long, I suspect.
One Final Comment
Earned Value Management is one of several enabling technologies to turn the subjective into the objective. But it is only a necessary condition for success. It is far from a sufficient condition for success. You need an Integrated Performance Measurement Baseline with these elements:
There are five core processes for project success. Each must be in place. Each must be applied. The outcomes of each must be produced for the success of the project.
These are the foundation for increasing the Probability of Project Success (PoPS).
Our daughter is a 2nd grade student teacher this semester. On her board is a poster. It is the framework for the learning process used in the district. I visited today and read in small groups to the cutest 2nd graders on earth. During lunch I saw this poster and it dawned on me - this is exactly what we need for the weekly program status meeting. If it works for 2nd graders, it'll work for Control Account Managers and project participants accountable for project performance.
The Afterburners has a seminar titled High Resolution. Their tag line is
Without high resolution, an organization cannot align its tactics and strategic plan. High-resolution details ensure the organization can execute and provide results that align with the expectations of those who developed the strategic plan.
In the project and program management domain, this means several things that are many times missing. Missing means the probability of success is reduced. Missing means the project participants have to return to just doing work. Here's what's missing - which the Afterburners guys provide in ways that are rarely found in the PM world. They spoke at a Rocky Mountain PMI Symposium and I attended a training session. It's not for everyone, but if you want a clear and concise approach to getting things done, they're the ones.
Here's where to take their quote:
This all comes together in the Integrated Master Plan and Integrated Master Schedule.
I got an announcement today that Herding Cats was nominated for a spot in the Top 100 Business Blogs in the category of Operations Management. I've never been one for awards. I'm certainly not ungrateful, but being influential in your peer group probably can't be measured in a survey.
The list of PM Blogs includes those I know and some I was unaware of. Here's the list. Herding Cats is in between QuantumLeap and Eight to Late. Both good company.
All these Blogs are worth the reading effort. I have some of my own, in no particular order, other than going down my Google Reader list. I'd recommend: