I presented a briefing to an internal agile development group at a major defense contractor this week. The topic was - as always - integrating Earned Value Management with Agile Software Development. The discussion started with connecting the WBS to the Agile domain. The reason for this is simple. All Federal government contracts come with a Statement of Work (SOW), a Performance Work Statement (PWS), a Statement of Objectives (SOO), or similar statements of what DONE looks like.
From these documents, which are part of the contract, a Work Breakdown Structure (WBS) is either provided or built. A Contractor WBS is many times derived from the Government's WBS. These WBSs define the decomposition of the product and the services that develop the product. There is NEVER a decomposition of the departments or the work performed by those departments. Things like Design, Test, and Integrate are not part of the WBS. There can be a field in the IMS (Integrated Master Schedule) showing which department is assigned to which WBS elements. This is then used in a pivot table to build the RAM (Responsibility Assignment Matrix).
So here's how to connect Agile to the WBS
In the current vernacular of Agile, an Epic is the top level outcome of the Agile project. Per several definitions, an Epic is a collection of User Stories. These Stories can be put up on the wall and arranged into groups - the Epics. This has a very close connection to the Program Events or Milestones in our defense procurement model.
From the Epic, Stories are decomposed and Features are extracted. These Features are the tangible deliverables of the project. They are what the user bought for the budget of the project. Inside each Feature are Tasks that develop the Feature inside the Iteration.
So this is a nice starting point for connecting the dots between the Agile vocabulary and our more traditional IMP/IMS paradigm.
Here's some background on IMP/IMS:
All of this focuses on building a clear and concise description of the Grammar of DONE. If you don't know what DONE looks like in tangible units of measure meaningful to the decision makers, then you'll never recognize it when it walks through the door. Without this concise description, the only measures of progress to plan are the consumption of money and the passage of time. Agile does a good job of defining the outcomes of each iteration, and managing the contents of those iterations during the Planning process. What is needed is an upper level process. This is provided by the Epic.
So starting with these connections, traditional project management structures are connected with agile structures.
What's next is to define the work rhythm. Of course Agile does this with Iterations that run a few weeks. Traditional IMP/IMS does this with Work Packages, running 4 to 6 weeks. These Work Packages are embedded in a Rolling Wave, with multiple Work Packages in each Rolling Wave.
So here's how to flow down the WBS to the Work Packages. In Agile the WBS element can be a User Story (a deliverable capability) or a Feature (also a deliverable capability). In all cases these Work Packages are single output work efforts.
It is good that most people don't understand our banking and financial system. For if they did there would be a revolution by morning. - Henry Ford
I'm not a fan of the approach taken by Standish to report success rates in IT, for many reasons, not the least of which is the lack of transparency for the source data, the statistical processing methods, and the population statistics.
But here is a figure from a McKinsey report.
With all the whining in the project management community about problems, these numbers look interesting. First, there are no error bands on the chart, so the picture is pretty much worthless for assessing the actual data. It may be that the bars overlap each other in a single stack, and likely overlap each other from 1994 to 2005.
Next, it can be said there was a doubling in the success rate in exchange for a halving of the failure rate. Not a bad improvement over the time period for enterprise class IT projects.
But in the end there is improvement. The statistical question that is never answered in these types of lame statistics is: what is the best achievable improvement level for enterprise IT projects? Next of course is: what are the variance limits on the time and cost overruns? That is not available in the data. In our FAR/DFARS procurement process and DCMA System Description there is always - and must be - a variance threshold on cost and schedule performance. Otherwise we'd be writing variance reports every week as the project progresses. Without that data, the Standish reports are even more lame.
There was a lot of talk awhile ago about the poor results from multitasking. Afterburner is a training organization we've worked with before, at PMI conferences and on a government contract. They can speak directly to multitasking and task saturation in a group setting. These concepts go directly to the management of projects.
Avoiding Task Saturation
Plan the work, work the plan is a starting point. But is the plan credible? Is it executable? Are there enough resources for success? What are the impediments to success? The answers to these questions must be in the Plan. The Plan is the strategy for project success. The Plan is not the Schedule. That's separate. The Schedule shows the sequence of the work that must be performed for the Plan to have a probability of success. Notice the words Probability of Success. All Plans must adapt to the emerging situation.
Subscribe to the Afterburner feed and you'll get great project management advice, as well as business management advice. Stop listening to the "red herring" thought leaders who tell you that the source of the problem is Waterfall, or multitasking, or whatever. And start taking action to be successful.
I am amused, I said, at your fear of the world, which makes you guard against the appearance of insisting upon useless studies; and I quite admit the difficulty of believing that in every man there is an eye of the soul which, when by other pursuits lost and dimmed, is by these purified and re-illumined; and is more precious far than ten thousand bodily eyes, for by it alone is truth seen.
— Socrates to Glaucon in Plato’s Republic Book VII
They set only the broadest of objectives and emphasized seizing unforeseen opportunities as they arose… – Jack Welch, Jack, p. 448.
So if Jack can figure this out, why can't other leaders do the same?
So let's start with a risk statement - how to speak "risk." It goes like this:
If A occurs because of B, then C will be the result.
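As an illustration only (the project names and events below are hypothetical, not from any actual risk register), this grammar is small enough to capture in a data structure, which keeps every risk statement in a register uniform:

```python
from dataclasses import dataclass

@dataclass
class RiskStatement:
    """Sketch of the 'if A, because of B, then C' risk grammar."""
    event: str        # A - the condition that may occur
    cause: str        # B - the root cause driving the condition
    consequence: str  # C - the impact if the condition occurs

    def __str__(self):
        return (f"If {self.event}, because of {self.cause}, "
                f"then {self.consequence} will be the result.")

# Hypothetical example entry
r = RiskStatement(
    event="the flight software drop slips two weeks",
    cause="late delivery of the emulator hardware",
    consequence="a missed integration milestone",
)
print(r)
```

Separating the cause from the event this way is what makes the statement actionable: mitigations attack B, contingencies address C.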
There are some common errors in the risk management conversation:
So let's clarify some concepts
Discovering the needed contingency associated with the level of risk requires the use of tools. A simple list of risks and their consequences provides little or no information about the credibility of the schedule. Modeling the risks starts with the variability of duration and effort, then assesses the impact of these variances on the duration and cost of the project schedule.
While this approach may appear complex, it is the minimum level of programmatic control needed to create a credible schedule. Without this approach the schedule is just a best guess of what the forecast cost and schedule will be for the project.
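A minimal sketch of what this modeling looks like, assuming a serial chain of work packages whose durations are triangular (min, most likely, max) random variables - the task numbers here are hypothetical, for illustration only:

```python
import random

def simulate_schedule(tasks, trials=10000, seed=1):
    """Monte Carlo over a serial chain of work packages.

    Each task duration is a triangular random variable described by
    (min, most likely, max) days; the total is the sum along the chain.
    Returns the 50th and 80th percentile completion times."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    )
    pct = lambda p: totals[int(p * (trials - 1))]
    return pct(0.50), pct(0.80)

# Hypothetical work packages: (min, most likely, max) durations in days
tasks = [(8, 10, 16), (4, 5, 9), (12, 15, 25)]
p50, p80 = simulate_schedule(tasks)
print(f"P50: {p50:.1f} days, P80: {p80:.1f} days")
```

Note that the P50 total lands above the sum of the most-likely durations (30 days here) because the distributions are right-skewed - exactly the information a single-point estimate hides, and the reason a contingency reserve must come from the distribution, not the point values.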
The notion that large programs have heavy, unwieldy processes, and that agile methods - both software development and project management - are the solution is many times just that "notional."
Let's start with the top level principle of large program acquisition in the Federal Government
Many if not most large government acquisitions have unwieldy processes, poor track records, and lots of blame to go around. But the first chapter of the Federal Acquisition Regulation (FAR) provides some guidance that can be applied to any program, project, or development effort.
Notice the words: timely, best value, for the customer, and maintaining trust. Add working together as a team, and being empowered to make decisions in their areas of responsibility.
Is this guidance followed? Rarely. Why? Because guidance alone is not sufficient for success. If that were the case, we wouldn't need lighter weight, agile processes to pull us out of the messes we've created on large federal programs.
Agile needs to heed this cautionary example - as agile moves into the enterprise, it will be just the latest attempt to lighten the load of processes and formality.
Taking these words on their own, agile development and project management methods may provide the mission and vision for improving project performance in smaller domains.
Through Twitter I came across this chapter in Dean Leffingwell's book Scaling Software Agility. This is another example of poor references to old processes and the misunderstanding of how software development processes work in domains (possibly) outside one's own experience. In this case, DoD procurement methods versus commercial internal software development.
There are certainly problems in all software projects, no matter what development method is being used. But when thought leaders produce stuff like this, it dilutes the credibility of their message at the very time that credibility needs to be increased.
Here's the starting point. This is the Waterfall paper from 1970. 1970 for gawd's sake. How about a development method for 1980, 1990, 2000, or maybe even 2011? The type of development defined in the 1970 paper is not allowed in the domain the Waterfall paper was created in. And, if the thought leaders would read the paper and stop clipping pictures from it (which Dean did a bit before going back to the Red Herring), they would find out that Royce was stating that this approach gave poor results for TRW's effort.
I never met Dr. Royce, but I worked in Building O-6, One Space Park during a later period (1980) of his tenure, and we had incremental and iterative processes for embedded station holding software in place already. Dean says in the book that this model was never meant to be used ... but now that you're using this model, here are the problems. This is like the news reporter asking the Senator, "when did you stop beating your wife?" It's a bait and switch argument. Poor understanding at best, manipulation of the problem at worst.
Next comes a really bone-headed picture. It shows the deadline moving to the left. This violates the Federal Acquisition Regulation (FAR) and the Defense Federal Acquisition Regulation Supplement (DFARS). You cannot move anything on the baseline without adjustments, either by re-baselining or by single point adjustments to the program.
It's not allowed. It's a baseline. It's "against the law." Moving to the right happens many times. Moving to the left is complete nonsense in the domain Dean says these diagrams came from. If you do that on a commercial project you get what you deserve - a failed project.
That means Dean's example is not allowed; it's a non starter. With Figure 2-3 being completely bogus, the rest of the chapter is tainted with naive and misunderstood processes as well.
So why do the agile thought leaders do this nonsense? I can't say, I haven't asked them. But here's some wild speculation:
In all cases there are two downsides.
Somehow, some way, those with possible solutions need to gain some level of understanding of how large complex systems of systems are developed, and stop making these nonsensical examples of how to manage projects badly. It's like Men Behaving Badly, a 1996-1997 sitcom where the characters do really stupid things and people laugh. The show was canceled partway through the second season for all the right reasons - it was a bad show. Just like doing the things shown in Figure 2-3 is really bad project management.
Please Make It Stop
Mary Poppendieck has a post about lean software development describing a typical software development process - at least from her experience.
I'd like to describe how software is developed in the domain I work in. Mary's post, while informative, is one of those domain and context free posts. What domain is being spoken of? SAP rollouts, avionics, process control, personnel management systems, procurement, real-time control in cars, aircraft, ships, refining, pulp and paper? Or 6 guys in the same room with their customer?
Our domain is embedded flight systems, the ground systems and support systems providing functional capabilities to support the "flying machine," and other mission critical applications and the business systems that go around them, because if you can't order parts, receive those parts, pay subcontractors, and report to the government, building "flying machines" is not a viable business.
But more importantly, our domain is guided by several processes that directly address the issues spoken to in the post, WITHOUT having to install another process. In fact, many of the principles of Agile are embedded in a process that provides the Systems Engineering principles for complex integration of emerging requirements. The Emergent Design post has an example of this.
So Mary says...
Looking in from the outside...
Here are the obvious errors in the approach described by Mary.
The solution is simple - Don't do this. You don't need a new process, just don't do stupid things on purpose.
Then the post says ...
Again, from our domain of mission critical systems... Yes, code needs to be frozen. Other components, systems, and subsystems are dependent on this code. Changing the code base drives these external systems over the edge and thrashes the entire process of "systems engineering." In the System of Systems world - any non-trivial project - management of the code base stability is a critical success factor. Domain and context are important here.
Now the next Red Herring. Continuous integration and test is part of every embedded system paradigm. It's part of CMMI-DEV V1.3, it's part of DO-178B, it's part of any credible software development lifecycle. If you don't have continuous integration and test, then your project is in trouble.
Not doing continuous integration and test is another prime example of doing stupid things on purpose. I know some projects don't, but they're not credible. Yes, say no to this practice. You don't need agile to know this. It's just good project management. Contractors don't do this, landscapers don't do this, house painters don't do this (well, while building our house, they swapped the body and the trim colors without checking the first wall, and they had to do the entire house all over again), and certainly anyone building software that controls things like money, fuel feeds, or landing controls has continuous integration and test.
On our NASA manned space flight programs, estimates to complete are provided every two weeks. Working software on the full fidelity emulator of the spacecraft is dropped every 6 to 12 weeks. This duration is dependent on hardware development.
The rest of the post has good things around feature management and regular Lean management. But why can't thought leaders look outside those teams doing really stupid things, to see that all this discovery of how to build software has been done since the late 70's (the 1978 TRW Redondo Beach development cycle for TDRSS) for embedded control systems, the ground systems, the launch control systems, and the supporting business systems?
For anyone looking to see how to manage complex emergent systems in the absence of all the agile hype, look to the work of Eberhardt Rechtin. Once you take out the anti-management rhetoric of some recent books and the personalization of the processes that is popular in the small independent team approaches, Rechtin's description of building systems provides credible guidance in many domains - iterative, incremental, emergent, reuse, refactoring, and embedded customers are common in mission critical defense and space systems.
I don't care if they take credit for what was done in 1978. But why repeat Bad Management processes as the starting point? So I've asked Mary: what domain and context does 6 month to 1 year releases, and does it on purpose?
I'm a power user of Microsoft Project. We work programs with tens of thousands of activities in an Integrated Master Schedule. We use Microsoft Project for most of these programs because that's what the government customer wants us to use. They use Project because it's cheap, works OK, and is easy to use.
Microsoft Project is one of those tools built for one purpose that has evolved to other purposes. It is not used inside of MSFT in the way we use it. If it were, the completely bone-headed features we encounter every day would have been fixed years ago.
Here's the latest.
A client has Project 2010; I don't. He sent me a file. I can't open it. No problem, I'll install the converter. STOP. The 2010 run time environment for VBA is different than the 2007 environment, and many of our macros get clobbered. By many I mean tens of thousands of lines of VBA that connect our 2007 instances to ERP systems, Earned Value engines, and other instances of MSFT Project.
If MSFT actually used Project to run their business in the way we use it to manage billions of dollars of defense and space work, they'd fix this crap. This is why any respectable construction firm uses Primavera.
Hey MSFT, start eating your own dog food, just like Larry Ellison does.
A key role for the systems engineering effort on a complex project is to manage the emergent properties of the design leading to an optimum design. This paper examines issues related to the management of the emergent properties of the design and proposes management methods.
Darren Bowes and Matthew Squair, ADI Limited, Major Projects Group, Minehunter, "Managing the Emergent Properties of the Design," INCOSE International Symposium, Brighton, England, 1999.
The notion that processes other than agile do not deal with the emergent properties of complex systems is simply not the case. I still hear the red herring of waterfall used by agile thought leaders. I still hear very misinformed definitions of complexity, emergence, and adaptation - all academically poorly developed.
No doubt the work processes of a Scrum software development team add measurable value in the product development domain. But there are large gaps in the Scrum only solution to complex embedded software systems. These gaps are closed in many domains by Systems Engineering activities. Much can be learned by agilist software developers about how complex systems in space, defense, process control, and embedded systems are delivered using this Systems Engineering paradigm.
Life is pain, Highness. Anyone who says differently is selling something.
- Westley, The Princess Bride
No man thinks more highly than I do of the patriotism, as well as abilities, of the very worthy gentlemen who have just addressed the House. But different men often see the same subject in different lights; and, therefore, I hope it will not be thought disrespectful to those gentlemen if, entertaining as I do opinions of a character... very opposite to theirs, I shall speak forth my sentiments freely and without reserve. - Patrick Henry, speech at St. John’s Church in Richmond, March 23, 1775
There is a great blog with two very powerful pictures about planning, estimating, and revising those estimates. Let's start with the classic picture of forecasting the future. This is similar to the weather track for hurricanes that is often used. The further out in the future the lower the confidence - the higher the variability - of the estimate.
The next picture shows that if we get feedback on periodic boundaries, we can refine the estimates. This iterative estimating is a nice starting point as long as several things are in place:
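That refinement loop can be sketched in a few lines - a toy model only, with assumed backlog and velocity numbers (none of this data is from the post being discussed). After each iteration the remaining work is re-estimated against the running mean of the observed velocities:

```python
import statistics

def refine_forecast(backlog_points, velocities):
    """After each completed iteration, re-forecast the total number of
    iterations to finish the backlog from the mean observed velocity.
    As actuals accumulate, the forecast stabilizes."""
    forecasts = []
    for n in range(1, len(velocities) + 1):
        observed = velocities[:n]
        mean_v = statistics.mean(observed)
        done = sum(observed)
        remaining = max(backlog_points - done, 0)
        # iterations completed so far + iterations still needed
        forecasts.append(n + remaining / mean_v)
    return forecasts

# Hypothetical: 200-point backlog, observed velocity per iteration
f = refine_forecast(200, [18, 22, 20, 21, 19])
print([round(x, 1) for x in f])
```

The point of the exercise is the shape of the sequence, not the numbers: early forecasts swing with each new actual, later ones settle - which is exactly the narrowing cone the pictures describe.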
Alan Shalloway made a post to the [leandevelopment] forum that resonated with me.
I'm especially connected with the notion of a "system," which includes people, processes, and tools. This is my Systems Engineering paradigm. It also means that the people, processes, and tools each have to be "tuned" to work in the domain and context (the problem in the domain). By tuned, I mean there must be Measures of Effectiveness and Measures of Performance produced to provide feedback for corrective action to the system.
These two measures have some simple definitions.
This approach provides the mechanism for connecting the dots between people, processes, and tools. The question can be asked "is what is being proposed supporting the best outcome for the system or project?" One source of guidance can be found in Theory of Effectiveness Measurement.
One thing that drives me nuts about the agile community is that the loudest voices come from those with the most anecdotal experience. "Hey, it worked for me on my little project, so it must work for you." "Oh yeah? What domain do you work in?" The paper above is one of hundreds of research sources on the application of processes, methods, and their measures of effectiveness. Add to that a fully formed set of theories around Systems Engineering and you'll start to see that anecdotal experience masked as advice is just that: "masked."
Anecdotes have their place, but they are just that - anecdotes. Quotes from populist magazines are in the same class. Ask for a validated, independent review of any idea that proffers something new. Ask where it has worked outside the personal experience. This is how decisions are made when you're spending someone else's money. If you're spending your own money - have a ball.
Robert Glass's book Software Conflict 2.0 is a collection of essays about the art and science of developing software for money.
In my domain, software is a critical success factor for the safety and mission assurance of the products. Agile development practices are applied within a framework of systems engineering and architecture, along with contract and performance management.
Anyone working in a similar domain will benefit from the book.
Saw this "sources sought" post for software I worked on in the '80s in FORTRAN 77 - a real programming language, where desk checking the code required several days. None of this throwing stuff on the wall to see if it sticks, letting requirements emerge late in the game, or producing "just good enough for now" or "you're not gonna need it" approaches. This application defends the US from ICBM attack.
In several management and agile software development books there is mention of "complex adaptive systems," " chaotic systems," and "evolutionary systems". "Most books and articles in the field of complexity are either highly technical and readable only by specialist, or rather shallow popularizations" of the same concepts.
Modeling of dynamic systems is a profession, guided by mathematical principles. There is a vast literature on this topic. At the same time there is a large amount of "overly simplified" notions about dynamic systems, their use, their outcomes, and their domain and context.
I'd like to establish some background for such systems, before applying these concepts to actual projects (in a later post):
It's not my intent to convince those who have landed on the populist view of Complex Adaptive Systems as their paradigm for software development that they need to go deeper into the principles. Rather, it is to show those interested in the principles how those populist views are limited in their applicability to actual problems of managing complex systems. And possibly, starting with the populist view, to explore further the actual mathematics and its application to project management problems.
The study of dynamic systems dates to ancient times, with a focus on planetary orbits through the Renaissance. Ptolemy's Almagest provided an elaborate geometric description of the solar system around 140 A.D.
Until the mid-20th century, this topic was limited to scientists. The first investigator to observe "chaotic" behavior in a computer program was Edward Lorenz at MIT. James Gleick's popularization Chaos is a good starting point for the history of chaos modeling.
Four Types of Dynamic Systems
By the way, the popularly held notion that mathematics came to nonlinear equations recently is just that - misinformed. In 1690, Jakob Bernoulli reduced the problem of determining the isochrone (the curve in a vertical plane down which a particle, starting at any initial point on the curve, will descend to the lowest point in the same amount of time) to a first-order nonlinear differential equation.
Newton put these ideas to work in 1665 for kinematics, dynamics, and celestial mechanics. It took nine years after his death (1727) before this work was published, but it was in his notebooks.
Before proceeding with the linear / nonlinear system discussion, we need to define what we mean by "linear." A system is linear when its response obeys superposition: the response to a weighted sum of inputs is the same weighted sum of the individual responses. Linear systems are rich with generalizations from which conclusions can be drawn. Solvable nonlinear systems are as well, but working the linear solution first provides insight into the behavior and possible solutions in the nonlinear domain.
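The superposition property that separates linear from nonlinear can be checked numerically. A quick sketch (the functions and sample points are invented for the example):

```python
def linear_f(x):
    """A linear map: obeys f(a*x + b*y) == a*f(x) + b*f(y)."""
    return 3.0 * x

def nonlinear_f(x):
    """A nonlinear map: superposition fails."""
    return x * x

def superposes(f, x=2.0, y=5.0, a=1.5, b=-0.5):
    """Test the superposition property at one sample point."""
    return abs(f(a * x + b * y) - (a * f(x) + b * f(y))) < 1e-9

print(superposes(linear_f), superposes(nonlinear_f))
```

A single failing sample point is enough to prove a system nonlinear; passing at a few points is only evidence, not proof, of linearity.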
Perturbation theory is simple in concept. Suppose we have a system S and there are two disturbances, one at time t1 and one at a later time t2. We'd like to know what the second disturbance should be to restore the system to its original state S0. Call the first disturbance U1 at time t1, and the second disturbance U2 at time t2. We need an expression that shows the operations that move the system from its first state S1 to the second state S2. This is perturbation theory.
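For the linear case this bookkeeping can be written compactly. As a sketch (using symbols not in the original text: G is the system's state-transition operator and δ is the deviation from the undisturbed trajectory), the deviation injected at t1 propagates forward, and the restoring disturbance cancels it:

```latex
\delta(t_2) = G(t_2 - t_1)\,U_1
\qquad\Longrightarrow\qquad
U_2 = -\,G(t_2 - t_1)\,U_1
```

That is, the restoring disturbance is the first disturbance propagated forward to t2 and reversed in sign. For nonlinear systems this holds only to first order in the disturbance, which is where perturbation expansions come in.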
Chaotic systems occupy the transition between solvable and near solvable systems and those that are completely stochastic or random. Random systems are analyzed using "averages." While this "averaging" sounds simple enough, it has many difficulties. This is the pitch for probabilistic risk analysis and against the use of averages, which usually lead to poor decision making.
Chaotic systems are simpler for the non-mathematical user, since there are tools that can perform the analysis. Monte Carlo Simulation is one used in the project management world.
Chaotic systems are shown in popular books as those nice Lorenz spiral plots of systems behaving chaotically - jumping from one unstable equilibrium point to another, never settling into a steady state at either of the two points.
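Those spiral plots come from the Lorenz equations, and the chaotic behavior is easy to reproduce. A minimal sketch with the classic parameters (sigma=10, rho=28, beta=8/3), using a simple Euler integrator to show sensitive dependence on initial conditions - the defining symptom of chaos:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations (classic chaotic parameters)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

# Two trajectories that start almost identically diverge - the hallmark of chaos
a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.000001)
for _ in range(3000):
    a, b = lorenz_step(a), lorenz_step(b)
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(f"separation after 3000 steps: {separation:.3f}")
```

A millionth-part difference in the starting point grows to a macroscopic separation, even though every step is fully deterministic - which is why long-range point forecasts of such systems are worthless while statistical descriptions of the attractor remain useful.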
These Type 4 processes are good representations of many natural systems, including human systems writing software for money. There is a logical sequence of events that are, at their core, "random variables." Cost, schedule, and technical performance are all places where this randomness is at work. And the human interactions are stochastic processes as well.
These systems can be characterized as "being driven by noise." Any modeling parameter can be replaced with a random variable. This replacement is allowed because random variables are usually considered to be exogenous - they are independent of the state variables. This is a common observation of human systems. Again, this is the natural home for Monte Carlo Simulations.
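A toy sketch of a noise-driven system: a first-order autoregressive process, where a deterministic state update receives an exogenous random shock at every step. All parameters here are assumptions for illustration:

```python
import random

def noisy_trajectory(steps=200, a=0.9, noise_sd=1.0, seed=7):
    """Sketch of a system 'driven by noise': the state update
    x[t+1] = a * x[t] + shock, where the shock is an exogenous
    Gaussian random variable, independent of the state."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(steps):
        x = a * x + rng.gauss(0.0, noise_sd)  # exogenous shock each step
        path.append(x)
    return path

path = noisy_trajectory()
print(f"final state: {path[-1]:.2f}")
```

Because the shocks are exogenous, each replicated run is an independent sample of the system's behavior - which is exactly what a Monte Carlo simulation averages over to produce distributions of outcomes rather than single-point answers.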
So Why Do We Care About All This?
When we hear or read about Complex Adaptive Systems, Chaos, or Complexity in populist texts or forums, we need to take care for several reasons:
In later posts, I'll show how these concepts are applied to real world projects operating in the presence of dynamic environments.