David Anderson's presentation at a recent PMI Orange County Chapter conference contains some nice slides about managing software development efforts using agile methods. The idea of natural and unnatural variation in projects is a point worth restating - in fact, continuously restating. The work of Christoph Loch and his colleagues in several articles - "Managing Project Uncertainty: From Variation to Chaos," *MIT Sloan Management Review*, Winter 2002; "Adjusting Project Management Techniques to Uncertainty," *European Business Forum*, 3:47-51, 2000; and "Matching Project Uncertainty with Management Style," *PROMAC 2002 Proceedings*, July 2002 - takes the same approach, along with the specific processes that can deal with the four or five classes of uncertainty (depending on which paper).

The concept that caught my eye was slide 6, which stated...

Little's Law Confirmed in Software Engineering

David probably meant the software development process, since software engineering is a discipline taught in universities - and one that the State of Texas has attempted to make a licensed profession.

Little's Law states...

The average number of things in the system is the product of the average rate at which things leave the system and the average time each one spends in the system.

As well, unless there is an imbalance in the flow between the things entering and the things exiting, the rate at which things exit the system is the same as the rate at which they enter. Stalled traffic on the freeway is an example of such an imbalance - cars arrive faster than they depart, so the backup grows.
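As a quick sketch of the arithmetic (the numbers below are made up for illustration, not taken from Anderson's slides):

```python
# Little's Law: N = Lambda * T, for a stable system where the arrival
# rate equals the departure rate. All figures are hypothetical.

arrival_rate = 4.0      # work items entering the process per week
time_in_system = 2.5    # average weeks an item spends in the process

wip = arrival_rate * time_in_system  # average number of items in process
print(wip)  # 10.0

# Rearranged, the same law estimates average cycle time from observed
# work in process and throughput:
observed_wip = 30.0   # items currently in process
throughput = 4.0      # items completed per week
cycle_time = observed_wip / throughput
print(cycle_time)  # 7.5 weeks per item, on average
```

The rearranged form is the one most often quoted in agile circles: cycle time = WIP / throughput.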

Little's Law is a general result of queuing theory and applies to a wide range of stable systems. This idea of variability and Little's Law can be found in a variety of domains - inventory control, queuing systems (general queuing theory is a core concept in many engineering disciplines, not just network routing), and capacity planning for all kinds of processes, ranging from petrochemical feedstocks to replenishment of artillery shells on the battlefield.

Little's Law is independent of any underlying probability distribution for the arrival of work or the performance of work. The proof can be found in J. D. C. Little, "A Proof for the Queuing Formula: L = Lambda W," *Operations Research*, 9, pp. 383-387, 1961. The formal approach (I'm a physics guy by training, sorry) is...

Let

a(t) be the number of arrivals to some system in the closed interval [0, t]. Let B(t) be the number of departures from the same system in the same closed interval, [0, t]. Both a(t) and B(t) are integer-valued increasing functions. Let T sub t be the mean time spent in the system during the closed interval [0, t] by all customers who were in the system during that interval. Let N sub t be the mean number of customers in the system over the closed interval [0, t]. If (and this is a big if, BTW) the limits

Lambda = LIM (t -> infinity) {a(t)/t}, Delta = LIM (t -> infinity) {B(t)/t}, T = LIM (t -> infinity) T sub t

exist, and further if

Lambda = Delta (this is a big if as well and must be verified for the system under examination), then the limit

N = LIM (t -> infinity) N sub t

exists, resulting in Little's theorem,

N = Lambda T
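The sample-path version of this argument can be checked numerically. The sketch below simulates a single-server FIFO queue with uniform (deliberately non-exponential) service times, to underline that no particular distribution is assumed; all rates and seeds are arbitrary choices for illustration.

```python
# Sample-path check of Little's theorem, N = Lambda * T, on a simulated
# single-server FIFO queue. Service times are uniform, not exponential,
# since the law does not depend on the underlying distributions.
import random

random.seed(42)
n = 100_000

# Poisson arrivals: exponential interarrival times with mean 1.0,
# so the observed Lambda should come out near 1.0.
arrivals = []
t = 0.0
for _ in range(n):
    t += random.expovariate(1.0)
    arrivals.append(t)

# Uniform service times with mean 0.7 < 1.0, so the queue is stable.
services = [random.uniform(0.2, 1.2) for _ in range(n)]

# FIFO single server: each departure waits for both the customer's
# arrival and the previous customer's departure.
departures = []
prev = 0.0
for a, s in zip(arrivals, services):
    prev = max(a, prev) + s
    departures.append(prev)

horizon = departures[-1]
sojourns = [d - a for a, d in zip(arrivals, departures)]

lam = n / horizon              # observed arrival rate over [0, horizon]
T = sum(sojourns) / n          # mean time each customer spends in the system
N = sum(sojourns) / horizon    # time-average number of customers in the system

assert abs(N - lam * T) < 1e-9  # Little's identity holds on the sample path
```

Measured over an interval that contains every customer's full stay, total customer-time in the system divided by the horizon is exactly Lambda times T - which is the heart of Little's proof.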

The key here is that Little's Law uses the term *average* as time approaches infinity. For most software projects, infinity is probably outside the window of opportunity for recovering the value of the investment in the software. So be careful to apply Little's Law only to stable, long-lived arrival and servicing models. Otherwise an M/M/1 or other queuing model will be needed.
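For reference, the steady-state M/M/1 results mentioned above can be sketched as follows; these are textbook formulas, and the example rates are hypothetical:

```python
# Steady-state M/M/1 queue (textbook formulas): Poisson arrivals at rate
# lam, exponential service at rate mu, one server. Requires lam < mu.
def mm1_metrics(lam: float, mu: float) -> dict:
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu        # server utilization
    N = rho / (1 - rho)   # mean number in the system
    W = 1 / (mu - lam)    # mean time in the system
    return {"utilization": rho, "N": N, "W": W}

# e.g. 4 items/week arriving against a capacity of 5 items/week:
m = mm1_metrics(lam=4.0, mu=5.0)  # utilization 0.8, N ~ 4 items, W ~ 1 week

# Little's Law ties the two metrics together: N = Lambda * W.
assert abs(m["N"] - 4.0 * m["W"]) < 1e-12
```

Note how sensitive the queue is to utilization: pushing lam from 4.0 to 4.5 against the same mu doubles both N and W, which is why heavily loaded teams see cycle times balloon.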