Microeconomics is a branch of economics that studies the behavior of individuals and firms in making decisions about the allocation of limited resources.

All engineering is constrained optimization. How do we take the resources we've been given and deliver the best outcomes? That is the domain of microeconomics. Unlike the models of mechanical engineering or classical physics, the models of microeconomics are never precise. They are probabilistic, driven by the underlying statistical processes of the two primary *actors* - suppliers and consumers.

Let's look at both in light of the *allocation of limited resources* paradigm.

- Supplier = development resources. These are limited in both time and capacity for work, and likely limited in talent as well. They also produce latent defects, which cost time and money to remove.
- Consumer = those paying for the development resources. They have limited time and money. Limited money is obvious - they have a budget. Limited time, since the *time value of money* is part of the Return on Capital equation used by the business. Committing capital (not real capital, since software development is usually carried on the books as an expense) requires knowing when that capital investment will start to return *value*.

In both cases, time, money, and capacity for productive work are limited (scarce). They compete with each other and with the needs of both the supplier and the consumer. In addition, since the elasticity of labor costs is limited by the market, we can't simply *buy cheaper* to make up for time and capacity. It's done, of course, but always to the detriment of quality and actual productivity.

So cost is inelastic, time is inelastic, capacity for work is inelastic, and the other attributes of the developed product are constrained. The market need is likewise constrained. Business needs are rarely elastic - *oh, we really didn't need to pay people in the timekeeping system; let's just collect the time sheets, and we'll run payroll when that feature gets implemented.*

**Enough Knowing, Let's Have Some Doing**

With the principles of microeconomics applied to software development, there is one KILLER issue that, if willfully ignored, ends the conversation for any business person trying to operate in the presence of limited resources - time, money, and capacity for work.

The decisions being made about these limited resources are made in the presence of uncertainty. This uncertainty - as mentioned - is based on random processes. Those random processes produce imprecise data: data drawn from random variables. Random variables with variances, instability (stochastic processes), and non-linear stochastic behaviors.

**Quick Diversion Into Random Variables**

There are many mathematical definitions of random variables, but for this post let's use a simple one.

- A variable is an attribute of a system or project that can take on multiple values. When the *value* of this variable is *fixed*, it can be known directly - *when someone asks what is the number of people on the project, it can be known by counting them and writing that down. When someone asks, you can count and say 16.*
- When the values of the variable are *random*, the variable can take on a range of values just like the non-random variable, but we don't know exactly what those values will be when we want to use that variable to answer a question. If the variable is a *random* variable and *someone asks what will be the cost of this project when it is done, you'll have to provide a range of values and the confidence for each of the numbers in that range.*
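The distinction above can be sketched in a few lines of code. This is a minimal illustration, not a calibrated model - the cost distribution (normal, with an assumed mean and spread) is invented purely to show a fixed value versus a range with confidence.

```python
import random

random.seed(7)  # reproducible illustration

# Fixed variable: count it and write it down.
people_on_project = 16

# Random variable: project cost at completion. We can't name one number,
# but we can sample an assumed distribution and report a range with
# an associated confidence.
samples = sorted(random.gauss(500_000, 75_000) for _ in range(10_000))

p10 = samples[int(0.10 * len(samples))]  # 10th percentile
p90 = samples[int(0.90 * len(samples))]  # 90th percentile

print(f"People on project (fixed): {people_on_project}")
print(f"Cost at completion (random): 80% confidence "
      f"between ${p10:,.0f} and ${p90:,.0f}")
```

The fixed variable answers with one number; the random variable can only answer with an interval and a confidence level.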

A simple example - silly but illustrative - would be HR wanting to buy special shoes for the development team, with the company logo on them. If for some reason (it doesn't matter why) we could not measure the shoe size of all the males on our project, we could estimate how many shoes of what size would be needed from the statistical distribution of male shoe sizes for a large population of coders.

This would get us close to how many shoes of what size we need to order. This is a notional example, so please don't place an order for actual shoes. But the underlying probability distribution of the values the random variable can take on tells us something about the people working on the project.
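The shoe example can be simulated directly. The distribution parameters below (mean size 10.5, spread 1.5, team of 40) are assumptions made up for this sketch - the point is only that sampling a population distribution yields a workable order estimate without measuring anyone.

```python
import random
from collections import Counter

random.seed(42)  # reproducible illustration

TEAM_SIZE = 40   # assumed team size
TRIALS = 5_000   # simulated teams

# Tally how often each half-size appears across many simulated teams,
# drawing from an assumed normal distribution of shoe sizes.
totals = Counter()
for _ in range(TRIALS):
    team = [round(random.gauss(10.5, 1.5) * 2) / 2 for _ in range(TEAM_SIZE)]
    totals.update(team)

# Expected pairs of each size per team, averaged over the simulation.
order = {size: totals[size] / TRIALS for size in sorted(totals)}
for size, pairs in order.items():
    if pairs >= 1:
        print(f"size {size:>4}: order ~{pairs:.1f} pairs")
```

The expected counts sum to the team size, so the order covers everyone even though no individual was measured.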

Since all the variables on any project are random variables, we can't know their exact values at any one time. But we can know their possible range of values and the probability of any specific value when asked to produce that value for making a decision.

The variability of the population values and its analysis should be seen not as a way of making precise predictions about project outcomes, but as a way of ensuring that all relevant outcomes produced by these variables have been considered, that they have been evaluated appropriately, and that we have a reasonable sense of what will happen across the multitude of values a specific variable can produce. It provides a way of structuring our thinking about the problem.

**Making Decisions In The Presence of Random Variables**

To make a decision - a choice among several alternatives - means making an *opportunity cost* decision based on random data. And if there is only one choice, then the decision is to either take that choice or not.

This means the factors that go into that decision are themselves random variables: labor, productivity, defects, capacity, quality, usability, functionality, produced business capability, and time. Each is a random variable, interacting in nonlinear ways with the other random variables.

To make a choice in the presence of this paradigm, we must estimate not only the behavior of the variables but also the behaviors of the outcomes.
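One common way to estimate the behavior of an outcome driven by interacting random variables is Monte Carlo simulation. The sketch below is purely illustrative - the distributions for scope, productivity, and defect rate, and the one-round rework rule, are assumptions, not a calibrated project model - but it shows how nonlinear interactions between random inputs get turned into a confidence statement about an outcome.

```python
import random

random.seed(1)  # reproducible illustration

def simulate_duration() -> float:
    """One simulated project outcome from assumed input distributions."""
    scope = random.gauss(400, 50)             # units of work (random)
    productivity = random.gauss(10, 2)        # units/week (random)
    productivity = max(productivity, 4)       # floor: can't approach zero
    defect_rate = random.uniform(0.05, 0.25)  # fraction of work reworked
    # Nonlinear interaction: rework adds scope, and that added scope
    # produces more defects in turn (one extra round modeled here).
    effective_scope = scope * (1 + defect_rate + defect_rate**2)
    return effective_scope / productivity

durations = sorted(simulate_duration() for _ in range(20_000))
p50 = durations[len(durations) // 2]          # median outcome
p85 = durations[int(0.85 * len(durations))]   # 85th percentile

print(f"50% confidence: finish within {p50:.1f} weeks")
print(f"85% confidence: finish within {p85:.1f} weeks")
```

The output is exactly the form the decision maker needs: not a single date, but a range of dates with the confidence attached to each.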

**In other words**

**To develop software in the presence of limited resources driven by uncertain processes for each resource (time, money, capacity, technical outcomes), we must ESTIMATE the behaviors of these variables that inform our decision.**

It's that simple and it's that complex. Anyone conjecturing decisions can be made in the absence of estimates of the future outcomes of that decision is **willfully ignoring** the Microeconomics of business decision making in the software development domain.

For those interested in further exploring the core principles of the software development business beyond this willful ignorance, here's a starting point.

- MIT OCW Principles of Microeconomics
- Economics of Iterative Software Development
- Estimating Software Intensive Systems
- The Road Map to Software Engineering
- Software Engineering Economics
- Essentials of Managerial Finance
- Making Hard Decisions
- Modelling Complex Projects
- How to Measure Anything
- How Many Licks
- Forecasting and Simulating Software Development Projects

These are the tip of the big pile of books, papers, and journal articles on estimating software systems.

**A Final Thought on Empirical Data**

Making choices in the presence of uncertainty can be informed by several means:

- We have data from the past
- We have a model of the system that can be simulated
- We have *reference classes* from which we can extract similar information

This is empirical data. But there are several critically important questions that must be answered if we are not going to be disappointed with our empirical outcomes:

- Is the past representative of the future?
- Is the sample of data from the past sufficient to make sound forecasts of the future? The number of samples needed greatly influences the confidence intervals on the estimates of the future.

Calculating the number of samples needed for a specific level of confidence requires some statistics, but here's a place to start. Suffice it to say, those conjecturing estimates based on past performance (the number of story points delivered in the past) will need to produce the confidence calculation before any non-trivial decisions are made on their data. Without those calculations, the use of past performance is very sporty when spending other people's money.
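A minimal sketch of that confidence calculation, using only the standard library: given a small sample of past sprint velocities (the numbers below are invented for illustration), compute a t-based confidence interval on the mean, and estimate how many samples would be needed to tighten it.

```python
import statistics
from math import sqrt

# Invented past-performance data: story points per sprint.
velocities = [21, 34, 18, 27, 30, 22, 25, 31]

n = len(velocities)
mean = statistics.mean(velocities)
sd = statistics.stdev(velocities)  # sample standard deviation

# t critical value for 95% confidence with n-1 = 7 degrees of freedom,
# taken from a t-table (hard-coded here to stay stdlib-only).
t_crit = 2.365

margin = t_crit * sd / sqrt(n)
print(f"mean velocity: {mean:.1f} points/sprint")
print(f"95% CI on the mean: {mean - margin:.1f} to {mean + margin:.1f}")

# Sprints needed to shrink the margin to roughly +/- 2 points,
# using z = 1.96 as a large-sample approximation.
n_needed = (1.96 * sd / 2) ** 2
print(f"sprints needed for +/-2 points at 95%: ~{n_needed:.0f}")
```

With only eight sprints, the 95% interval on mean velocity spans roughly nine points - a wide band for anyone committing money against "velocity says we'll be done by June."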

Thanks to Richard Askew for suggesting the addition of the random variable background.