Long ago in graduate school, we had a concept for software development that turns out to be just what the hardware guys do today: the reference design. This was the era of the ACM Algorithm Handbook. The contents of that handbook started with Tom Standish's Data Structures class. We worked in industry with embedded control systems and were the test subjects for the data structures and algorithms in the class exercises.
We had a good time explaining how Computer Science theory could be adapted to our limited-memory, limited-processor, limited-I/O-bandwidth problems of radar signal processing and coupled flight control systems. Real Men (not many women) in those days could write a 128-point Fast Fourier Transform algorithm in 4K of memory, in assembly language, for the AMD 2901 bit-slice machine. Four 4-bit slices made up a 16-bit machine that captured the radar signal, converted it to a spectrum view, looked for the Doppler target signal, and directed the tail fins of the missile to the target. This was long before the digital signal processing chips of Texas Instruments.
The notion of a reference design was already there. There was ZERO reason to invent any code for the core of the FFT. We had a hardware instance of the FFT in a box that sat in a rack, so the first reference was that. Then of course the math was in the book: Signals, Systems, and Communications, B. P. Lathi, 1965, is still on my bookshelf. That was the reference design. Just convert the math starting on page 160 to assembly code and see if you could get it to run at high enough performance to actually grab the target.
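The core of that textbook math is still a handful of lines today. As a hedged sketch (not our 2901 assembly, and the tone frequency here is made up for the example), a minimal radix-2 decimation-in-time FFT and a crude "find the Doppler bin" step look like this:

```python
import cmath
import math

def fft(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed samples
    odd = fft(x[1::2])    # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle            # butterfly: top half
        out[k + n // 2] = even[k] - twiddle   # butterfly: bottom half
    return out

# A single tone at bin 5 of a 128-point record shows up as a spectral peak
# in that bin -- the "Doppler target" of this toy example.
n = 128
signal = [math.cos(2 * math.pi * 5 * t / n) for t in range(n)]
spectrum = fft(signal)
peak_bin = max(range(n // 2), key=lambda k: abs(spectrum[k]))
```

The point of the sketch is the reference-design argument itself: the butterfly structure comes straight out of the book, so the only engineering left is fitting it into the memory and cycle budget.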
The hardware guys have reference designs. I'm not sure the general software development community does this any more. In the embedded-systems world, we have heritage code and hardware: known to work, trusted, defect free, ready to go as the basis of new requirements. A Reference Design.
Estimating
So when someone comes and asks, how long will it take to add this feature? Or, how much will it cost to develop and integrate a dual-train version of this control system? The developers look at their reference design, do some parametric assessments, and come up with a cost and schedule estimate at an acceptable confidence level - usually 70% to 80%. That's good enough to get started. The additional features and functions require further estimating, but the baseline of the solution is in hand.
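A parametric assessment of that kind is simple arithmetic once the reference baseline is in hand. A minimal sketch, with every number invented for illustration (baseline size, new-code fraction, productivity, labor rate, and the confidence band would all come from your own heritage data):

```python
# Illustrative parametric estimate anchored to a reference design.
# Every input value below is a made-up example, not real project data.
baseline_sloc = 40_000     # size of the trusted reference baseline
new_fraction = 0.25        # judged fraction of new/modified code
productivity = 2.5         # SLOC per person-hour, from past projects
rate = 120.0               # fully burdened dollars per person-hour

new_sloc = baseline_sloc * new_fraction       # scope of the change
effort_hours = new_sloc / productivity        # parametric effort
cost = effort_hours * rate                    # point estimate

# A confidence band derived from historical estimate-vs-actual spread,
# standing in for the 70%-80% confidence level mentioned above.
low, high = cost * 0.85, cost * 1.30
```

The point estimate is only the anchor; the band is what makes it an estimate with a stated confidence rather than a single number presented as truth.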
So why all the anxiety over #NoEstimates? Where's the reference design you can make a derived estimate from? We use agile development all the time. Requirements emerge all the time. In the INTEL business the enemy writes the requirements, so forget the requirements elicitation process. But we have reference designs. Unless you're inventing new physics (and many times even then), the problem is not new; the solution is around somewhere. Go look for it, seek out the reference design, and use that as the baseline for the estimate. Better yet, copy the !@#$'ing code and make it better.
But the notion that I can't tell you how long or how much is simply laughable in our domain, where, by the way, it actually is Rocket Science. Two short books provide insight into the process of writing software in the defense and space business:
- The Martian Principles for Successful Enterprise Systems: 20 Lessons Learned from NASA's Mars Exploration Rover Mission, Ronald Mak, John Wiley & Sons, 2006
- The Seven Secrets of How to Think Like a Rocket Scientist, Jim Longuski, Copernicus Books, 2007
The last book should (or at least could) be the guide for moving beyond the #NoEstimates notion to approximate estimates as a start, and the evolutionary discovery of how to reach done within a budget and a set of mission objectives.