One of the references in the estimating discussion is a paper, "Managing Programming Productivity" by D.R. Jeffery and M.J. Lawrence, showing a table in which the projects that did not estimate had higher productivity.
Of course the post didn't provide any insight into the units of measure for productivity, the variance of those measures, or the correlation between the measures that would allow the drivers behind them to be separated. All around, a C- in the inferential statistics class, I'm afraid.
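To make that concrete, here is a minimal sketch in Python, using entirely invented numbers (nothing here comes from the Jeffery and Lawrence paper), of the checks a productivity table should survive before anyone draws a conclusion from it: state the units, look at the spread within each group, and look at the correlation with another candidate driver.

```python
# Hypothetical illustration only - the numbers are invented, not taken from
# Jeffery & Lawrence. It shows the basic checks behind the complaint above:
# units of measure, within-group variance, and correlation with another
# candidate driver before attributing a productivity difference to estimating.
import statistics

# Productivity in (made-up) lines of code per person-month for two groups.
estimated   = [210, 180, 250, 190, 300, 170]
no_estimate = [260, 150, 340, 200, 410, 160]

for name, sample in [("estimated", estimated), ("no estimate", no_estimate)]:
    mean = statistics.mean(sample)
    stdev = statistics.stdev(sample)
    print(f"{name:12s} mean={mean:6.1f} stdev={stdev:6.1f} (LOC/person-month)")

# If the two distributions overlap heavily, the difference in means says
# little on its own. Next check: correlation with another driver, here an
# invented project size, which might explain the difference instead.
size = [40, 15, 60, 20, 90, 12]   # hypothetical KLOC per project
corr = statistics.correlation(no_estimate, size)
print(f"correlation(productivity, size) = {corr:.2f}")
```

If the standard deviations swamp the difference in means, or the productivity measure tracks project size more closely than it tracks estimating practice, the table proves nothing about estimating.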
No problem. One of my favorite authors, Robert L. Glass, speaks about this in his book.
To save you the trouble of scrolling through the book, let me clip the important points out of the estimating chapter. Let's start, in typical Glass style, with a hypothesis stated as a fact that needs to be tested. This, by the way, is a good approach to any discussion. Make a statement - not a conjecture, but an actual statement. Then support that statement with some facts that can be tested. This is the basis of the Scientific Method. A simple version, of course, but much more likely to result in useful information than "we're exploring for dysfunction" or "I've been doing X for a while and it works for me."
Let's start with the Linberg study, in which the Jeffery and Lawrence study is referenced. The research question is: how are projects considered "successful"? One of the questions had been asked before, which is what led Jeffery and Lawrence to their table.
This is where the J-L study comes in. The last half of the paragraph is not shown in the post claiming #NoEstimates is a good way to go. It makes sense that not having to estimate would empower the developers. They can focus on doing what they do best - writing code for money. Other people's money, by the way. But let's ignore that for now.
This is powerful stuff and the basis not only of all good management, but certainly of agile software development. Glass goes on in this section to say some things about success and failure. And then comes the telling sentence at the end of the paragraph...
What Does All This Mean?
Let's start with a simple approach. When attempting to introduce an idea, especially a controversial one, and there is a suggestion that something is the case - not estimating, say - poke a little further to see the Domain, the Context, and the underlying statistics. Poke even further to see whether that idea, presented in a paper, has any references or supporting literature, or whether it is just a toss-off phrase that, over the years (since 1985), didn't hold up under examination. Without this Home Work it is hard to make Decisions based on that material. And in the end, it is the decisions that we're supposed to be supporting, right?
It's that simple. The notion of "let's explore ideas" is absolutely wonderful for discussions while leaning on the bar with a drink in hand at the cocktail party, after the conference speakers have left the building.
This approach is likely considered harsh, annoying, and less accepting of open ideas outside the domain of mission critical software systems. But where I work, we're always interested in new and innovative ideas - in fact they are desperately sought every month in an attempt to get back on schedule and recover the lost technical performance (the budget is blown, by the way). Those ideas need to come in contact with the reality of the situation if that situation is to be improved.
This starts with governance of other people's money: reporting how we spend that money to the Board of Directors and the SEC, and most importantly to the people in the enterprise who depend on the results of our efforts. Experimenting with their money needs to be classified as research and development. That is done all the time. But it is done explicitly, not implicitly.
Read Glass to learn more about how to make actual progress in the very poorly run and managed world of software development for money.