I'm not a fan of the approach taken by Standish to report success rates in IT for many reasons, not the least of which is the lack of transparency for the source data, the statistical processing methods, and the population statistics.
But here is a figure from a McKinsey report.
With all the whining in the project management community about problems, these numbers look interesting. First, since there are no error bands on the chart, the picture is pretty much worthless for assessing the actual data. It may be that the bars overlap each other within a single stack, and they likely overlap each other from 1994 to 2005.
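To make the overlap question concrete, here is a minimal sketch, in Python, of how wide the error bands could be if the sample sizes behind the bars were published. It uses the normal approximation to the binomial for a 95% confidence interval. The rates and sample sizes below are illustrative placeholders, since neither is disclosed in the source data.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a reported proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Hypothetical figures: reported "success" rates and assumed sample sizes.
# Neither the actual sample sizes nor the survey population are published.
for year, rate, n in [(1994, 0.16, 100), (2005, 0.34, 100)]:
    lo, hi = proportion_ci(rate, n)
    print(f"{year}: {rate:.0%} success, 95% CI [{lo:.1%}, {hi:.1%}]")
```

With a sample of 100 projects the bands are several points wide on each side; with a few dozen projects they would be wide enough for adjacent bars to overlap entirely. Without the sample sizes, there is no way to know which case the chart represents.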
Next, it can be said there was a doubling in the success rate in exchange for a halving of the failure rate. Not a bad improvement over the time period for enterprise-class IT projects.
But in the end there is improvement. The statistical question that is never answered in these types of lame statistics is: what is the best achievable improvement level for enterprise IT projects? Next, of course: what are the variance limits on the time and cost overruns? That is not available in the data. In our FAR/DFARS procurement process and DCMA System Description there is always - and must be - a variance threshold on cost and schedule performance. Otherwise we'd be writing variance reports every week as the project progresses. Without that data, the Standish reports are even more lame.
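As an illustration of what a variance threshold buys you, here is a minimal Python sketch using standard earned-value terms (BCWS, BCWP, ACWP). The 10% tolerance is a made-up number for illustration only; actual thresholds come from the contract and the program's system description.

```python
def variance_report(bcws, bcwp, acwp, threshold=0.10):
    """Earned-value variances with a reporting threshold.

    bcws: budgeted cost of work scheduled (planned value)
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed (actual cost)
    threshold: illustrative tolerance band, not a FAR/DFARS figure
    """
    cv_pct = (bcwp - acwp) / bcwp  # cost variance as a fraction of earned value
    sv_pct = (bcwp - bcws) / bcws  # schedule variance as a fraction of planned value
    needs_report = abs(cv_pct) > threshold or abs(sv_pct) > threshold
    return cv_pct, sv_pct, needs_report

cv, sv, needs_report = variance_report(bcws=1_000_000, bcwp=950_000, acwp=1_080_000)
print(f"CV {cv:+.1%}, SV {sv:+.1%}, variance report required: {needs_report}")
```

The point of the band is that small, expected wobbles in cost and schedule stay inside the tolerance and generate no paperwork; only breaches trigger a variance report. Survey statistics that report overruns with no such limits tell you nothing about which overruns actually mattered.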