Pithy comments seem to be the norm on some forums and blogs about improving the probability of success for a project or program.
- "Buy my software and your dream of project success will come true."
- "Buy my book or better yet hire me, and your project processes will improve."
- "Listen to my critique of two forms of project management in a domain I have ZERO experience in and you too will understand how smart I am."
It's sad, really, how we become trapped in seeking simple solutions to complex problems. There is good advice, of course. The books referenced in the FAVORITE BOOKS section on the bottom left side of this page are a start. But they are just books. Journal articles provide guidance as well. Refereed articles, that is.
So what to do?
- Whenever there is advice, ask the adviser if he has direct experience in the domain and the context of the problem you're facing. More than just "passing through the airport in question," how about working on the team that built the airport in question? Our own Denver International Airport was a classic example of local opinions at the time, with ZERO hands-on experience.
- Ask about the past performance of those "magic beans" conjectured to fix all that is wrong with the type of project you're working on. Did those "magic beans" have quantifiable measures of success against the starting baseline plan? That is, did the plan turn out to be the actual? If so, can the method, process, or advice be credited with the success of the plan? Or, more importantly, can the disappointment in the project be attributed to the use of a method or process? This is the classic "red herring" of misinformed agilists: "I was on a program once that used waterfall and it failed." Likely true, but what else caused the failure? Don't know? Then making the attribution is sporty business.
Which brings me to the final "what to do":
- What was the probability of success on day one? Was there a measure of this probability?
- "We have an 80% chance of completing 'on or before' the 3rd week in November with a 5% error rate"
- Statements like that are a starting point. But look at Program Success Probability for some insight.
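To make that concrete, here is a minimal sketch of where a confidence statement like that comes from: a Monte Carlo simulation over a probabilistic schedule. The task durations, start date, and trial count below are hypothetical, invented purely for illustration.

```python
import random
from datetime import date, timedelta

# Hypothetical three-task schedule. Durations in days, as
# (min, most likely, max) triangular estimates -- all invented.
TASKS = [(10, 15, 25), (20, 30, 45), (5, 8, 14)]
START = date(2025, 9, 1)
TRIALS = 10_000

finishes = []
for _ in range(TRIALS):
    days = sum(random.triangular(lo, hi, ml) for lo, ml, hi in TASKS)
    finishes.append(START + timedelta(days=round(days)))

finishes.sort()
# The 80th percentile of the simulated finish dates is the date quoted
# as "80% chance of completing on or before ...".
p80 = finishes[int(0.80 * TRIALS)]
print(f"80% confidence finish: {p80:%d %b %Y}")
```

A real program runs this against the Integrated Master Schedule, with calibrated duration distributions and correlations between tasks. But the principle is the same: the confidence date is a percentile of the simulated finish dates, not a single-point guess.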
In the end there is only ONE piece of evidence of success for a proposed method or process - Past Performance. Past Performance in a "like" domain and a "like" context in that domain. If there is a gap in the context, then the adviser must show how that gap will be closed. Notice "will" not "can."
If there is a gap in the domain, then the adviser has to show how the past performance is "close enough" to be meaningful.
For a general class of problems, past performance is easier to use, but even there, care is needed before acting on the advice, to assure that the general class of problem is just that: a "general class."
One of my favorite issues is the conjecture that competency-based assessments are somehow the solution to poor project management staff. In fact, the solution to poor project management staff is good project management staff. The idea that you can "test" for competency has merit, and of course there are numerous organizations ready to sell you courses and assessment vehicles to support their claims that they have the right approach to improving project performance.
There are only a few classes of advice I know of that can be generalized for complex projects and the staff that manage them:
- Application of ANSI/EIA-748-B in a manner validated by an external agency. Not a 3rd party, but an agency tasked with validation. In the US that is the DCMA (Defense Contract Management Agency).
- Application of probabilistic risk assessment, in a manner prescribed by DID DI-MGMT-81650.
- Development of an Integrated Master Plan and Integrated Master Schedule - in a manner prescribed by the US DoD.
- Integration of these into the Performance Measurement Baseline - as prescribed by the DCMA.
- Application of Technical Performance Measures to assess physical percent complete (a sketch of this follows the list).
There are likely others, but I can't think of any now.
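As promised above, here is a minimal sketch of taking physical percent complete from Technical Performance Measures. The measures, weights, budget, and values are all hypothetical; on a real program they come from the Performance Measurement Baseline and the TPM plan.

```python
# A sketch of taking earned value from Technical Performance Measures.
# Every number below is hypothetical, for illustration only.

# (TPM name, weight in the work package,
#  planned achievement at the status date, measured achievement)
tpms = [
    ("pointing accuracy", 0.40, 0.60, 0.55),
    ("power margin",      0.35, 0.50, 0.50),
    ("data throughput",   0.25, 0.70, 0.40),
]

BUDGET = 250_000  # work package budget at completion, in dollars

# Planned and earned fractions, weighted across the TPMs.
pct_planned = sum(w * plan for _, w, plan, _ in tpms)
pct_done    = sum(w * done for _, w, _, done in tpms)

bcws = pct_planned * BUDGET  # planned value at the status date
bcwp = pct_done * BUDGET     # earned value from measured maturity
sv   = bcwp - bcws           # schedule variance in dollars

print(f"Physical percent complete: {pct_done:.1%}")
print(f"BCWS ${bcws:,.0f}  BCWP ${bcwp:,.0f}  SV ${sv:,.0f}")
```

The point of the design: BCWP is computed from measured technical maturity, not from someone's opinion of how "done" the work feels.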