There are endless discussions about what went wrong with the Affordable Care Act web site development and deployment. It will be hard to tell at this early point in the project assessment. But what is clear is that this was most likely a failure of project management.
Below is the acquisition life cycle for business systems in the DoD. Not that HHS is a DoD-style shop, but the paradigm of iterative and incremental development is in place. The release cycles shown here are way too long for something like the ACA site, but the topology of the process is sound.
Looking at this process there is an obvious starting point: the Business Capability Definition. What is the resulting system supposed to do in terms of capabilities? Not the technical and operational requirements, but what business capabilities will the system provide to the stakeholders when it is in full operation? This is called Initial Operating Capability (IOC).
In our domain we start defining the capabilities using the Defense Acquisition Guide. Here is where Measures of Effectiveness (MoE) are defined. The Measure of Effectiveness is assigned to a capability. If we want a capability, how effective does it have to be? This measure is not a technical performance measure or a requirement. It is an effectiveness measure.
A MoE for a UAV program we work would be: "The UAV shall be transportable within a 3,000 mile radius via a C-17/C-141/C-5 package." From the MoE comes a Measure of Performance (MoP). For example, weight is a MoP that enables the MoE to be fulfilled. Lower down are Technical Performance Measures (TPM). For example, the weight of an Electro-Optical / Infrared sensor platform must be under 55 pounds for the UAV to operate properly. It can't be too light, or it would disrupt the center of gravity, and it can't be too heavy, or the UAV would burn too much fuel to accomplish its mission.
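To make the nesting concrete, here is a minimal Python sketch of the MoE-to-MoP-to-TPM rollup. Only the 55-pound ceiling and the transportability MoE come from the text above; the class names, the 35-pound floor, and the rollup logic are illustrative assumptions, not from any actual program.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalPerformanceMeasure:
    """A TPM: a bounded, measurable attribute of a component."""
    name: str
    lower_bound: float   # too light disrupts the center of gravity
    upper_bound: float   # too heavy burns excess fuel
    current_value: float

    def compliant(self) -> bool:
        return self.lower_bound <= self.current_value <= self.upper_bound

@dataclass
class MeasureOfPerformance:
    """A MoP rolls up the TPMs that enable it."""
    name: str
    tpms: list = field(default_factory=list)

    def satisfied(self) -> bool:
        return all(t.compliant() for t in self.tpms)

@dataclass
class MeasureOfEffectiveness:
    """A MoE is assigned to a capability and depends on its MoPs."""
    capability: str
    statement: str
    mops: list = field(default_factory=list)

    def achievable(self) -> bool:
        return all(m.satisfied() for m in self.mops)

# Illustrative numbers: the 55 lb EO/IR ceiling from the text,
# with a hypothetical 35 lb floor for center-of-gravity reasons.
sensor_weight = TechnicalPerformanceMeasure("EO/IR sensor weight (lb)", 35.0, 55.0, 48.0)
gross_weight = MeasureOfPerformance("UAV gross weight", [sensor_weight])
transportable = MeasureOfEffectiveness(
    "Transportability",
    "Transportable within a 3,000 mile radius via a C-17/C-141/C-5 package",
    [gross_weight],
)
print(transportable.achievable())  # True while every TPM stays in bounds
```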
So for the ACA site, we'd need to know if there were MoE's, MoP's, TPM's defined that enable the Capabilities to be delivered. Here's the Performance Reference Model for federal IT.
Since the ACA site is pretty much all software, I'm going to suggest that this approach of using Capabilities Based Planning, MoE's, MoP's, and TPM's has nothing to do with how the software is built. Either traditional or agile methods can be used. Agile is likely faster, but agile can only work in a domain like this if you know what DONE looks like in terms of MoE's, MoP's, and TPM's. This is a fixed launch date, a fixed set of requirements guided by all the insurance regulations, and hopefully some not-to-exceed budget.
It is a common myth that government acquisition is waterfall and big design up front. DoD 5000.02 prescribes an iterative process designed to assess the viability of technologies while simultaneously refining user requirements. (pg 16 of 5000.02).
One starting question of the ACA Site would be - did they apply the iterative acquisition process in some form, no matter the fidelity of the iterations?
Here are some other fundamental questions as well:
If the answer to any of these is no or we don't know, go find out, get project managers who can do this. Otherwise the probability of project success is reduced. In fact look at the Probability of Program Success literature for further guidance.
The final question is: did they have an Integrated Master Plan and Integrated Master Schedule for all the work, as described in the Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide? This paradigm has been shown to significantly increase the probability of success no matter the domain, context, development method, technology, or business process. It states in clear, concise, and unequivocal terms what DONE looks like at every point in the project, in units of measure meaningful to the decision makers.
The final - and killer - question is: did the project team ruthlessly manage the changes to the capabilities? This, I suspect, is the root cause of the failure. Late changes to complex projects are the kiss of death.
As repeated often here...
Don't do stupid things on purpose
So Now What?
We have to wait to see what the Root Cause Analysis (RCA) shows for the failure of the project. But I'd conjecture the program management processes found in large DoD or NASA programs were not applied in any meaningful way. The site is not large compared to most of the programs we work ($400M is small), but the processes used to manage those programs can be scaled down with ease. The Principles are the same. The Practices and the Processes are scalable as well.
This is from an article about the application of Bayesian Statistics to a civil suit in the UK over the source of a building fire.
The idea that you can assign probabilities to events that have already occurred, but where we are ignorant of the result, forms the basis for the Bayesian view of probability. Put very broadly, the 'classical' view of probability is in terms of genuine unpredictability about future events, popularly known as 'chance' or 'aleatory uncertainty'.
The Bayesian interpretation allows probability also to be used to express our uncertainty due to our ignorance, known as 'epistemic uncertainty', and popularly expressed as betting odds. Of course there are all gradations, from pure chance (think radioactive decay) to processes assumed to be pure chance (lottery draws), to future events whose odds depend on a mixture of genuine unpredictability and ignorance of the facts (whether Oscar Pistorius will be convicted of murder), to pure epistemic uncertainty (whether Oscar Pistorius knowingly shot his girlfriend).
When we build probabilistic models of project performance - cost, schedule, and technical - we assume we understand the underlying statistical processes that drive these probabilistic generating functions. These are the aleatory uncertainties in duration, cost, and performance. We define the Probability Density Function in the Monte Carlo simulator. Then we apply that to the network of work activities (the Integrated Master Schedule) to produce confidence outcomes for completing on or before a planned date and at or below a planned cost. This is all fine and dandy. But we really don't know the underlying drivers that create coupling, correlation, and cross correlations between the work activities, cost, and technical performance. These can be modeled by discovering the drivers in the network.
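Here's a minimal sketch of that simulation step in Python, assuming triangular duration distributions and a hypothetical four-activity network. Real tools run this against the full IMS, with thousands of activities and the correlations noted above.

```python
import random

# Hypothetical network: name -> (predecessors, (min, most likely, max) days),
# listed in dependency order. A real run reads these from the IMS.
network = {
    "A": ([],         (10, 12, 20)),
    "B": (["A"],      (5,  8,  15)),
    "C": (["A"],      (7,  9,  14)),
    "D": (["B", "C"], (3,  4,  9)),
}

def one_trial() -> float:
    """One Monte Carlo pass: sample every duration, roll the network forward."""
    finish = {}
    for name, (preds, (lo, mode, hi)) in network.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + random.triangular(lo, hi, mode)
    return max(finish.values())

trials = [one_trial() for _ in range(10_000)]
planned_days = 35.0  # planned completion date, in days
confidence = sum(t <= planned_days for t in trials) / len(trials)
print(f"P(complete on or before day {planned_days}): {confidence:.0%}")
```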
For the epistemic uncertainties we need another modeling tool. The current tools don't actually use Bayesian statistics; rather, they use Monte Carlo simulation and treat the probability of an event as an aleatory process integrated with the other PDFs, ranges, and their shapes (kurtosis and skew).
We're missing the tools needed to construct a credible epistemic model of how the program works. Such a model would use the Integrated Master Schedule (IMS) as the topology for the work, the probabilistic behaviours of the work elements at each node - cost, schedule, and technical performance compliance of the products - and the coupling and cohesion of the nodes. With this information - assuming it is credible, which is a HUGE assumption - we could model the behaviour of the program and ask what-if questions.
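For the epistemic side, the basic machinery is just Bayes' rule. Here is a toy update, with invented numbers, of our belief that a program node is in trouble after observing a missed milestone:

```python
# Epistemic uncertainty: we are ignorant of the node's true state,
# so we carry a probability and update it as evidence arrives.
# All numbers below are illustrative assumptions.
p_trouble = 0.20              # prior: node is in trouble
p_miss_given_trouble = 0.80   # P(missed milestone | in trouble)
p_miss_given_ok = 0.15        # P(missed milestone | healthy), the false alarm rate

# Bayes' rule after observing one missed milestone
p_miss = p_miss_given_trouble * p_trouble + p_miss_given_ok * (1 - p_trouble)
posterior = p_miss_given_trouble * p_trouble / p_miss
print(f"P(trouble | missed milestone) = {posterior:.2f}")  # ~0.57, up from 0.20
```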
There is a class of responders on the forums who continue to seek simple, and simple-minded, solutions to complex problems, claiming the problems can be solved with simple solutions. Of course those solutions cannot be demonstrated with any credibility outside of a limited domain. Let's start with Mencken:
For every complex problem there is an answer that is clear, simple, and wrong. - H. L. Mencken
In the project domain most of the problems start with poor estimates of cost and schedule. Hollmann starts it off http://lnkd.in/vjf4X4. Bent Flyvbjerg takes a more radical view, calling estimators liars, but I chalk that up to a language barrier.
The core problem starts with the politics of estimating. Rarely do the buyers of the project know what it should cost. So when they hear a number that is larger than their expectations, they reject the number. They signal to the sellers what they expect to pay for something. This expectation can come from poor knowledge or simply a naive understanding of the problem, or worse a public promise that simply can't be kept.
But a second problem exists: across domains, there is the wrongheaded belief that one domain has the solution to another domain's problems. We need to break out of the loop of thinking that problems are only found in other domains, and of making the ill-informed suggestion that "if you just did it my way," there would be no problem. There are fundamental issues with managing complexity and managing in the presence of uncertainty. We want it to be simple; it is not. And since it is hard, people don't want to actually work hard on the solution, instead seeking simple, and many times "simple-minded," solutions - buy this tool, use my Excel spreadsheet, dumb down the processes.
So What Is The Solution?
The first step is to recognize that ALL domains have similar problems. Cost and schedule estimates have been discussed for many decades. Lots of literature leads us back to the singular source - the political will to search for the credible estimate, even if that means not starting the project. Or worse, to start the project knowing the estimates are flawed.
One provocateur voice in our community suggests that PMs who take on projects with poor estimates should be held criminally liable for the results. This of course is laughable, but it demonstrates a complete misunderstanding of the problem and the solution.
This approach reminds me of Pauli's quote when presented with a nonsensical submission from a student.
They were worse than wrong because they could not be proven wrong. Famously, he once said: "Das ist nicht nur nicht richtig, es ist nicht einmal falsch!" - "Not only is it not right, it's not even wrong!"
This is where we are now. Not much in the form of research and questioning, lots of pontificating about how the other guy is all messed up, and if you'd only do it my way, you'd have your solution. Reminds me of the current US political problems. Do it my way. Of course that doesn't work, because we have complex - wicked - problems, and there is no single way. Mencken was right, and he was right long before any of the current problems.
We may be doomed to repeat history without first recognizing that the problem is us.
I was at a conference for government contractors on the software systems we use to manage those programs. In the federal government, agile is moving forward: Greater Accountability and Faster Delivery Through Modular Contracting.
During one of the sessions there was a quote that says it all about Agile Development:
AGILE MEANS YOU'RE BEING PAID FOR DISCOVERING THE SCOPE OF THE PROJECT
This says it all. Agile is a powerful approach to software development in the commercial world. Agile in the government domain encounters the Federal Acquisition Regulation (FAR) and the Defense Federal Acquisition Regulation Supplement (DFARS).
In these environments, Earned Value Management, the Six Business System Rules, and other procurement guidance define how work is performed. This is all being worked out, but the quote above did it for me!!
In the past NAVAIR had a CD with all the SETR Program Event Checklists. Each had an Excel spreadsheet that you filled out to determine if you were ready for the Event, and a supporting document that described all the details of the Event, both entry and exit criteria. The picture above has the link to the current spreadsheets.
This approach to the Integrated Master Plan spelled out explicitly what "done" looked like, and there was no way to skip around the checklist and call the work "done" without passing all the items with a GREEN. I've requested the latest from NAVAIR, but it looks like the checklists have been expanded to the DoD level now.
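As a sketch of the gating logic such a checklist encodes (the item names and statuses here are invented for illustration, not NAVAIR's):

```python
from enum import Enum

class Status(Enum):
    GREEN = "green"
    YELLOW = "yellow"
    RED = "red"

# Hypothetical entry/exit criteria for a single SETR-style event.
checklist = {
    "Requirements baselined": Status.GREEN,
    "Verification plan approved": Status.GREEN,
    "Open risks dispositioned": Status.YELLOW,
}

def event_done(items: dict) -> bool:
    """'Done' means every item is GREEN - no skipping around the list."""
    return all(s is Status.GREEN for s in items.values())

print(event_done(checklist))  # False until the YELLOW item is closed out
```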
This paradigm is at the heart of describing what "done" looks like in units of measure for both the buyer and the seller. Measures of Effectiveness for the buyer and Measures of Performance for the seller.
Jeff Sutherland has a post about agile in the Department of Defense. Good stuff, but there are some issues with "going agile" in the Federal IT space. The first is that contracts over $20M require Earned Value Management, per FAR/DFARS flow downs and the OMB Part 7 guides for "FedCiv."
The second is that the contract vehicles need to be adapted to how agile handles requirements, changes, deliverables, progress payments, and fee calculations. Here's the "tip of the iceberg" for integrating Scrum with Federal Acquisition. While Scrum is a wonderful concept, applicable to a large majority of projects, it's not quite as simple as it might seem.
Here is a recent briefing on the topic. Agile in the federal procurement world is the focus of many organizations in and out of the government. Management Concepts is one source of training for agile in the Fed space; there are others. But in the end this is an acquisition issue.
Without changes to the FAR/DFARS clause flow downs that "put on contract" the behaviors mandated by the regulations for how software is developed, acquired, and put into service, little will change. While the agile community has many powerful tools to improve the problem in the federal government, it needs to come at this from an acquisition point of view for those benefits to be realized.
Here's some prior discussion
Today is Veterans Day. As a veteran I have a special experience working with and for veterans. It is more than a shared experience, no matter the service, rank, or experience. It is a shared understanding of service, sacrifice, duty, and honor.
Let us remember those who served today. Not because it is a national holiday, but because their service allows us to celebrate this holiday.
I returned 14 August 1970. Many did not; let us remember them most of all.
Management Concepts is hosting a webinar where John Driessnack and I will be speaking about Agile in the Federal environment. This is the first in a series of discussions about integrating Scrum and Scrum-like software development processes into projects and programs subject to the Federal Acquisition Regulations and other guidance like Office of Management and Budget (OMB) Part 7.
If you work in the Federal Government or for a contractor that works for the Federal Government and are wondering when the Fed's are going to figure out "agile," please join us to hear more.
It rolls down with a change in administration. With a failure of a major project that used agile. With a change in the whims of the leaders. With an over-promise, under-deliver approach from some vendors.
I'm working on an upcoming webinar on introducing agile into the federal IT space. This has been going on for a year, and we're nearing the outcomes from our efforts for a webinar, a one day course and a three day course.
There is a useful report, worth the investment, that outlines both the problem and some solutions. Visit the site under the cover picture to the right, read the report, follow the links, and see where this complex topic is headed.
If any domain is in need of agile approaches, it is Federal IT. But not any approach will work. All the hand-waving, Kumbaya talk of self-directed teams sitting in the same room with their customer, letting the requirements emerge as the money is being spent, probably isn't going to pass the smell test of Congressional oversight of spending the public's money.
Something else is needed. The integration of Agile with the contractual management of the Federal Acquisition Regulation.
As the webinar progresses, the courses develop, and the planned book emerges, along with the hands-on deployment, I'll post more.
Mary has a new essay titled Don't Separate Design from Development. It's one of those "looking back, to see forward" style of essays. I suspect Mary delivered software successfully because she was a subject matter expert and knew what the customer wanted.
I designed, developed, and programmed control systems - flight systems - and led the development of a fault-tolerant process control computer (www.triconex.com), where we built an operating system, IO board controls using the Intel 8051, and a set of triple-redundant processors based on a National Semiconductor part (a CMOS version of the M68000). This was done in C with an embedded real-time operating system, WindRiver.
I can't remember having any formal specifications. We developed hardware, firmware, bent metal, developed the Control System OS, invented a programming language, and developed a programming workstation on the original Compaq computer. All by simply "knowing" what to do because we had done it before, or we were literally inventing the future. We had SME's from our customers (Exxon, Boise Cascade, Shell, Total, and Elf). We had process engineers (ChemE's). We had RT OS developers from our previous defense systems. We had white-hot C/Unix developers (the real UNIX, the one that came on a 9-track tape from Bell Labs).
But those days are over for lots of reasons.
So Mary is right, lots of software can be built without stories and detailed requirements. But that takes place ONLY because all that "stuff" is in the heads of the engineers and developers who have done it before - successfully done it before.
But now we work much more complex programs, systems where the lowest level components are well defined from the subject matter experts, but the integration of those components is complex and evolving.
Mary's quoting of Gilb and Brooks is right on. But in our Aerospace and Defense business this is RARELY the case. Embedded or software-intensive systems have a tight connection between the "engineering" of the systems and the "design" of the systems. These systems are developed incrementally and iteratively. This is the fundamental principle of Systems Engineering, the V-Model, and the Integrated Master Plan / Integrated Master Schedule. As well, "process is King" in our domain, so anyone wanting to skip the process can look for work elsewhere.
At the notional level this is how programs are run in the A&D and other Systems Engineering domains. This approach is rarely found in the enterprise IT world, except where enterprise IT is inside an A&D firm. When it says "detailed design," this is context and domain sensitive. On some embedded systems, SysML is used with UML to define all the interfaces between the components. In other domains, detailed design goes down to the board level and the timing diagrams of the firmware-controlled system that will be sent to an ASIC fab shop that converts software (Handel-C) to hardware.
The appropriateness of the level of design is always considered, since the programs are on contract for a delivery date, a target budget, and a specific set of performance parameters.
This concept is also missing from enterprise IT - the idea that the project is a "business," with a profit margin, an expected delivery date, and a minimal set of capabilities. I'd suggest that if projects were run that way, better results would appear. Things like doing the "appropriate amount" of documentation. Since documentation doesn't fly to orbit, we do what is needed to get the spacecraft to orbit and that's all. Documentation is needed, but documentation that doesn't support the capabilities - fly to LEO - is a waste. Why this is not well understood outside our domain I don't know.
Bad Project Management is my guess.
The notion that large programs have heavy, unwieldy processes, and that agile methods - both software development and project management - are the solution is many times just that "notional."
Let's start with the top level principle of large program acquisition in the Federal Government
Many if not most large government acquisitions have unwieldy processes, poor track records, and lots of blame to go around. But the first chapter of the Federal Acquisition Regulation (FAR) provides some guidance that can be applied to any program, project, or development effort.
Notice the emphasis on timeliness, best value for the customer, and maintaining trust. Add working together as a team, and being empowered to make decisions in one's area of responsibility.
Is this guidance followed? Rarely. Why? Because guidance alone is not sufficient for success. If it were, we wouldn't need lighter weight, agile processes to pull us out of the messes we've created on large federal programs.
Agile needs to heed this cautionary example - as agile moves into the enterprise, it will be just the latest attempt to lighten the load of processes and formality.
Taking these words on their own, and looking to agile development and project management methods, may provide the mission and vision for improving project performance in smaller domains.
The NDIA Information Systems Summit II held in Baltimore was interesting in many ways:
The prime outcome of the conference is:
How to make Agile work in DoD
The basis for all this is anchored in several sources. The best one is of course Dan Ward's F.I.S.T. work. Dan's FIST Handbook, The Simplicity Cycle, and Rogue Project Leader should be mandatory reading for any agile advocate working in the DoD world, along with the interview with Dan on POGO.
There will be much more on this topic in the coming months. But here's the core theme...
Agile software development can be used in formal and structured environments - like DoD - if the baseline conditions for the program (FAR/DFARS/OMB) are the starting point, the guidance from Dan follows, and then, and only then, the agile principles are deployed.
I'm going to conjecture this approach will result in the broadest deployment of agile development processes. The IT budget of $80,000,000,000.00 completely overwhelms every other IT project collection on the planet. This is where the maximum impact will take place.
There is a well-worn myth in the agile community that large complex projects are developed using waterfall methods, where all the requirements are defined up front (BDUF), rigid processes are used to execute the program, and the outcome is defined before the project starts. This of course is great fodder for replacing the "devil" waterfall process with agile processes.
For anyone interested in how a program actually works, here's a good book that can be found at the Air University Press book store: "The Development of the B-52 and Jet Propulsion: A Case Study in Organizational Innovation," Dr. Mark D. Mandeles, Air University Press, Maxwell Air Force Base, Alabama, March 1998. The Air University Press has many books on management, leadership, strategy, and history.
Quoted from the preface:
The B-52 and Jet Propulsion: A Case Study in Organizational Innovation is a coherent and nonpolemical discussion of the revolution in military affairs, a hot topic in the national security arena. Mark Mandeles examines an interesting topic, how can the military better understand, manage, and evaluate technological development programs. We see Murphy’s Law (anything that can go wrong, will go wrong) in operation. No matter how carefully the military designs, plans, and programs the process of technological development, inevitably, equipment, organizations, and people will challenge the desired expectations. Mandeles argues convincingly that recognizing the inevitability of error may be the single most important factor in the design of effective organizations and procedures to foster and enhance innovative technology and concepts.
The book focuses on the introduction of jet propulsion into the B-52. This case study illustrates the reality that surprises and failures are endemic to development programs where information and knowledge are indeterminate, ambiguous, and imperfect. Mandeles’ choice of the B-52 to illustrate this process is both intriguing and apt. The military had no coherent search process inevitably leading to the choice of a particular technology; nor was decision making concerning the B-52 development program coherent or orderly. Different mixtures of participants, problems, and solutions came together at various times to make decisions about funding or to review the status of performance projections and requirements.
Sounds like an agile project to me.
There was a suggestion that Agile should drop the name Agile.
What? Just when we're starting to get that name syndicated in our large, cumbersome domain? No way.
When someone asks what's in the name, I've started using two pictures.
A traditional approach
and to focus the idea of agile even more, here is a simple phrase.
What do we mean when we say agile? Being able to turn inside the loop of unfolding events.
The person making that statement wasn't some self-styled agile thought leader, or the owner of a leading Scrum training organization, or even one of the original apostles of the Agile Manifesto.
It is Dr. Ashton Carter, Under Secretary of Defense for Acquisition, Technology and Logistics, in the Sep/Oct 2010 issue of the journal Defense AT&L.
It's not time to go changing anything around a name. It's time to move Agile Software Development into the "real" mainstream - the US Department of Defense. If you want to see software-intensive systems that MUST respond to emerging and complex environments, then look no further than software-intensive weapons and their support systems.
In our domain and context the anchor paradigm for agile is Col Boyd's OODA loop.
The components of agile are all here. It is iterative, adaptive, can be made to be incremental, and defines the "loop" processes. But most of all it is "field proven." People can be trained to follow this loop process. People can receive direct feedback when they are not following the loop. The loop focuses on the outcomes of work effort.
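For illustration only, here is the loop rendered as code against a toy backlog. Boyd's loop is a human decision process, so treat this as a schematic, not an implementation; the backlog items and the sort-based "orientation" are invented.

```python
# A purely illustrative rendering of the OODA loop in Python.
def observe(world):
    return list(world["backlog"])            # Observe: take in the unfolding events

def orient(observations):
    return sorted(observations)              # Orient: filter through priorities/experience

def decide(oriented):
    return oriented[0] if oriented else None # Decide: commit to one course of action

def act(world, choice):
    world["backlog"].remove(choice)          # Act: execute, changing the environment
    world["delivered"].append(choice)

world = {"backlog": ["auth", "billing", "reports"], "delivered": []}
while world["backlog"]:                      # each pass is one turn of the loop
    act(world, decide(orient(observe(world))))
print(world["delivered"])                    # outcomes flow one decision per cycle
```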
In Boyd's paradigm there are three critical success factors applicable to project and program management:
Boyd used this paradigm in warfare, focused on fighter aircraft air-to-air combat. But the concepts are directly applicable to the "combat" of getting the product out the door in the presence of an emerging environment of changing requirements, emerging technology, internal and external risk, and the politics of people when they are connected to money.
In the defense and space business the Control Account Manager (CAM) is as close to a Project Manager as you can find. The role of the CAM is basically:
These are deliverables management functions. They are not focused (primarily) on the technical aspects of the Control Account (the project inside a program).
Now this may be a distinction from the traditional IT or SW development project, where the PM is a technical expert and a participant on the development team. And it is certainly distinct from agile software development, where there is no difference between the "manager" of the team and the "members" of the team.
But there are important issues with the "less organized" project teams - teams where "managing the work" is many times indistinguishable from "doing the work."
For domains where this distinction is clear and bright, there is a set of questions that needs to be answered to assess the credibility of the project's ability to be successful. These questions come from the Defense Contract Management Agency to guide this assessment.
This starts with ...
Work/ Budget Authorization Process
Managerial Analysis Process
Grady Booch spoke in CrossTalk about the current state of DoD software. John Goodpasture has the clip; here's the magazine with the article. I certainly don't have the credentials Grady does to speak to the broader issues of architecture and similar topics.
But, as one working on portions of DoD (and NASA) software intensive programs, I found his approach a bit unsettling. Booch's basic message is DoD is behind the curve and needs to catch up.
What Booch does not mention - on the weapons side - is that DoD is primarily a "procurement" house. They buy things others build.
Booch mentions "open source," but not the Information Assurance (IA) and secure systems aspects of weapons. Vetting open source is likely less costly than writing it from scratch. But it is still a non-trivial process for an integrated weapons platform.
There is a mix of issues in the article. SOA, for one, is being used for large infrastructure programs - NETCENTS2 is one we work - in the Program Management Office. But embedded systems have special needs. Grady mentions languages and their need for improvement, but doesn't isolate the domain or context. Flight systems are another side of the DoD (and NASA) work we do. A critical understanding missing from the programming language discussion is the heritage of the code base. C, C++, and Handel-C are common sources of heritage applications. The avionics suite on Mars missions has decades of V&V. The code base is not going to be converted to the "next new thing." The flight software in vendors' "black boxes" is not going to be rewritten in the latest language. The current approach is "incremental upgrades." The site for the journal Booch writes in manages efforts like these.
It is not well understood that systems like GN&C (Guidance Navigation & Control), C&DH (Command & Data Handling), and GPS (Global Positioning System) all have decades of "proven" reliability and functionality in the code base. The source code for these systems is "trusted" in ways unimaginable in the commercial world.
There is no doubt - especially among those working the real problems - of the difficulties of developing weapon systems software, flight, launch, ground, on-orbit - all have great difficulties. But many times the source of those difficulties starts with requirements and capabilities definitions. These attributes (capabilities and requirements) are many times driven by politics as much as adversaries.
Grady doesn't speak to these issues. GAO does.
What troubles me most is the final answer in the article.
I think the major change is in education. I don’t mean to be critical, but in many ways the DoD’s expertise has, frankly, been outsourced to its contractors. It is not to say that is a horrible, terrible thing, but a lot of the things that happened in old warfighting systems came through intrinsic expertise inside the DoD. I would strongly encourage the increase of education of the DoD’s intrinsic forces with regards to decision engineering and software engineering—and draw back into the DoD more of that intellectual property. Ultimately, delivering for the warfighters is what the DoD is all about, and that requires an intensely educated staff to make that happen. How does one make that manifest? I think there is work to be done in acquisition policy, in processes for delivery in the use of things like DoDAF. I think the DoD itself can lead and should lead this, and it needs to make this change in the interspatial spaces of its training, in its service academies, and in its colleges as well.
DoD is an acquisition organization. It does write "some" software, but not much - at least not on the weapons side. DoD "acquires" weapons systems from the industrial base. A good starting point is SoftwareTech News. Read through some recent issues and articles about the "software acquisition" issues. Take a look at the site of the person tasked to manage these acquisitions - USD(AT&L) - and then some related web sites.
The Issue Is More Complex Than Languages, Modeling, and Architecture
The issue does start with Systems Engineering. But systems engineering starts with definitions of needed capabilities and the development of technical, operational, and managerial requirements.
Booch's article contributes to the discussion of building complex systems in the presence of emerging needs. But it's not about software development - at least in the beginning. It's about defining what DONE looks like in units of measure meaningful to the stakeholders. This is about Systems Engineering (which Booch states). But it's also about capabilities. This is where to start. There are processes in place to do this, but not always used.
One place to start this approach is the RAND Corporation's work on Capabilities Based Planning. IT infrastructure could take a lesson from this approach. The weapons side does in some cases; in others, "not so much." Future Combat Systems (one where we work on a piece), Joint Strike Fighter, NASA Orion, and other Systems of Systems (SoS) each have varying degrees of success in this domain.
The Problems Are Great, So Should the Solutions
Huge amounts of waste are in the DoD procurement process. There is no doubt about this. The SecDef speaks directly to the problem and proposes solutions. Please subscribe to the Defense Acquisition Portal to read almost daily news on these issues. At the same time, there are professional organizations focusing on the issues of procurement and development of software during those procurements. The Software Engineering Institute is one. Their approach is described in Understanding Common Acquisition Problems. Acquisition, like all complex problems, is fundamentally a risk management problem. In order to manage risk, we need measures of risk and, along with them, measures of performance. This paradigm is at the heart of Systems Engineering. The problems Grady speaks about are Systems Engineering problems from day one. Architecture, development, operational test and evaluation, deployment, and sustainment are systems problems.
If there is to be encouragement in education, let's start with encouraging Systems Engineering as a foundation. Without the system architecture being right, all the subsystems are going to be hampered in their ability to provide solutions to the mission capabilities.
The Other Side of Software Based Procurement
All DoD procurements greater than $20M must have Earned Value Management processes. For procurements greater than $50M, a validated Earned Value Management System is mandated. Reform in the Earned Value world is starting to take place. Inclusion of Technical Performance Measures (TPM) is one path for improvement. Paul Solomon provides insight into this issue.
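As a simplified sketch of what folding a TPM into the earned value calculation can look like (an illustration of the idea, not Solomon's published method; all numbers are invented):

```python
# Simplified illustration: discount earned value (BCWP) for a work
# package whose technical performance measure is off its plan.
budget_at_completion = 500_000.0   # BAC for the work package, dollars
percent_complete = 0.60            # schedule-based completion claim

tpm_planned = 55.0                 # e.g., sensor weight target, lb
tpm_actual = 62.0                  # current measured value, over target

# Compliance factor: 1.0 when on plan, shrinking as performance misses.
compliance = min(1.0, tpm_planned / tpm_actual)

bcwp_claimed = budget_at_completion * percent_complete
bcwp_adjusted = bcwp_claimed * compliance
print(f"Claimed EV: ${bcwp_claimed:,.0f}  TPM-adjusted EV: ${bcwp_adjusted:,.0f}")
```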