Using Principles, Practices, and Processes to Increase the Probability of Success
I'm working on two programs where Agile at Scale is the development paradigm. When we start an engagement using other people's money - in this case the money of a sovereign - we make sure everyone is on the same page. When Agile at Scale is applied, it is usually on programs that have tripped the FAR 34.2/DFARS 234.2 thresholds for Earned Value Management. This means $20M programs are self-assessed and $100M and above are validated by the DCMA (Defense Contract Management Agency).
While these programs are applying Agile, usually Scrum, they are also subject to EIA-748-C compliance and a list of DIDs (Data Item Descriptions) and other procurement, development, testing, and operational guidelines. This means there are multiple constraints on how progress to plan is reported to the customer - the sovereign.
These programs are not 5 guys at the same table as their customer, exploring what will be needed for mission success when they're done. These programs are not everyone's cup of tea, but agile is a powerful tool in the right hands on Software Intensive System of Systems for Mission Critical programs. Programs that MUST deliver the needed Capabilities, at the Needed Time, for the Planned Cost, within the planned Margins for cost, schedule, and technical performance.
One place to start to improve the probability that we're all on the same page is this reading list. This is not an exhaustive list, and it is ever growing. But it's a start. It's hoped this list is the basis of a shared understanding that while Agile is a near universal principle, there are practices that must be tailored to specific domains. And one's experience in one domain may or may not be applicable to other domains.
Like it says in the Scrum Guide.
Scrum (n): A framework within which people can address complex adaptive problems, while productively and creatively delivering products of the highest possible value.
And since Scrum is an agile software development framework, Scrum is a framework, not a methodology. Scrum of Scrums and Agile at Scale - especially Agile at Scale inside EIA-748-C programs - have much different needs than 5 people sitting at the same table with their customer with an emerging set of requirements, where the needed capabilities are vague until they appear.
One of the classes every aspiring grad student has to take is research methods. This class teaches the PhD hopefuls (I didn't make the cut and got a consolation prize of an MS) all about doing research and preparing to be a real scientist. A topic in this class is the literature search. This makes sure your clever idea for a research topic - in case your advisor hasn't gotten around to actually talking to you - hasn't already been taken, researched, and solved. This is one problem in the physics world - you need an original idea. Replicating old ideas doesn't get you very far.
Here's a start of a literature search on merging Agile at Scale with Earned Value Management. I haven't gotten to the European and Far East journals yet. Instead of a list, I'll just type this once and repurpose the resources here. This PDF is the Resources section of a briefing being used with our clients who are integrating Agile into EVM programs. Go to the LinkedIn Slideshare site - the LI logo in the lower right - to open the PDF and follow the links.
When we read about a big IT problem, like...
The first impulse is to use this information to support one position or another - "here's an example of estimate-driven bad management" - without the logical conclusion of finding the actual Root Cause of the problem, as shown in the IDA report. Other examples usually start with bogus Standish data. Take a look at page iv below to see the real root cause, and ask whether Not Estimating would have been able to address the issues with ECSS. Not likely.
This document doesn't seem to load every time; refresh the page if it doesn't.
So we're back to the same place we always seem to come to. Domain, and knowledge of the domain, must come before conjecturing any solution to any problem - especially a conjectured solution from outside the domain of experience.
For those not able to read the details, here's a summary from the final section.
The notion held by some that "estimates" are the root cause of the problems, and that making decisions in the absence of estimates is the solution, is based on uninformed opinion.
So download the report, read it in its entirety, then assess other opinions in the light of actually reading the Root Cause Analysis, drawing conclusions from the actual data and its analysis by the professionals tasked by the sponsors of the RCA - those accountable for looking after the money and the measurable outcomes.
And then the quote that says it all...
"Everyone is entitled to his own opinion, but not his own facts." - Daniel Patrick Moynihan
Orion launched today and was recovered after two orbits. Tests of the launch system, Pad Abort system, and Heat Shield were the main purposes of the flight.
I worked the proposal - after coming off the winning proposal for the Hubble Robotic Service Mission. The Crew Exploration Vehicle was the original name of the flight vehicle. The Integrated Master Plan and Integrated Master Schedule described the increasing maturity of the deliverables for the spacecraft and its flight support systems. After the contract win, I moved to the flight avionics firm and defined the IMP/IMS and project performance management processes for that major subcontractor. When you get to minute 21:17, the Tracking Data Relay Satellite is mentioned. I worked that project as a new graduate student many decades ago.
Starting back on TDRSS, agile - meaning emerging requirements, test driven development, direct customer feedback on short iterations - and the development process were deployed with rolling waves, 44 day rule Work Packages, and emergent technical requirements derived from Mission Capabilities.
Here's the long version of the launch to orbit.
After two orbits, Orion came home. The double boom is the sonic boom. Tests of the heat shield will confirm if it functioned properly.
Recently a statement was made about agile and complexity: it was conjectured that if the project is too complex for a physical board - a place to put the stickies for the stories - then we've missed opportunities to simplify. Possibly not realizing that complexity, as well as complex systems, are the norm in many domains, and that complexity management processes using tools - rather than manual means - are also the norm.
If your Agile planning needs are too complex for a physical board, you've probably missed opportunities to simplify / improve.
When I suggested that agile and agile tools are used to deal with complex problems in these environments, without the need to reduce that complexity, there was a conversation of sorts that suggested...
I'd be surprised to hear Orion was using a COTS Agile project management tool in a significant way
Some Necessary Complexity
On the Hubble mission, there is a Service Mission Assurance Process that reveals some of the complexity of the System of Systems found in space flight. The Interface Control processes, for example, for the payload on STS-125.
External knowledge of what tools were used, what processes were applied, and how the flight avionics software for Orion was converted from the 777 suite to the spacecraft suite, tested, altered to user needs, simulated, emulated, verified, and validated on rolling waves, on 44 day iteration cycles, could only have been obtained if you were actually in the building in the vendor's shop.
But there are other surprises in the business of space flight. A few good places to start include:
Beyond the outsider's comments of surprise, inside space and defense firms agile tools from Rally, VersionOne, and JIRA are used in a wide variety of domains, from embedded systems to intelligence systems, where the requirements don't come from the users - they come from the enemy. Here's an example of agile in the INTEL business.
Maybe those surprised by the many different applications of the principles of agile - developed long before the Agile Manifesto - missed those processes in Building O6, Sepulveda Blvd, Redondo Beach, circa 1978.
In The End
There are numerous approaches to applying agile development in a wide variety of domains. I work in a domain where Systems Engineering and Earned Value Management are the starting point and Agile is used to develop code guided by EIA-748-C and DID 81861.
In these environments, development of software is incremental and iterative, with emerging requirements and stable capabilities. These programs are complex, and tools are the basis of success for managing all the moving parts of the program. Rarely is everyone in the same room, since these are System of Systems programs. As well, Integration and Test are done by external sources - V&V for flight safety. So many of the processes found in small commercial projects are not applicable to programs in our domain.
To suggest there is but one way to reduce complexity - putting all the stories on cards on the wall - is a bit naive in the absence of establishing the external needs of the project first, then deciding what processes to apply.
Some background on applying agile in the DOD can be found at:
Domain first, Context second, Only then Process
With all the speculation on what went wrong with the ACA site, and all the agile pundits making statements about how agile could have saved it, here are some actual facts beyond all the opinions - facts that Daniel Patrick Moynihan would remind us of...
Every man is entitled to his own opinion, but not his own facts
The Key Findings are
So when we hear
Think about in what domain, with what value at risk, with what complexity of project, and with what business process these could possibly be applicable. In fact this goes back to the core of the agile manifesto. And when we hear "pure agile," "Scrum Masters produce Scrum Slaves," "Mob Programming," "we all want a seat at the table with equal voices," and many other "opinions," remember Moynihan and ask for facts, domain, past performance, experience, and examples of success.
As agile starts to scale to larger domains and the government seeks better ways to develop software beyond the failed processes described above - what parts of this manifesto are applicable outside of a small group of people in the same room with the customer directing the work of the developers?
As my colleague (a former NASA Cost Director) reminds our team, if you see something that doesn't make sense - follow the money. In the case of ACA, and in the case of the Workshop outcomes above.
The discussion (of sorts) on Twitter around "no estimates" - whatever that actually means, since there is no definitive description other than exploring - brings me back to my core program management, project management, writing software for money, designing algorithms for identifying moving targets in radar systems, and other software engineering experiences.
Let's start with a fundamental principle of all product or service development, either internal or external. While leading a couple of handfuls of project managers at a large Department of Energy environmental cleanup site, where software development was a critical success factor - and by the way, we introduced eXtreme Programming into an ANSI-validated Earned Value Management System - our external consulting firm gave us some good advice. We were bidding our technology and services at another DOE site with similar cleanup problems. We were working on strategies, balanced scorecards, systems architectures, and the like.
That's all nice boys and girls, but here's some fundamental advice - our customer has money and we want it. What's it gonna cost and when will you be done providing the capabilities to close this site?
That's it, that's the winning strategy. The customer has a need, we want to provide a solution to it. How much will it cost and when will we have it? If we can answer those three questions - cost, schedule, delivered capabilities, with attached unassailable beneficial outcomes - we will win. This is a business strategy. All the Balanced Scorecard presentations, examples of past performance, and deep references of success are for naught if the customer can't afford our solution. It comes down to this - and this is where I learned it, from the Managing Partner of the Big 6 (at that time) consulting firm.
You can't know the value of something until you know its cost
That's a fundamental principle of all business transactions. Value is always exchanged for cost. We do this when we buy a Venti Nonfat Latte at Starbucks. We do this when we pay the lawn care company to mow and trim weekly. We do this when we buy anything, including software or the services that produce the software.
This is an immutable principle of commerce
So when we hear there are alternative ways of writing software for money that don't involve estimating the cost of doing that work, think again. How did you get around the immutable principle of commerce? Now notice I used the word estimate in the same post as know. Yes, estimating allows us to know something to some level of confidence.
I'll estimate that my 1 hour drive to work every day will be extended to 1 hour 20 minutes when the snow storm arrives tomorrow night.
I know I'd better have margin in my drive schedule, if I want to attend the 8:30 stand up.
I estimate that it will take 3 days to install and verify the database for the system, given the historical data from the last 3 times we did this.
This knowledge can then be used to plan the access to the server room, arrange for all the verification and validation data we'll need to certify the contents are ready for use by the customer.
We estimate to a degree of confidence, things (time, cost, performance) we'll need to know about to do our job.
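The database-install estimate above can be sketched as a simple reference-class calculation. This is a minimal illustration; the historical durations are hypothetical stand-ins for "the last 3 times we did this":

```python
import statistics

# Hypothetical durations (days) from the last three database installs
history = [2.5, 3.0, 3.5]

mean = statistics.mean(history)   # point estimate from the reference class
sd = statistics.stdev(history)    # sample standard deviation

# A crude planning margin: mean + 2 sigma covers roughly 95% of outcomes
# if the durations are approximately normal
upper = mean + 2 * sd

print(f"Estimate: {mean:.1f} days; plan with margin to {upper:.1f} days")
```

Even this small sketch makes the point: the estimate is a probability statement derived from past performance, not a guess pulled from the air.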
So How Can We Learn To Estimate?
Here's where we start. We start with what has taken place in the past. "We've never done this before," you say. I'd suggest - working literally in the rocket science world - there is very little in the commercial software world that hasn't been done by someone, somewhere, in the past. You may not know these people, but it's been done. And more importantly, it's the people issues that muck up the project most of the time, not the technical ones - unless of course it actually is rocket science, or stealth fighter science, or bioscience.
So start with the second-best basis of estimate approach - What is this like? We've done similar things in the past; how is this problem like those solutions? Next comes the 10 questions approach. The Planning Game. Then a parametric approach. Function Points, COCOMO, SEER, Price-S, SLIM, CoStar, and a long list of other basis of estimate tools, some free, can guide you. So when you hear "software can't be estimated," change the phrase to "I don't know how to estimate software projects, but I can sure look into learning how."
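As one illustration of the parametric approach, here is Basic COCOMO's organic-mode model, using its published constants (a=2.4, b=1.05, c=2.5, d=0.38). The 32 KLOC project size is a hypothetical input, not from any real program:

```python
def cocomo_basic(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Basic COCOMO, organic mode: effort in person-months and
    schedule in calendar months from estimated size in KLOC."""
    effort = a * kloc ** b          # person-months
    schedule = c * effort ** d      # calendar months
    return effort, schedule

effort, months = cocomo_basic(32)   # a hypothetical 32 KLOC project
print(f"About {effort:.0f} person-months over {months:.0f} months")
```

Parametric models like this are only as good as their calibration to your own domain, which is the whole point - they force a conversation about size, productivity, and history before anyone commits to a number.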
Finally the least desirable way to estimate is to ask the expert. This only works if the expert has been calibrated with a reference class, has her optimism bias in check, knows all about anchoring and adjustment, and has a track record for producing credible estimates. If not, you're going to be disappointed in the result.
But our management uses our estimates against us. Our management doesn't understand the notion of probability and statistics. Our management behaves like Dilbert's boss. This has nothing to do with the need to estimate the cost, schedule, and technical performance of the product and service needed by your customer. It has everything to do with managing up. And if that's not possible, produce a credible estimate with those risks baked in to protect yourself. And if that's not possible, start looking for a better manager, or even a better job, because your company is going to be in the ditch before long.
So when we hear that estimating is the smell of dysfunction - without ever listing one single dysfunction - remember there are lots of dysfunctions in business. This is normal, because humans are involved. But that dysfunction is not caused by the need to estimate. The need to estimate is a core business process. Doing bad estimates, doing estimates for the wrong reasons, doing estimates wrong - that's a dysfunction that is universal.
In the end you need to either nut up or shut up as Woody says. Yes, that Woody. Learn to estimate for all the right reasons, then when there is an opportunity to have an enlightened manager at your current firm or a new firm, you'll be prepared to contribute value to the business process in ways that benefit the top line.
That top line, minus the costs to produce the goods or services, is the bottom line (in its simplest form) - and the bottom line is what writing software for money is all about. Knowing the middle line - cost of goods sold (COGS) - is critical to actually staying in business.
There is a post that references a concept I've come to use that puts uncertainty into three classes. That post is not exactly what I said, so let me clarify it a bit.
First some background. I work on an engagement that provides advice to an office inside the Office of the Secretary of Defense (OSD). This office is responsible for determining the Root Cause of program performance for ACAT1 (Acquisition Category 1) programs.
These are large programs - larger than $5B. In most domains outside the ACAT1s this number is ridiculously large. But inside the circle of large defense programs, $5B is really not that much money. For the Joint Strike Fighter, Congressional Quarterly and the Government Accountability Office indicated a "Total estimated program cost now $400B," nearly twice the initial cost. DDG-1000 is $21,214 Million - yes, that's $21,214,000,000.
No IT or software development project would come within a millionth of that. If you're interested, there are reports at RAND and IDA on the current issues. There are certainly multi-million dollar IT projects. The ACA web site is probably going to be in the range of $85M to several hundred million. The facts are still coming in. So anyone who says they know, and doesn't work directly on the program, probably doesn't know and is making up numbers. GAO will get to the real numbers soon, we hope.
Principles Rule, Practices Follow, Everything Else is BS
The principles of cost and schedule estimating, and assessment of the related technical and programmatic gaps, are the same in all domains at every scale - from small to billions. Why? Because it's the same problem no matter the scale.
The soliloquy in the movie makes a good point - handling the truth is actually very difficult for almost everyone outside the domain, in many instances.
We want the simple answer. We want it all to be fine. We really don't want to do the heavy lifting needed to come up with an answer. Many times we don't want an answer at all; we want to just do our job and ignore the fiduciary responsibility to tell others what the cost and schedule impacts are, or even to do our job of discovering what DONE looks like before we start spending other people's money.
So here's the way out of the trap of at least (1) and (2)
But the words used in the original post that referenced my post are not my intent, nor are they part of any process I work in.
Here's a list of other posts on this topic. It's a critically important topic, one that deserves detailed analysis - one that we're obligated to know and use when it's not our money we're spending. It's called Governance.
Here's some more discussion on Estimating for fun and profit.
There are endless discussions about what went wrong with the Affordable Care Act web site development and deployment. It'll be hard to tell at this early point in the project assessment. But what is clear is this was most likely a failure of project management.
Below is the acquisition life cycle for Business Systems in the DOD. Not that HHS is a DOD-style shop, but the paradigm of iterative and incremental development is in place. The release cycles shown here are way too long for something like the ACA site, but the topology of the process is sound.
Looking at this process, there is an obvious starting point: the Business Capability Definition. What is the resulting system supposed to do in terms of capabilities? Not the technical and operational requirements, but what business capabilities will the system provide to the stakeholder when it is in full operation? This is called Initial Operating Capability (IOC).
In our domain we start defining the capabilities using the Defense Acquisition Guide. Here is where Measures of Effectiveness (MoE) are defined. The Measure of Effectiveness is assigned to a capability. If we want a capability, how effective does it have to be? This measure is not a technical performance or a requirement. It is an effectiveness measure.
A MoE for a UAV program we work on would be: The UAV shall be transportable within a 3,000 mile radius via a C-17/C-141/C-5 package. From the MoE there is a Measure of Performance (MoP). For example, weight is a MoP that enables the MoE to be fulfilled. Lower down are Technical Performance Measures (TPM). For example, the weight of an Electro Optical / Infrared sensor platform must be under 55 pounds for the UAV to operate properly. It can't be too light, or it would disrupt the center of gravity, and it can't be too heavy, because the UAV would burn too much fuel to accomplish its mission.
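A toy sketch of how a TPM like the sensor-platform weight gets tracked against its limits. The 55 pound upper limit comes from the example above; the 45 pound lower bound is a hypothetical stand-in for the center-of-gravity constraint:

```python
# Hypothetical TPM limits: (lower, upper) bounds in pounds.
# Upper limit (55 lb) is from the UAV example; the lower bound is assumed.
TPM_LIMITS = {"eo_ir_sensor_weight_lb": (45.0, 55.0)}

def tpm_status(name, measured):
    """GREEN if the measured value is inside its TPM band, else RED."""
    lower, upper = TPM_LIMITS[name]
    return "GREEN" if lower <= measured <= upper else "RED"

print(tpm_status("eo_ir_sensor_weight_lb", 52.0))  # inside the band
print(tpm_status("eo_ir_sensor_weight_lb", 58.5))  # breaches the upper limit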
So for the ACA site, we'd need to know if there were MoE's, MoP's, TPM's defined that enable the Capabilities to be delivered. Here's the Performance Reference Model for federal IT.
Since the ACA site is pretty much all software, I'm going to suggest that this approach of using Capabilities Based Planning, MoE's, MoP's, and TPM's has nothing to do with how the software is built. Either traditional or agile methods can be used. Agile is likely faster, but agile can only work in a domain like this if you know what DONE looks like in terms of MoE's, MoP's, and TPM's. This is a fixed launch date, a fixed set of requirements guided by all the insurance regulations, and hopefully some not-to-exceed budget.
It is a common myth that government acquisition is waterfall and big design up front. DoD 5000.02 prescribes an iterative process designed to assess the viability of technologies while simultaneously refining user requirements. (pg 16 of 5000.02).
One starting question of the ACA Site would be - did they apply the iterative acquisition process in some form, no matter the fidelity of the iterations?
Here are some other fundamental questions as well:
If the answer to any of these is no or we don't know, go find out, get project managers who can do this. Otherwise the probability of project success is reduced. In fact look at the Probability of Program Success literature for further guidance.
The final question is: did they have an Integrated Master Plan and Integrated Master Schedule for all the work, as described in the Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide? This paradigm has been shown to significantly increase the probability of success no matter the domain, context, development method, technology, or business process. It states in clear, concise, and unequivocal terms what DONE looks like at every point in the project, in units of measure meaningful to the decision makers.
The final - and killer - question is: did the project team ruthlessly manage the changes to the capabilities? This, I suspect, is the root cause of the failure. Late changes to complex projects are the kiss of death.
As repeated often here...
Don't do stupid things on purpose
So Now What?
We have to wait to see what the Root Cause Analysis (RCA) shows for the failure of the project. But I'd conjecture the program management processes found in large DoD or NASA programs were not applied in any meaningful way. The site is not large compared to most of the programs we work ($400M is small), but the processes used to manage those programs can be scaled down with ease. The Principles are the same. The Practices are scalable, and the Processes are scalable as well.
This is from an article about the application of Bayesian Statistics to a civil suit in the UK over the source of a building fire.
The idea that you can assign probabilities to events that have already occurred, but where we are ignorant of the result, forms the basis for the Bayesian view of probability. Put very broadly, the 'classical' view of probability is in terms of genuine unpredictability about future events, popularly known as 'chance' or 'aleatory uncertainty'.
The Bayesian interpretation allows probability also to be used to express our uncertainty due to our ignorance, known as 'epistemic uncertainty', and popularly expressed as betting odds. Of course there are all gradations, from pure chance (think radioactive decay) to processes assumed to be pure chance (lottery draws), to future events whose odds depend on a mixture of genuine unpredictability and ignorance of the facts (whether Oscar Pistorius will be convicted of murder), to pure epistemic uncertainty (whether Oscar Pistorius knowingly shot his girlfriend).
When we build probabilistic models of project performance - cost, schedule, and technical - we assume we understand the underlying statistical processes that drive these probabilistic generating functions. These are the aleatory uncertainties in duration, cost, and performance. We define the Probability Density Function in the Monte Carlo Simulator. Then we apply that to the network of work activities (the Integrated Master Schedule) to produce confidence outcomes for completing on or before a planned date and at or below a planned cost. This is all fine and dandy. But we really don't know the underlying drivers that create coupling, correlation, and cross correlations between the work activities, cost, and technical performance. These can be modeled by discovering the drivers in the network.
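The Monte Carlo approach just described can be sketched in a few lines. This is a toy two-activity serial network with assumed triangular duration distributions, standing in for the full Integrated Master Schedule:

```python
import random

# Toy serial network: two activities with hypothetical triangular
# duration PDFs given as (min, most-likely, max) in days.
activities = [(8, 10, 15), (4, 5, 9)]

random.seed(1)          # fixed seed so the run is repeatable
trials = 20000
finishes = []
for _ in range(trials):
    # Serial network: total duration is the sum of sampled durations
    total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities)
    finishes.append(total)

finishes.sort()
p80 = finishes[int(0.8 * trials)]   # 80th percentile completion duration
print(f"80% confidence of finishing on or before {p80:.1f} days")
```

A real simulator runs this over the whole IMS network topology, which is exactly where the unmodeled coupling and correlation between activities bites.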
For the Epistemic uncertainties we need another modeling tool. The current tools don't actually use Bayesian statistics; rather, they use Monte Carlo Simulation and treat the Probability of an Event as an aleatory process integrated with the other PDFs, their ranges, and their shapes (Kurtosis and Skew).
We're missing the tools needed to construct a credible epistemic model of how the program works - a model using the Integrated Master Schedule (IMS) as the topology of the work, the probabilistic behaviours of the work elements at each node (cost, schedule, and technical performance compliance of the products), and the coupling and cohesion of the nodes. With this information - assuming it is credible, which is a HUGE assumption - we could model the behaviour of the program and ask what if questions.
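The Bayesian treatment of an epistemic uncertainty is itself simple; what's missing is tooling that applies it across the IMS. A minimal sketch of a single update, where the prior and likelihoods are hypothetical illustrations rather than values from any program:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: revised belief in an event after seeing evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Hypothetical: prior belief that a critical interface is ready is 0.6.
# An integration test passes; such tests pass 90% of the time when the
# interface is ready, and 30% of the time when it is not.
posterior = bayes_update(0.6, 0.9, 0.3)
print(f"Posterior probability the interface is ready: {posterior:.2f}")
```

Chaining updates like this across every node of the schedule network, with credible likelihoods, is the part no current tool does well.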
There is a class of responders on the forums that continues to seek simple - and simple-minded - solutions to complex problems, claiming the problems can be solved with simple solutions. Of course those solutions cannot be demonstrated with any credibility outside of a limited domain. Let's start with Mencken:
For every complex problem there is an answer that is clear, simple, and wrong. - H. L. Mencken
In the project domain most of the problems start with poor estimates of cost and schedule. Hollmann starts it off http://lnkd.in/vjf4X4. Bent Flyvbjerg takes a more radical view, calling estimators liars, but I chalk that up to a language barrier.
The core problem starts with the politics of estimating. Rarely do the buyers of the project know what it should cost. So when they hear a number that is larger than their expectations, they reject the number. They signal to the sellers what they expect to pay for something. This expectation can come from poor knowledge or simply a naive understanding of the problem, or worse a public promise that simply can't be kept.
But a second problem exists: across domains, there is the wrongheaded belief that one domain has the solution to another domain's problems. We need to break out of the loop of thinking that problems are only found in other domains, and of making the ill-informed suggestion that "if you just did it my way," there would be no problem. There is a fundamental issue with managing complexity and managing in the presence of uncertainty. We want it to be simple. It is not, and since it is hard, people don't want to actually work hard on the solution - instead seeking simple, and many times "simple minded," solutions: buy this tool, use my Excel spreadsheet, dumb down the processes.
So What Is The Solution
First is to recognize that ALL domains have similar problems. Cost and schedule estimates have been discussed for many decades. Lots of literature leads us back to the singular source - the political will to search for the credible estimate, even if that means not starting the project. Or worse yet, to start the project knowing the estimates are flawed.
One provocateur voice in our community suggests that PMs who take on projects with poor estimates should be held criminally liable for the results. This of course is laughable, but it demonstrates a complete misunderstanding of the problem and the solution.
This approach reminds me of Pauli's quote when presented with a nonsensical submission from a student.
They were worse than wrong because they could not be proven wrong. Famously, he once said of such work: "Das ist nicht nur nicht richtig, es ist nicht einmal falsch!" - "Not only is it not right, it's not even wrong!"
This is where we are now. Not much in the form of research and questioning; lots of pontificating about how the other guy is all messed up, and if you'd only do it my way, you'd have your solution. Reminds me of the current US political problems. Do it my way. Of course that doesn't work, because we have complex - wicked - problems, and there is no single way. Mencken was right, and he was right long before any of the current problems.
We may be doomed to repeat history without first recognizing the problem is us.
I was at a conference for government contractors and the software systems we use to manage those programs. In the federal government agile is moving forward. Greater Accountability and Faster Delivery Through Modular Contracting.
During one of the sessions there was a quote that says it all about Agile Development.
AGILE MEANS YOU'RE BEING PAID FOR DISCOVERING THE SCOPE OF THE PROJECT
This says it all. Agile is a powerful approach to software development in the commercial world. Agile in the government domain, encounters the Federal Acquisition Regulations (FAR) and the Defense Federal Acquisition Regulation Supplement (DFARS).
In these environments, Earned Value Management, the Six Business System Rules, and other procurement guidance define how work is performed. This is all being worked out, but the quote above did it for me!!
In the past, NAVAIR had a CD with all the SETR Program Event Checklists. Each had an Excel spreadsheet that you filled out to determine if you were ready for the Event, and a supporting document that describes all the details of the Event, both entry and exit criteria. The picture above has the link to the current spreadsheets.
This approach to the Integrated Master Plan spelled out explicitly what "done" looked like, and there was no way to skip around the checklist and call the work "done" without passing all the items with a GREEN. I've requested the latest from NAVAIR, but it looks like it has been expanded to the DOD level now.
This paradigm is at the heart of describing what "done" looks like in units of measure for both the buyer and the seller. Measures of Effectiveness for the buyer and Measures of Performance for the seller.
Jeff Sutherland has a post about agile in the Department of Defense. Good stuff, but there are some issues with "going agile" in the Federal IT space. The first is that contracts over $20M require Earned Value Management, per FAR/DFARS flow downs and the OMB Part 7 guides for "FedCiv."
The second is that the contract vehicles need to be adapted to how agile does things with requirements, changes, deliverables, progress payments, and fee calculations. Here's the "tip of the iceberg" for integrating Scrum with Federal Acquisition. While Scrum is a wonderful concept, applicable to a large majority of projects, it's not quite as simple as it might seem.
Here is a recent briefing on the topic of agile in the federal procurement world, which is the focus of many organizations in and out of the government. Management Concepts is one source of training for agile in the Fed space; there are others. But in the end this is an acquisition issue.
Changes are needed in the FAR/DFARS clause flow downs that "put on contract" the behaviors mandated by the regulations for how software is developed, acquired, and put into service. While the agile community has many power tools to improve the problem in the federal government, they need to come at this from an acquisition point of view for those benefits to be realized.
Here's some prior discussion
Today is Veterans Day. As a veteran I have a special experience working with and for veterans. It is more than a shared experience, no matter the service, rank, or experience. It is a shared understanding of service, sacrifice, duty, and honor.
Let us remember those who served today. Not because it is a national holiday, but because their service allows us to celebrate this holiday.
I returned 14 August 1970. Many did not, let us remember them most of all.
Management Concepts is hosting a webinar where John Driessnack and I will be speaking about Agile in the Federal environment. This is the first in a series of discussions about integrating Scrum and Scrum-like software development processes into projects and programs subject to the Federal Acquisition Regulations and other guidance like Office of Management and Budget (OMB) Part 7.
If you work in the Federal Government or for a contractor that works for the Federal Government and are wondering when the Fed's are going to figure out "agile," please join us to hear more.