I hear all the time that in agile we focus on value generation. Any viable business focuses on producing value through its products or services, so saying that is a tautology. There can be no viable business without some kind of value proposition, acceptable to enough customers, to produce the revenue needed to cover the cost of producing that value.
But without knowing the cost of that value, the cost to produce it, and without managing and controlling those costs during the value production process, there is no sustainable business.
And by the way, the date on which that value will appear matters as well, since the revenue or benefit stream needed to pay for the cost of that value has to start on the planned date to produce the planned business performance needed to stay in business. Nice products late, or nice products at too high a cost, is a going out of business strategy.
The next time you hear we focus on value first, call BS. Focusing on value in the absence of focusing on the cost of that value is a going out of business strategy (unless you're a sovereign).
In the bottom-line management of any viable business, you get a divide-by-zero error if you don't know the cost of the value produced.
Since both value and cost are random variables in any development business - and random variables in any production business as well - we need estimates for both cost and value at the same time. These estimates need confidence levels - cumulative probabilities of being at or below a cost and on or before a date - and an understanding of how the work processes drive these random variables as the project or service proceeds.
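As a hedged sketch of what an estimate with a confidence level looks like (the triangular distribution and every number below are illustrative assumptions, not data from any real project), a few lines of Monte Carlo turn a three-point cost estimate into an at or below cumulative probability:

```python
import random

random.seed(1)

# Illustrative assumption: cost modeled with a triangular distribution
# (optimistic $80K, most likely $120K, pessimistic $200K).
# Note random.triangular's argument order is (low, high, mode).
trials = [random.triangular(80, 200, 120) for _ in range(100_000)]

trials.sort()
# 80th percentile: the cost we are 80% confident of being at or below.
p80 = trials[int(0.80 * len(trials))]
budget = 150
confidence_at_budget = sum(c <= budget for c in trials) / len(trials)
print(f"P80 cost: ${p80:.0f}K")
print(f"Confidence of being at or below ${budget}K: {confidence_at_budget:.0%}")
```

With these assumed numbers the conversation changes from "it costs $120K" to "we are about 74% confident of coming in at or below $150K" - the form the funding decision actually needs.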
It can't be said any more clearly
Both Value and the Cost of that Value are needed for any hope of staying in Business
So don't let those asserting we focus on value get away with that weak statement. Producing products for money is driven by the business processes first. Those producing those products need to get paid.
And when we hear don't estimate the budget, then we're fixing one of the three variables of all projects, and the others are still free to vary in random ways. These ways need to be estimated as well; otherwise that fixed budget may or may not be enough to deliver the needed capabilities on the needed date to deliver that needed value.
I worked a program that had a simple goal - remove all the nuclear waste from a weapons production plant and send it to secure storage, clean up all the normal hazards and send them to their assigned disposal sites, and uninstall all the infrastructure (heating, ventilation, communications, and security) while 5,000 people were still working there - and not kill anyone along the way. The infrastructure removal program meant installing a complete wireless umbrella across several hundred square miles of property with full encrypted security while maintaining voice, data, first responder, physical, and cyber control.
There was an incident in which a fire started in an elevator shaft several stories below ground when foam filler was used without reading the directions on its application. The result was a safety stand-down for everyone, including all of us office workers, to get the message about health, safety, and safeguards of the materials and processes on site.
There was a banner campaign to get the message across. In our building a banner 15 feet high and probably 40 feet long hung in the lobby high-bay:
DON'T DO STUPID THINGS ON PURPOSE (DDSTOP)
I'm reminded of that when I hear suggested processes - in less threatening environments - like:
We attended a performance of Gustav Holst's The Planets at CU Boulder this week. It was a combined media show, with music from the CU Orchestra (undergrad and grad music students) and a visual presentation of planetary pictures from the NASA missions to all the planets, narrated by NASA astronaut Joseph Tanner, senior instructor in Aerospace Engineering Sciences at CU-Boulder.
What struck me was the following: every one of the orchestra members, the visual effects people, Mr. Tanner with his experience as a Navy pilot, flight instructor, and astronaut, and Maestro Lewis in his efforts to lead, mentor, and grow the students' skills and experience on their path to becoming professional musicians or entering other careers - all were guided by a sense of mission.
My experience observing space flight missions (I'm on the program planning and controls side of that work) is that if you want to see grown men cry, be in the control room when their spacecraft lands on Mars, enters orbit at Saturn, or crashes in the desert (some of the missions I've been around for). Why do grown men - and women - have tears for these events? Because they are watching their children perform their job. Just like real children.
Here's the real point. Those missions, those participants, those efforts are not about ME, they are about the mission. Sure there are egos involved. Talented people have egos. But you never, and I mean never, hear them talking about It's all about me, what I need, what I want to do.
These types of programs are focused on the external outcomes - mission success - rather than self-actualization of the work. The self-actualization happened long before arriving on the team. And that team that so many in the software development world talk about is not a team for themselves, it's a team for the mission - beyond the customer, the MISSION. Here's a mix of animation and real footage for a Mars lander. Everyone in the room is there for the mission.
There's a continuing discussion on LinkedIn and Twitter about project success, the waste of certain activities on projects, and of course the argument without end on estimating the cost of producing the value from projects. It's really an argument without evidence, since some of the protagonists in the estimating discussion have yet to come up with alternatives.
I came to understand Project Success is multidimensional a few years back after reading Reinventing Project Management, Aaron Shenhar and Dov Dvir, Harvard Business School Press. The other book that changed my view of the world was IT Governance: How Top Performers Manage IT Decision Rights for Superior Results, Peter Weill and Jeanne W. Ross, Harvard Business School Press.
This last book should put a stake in the heart of #NoEstimates, since the decision rights for those needing and asking for the cost and schedule for the business capabilities belongs to those with the money, not those spending the money.
A summary of the book can be found in the paper, "Project Success: A Multidimensional Strategic Concept," Aaron Shenhar, Dov Dvir, Ofer Levy, and Alan Maltz, Long Range Planning 34, (2001) pp 699-725.
In many cases there is not a "product" per se, but a service. These are wrapped in a larger context in today's enterprise paradigm as "capabilities" - providing the capabilities to accomplish a goal, mission, or business outcome. This is done through products and processes. Both are used by people, other processes, and other products to accomplish other goals, missions, or outcomes. This is the System of Systems view of the "project" paradigm.
Shenhar and Dvir's research along with Levy and Maltz in the paper showed there are 4 success dimensions.
With this paradigm, principles, practices, and processes become the basis of "project management" and the resulting product or service. But the measures of success are better described by the Shenhar and Dvir model, since they are the direct consequences of all the enablers of that success.
So Here's the Killer Question(s)
Our current work on forecasting Estimate At Completion, and on connecting the dots between Earned Value Management, Technical Performance Measures, and the sources of Measures of Performance and Measures of Effectiveness, is proceeding. Conference and journal papers are coming in May and June. Here are some books that have informed that effort.
Making Multiple-Objective Decisions, Mansooreh Mollaghasemi and Julia Pet-Edwards, is a good starting point when faced with deciding anything on a project.
The book is a handbook for decision making, with examples and step-by-step processes for multi-criteria decision making. Almost all decisions involve consideration of multiple objectives that often conflict: cost, technical capabilities, deadlines, safety, appearance, efficiency, etc. In order to decide, information is needed about the tradeoffs involved in the selection process. This of course is why estimating many of the parameters is mandatory for any credible decision making process, and the suggestion that we can make decisions without estimates is essentially nonsense. A companion book is Making Hard Decisions: An Introduction to Decision Analysis, 2nd Edition, Robert Clemen. This book is about value-focused thinking and decision making. So when we hear about value and spending other people's money, this is a good place to look.
Forecasting and Simulating Software Development Projects, Troy Magennis. This is a book about forecasting software cost and schedule for Kanban and Scrum projects. Starting with Scrum and Kanban, Troy shows how to estimate cost and schedule using a what-if paradigm and his Monte Carlo simulation tool.
Since all variables in all projects are random variables, Monte Carlo is one approach to simulating the outcomes. Method of Moments is another, but MCS is a straightforward approach.
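A minimal sketch of that Monte Carlo idea (my own illustration, not Troy's tool, and the throughput history is a made-up sample): resample observed weekly throughput to forecast how many weeks a fixed backlog will take, then read off the confidence levels.

```python
import random

random.seed(42)

# Hypothetical observed weekly story throughput (illustrative numbers).
history = [4, 6, 3, 5, 7, 4, 5]
backlog = 60  # stories remaining

def weeks_to_finish():
    """One simulated future: resample past throughput until the backlog is gone."""
    remaining, weeks = backlog, 0
    while remaining > 0:
        remaining -= random.choice(history)
        weeks += 1
    return weeks

runs = sorted(weeks_to_finish() for _ in range(10_000))
p50, p85 = runs[len(runs) // 2], runs[int(0.85 * len(runs))]
print(f"50% confidence: {p50} weeks, 85% confidence: {p85} weeks")
```

The answer is not a single date but a distribution of dates, from which the 50% and 85% confidence completion weeks can be quoted to the people paying for the work.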
Modeling is the basis of decision making as well. With a model we can ask questions about the future and generate confidence intervals on the answers. The George Box quote - nearly universally misused - all models are wrong, some are useful is in play here. A model is an approximation of a process - in this case writing software for money. All models are useful to the extent we understand the processes by which the model was developed and applied.
This is a core process of all estimating and replaces guessing with modeling.
We hear the term empirical all the time, but as Inigo Montoya says, You keep using that word. I don't think it means what you think it means. Empirical data is gathered from observation. But in the management of projects that data must be used to create error signals from the observed performance - the empirical data - when compared to the target data for the project's desired outcomes.
Failing to have a target to steer toward is called Open Loop Control and is a very good way to drive straight into the ditch. So once again, estimates of future desired performance, compared to past statistical performance (a few samples used to compute a mean with a two-standard-deviation bound on its variance is not credible, by the way), must be in place to forecast future performance.
Troy's book shows how to deal with all this.
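That parenthetical point about small samples can be made concrete. With only a handful of observations the confidence interval on the mean is wide; the sketch below (the velocity numbers are hypothetical) uses the t-multiplier for n=5, which is noticeably larger than a casual two standard deviations assumes:

```python
import math
import statistics

# Illustrative weekly velocity samples (hypothetical numbers).
samples = [21, 34, 18, 27, 25]

mean = statistics.mean(samples)
sd = statistics.stdev(samples)         # sample standard deviation
se = sd / math.sqrt(len(samples))      # standard error of the mean
# With n=5 and 4 degrees of freedom, the 95% t-multiplier is ~2.776,
# wider than the "2 standard deviations" rule of thumb.
t_95 = 2.776
lo, hi = mean - t_95 * se, mean + t_95 * se
print(f"mean {mean:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

Five samples give a mean of 25 with a 95% interval spanning roughly 17 to 33 - far too wide to steer a project by, which is the point about needing more than a few observations before claiming statistical control.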
Forecasting Methods and Applications, 3rd Edition, Spyros Makridakis, Steven Wheelwright, and Rob Hyndman. Hyndman's site has everything you need to start forecasting the future using your collected empirical data and the R programming language.
Forecasting has been around since the 1950s with George Box's methods - the same George Box people misquote about all models are wrong. Forecasting is all about decision making again.
The distinction between external, uncontrolled events and controllable events is many times not made. This creates not only confusion, it lays the groundwork for bad decision making. The much-quoted Taleb Black Swans are uncontrolled external events - externalities in the financial marketplace. Projects are rarely impacted by externalities if the proper risk management processes are in place. When they are not in place and the project is not managed, those Black Swans will appear more often. But this is simply bad project management - don't do that.
This book shows how to forecast the future given the past.
So What Can We Do With This Knowledge?
The first thing to do is realize that decision making is a probabilistic process based on the underlying statistics of the processes we are trying to make decisions about. Thinking that we can make decisions in the absence of some form of knowledge about cost, schedule, and technical outcomes is simply not possible. Saying so does not make it so.
Exploring how to make decisions in the absence of estimating - the kind of statistical estimating described in all these books - is unnecessary. These books are a start, but there is a nearly unlimited wealth of information on how to make informed decisions in the presence of uncertainty.
Re-posting Scott Adams's Dilbert cartoons of bad management practices is probably good for Scott Adams, but does ZERO to provide corrective actions for that bad management. We all know the problems; how about some solutions? It's actually trivial to point out the problem. And since it's trivial, it's also intellectually lazy.
Read these books, read other books, read papers, explore how other people have addressed the problems of increasing the probability of project success, put in the effort needed to make that increase possible on your project. Stop reading Dilbert and start fixing the problems.
The common practice of starting with requirements leads to the common complaint that requirements change, we don't know what we want yet, our users aren't very good at defining requirements so we'll let them emerge. While these are common, they are usually a symptom of a missing piece of information.
If we don't know what capabilities are needed and what Concept of Operations those capabilities will implement, the project has likely failed before it starts. If we do know the Capabilities and the Concept of Operations, we can then measure progress of our work effort - not in the passage of time, consumption of resources (including money), or the production of stories or story points (which are unit-less and therefore pretty much meaningless to those paying for our work) - but in Measures of Effectiveness, Measures of Performance, and Technical Performance Measures.
Concept of Operations
Let's start with a formal definition of the Concept of Operations:
What this tells us is that we need to start with what DONE looks like. DONE is not a set of features. DONE is not stories or story points. DONE is not modules, databases, bent metal. DONE is the ability, the capability, to do something of value in exchange for the money we've spent.
The assessment of a capability is its Measure of Effectiveness. These are operational measures of success that are closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions. We need to define these up front. The Measures of Effectiveness:
They are not emergent. They are descriptions of success. When we treat them as emergent, our project is chasing a moving target and is headed for the ditch.
Next are Measures of Performance. They characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions. The Measures of Performance are:
Next comes the Technical Performance Measures. These are attributes that determine how well a system or system element is satisfying or expected to satisfy a technical requirement or goal. The Technical Performance Measures:
Notice we have not mentioned coding, development methods like Scrum or XP, teams, pair programming, or anything to do with building code. Without these items in place, all those activities have no reason for being, other than to consume money and pass time. None of those activities has anything to do with moving the project toward DONE, other than spending money and passing time. Oh, you'll get a pile of stories implemented. Are they the right stories? How would you know? You'll perform lots of Test Driven Design. Is it the right design? How would you know?
Oh, your customer is going to prioritize those stories and features. How are they going to know, in the absence of knowing what DONE looks like?
Capabilities Based Planning
This has been presented before, but now it has a reason - the Concept of Operations.
We Know the Answer to That, Right?
Much of the discussion around making improvements in processes fails to address the governance aspects of a business or organization. Instead it focuses on the personal aspects: agile development by a software team, without the corporate impacts. The desire to stop doing something without an actual replacement, under the guise of we're exploring. Or the mention of the term intent of the commander without understanding that fulfilling that intent requires complete capabilities to act in the absence of direct supervision.
My project management maturity was changed forever at Rocky Flats, under the management of senior leaders with experience and skill formed in the US Navy. The book Making the Impossible Possible is the story of that project and the learnings that can be applied in any high risk, complex, high reward domain.
What do we plan to get done at the end of this period (day, week, month)? Write that down, establish some measures of performance and effectiveness for those outcomes. Measure them, take corrective actions if they don't match the expectations. Repeat until done.
When we hear about all the methods of managing projects, the PMI Body of Knowledge, PRINCE2, home grown and commercial solutions - always look at them in the light of these 5 Immutable Principles and the 5 Practices then implement the principles.
In a recent post titled #NoEstimates - Really? there was an interesting comment.
Clearly, the business value of any feature or project can not be known with much certainty in advance of it being implemented. Still, for the purpose of keeping the analysis simple for now, let’s table this issue for a bit.
This is not the case in a governance based organization or a Capabilities Based Planning organization, where the "valuation" of the resulting product, service, or purchased or built product is part of the planning process.
It's a "build to valuation"
Knowing - to some probabilistic level of confidence - what business value or mission fulfillment the project or product will produce is the core of any decision making process. Knowing the cost of that value is about making decisions, analysis of alternatives, or assessing the trade space.
With the "estimated" value and the "estimated" cost for that value, a decision can be made in what is called "analysis of alternatives" in our software intensive domain.
Only by having both estimates - value and cost of achieving that value - their most likely numbers, and the probabilistic ranges of those numbers (measured usually in dollars), can we make that "analysis of alternatives" or "trade space" assessment needed to govern both the business and the projects and products that enable the business to meet its goals.
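A rough sketch of how those two estimates feed an analysis of alternatives (the value and cost ranges below are invented for illustration): simulate net value for each alternative and compare the resulting distributions, rather than comparing single-point numbers.

```python
import random

random.seed(7)

def simulate(value_rng, cost_rng, n=50_000):
    """Distribution of net value (value - cost) for one alternative, sorted."""
    return sorted(value_rng() - cost_rng() for _ in range(n))

# Hypothetical alternatives with estimated value and cost ranges in $K.
# random.triangular's argument order is (low, high, mode).
alt_a = simulate(lambda: random.triangular(300, 600, 450),
                 lambda: random.triangular(200, 400, 250))
alt_b = simulate(lambda: random.triangular(250, 500, 400),
                 lambda: random.triangular(100, 250, 150))

median_a, median_b = alt_a[len(alt_a) // 2], alt_b[len(alt_b) // 2]
p_positive_b = sum(x > 0 for x in alt_b) / len(alt_b)
print(f"Alternative A median net ${median_a:.0f}K")
print(f"Alternative B median net ${median_b:.0f}K, P(net > 0) = {p_positive_b:.0%}")
```

With both value and cost expressed as ranges, the trade space question becomes which alternative has the better net-value distribution, not which has the prettier point estimate.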
So there are popular myths around the estimating of cost and value discussion, and a few that are just flat out bogus:
The right question for any suggested improvement, change, or suggestion that we stop doing something needs to produce an answer to ...
Does this suggestion increase the probability of project success?
This means tracing the suggestion to the outcome of the project being better, faster, cheaper, or some other tangible measure of improvement.
What does project success look like?
The delivery of agreed-upon capability within established resource constraints, e.g. funding, schedule, facilities.
Five factors are used to assess the success
So Now What?
When we hear about something new, anything new, how can we test that suggestion against business needs, mission needs, governance, or strategy? This starts with establishing the domain in which the suggestion might possibly work. Then proposing the framework in which it has worked or might work. Then a proposed way to assess the possible benefits of performing the suggested solution.
There is a continuing discussion in the agile community about delivering value in the order set by the customer. Along with this discussion is the use of the word DONE. A popular phrase is no requirement or piece of software can be considered DONE until it is put to use.
This is a software developer's point of view of course. But there is another view of software-based products. It starts with the Measures of Effectiveness for the resulting product. These Measures of Effectiveness are:
Operational measures of success that are closely related to the achievements of the mission or operational objectives evaluated in an operational environment, under specific conditions.
This measure of DONE is not directly related to code, testing, requirements or anything like that. It is related to how Effective the software is in delivering the capabilities needed to fulfill the mission or business case.
The individual requirements and pieces of code that implement them can be - or should be - traceable to these capabilities. For each Measure of Effectiveness, we then need a Measure of Performance. These measures characterize:
The functional or physical attributes relating to the system's operation, measured or estimated under specific conditions.
These are also not directly related to producing code, running tests, or other direct software activities.
All the software design, testing, integration, etc. supports the creation of the ability to produce these Measures of Performance and Effectiveness. For the end user, all the development work is behind the scenes. What the customer actually bought was the ability to do something useful. To put a capability of the software system to work accomplishing a business need. Make money with this capability of course.
So What Does All This Mean?
It means that if you start at the bottom - with the software development processes - you're likely not going to see what the real picture is. This picture is that the customer paid for capabilities, measured in units of effectiveness and performance.
When we start with methods, paradigms, even cockamamie ones like not estimating the cost of the work effort needed to produce the capabilities, we lose the connection to why we are here. We are here to produce software that provides a capability. Likely more than one capability.
So when we hear words like we can manage projects without knowing the cost, or we'll let the requirements emerge, or the customer doesn't really know what they want, so we'll get started so they can decide along the way - ask how you are going to recognize DONE before running out of time and money.
How Do We Discover the Needed Capabilities?
Once we've decided that capabilities are in fact the place to start, how are they gathered? Here's the top-level set of activities.
Once we have these, we can start to elicit the technical and operational requirements that will fulfill these capabilities.
These requirements can be emergent, they can evolve, they can be elicited incrementally and iteratively. But whatever way they appear, they need to have a home. They need a reason for being here. They need to enable a capability to be available to the user.
When I hear the phrase we're exploring, I'm reminded that in fact many who explore without a plan, without measures of their progress against that plan, without a risk management Plan-B for getting home when things go wrong, and without sufficient resources to survive the trip - come home empty handed, or many times don't come home at all. Exploring without these items is called wandering around in the wilderness looking for something to eat.
Here's a simple tale about an actual explorer, Ernest Shackleton, who experienced failure and near death on his first expedition toward the South Pole (under ADM Scott), which informed his attempt to reach the Pole a second time, only to experience failure again. In the first case preparation was weak, management inconsistent, and there was no actual strategy, no Plan-B. The second attempt, without Scott, was well planned, well provisioned, well staffed. When trouble started, Plan-B and then Plan-C were put in place and executed.
These 5 questions need credible answers in units of measure meaningful to the decision makers.
What Does All This Mean?
With these top-level questions, many approaches are available, no matter what the domain or technology. But in the end, if we don't have answers, the probability of success will be reduced.
Extensive research has shown that, given a current project is more than fifteen percent complete, the overrun at completion will not be less than the overrun incurred to date, and the percent overrun at completion will be greater than the percent overrun incurred to date. Assuming no change in scope or reduction in delivered capabilities, this overrun is locked in.
Without knowing the original Estimate At Completion (EAC), the funders of the project have no way of making decisions about the project's total cost, its incremental cost, or how to adjust scope and duration to meet the expected cost incurred in exchange for the expected value produced from this cost. Without this cost information, and the related schedule and technical performance information, the notion of decision making is nonsense.
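The standard Earned Value arithmetic behind that statement fits in a few lines (the dollar figures below are illustrative, not from any real program): the Cost Performance Index to date projects the Estimate At Completion.

```python
# Illustrative numbers for the standard EAC projection.
BAC  = 1_000_000  # Budget At Completion: total planned budget
BCWP =   200_000  # Budgeted Cost of Work Performed (earned value to date)
ACWP =   250_000  # Actual Cost of Work Performed to date

cpi = BCWP / ACWP        # Cost Performance Index; below 1.0 means overrunning
eac = BAC / cpi          # projected total cost at the current cost efficiency
overrun_to_date = ACWP - BCWP
projected_overrun = eac - BAC
print(f"CPI {cpi:.2f}, EAC ${eac:,.0f}")
print(f"Overrun to date ${overrun_to_date:,.0f}, projected ${projected_overrun:,.0f}")
```

With these assumed numbers, a $50K overrun to date at CPI 0.80 projects to a $250K overrun at completion - consistent with the research above that the overrun at completion will not be less than the overrun already incurred.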
We can't make a decision without knowing the cost and benefits of the resulting decision
The absence of an estimated cost, duration, and delivered capability prevents the business from knowing if they are making the right decision about the investment in the project. So if decision-making is what management does, not knowing this information prevents credible decisions from being made.
Not estimating these three data items – cost, schedule, and technical performance (delivered capabilities) – is simply bad business management. Whatever unfavorable outcomes follow – overruns, failed business capabilities, unhappy customers (the ACA rollout being the most recent example) – are well deserved.
The notion that we can make decisions in the absence of estimates of their cost and benefit appears to be unfounded conjecture, with no evidence of its validity.
Another definition of economics, closer to software development, is the study of how people make decisions in resource-limited situations. This definition matches the classical use of the term and is the basis of software economics, since software product development always takes place in the presence of limited resources: time, money, capabilities, even knowledge. And since software development is always the exchange of those resources for the production of value, looking at development from an economics point of view is a good start for any discussion around improving the process.
Two other definitions are needed before continuing. Macroeconomics is the study of how people make decisions in a resource-limited situation on a national or global scale. Microeconomics is the study of how people make decisions in a resource-limited situation on an individual or organization scale.
For software development, microeconomics deals more with the type of decision making needed for successful projects. And since much of the discussion these days is about making decisions on projects, let's see how the microeconomics paradigm may improve the communication.
There have been suggestions that the book above is old school and no longer applicable to the modern world of software development - e.g. Agile. Since the book is actually about engineering economics, not about software development methods, let's see what the book actually says for those who have not read it, heard Dr. Boehm speak, or, in my case, worked for the same firm where Dr. Boehm led our process improvement management effort.
This book was a working text, when I attended USC as a Master's student while working at TRW (Boehm's home) for an Engineering Economics course. The book is still in print and available in used form for low prices. So those wishing to comment on the book, without having first read all 765 pages, can now do so at a low cost.
The preface of the book starts with the usual qualifiers, but contains three important objectives
The major objective of the book is to provide a basis for a software engineering course, intended to be taken at the college senior or first year graduate level
So let's look at chapters to get a feel of the concepts of software engineering economics. My comments on the chapter are in italics.
So What's the Point of All This?
When we hear estimating can't be done for software, we actually know better. It is being done in every software domain. Tools, processes, books, papers, conferences, vendors, professional organizations will show you how.
When we hear this, we now know better.
"Software Engineering Economics," Barry W. Boehm, IEEE Transactions on Software Engineering, Volume SE-10, Number 1, January 1984, pages 4-21.
This is the crux of the post, the book, and the discussion around the conjecture that we can make decisions about how to spend other people's money without estimating that spend.
From the anarchy of gaming coders sitting in the basement of the incubator on 28th and Pearl Street here in Boulder to the full verification and validation of ballistic missile defense system software, 7 miles up the road.
When I hear about how software should be written, how teams should be organized, how budgeting, planning, testing, deployment, maintenance, transition to business, transition to production, sustainment, and the myriad of other activities around software development should be done - the first question is always: what's the domain you're speaking about?
Then - have you tested these ideas outside your personal experience? And finally, have you tested these ideas in another domain to see if they carry over? If you're just exploring ideas, no problem. But that limits the credibility of the idea to being just an idea with no actionable outcome, other than a conversation. Those paying for the software you are writing for money usually don't like paying for you to explore using their cash - unless that effort is actually in the plan.
There are of course fundamental - immutable actually - principles for any project based endeavor. These are the Five Immutable Principles of all project success, shown over and again in the root cause analysis of failed projects.
All five of these principles need answers if we're going to have any hope of success. No matter how often it is repeated, insisted upon, or how clever the message is trying to avoid these principles, they're not going away. They are immutable. They need to be answered on day one and on every day until the project is over.
So if we are writing software for money - internal money, external money, maybe even our own money - ask these questions and see if our answers are credible.
More in next post about the economics of writing software for money.
I heard this phrase in a conference call yesterday with a DOD client and thought, how clever, I'll write a blog about this. Only to find out there is a Forbes article with the same name, and several other articles as well.
The Forbes article had a case study about doing it right around a business process. It was the perfect framework (repeated here) for applying Performance-Based Project Management®
In the Forbes article there are five steps:
In the end, project success is about knowing what done looks like, knowing how to get there, and knowing how to measure progress along the way. And of course knowing the impediments to progress and handling them. These concepts are instantiated in two papers from a colleague, Pat Barker, What is Your Estimate at Complete and Program Master Schedule Can Improve Results, on page 20.
In yesterday's post, the notion of Systems Engineering was suggested as one solution to project failure. Here's the next step. The Agile notion started with a manifesto that turned into many interpretations and practices. In the standard project management paradigm, there is a set of principles, practices, and processes described in a variety of ways through several organizations: ITIL, PMI, APM, DOD, DOE, and other owners of project management activities.
When we take the Systems Engineering approach, we can put a wrapper around ALL project management, technical development, and deployment processes, which can be used to assess each practice and process to assure it is providing value. Here's a short overview of this paradigm.
One of the suggestions in #NoEstimates is the slicing of work - either Stories or whatever word is used for an agile project's chunking of the work - into small pieces. This of course doesn't actually address the issue of producing an Estimate at Completion for the project. An estimate needed by those funding or authorizing the spending of funds to know how much and when.
But slicing is a process of reducing the exposure to uncertainty to a manageable size. It's the next level down's answer to what's the value at risk? Make it small and reduce the value at risk of not showing up on time and on budget. Slicing answers a question that has been around for some time.
How long are you willing to wait till you find out you are late (or over budget, or it doesn't work as planned)?
The answer to this how long question varies according to the domain, the value at risk, and other factors usually associated with risk tolerance. But it is a question that must be answered periodically (monthly for us). Recently this notion of slicing has been put forth as part of the solution to the estimating problem, which of course it's not, since the size of the work chunks only reduces the uncertainty of each chunk. Both the aleatory uncertainty (irreducible) and the epistemic uncertainty (reducible) will be less when the exposure to the uncertainty is smaller. Beneficial to the project for sure. But the total all-in cost and schedule are related to the slicing size only through the cumulative variance of the parts.
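A minimal sketch of this cumulative-variance point, using purely hypothetical numbers (the 440-day total, the 30% uncertainty, and the independence of slices are all assumptions for illustration): if each slice's uncertainty is proportional to its size and slices are statistically independent, the variances of the parts add up to give the total.

```python
# Illustrative only: slicing reduces the uncertainty of each chunk,
# but the total cost/schedule uncertainty is the cumulative variance
# of all the parts, not zero.
import math

TOTAL_EFFORT = 440.0    # hypothetical total effort, in days
SIGMA_FRACTION = 0.30   # assumed +/-30% uncertainty on any one chunk


def total_sigma(num_slices: int) -> float:
    """Std dev of the summed effort when work is cut into equal slices.

    Assumes each slice is independent with sigma proportional to its
    size, so variances add: Var(total) = n * sigma_per_slice**2.
    """
    slice_size = TOTAL_EFFORT / num_slices
    sigma_per_slice = SIGMA_FRACTION * slice_size
    return math.sqrt(num_slices * sigma_per_slice ** 2)


for n in (1, 4, 16, 64):
    print(n, round(total_sigma(n), 1))
```

Under these assumptions finer slicing does shrink the total spread (independent errors partially cancel), but the spread never goes away - the Estimate at Completion still has to carry the cumulative variance of the parts.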
It may be interesting to know that slicing is part of the ANSI-748 Earned Value Management assessment by the Defense Contract Management Agency (DCMA). DCMA is the DOD agency that validates the Earned Value Management System's 32 Guidelines. DCMA performs a 14-point assessment of the Integrated Master Schedule (integrated because it is connected to the cost baseline) and the Performance Measurement Baseline (PMB) - the time-phased planned budget for all the work.
DCMA Check 8 looks for high-duration activities. These are known to cause issues with the exposure to programmatic risk for the program. The 44-day number represents 2 working months. The work then passes through one accounting period (the monthly submission of the Integrated Program Management Report). At the end of each accounting period an assessment of Percent Complete is used to calculate the Budgeted Cost of Work Performed (BCWP) - the earned value for the tasks, work packages, and control accounts (funding buckets) of the program. 44 days may sound long for an agile software development project, but 44 days is short on a multi-year, many-millions and likely billions of dollars defense program.
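The BCWP calculation mentioned above is simple arithmetic; here is a small sketch with hypothetical work packages and budgets (the dollar figures and percent-complete values are invented for illustration):

```python
# Earned value at the end of an accounting period:
# BCWP = Budget At Completion * physical Percent Complete,
# summed up from work packages into a control account.
def bcwp(budget_at_completion: float, percent_complete: float) -> float:
    """Budgeted Cost of Work Performed for one work package."""
    return budget_at_completion * percent_complete


# Three hypothetical work packages rolling up to one control account:
# (BAC in dollars, physical percent complete this period)
packages = [
    (120_000.0, 0.50),
    (80_000.0, 0.25),
    (200_000.0, 0.10),
]

earned = sum(bcwp(bac, pc) for bac, pc in packages)
print(earned)  # 60,000 + 20,000 + 20,000 = 100,000.0
```

Comparing this earned value against the planned value (BCWS) and the actual cost (ACWP) for the same period is what turns the monthly sample into a performance measurement.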
Since the agile community is fond of saying there is nothing new here, while suggesting their ideas are new and unique, the above clip is from the DCMA guide, long used in our defense program management paradigm.
So it is worth repeating the principle of asking how long you are willing to wait before you find out something. The rule of thumb is to sample the status of the thing you are controlling at a rate twice that needed to control or determine a value. This is called the Nyquist rate, from signal processing. Signal processing is where I grew up writing software for Fast Fourier Transforms, Finite Impulse Response filters, and Kalman filters for particle physics data streaming off the particle accelerator. When I didn't have an original idea to finish my PhD studies, I switched to writing the same software for radar signals intelligence and electronic warfare systems. The same principles work for any control system, including a management control system.
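Applied to a management control system, the sample-twice-as-fast rule of thumb can be sketched like this (the 44-day figure connects back to the DCMA check above; treating it as the fastest meaningful change is my assumption for illustration):

```python
# Rule-of-thumb sketch of the Nyquist rate applied to status sampling:
# to control a value, sample it at least twice as fast as the fastest
# change you need to detect.
def max_sampling_interval(fastest_change_period_days: float) -> float:
    """Longest status interval that can still catch a change of this period."""
    return fastest_change_period_days / 2.0


# If a cost or schedule variance can develop meaningfully over a 44-day
# work package, status needs sampling at least every 22 days.
print(max_sampling_interval(44.0))  # 22.0
```

Sample slower than this and a variance can appear and grow between status points without ever being seen in time to act on it.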
Just as an aside, in the control systems paradigm there is discussion about monitoring and decision making from the information gathered from the monitoring. This is an Open Loop control system. Without a planned value to seek - the SET POINT, if you're using the room thermostat analogy - monitoring the value provides no value, since you don't know what you are asking the system to do. Just monitoring is Open Loop: you've got numbers from the system, the room temperature or the number of stories produced, but no target to control against.
To have a Closed Loop system, you need a SET POINT, a steering target, a goal, a desired outcome. Then the monitoring - sampling - can produce a variance, a difference between goal and actual, by which you can take action. Raise or lower the temperature, speed up or slow down the car, speed up or slow down the production of software outputs. Yes, you can go too fast: the downstream user can't take the results, and by the time they can, the requirements may have changed. This is Closed Loop Control.
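The thermostat analogy can be sketched as a minimal closed loop (the temperatures and the proportional gain are hypothetical; a real controller would be more elaborate): the set point turns each sample into a variance, and the variance drives a corrective action.

```python
# Minimal closed-loop control sketch (thermostat analogy).
# Open loop = measurement with no set point, so no variance to act on.
# Closed loop = set point + sample -> variance -> corrective action.
def control_step(set_point: float, actual: float, gain: float = 0.5) -> float:
    """Corrective action proportional to the variance (goal minus actual)."""
    variance = set_point - actual
    return gain * variance


temperature = 15.0   # measured value (the sample)
SET_POINT = 20.0     # the steering target

for _ in range(5):
    temperature += control_step(SET_POINT, temperature)
    # each pass closes part of the gap; the variance shrinks toward zero

print(round(temperature, 2))  # converges toward the set point
```

The project-management reading: the Performance Measurement Baseline is the set point, the periodic status assessment is the sample, and the variance is what tells you whether to act - without the baseline, the monthly numbers are just Open Loop monitoring.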