The Feynman Lectures were a staple of my education, including having Feynman come to UC Irvine to speak to the Student Physics Society on his current work in Quantum Electrodynamics (QED).

The 3 volume set is still in our library. Mine are hardbound; paperbacks are available now.

The books are not actually very good textbooks. The Lectures are just that - transcriptions of lectures. When reading them, you can hear Feynman talk, in the way some authors write the way they speak.

Anyone interested in physics, or anyone who has ever heard of Richard Feynman, should take a look.

My memory - after many decades - is that Feynman loved students in ways not all physics professors do. He was a professional teacher as well as a physicist. His Nobel prize never got in the way of his love of students. One of our own Nobel Laureates, Fred Reines, had a similar view of students - both undergrads and grads. Dr. Reines would invite us to his house for BBQ and entertain us with stories of Los Alamos and other adventures.

Take a look, see how a true teacher writes about a topic he loves.

The notion that we can't predict outcomes in the future - to some level of confidence - is of course simply not correct. Earthquake prediction is not technically possible in the populist sense; it's a complex probabilistic process.

Making forecasts - estimates of future outcomes - for software development projects is much less complex. The processes used to make these estimates range from past-performance time series to multi-dimensional parametric models. Several tools are available for these parametric models. Steve McConnell provided one I used a decade or so ago. Steve provides some background on making estimates, where he speaks of the 10 deadly sins:

Confusing targets with estimates - the bug-a-boo of all #NoEstimates advocates. It's simple - DON'T DO THIS.

Saying yes when you mean no - no quantitative data and guessing means bad estimates - DON'T DO THIS

Assuming underestimating has no impact on the project - DON'T DO THIS

Estimating in the Impossible Zone - a credible estimate must have a non-zero probability of coming true; an estimate in the impossible zone has none. DON'T DO THIS

Overestimating savings from use of new tools - DON'T DO THIS

Using only one estimating technique - DON'T DO THIS

Not using estimating software - DON'T DO THIS

Not including risk factors - the primary sin of the simple small samples of stories or story points used to linearly forecast future performance. DON'T DO THIS.

Providing off the cuff estimates - this is called guessing. DON'T DO THIS.

When you need to estimate - as you do in any non-trivial project - make sure you're not committing any of the 10 sins Steve mentions.
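To put numbers to the "impossible zone" and "off the cuff" sins, here is a minimal Monte Carlo sketch of a three-point estimate. The tasks, their (optimistic, most likely, pessimistic) values, and the triangular distribution are all illustrative assumptions, not Steve's tool:

```python
import random

def simulate_duration(tasks, trials=10_000, seed=1):
    """Monte Carlo a serial chain of tasks, each described by
    (optimistic, most_likely, pessimistic) durations in days."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    )
    return totals

# Three hypothetical tasks, in days.
tasks = [(3, 5, 10), (2, 4, 9), (5, 8, 15)]
totals = simulate_duration(tasks)

optimistic_sum = sum(t[0] for t in tasks)   # 10 days: the "impossible zone"
p_meet_optimistic = sum(t <= optimistic_sum for t in totals) / len(totals)
p80 = totals[int(0.80 * len(totals))]       # 80th-percentile finish

print(f"P(finish in {optimistic_sum} days) = {p_meet_optimistic:.4f}")  # 0.0000
print(f"80% confident finish: {p80:.1f} days")
```

The sum of the optimistic values has essentially zero probability of being met - which is exactly what estimating in the impossible zone means. A defensible commitment quotes a percentile of the distribution instead of a single point.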

So Why The Earthquakes and SW Estimates?

One process is a very complex and emerging science. The other is a well-developed mathematical process.

There is so much misinformation about estimating software development, it's hard to know where to start: outright wrong math, misuse of mathematical concepts, and failure to acknowledge that estimates aren't for developers - they're for those paying the developers.

Planning for an uncertain future calls for a shift in information management — from single numbers to probability distributions — in order to correct the "flaw of averages."

This, in turn, gives rise to the prospect of a Chief Probability Officer to manage the distributions that underlie risk, real portfolios, real options and many other activities in the global economy.

There are some very serious misunderstandings going around about how management in the presence of uncertainty takes place in business. The basic conjecture is:

Management Science's Quest: in Search of Predictability ^{†}

Let's start with a basic fact: all projects, all business processes - everything - is a stochastic process. So searching for predictability is not a goal for any informed business or technical person or organization. If it is, that defines the maturity of that person or organization. It happens, but it states up front little understanding of the underlying stochastic processes that create probabilistic outcomes of - everything.

Here's a quick review of both processes in play in all activities.

In the decision-making business there are four reasons why decisions are hard.

Decisions are hard because of their complexity.

Decisions are difficult because of the inherent uncertainty of the situation.

A decision maker may be interested in working toward multiple objectives, but progress in one direction may impede progress in other directions.

A problem may be more difficult if different perspectives lead to different conclusions.

So to start with the notion of predictability: it is simply not possible, in any real project or business domain, to speak about predictability in the absence of the underlying statistical processes that create probabilistic outcomes.

Any credible business or technical manager knows this. If predictability is assumed or even desired, then the naivety of the manager is the only likely source - or maybe the intentional ignorance of the statistical and probabilistic nature of business and technical processes. But predictability is not possible in the sense of absolutes, only probabilities.

So let's look at some less than informed concepts that are popular in some circles ...

Predictability is a form of causality - prediction is separated from the source of predictions. And certainly the causality associated with prediction need not be there. Bayesian statistics and Monte Carlo simulation need not connect the predicted outcomes with the source of those outcomes - other than through the random variables of a generating function.

Planning rests on the assumption we can predict - a plan is a strategy for guiding our efforts to change something in the future or arrive at some place in the future. The strategy is a hypothesis, and that hypothesis needs an experiment to test the current situation to determine if it will result in the desired outcomes in the future. This is the core of the design of experiments we all learned in our high school science class. Plans describe an emerging outcome.

Goals change with the observation of reality - this dynamic adaptation process is what we, in the Agile community, call a feedback loop. This is true, but a target value is needed to compare that feedback information against to generate an error signal. This is called Closed Loop Control and is the foundation of all control systems, including Statistical Process Control systems - and of control systems that are adaptive in the presence of emerging dynamic systems. This is the basis of learning systems in stochastic adaptive control.
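The loop described above - target, feedback, error signal, corrective action - can be sketched in a few lines. This is a hypothetical proportional controller, the simplest closed-loop form, not any specific SPC tool:

```python
def closed_loop(target, initial, gain=0.5, steps=20):
    """Minimal closed-loop control: each cycle compares the target to
    the measured value, forms an error signal, and applies a correction
    proportional to that error.  Without a target there is no error
    signal, and so no control."""
    value = initial
    trace = []
    for _ in range(steps):
        error = target - value   # feedback is only useful against a target
        value += gain * error    # corrective action
        trace.append(value)
    return trace

trace = closed_loop(target=100.0, initial=40.0)
print(round(trace[-1], 3))   # 100.0 -- the error signal drives convergence
```

Remove the target and there is nothing to subtract the feedback from: adaptation without a reference value is drift, not control.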

Management techniques must not be based on the existence of a perfect, predictable future - this is a naive understanding of management. Perfect, predictable futures simply do not exist anywhere for anything. All processes are random processes, many times not even stationary random processes.

The suggestions above indicate a lack of fundamental knowledge of making decisions in the presence of uncertainty, as described in the Making Hard Decisions book. The journal Operations Research and the Management Sciences will put the science back in management science that those conjecturing the topics above seem to have missed.

In journal papers, many books, and related sources, the suggestion that we can't make decisions in the presence of uncertainty - simple-minded conjectures like:

The basic problem with most perspectives on management today is that they are static analyses of a future environment. And all decisions are made because we believe we can predict the future.

are simply not true, and better insight as to why they are not true can be had with straightforward research, available by joining INFORMS or a variety of other professional societies.

So perhaps before making unsubstantiated claims about how modern statistical and probabilistic management processes are applied to business, some homework might be in order.

In Twitter discussions and email exchanges there is a notion of populist books versus technical books used to address issues and problems encountered in our project management domains. My recent book, Performance-Based Project Management®, is a populist book. There are principles, practices, and processes in the book that can be put to use on real projects, but very few equations and numbers. It's mostly narrative about increasing the probability of project success. But the math to calculate that probability - based on other numbers, processes, and systems - is not there. That's the realm of technical books and journal papers.

The content of the book was developed with the help of editors at American Management Association, the publisher. The Acquisition Editor contacted me about writing a book for the customers of AMA. He explained up front AMA is in the money making business of selling books. And that although I may have many good ideas, even ideas that people might want to read about, it's an AMA book and I'll be getting lots of help developing those ideas into a book that will make money for AMA.

The distinction between a populist book and a technical book is the difference between a book that addresses a broad audience with a general approach to the topic and a deep-dive book focused on a narrow audience.

But one other distinction is that for most of the technical approaches, some form of calculation takes place to support the materials found in the populist material. One simple example is estimating. There are estimating articles and some books that lay out the principles of estimates. We have those in our domain in the form of guidelines and a few texts. But to calculate the Estimate To Complete in a statistically sound manner, technical knowledge and the underlying mathematics of non-linear, non-stationary stochastic processes (Monte Carlo simulation of the project's work structure) are needed.

Two examples of populist versus technical:

Two from my past, two from my current work.

These two books are about the same topic: general relativity and its description of the shape of our universe. One is a best-selling popularization of the topic, found in many home libraries of those interested in this fascinating subject. The one on the left is on my shelf from a graduate school course on General Relativity, along with Misner, Thorne, and Wheeler's Gravitation.

Dense is an understatement for the math and the results of the book on the left. So if you want to calculate something about a rapidly spinning Black Hole, you're going to need that book. The book on the right will talk about those Black Holes in non-mathematical terms, but no numbers come out from that description.

The book on the left is about probabilistic processes in everyday life that we misunderstand or are biased to misunderstand. The many cognitive biases we use to convince ourselves we are making the right decisions on projects are illustrated through nice charts and graphs.

We use the book on the left in our work with non-stationary stochastic processes of complex project cost and schedule modeling. Making these decisions is critical to quantifying how technical and economic risk may affect a system's cost. This book is a treatment of how probability methods are applied to model, measure, and manage risk, schedule, and cost engineering for advanced systems. Garvey shows how to construct the models, do the calculations, and make decisions with these calculations.

Here's The Point - Finally

If you come across a suggestion that decisions can be made in the absence of knowing anything about the future numbers or about actually doing the math, put that suggestion in the class of populist descriptions of a complex topic.

If you can't calculate something, then you can't make a decision based on the evidence represented by numbers. If you can't decide based on the math, then the only way left is to decide on intuition, hunches, opinion, or some other seriously flawed non-analytical basis.

Just a reminder from Mr. Deming, stated in yesterday's post:

If it's not your money, there's likely an expectation that those providing the money are interested in the calculations needed to make those decisions.

We attended a performance of Gustav Holst's The Planets at CU Boulder this week. It was a combined media show, with music from the CU Orchestra (undergrad and grad music students) and a visual presentation of planetary pictures from the NASA missions to all the planets, narrated by NASA astronaut Joseph Tanner, senior instructor in Aerospace Engineering Sciences at CU-Boulder.

What struck me was the following:

The conductor, Gary Lewis, led the students in two pieces that night. Each student had a part to play, complete with music. They knew their parts well and followed Maestro Lewis's lead, while adding their skills and experience to the performance.

Mr. Tanner spoke many times during the evening of the collaborative efforts of the Jet Propulsion Laboratory, prime and subcontractors, and all the participants in the planetary missions - for the successes and sometimes the failures.

At the conclusion of the concert, Maestro Lewis and Mr. Tanner both spoke of the collaborative efforts to produce the performance.

What struck me was that everything - every one of the orchestra members, the visual effects people, Mr. Tanner's experience as a Navy pilot, flight instructor, and astronaut, and Maestro Lewis's efforts to lead, mentor, and grow the students' skills and experience on their path to professional musicianship or other careers - was guided by a sense of mission.

My experience observing space flight missions (I'm on the program planning and controls side of that work) is that if you want to see grown men cry, be in the control room when their spacecraft lands on Mars, enters orbit at Saturn, or crashes in the desert (some of the missions I've been around for). Why do grown men - and women - have tears for these events? Because they are watching their children perform their job. Just like real children.

Here's the real point. Those missions, those participants, those efforts are not about ME; they are about the mission. Sure there are egos involved. Talented people have egos. But you never, and I mean never, hear them say "It's all about me, what I need, what I want to do."

These types of programs are focused on external outcomes - mission success - rather than self-actualization of the work. The self-actualization happened long before arriving on the team. And the team that so many in the software development world talk about is not a team for themselves; it's a team for the mission - beyond the customer, the MISSION. Here's a mix of animation and real footage for a Mars lander. Everyone in the room is there for the mission.

Let's start with the book that should be found on every project manager's shelf (along with a long list of other books). Huff's book shows how statistics can be easily misused, misunderstood, and sometimes manipulated to show something that just isn't true. This book is still in print in paperback.

For us project managers, we need to start by understanding that probability and statistics are the lifeblood of our profession. All the numbers we encounter on projects are in fact random numbers. They are generated by the underlying stochastic processes of how projects work. Projects are collections of interacting work activities. These activities are connected with each other and with the externalities of the project. These externalities start with people. People are random processes looking for something to interact with. Some might say people are random processes looking for something to disrupt. Forecasting the behavior of people is very sporty business. This is one motivation for process: processes guide or bound the random behavior of people. For now let's exclude the random behavior of people from the conversation.

You see processes creating bounds every day. Speed limits create safety bounds. There are processes for filling out your application to college or for a car loan. There are also processes for developing products. These usually start with a simple paradigm: you give me some money, I spend it to give you something back. You assess the value of that product and give me more money to continue, or you stop giving me money.

These processes involve several simple variables: people, time, and money. Of course there is the technology, but for now let's also ignore that and assume all the technology is working, non-variable, and not part of the processes we're interested in.

How to Actually Lie with Statistics

Stephen Ross is associate professor of professional practice at Columbia's Graduate School of Journalism. Ross provides 7 lies that are used daily - all of which we can encounter on projects, from people who work on projects or people who write about projects. Hopefully I'm not one of them:

Non-response bias, or non-representative bias - this is the self-selection bias. What this really amounts to is missing things on purpose. This is what Standish does: tell us about all the problems you've had on IT projects. Their sampling doesn't say what the total population of IT projects is - or, most importantly, how many successes there have been. This is the newspaper style of reporting project problems: DOD IT projects overrun by $1B. On how much total budget? How much did they overrun as a percentage? What was the total value of the projects that overran by $1B? They don't say.

This is a sampling problem. There are simple mathematical processes for determining how big the sample has to be, compared to the total population, to produce a confidence level from those samples. The more serious problem here is that the sample is too small. To get a credible probability distribution we need to know how many samples are needed. There's a formula for that, and for good or bad IT projects we need roughly 20 to 30 samples for a population of 100 projects. These are all projects, not just the projects that answered the call to tell us about your failed project.

It may be a sample of one: "In my experience I see that estimating doesn't work." Or "In my experience, and the experience of the coffee club of similar people, I see that agile is the best approach for enterprise IT."
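The formula alluded to above is, in one common form, Cochran's sample-size calculation with a finite-population correction. A sketch, assuming a 95% confidence level (z = 1.96) and the most conservative proportion p = 0.5; the margins of error are illustrative choices:

```python
import math

def sample_size(population, margin, z=1.96, p=0.5):
    """Cochran's formula with finite-population correction:
    n0 = z^2 * p * (1 - p) / margin^2, then shrink for a finite pool."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# For a population of 100 IT projects at 95% confidence:
print(sample_size(100, margin=0.20))  # 20 samples at a 20% margin of error
print(sample_size(100, margin=0.15))  # 31 samples at a 15% margin of error
```

Which is exactly the "roughly 20 to 30 samples for a population of 100 projects" range: a self-selected handful of failure anecdotes doesn't come close.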

Mistaking statistical association for causality - this is a common mistake. Connecting processes with the outcomes of those processes requires statistical tests of correlation and causality. This starts with a pre-defined hypothesis stating what we should see - usually the null hypothesis H_{0} - in this instance a statement that can be tested with evidence of the causality of the processes' impacts on the outcome.

The recent debacle of the Affordable Care Act website brought out lots of voices on how the problems could have been avoided. Since rarely were any of these voices actually involved in government procurement contracts, nor did they have any actual connection with the project, it's hard to make a correlation with a cause of the problems. In the end, root cause analysis is needed to determine what the actual source of the problem was, beyond the obvious. And even then, we'll have to wait for the GAO to write its report. GAO, RAND, and IDA write reports on root causes of large program failures. Hopefully the ACA report will come soon.
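A distribution-free way to test such a null hypothesis H_{0} of "no association" is a permutation test: shuffle one variable to destroy any real pairing, and count how often chance alone matches the observed correlation. The data below are invented for illustration; a small p-value rejects "no association" but - the whole point above - still says nothing about causality:

```python
import random

def pearson_r(xs, ys):
    """Sample correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def permutation_test(xs, ys, trials=2_000, seed=7):
    """p-value under H0 (no association): the fraction of shuffles whose
    correlation is at least as strong as the observed one."""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    ys = list(ys)
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if abs(pearson_r(xs, ys)) >= observed:
            hits += 1
    return hits / trials

# Hypothetical scores: process maturity vs. project outcome.
maturity = [1, 2, 2, 3, 4, 4, 5, 5, 6, 7]
outcome  = [2, 1, 3, 3, 5, 4, 6, 5, 7, 8]
p_value = permutation_test(maturity, outcome)
print(f"p-value = {p_value:.4f}")   # small: reject "no association"
```

Even a tiny p-value here only licenses "these variables move together"; attributing the outcome to the process still takes root cause analysis.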

Poisoned control - the epidemiology of project failures does not exist. Project failure analysis is dominated by opinion and conjecture, many times by firms selling the solution, or even individuals selling the solution. This is a serious failing in the profession of project management. Internally, many firms have assessment processes built around Six Sigma or Lean Six Sigma. In the absence of a framing assumption and a governance framework, it is difficult to sort out opinion from fact.

If you adopt this process (my process, actually), you'll improve the probability of success. There are some obvious approaches - I am the author of one; Earned Value Management is another. But even then, research is needed to confirm the connection between a process and increased success. The Software Engineering Institute conducts surveys of success versus maturity. I'm involved in an assessment connecting Technical Performance Measures to Earned Value Management to provide a better view of performance management, through a DOD office.

Data enhancement - "400 killed on highways over the holidays." "65% of all projects (sampled by a self-selected process) overrun their budget by 50%." These are examples of data enhancement. Extrapolation is another source: We see big problems in IT projects in this domain, so there must be similar problems in all domains. Or the inverse of the extrapolation: I work in a 3-man shop at a commercial landscaping equipment manufacturer, so what I have found works for me will surely work on your $500M ERP roll-out project.

Absoluteness - the use of overwhelming data is a source of amazement to the casual observer. When very complex situations are reduced to a single number, we are being fooled by the data - in exactly the same way we may be fooled by randomness. Many times the uncertainty, range, and complexity of project performance data cannot be separated from the root cause of success or failure. When an assessment is reduced to a single number - like the Standish Report, with no variance intervals or confidence on the measurement - the result is unusable.

Partiality - favorable outcomes are presented by owners of the idea. This is called selling. Independent assessments of the data that support the conjecture are needed before any conclusion can be drawn from the salesman's pitch.

Bad Measuring Stick - a dollar overrun of $500M on a $5B project is small. Big numbers, but a small percentage: it's a 10% overrun. If you can get to the end of the project with a 10% cost overrun or a 10% schedule overrun, you're a Project Management God. Never listen to the absolutes; only listen to the percentages. And more importantly, the percentage compared to the population variances.

In the end it's all about discovering the variances in everything we do. No work process is steady; all work processes have built-in variances. The naturally occurring uncertainty about cost, time, and technical performance is called aleatory uncertainty. It is irreducible - you can't do anything about it, so you have to have margin to protect your project. The other uncertainty on the project is epistemic, which means we can learn more about the uncertainty and reduce it with this new knowledge.

If we're going to forecast what it will cost, when it will be done, and the probability that it will work when we arrive at done, then understanding both these uncertainties is critical. The notion that breaking things down into small chunks and doing the work in a serial manner somehow removes the variances is not reality - at least not the reality of non-trivial projects. Three people in the same room working on a list of sticky notes on the wall - maybe. Much beyond that and the laws of statistics are going to come into play.
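A sketch of why the small-chunks argument fails: give every hypothetical task the same irreducible triangular(4, 5, 8)-day spread, and the total duration still carries a spread that only margin can protect against. The task counts, distributions, and the ~90% factor are all illustrative assumptions:

```python
import random
import statistics

def total_duration_stats(n_tasks, trials=5_000, seed=42):
    """Monte Carlo the total of n_tasks small tasks, each with an
    irreducible (aleatory) triangular(4, 5, 8)-day duration."""
    rng = random.Random(seed)
    totals = [sum(rng.triangular(4, 8, 5) for _ in range(n_tasks))
              for _ in range(trials)]
    return statistics.mean(totals), statistics.stdev(totals)

mean, spread = total_duration_stats(20)
margin = 1.28 * spread   # rough distance from the mean to the ~90th percentile
print(f"mean {mean:.1f} days, spread {spread:.1f} days, "
      f"margin for ~90% confidence: {margin:.1f} days")
```

Chunking changed the bookkeeping, not the variance: the aleatory spread of the total is still there, and only margin protects the commitment against it.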

To be credible project managers we need to understand how the underlying statistics impact the probability of success of our project. Ignoring this doesn't mean it goes away. It just means we'll be surprised by the behavior created by these stochastic processes.

There have been several rounds of how to use analogies - and how not to use them - in the past few years.

These involved notions like agile is an untended garden. Actually, an untended garden is called a weed patch.

Or we can't really stop and develop a strategy because we're always putting out fires. Actually, firemen rarely put out fires. They spend the majority of their time preventing fires - with fire safety programs and fire inspections. Their job is to never have fires start.

And of course the false analogy of the double pendulum as a stand-in for chaos and unpredictability. Since the equations of motion are easily defined - an exercise for any upper-division physics student - and there is a MATLAB plugin, you can plot the path of the double pendulum.

And my favorite, the attractor analogy, in which it is presumed that somehow in chaos theory the attractor attracts - without understanding that those pretty pictures of the attractor are the result of the underlying equations for the system.

So with that in mind, there is a new book from one of my favorite authors, Douglas Hofstadter, Surfaces and Essences. In the book, Hofstadter makes the case for analogy as the fuel for creative thinking. Using Robert Oppenheimer's quote...

whether or not we talk about discovery or of invention, analogy is inevitable in human thought, because we come to new things in science with what equipment we have, which is how we have learned to think, and above all how we learned to think about the relatedness of things.

But as always we need to take care to assure that those analogies we use to expand the conversation, don't violate the laws of physics, gardening, or mathematics.

When I encounter a "subject matter expert," I've now learned to assess whether this person is a "knowledge holder" or a "learning individual" - and what informed that "learning." This came to me from a friend and neighbor who is a graduate of the University of Chicago in economics and spoke at this conference.

In our business - project and program management - there are lots of "knowledge" providers, assessors of knowledge, and frameworks for measuring our knowledge. What is needed now is how to learn to make better decisions about our projects. This is not going to happen without a decision-making framework for the core principles, practices, and processes of project management. The current Bodies of Knowledge, the self-proclaimed providers of better BOKs and assessment tools, authors (I'm one of them), and experts in the field all need to reassess what we are doing to increase learning abilities, beyond just the possession of knowledge and the dispensing of this knowledge to others.

How can we actually learn to improve our probability of success? The decision scientist for project success - as suggested here - needs to have the art, the science, and the scale to provide actionable information for the project's decision makers.

We have most of the data we need; what we don't do is make decisions on this data, or even know how to make decisions with this data. We report it, but don't use it to decide. Numbers matter, but leadership needs to use the numbers.

When we get into discussions about policy, spending money - especially government money - and the outcomes from that spend, we as a nation are ill-equipped to handle the facts. Not because the facts tell us things we don't want to hear - that's another topic - but because the facts are not understandable by the general public.

This is in no way whatsoever a denigration of the general public; rather, the complexity of everyday life has overtaken our ability to deal with it. Let's start with H. G. Wells:

"Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write." - H. G. Wells

So here's an example of "everyday" life in science-based policy making that is likely lost on most of those needed to support the policy.

The question why is there a Higgs Boson? is commonly asked in our group of engineers and business neighbors. There are lots of good explanations; one is provided by the PhD Comics guys. Recently there was a question from an electrical engineer: what is the source of the Higgs field? Is it like the electromagnetic field, which comes from accelerating electrons? The force of the electromagnetic field is carried by the photon, shown in the picture below.

No. The Higgs field and the Higgs Boson form a scalar field, where the photon and the gluons that hold the nucleus of an atom together are vector fields.

While searching the upstairs library I came across a nice explanation that goes beyond the cartoon versions - which, BTW, are very good at setting the stage, but lack the real math needed to connect the dots.

The Higgs is based on the assumption that there is a scalar field, the Higgs field, that permeates all of space. This field couples to particles, including massless particles, to give them potential energy and, according to the mass-energy relation E = mc^{2}, a mass. The stronger the coupling, the more massive the particle.

The way particles are thought to acquire mass through their interactions with the Higgs field is somewhat analogous to the way pieces of blotting paper absorb ink. In this analogy the pieces of paper represent individual particles and the ink represents energy, or mass. Just as pieces of paper of different sizes and thicknesses soak up varying amounts of ink, different particles "soak up" varying amounts of energy or mass. The observed mass of a particle depends on the particle's "energy absorbing" ability, and on the strength of the Higgs field in space.

The mass is not generated in the particle; it is only transferred to the particle from the Higgs field, which contained it in the form of energy. The store of energy can be thought of as a source of inertial mass, just as, inversely, inertial mass can be thought of as a store of energy.

When Jammer's book was written, the Higgs particle and the Higgs field had not been found; now they have. The next big question is why specific particles have specific masses. What are the rules that say why specific particles weigh certain amounts? What are the rules governing these masses?

Here is most of everything you need to find out about current science topics, including the actual answers to my issue with the populist notions of complex adaptive systems - see the Nonlinear Sciences section near the bottom of the home page for Adaptive and Self-Organizing Systems.

I know this is odd, but the other day, around the Higgs Boson news, I was asked how does a Black Hole work? I needed an answer somewhere between the populist one - the gravity is so strong nothing can escape, including light - and the actual mathematical physics of General Relativity for the formation of Black Holes: the Large Scale Structure of Spacetime and Gravitation descriptions of objects in strong fields that I used to know how to use in graduate school.

But first, the notion of an object (a Star) where no light escapes is not new:

A luminous star, of the same density as the Earth, and whose diameter should be two hundred and fifty times larger than that of the Sun, would not, in consequence of its attraction, allow any of its rays to arrive at us; it is therefore possible that the largest luminous bodies in the universe may, through this cause, be invisible. - P. S. Laplace (1798)

The easiest way to derive the conditions for the formation of a Black Hole is to start from the properties in the populist description, the one we see in movies like Star Trek - nothing, not even light, can escape. But we need some math to go with this, to avoid the populist trap of not being able to calculate anything.

So let's pretend we live on a planet and have a small rock in our hands. We throw or shoot the rock straight up and observe the motion through the simple equations of potential and kinetic energy.

This rock has kinetic energy of K = ½mv^{2},

where m is the mass of the rock and v is the velocity of the rock once it leaves my hand.

The potential energy of the thrown rock comes from the gravitational pull of the planet and is U = -GMm/r,

where M is the mass of the planet and r is the radius of the planet. In order for the rock to escape from our planet's gravitational field and not fall back to the place where I threw it, the total energy of the rock, E = K + U, must be greater than zero (0).

Since nothing can travel faster than the speed of light c - remember Einstein's special relativity - the maximum kinetic energy is K = ½mc^{2}. The rock can never escape from the surface of the planet if ½mc^{2} - GMm/r < 0.

If this inequality is TRUE, then the planet has sufficient gravity to be a black hole. Since the little m - the mass of the rock, or, in the case of light, the photon - cancels from both sides of the inequality, we can make this test for photons as well. So again, if the inequality is TRUE, no photons are leaving this place and we live on a Black Hole.

So to figure out if you live on a Black Hole, just determine its mass M and its radius r. With the gravitational constant G, which is assumed to be universal, G = 6.67384(80)x10^{-11} m^{3} kg^{-1} s^{-2}, the test becomes r < 2GM/c^{2}.
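To make the check concrete, here is a minimal Python sketch of the argument above (my own illustration, not from the original post; the Earth mass and radius are standard textbook values used only as an example):

```python
# Sketch of the escape-velocity argument: setting the maximum kinetic
# energy (1/2) m c^2 equal to the potential energy G M m / r and solving
# for r gives the critical radius r_s = 2 G M / c^2. A body whose radius
# is smaller than r_s traps its own light.

G = 6.67384e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8  # speed of light, m/s

def critical_radius(mass_kg):
    """Radius below which a body of this mass would be a black hole."""
    return 2.0 * G * mass_kg / c**2

def is_black_hole(mass_kg, radius_m):
    """TRUE when the inequality (1/2) m c^2 < G M m / r holds."""
    return radius_m < critical_radius(mass_kg)

# Example with Earth's mass (~5.972e24 kg) and radius (~6.371e6 m):
# the Earth would have to be squeezed to roughly 9 millimeters.
print(critical_radius(5.972e24))
print(is_black_hole(5.972e24, 6.371e6))
```

Running it confirms why, happily, we do not live on a Black Hole.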

My Brother-In-Law came to visit us in Colorado a few weeks ago. He works on his off days at a soaring center in California and had this video from his group. Notice the variometer (the vertical feet per minute up or down) at the end, once the thermal is reached.

Our son is home from college for the Thanksgiving holiday. In his backpack were his Organic Chemistry books and a library book from school, Fractals and Chaos: Simplified for the Life Sciences. This book has guidance for understanding chaos and avoiding the mistakes of using populist descriptions for systems found in nature - in his case, cellular biology and organic chemistry. (Yes, a double major.)

One popular misunderstanding is that chaotic systems are not deterministic. The common - but wrong - example offered is the double pendulum. This book describes the background for the actual fact that chaotic systems are deterministic. Chaos is defined as complex output that mimics random behavior but is generated by a simple, deterministic system.

This is worth repeating - the chaotic system MIMICS random behavior. This is the error of seeing the double pendulum's pattern as random while not realizing the pattern is generated by a deterministic set of equations - the Lagrangian solution to the equations of motion.

Randomness is different from chaos, but the two are hard to tell apart. Sequences of values or behavior may appear similar, but underlying these numbers or behaviors are two distinctly different processes. We can only tell the difference when we look at the phase space view of the numbers.

In chaotic (dynamical) systems there are only a small number of independent variables, but the output of a chaotic system is complex. It is not random, though. The sequence of measured values can be transformed into an object in a space - the phase space. This object is the phase space set, and some properties of the numbers are easier to analyze there.
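Here is a minimal sketch of that phase-space idea (my own illustration, not from the book): plot each measured value against the next one - the simplest delay-coordinate embedding. A chaotic sequence from the logistic map collapses onto a curve in this view, while a genuinely random sequence shows no such structure. Numerically, we can measure how far each pair (x_k, x_{k+1}) falls from the map's parabola:

```python
import random

def logistic_series(x0=0.3, n=500):
    """Deterministic chaotic sequence from x_{k+1} = 4 x_k (1 - x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

def max_distance_from_parabola(xs):
    """Largest vertical distance of the (x_k, x_{k+1}) pairs from the
    curve y = 4 x (1 - x) - the phase space object of the logistic map."""
    return max(abs(x1 - 4.0 * x0 * (1.0 - x0))
               for x0, x1 in zip(xs, xs[1:]))

chaotic = logistic_series()
random.seed(1)                                  # reproducible "noise"
noise = [random.random() for _ in range(501)]

print(max_distance_from_parabola(chaotic))  # essentially zero: structure
print(max_distance_from_parabola(noise))    # large: no structure at all
```

The raw time series of the two sequences look equally jumbled; only the phase-space view exposes the deterministic rule behind one of them.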

If there is an algorithm for a chaotic system and we run that algorithm with different starting conditions, we get different outcomes. This is the double pendulum. But that outcome is NOT random. It is deterministic, but different. The deterministic equations of motion for the double pendulum are simple algebra and trigonometry. The outcome depends on the starting conditions, but the output is NOT random. There is, however, an initial angle above which the motion becomes chaotic.
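That sensitivity to starting conditions can be shown with a one-variable stand-in for the double pendulum - the logistic map (my own illustration; the actual pendulum equations behave the same way but would need a numerical integrator):

```python
def orbit(x0, r=4.0, steps=50):
    """Iterate the deterministic rule x_{k+1} = r x_k (1 - x_k)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = orbit(0.2)
b = orbit(0.2 + 1e-9)   # perturb the start by one part in a billion

# Deterministic: the same start always reproduces the same sequence.
assert orbit(0.2) == orbit(0.2)

print(abs(a[5] - b[5]))   # still tiny: the orbits track each other early on
print(max(abs(x - y) for x, y in zip(a[40:], b[40:])))  # order one: separated
```

Rerun either orbit and you get the identical sequence every time - nothing random here - yet a billionth of a difference in the start produces completely different outcomes within a few dozen steps.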

The next book that came home is A Bee in a Cathedral. This is a book about using analogies to convey scientific information and its connection with other topics. Unlike the poor analogy of an untended garden as an emergent system - well, it is emergent, an emergent weed patch - this book uses analogies to convey understanding of things like scaling factors. Or a bowling ball on a rubber sheet as an analogy for general relativity and gravity. Or that unshuffling a deck of cards cannot be done no matter how many times you try - entropy in action. Or, an eye opener, that a troop of chimps has more genetic diversity than the entire human population, because our species is very young compared to chimps.

So What's the Point?

When you read something about complexity, about analogies, about a process or a method, and there is no reference to the source of the statement - a source that has itself been sourced - be suspicious. And, absolutely most important, check that those sources are peer reviewed in some way. This eliminates most populist approaches to solving complex problems.

Both books are worth reading if only to show how organized thought processes are built on more organized thought processes, whose underlying concepts - even if they are written in a populist style - can be directly traced to the underlying equations: the double pendulum to the Lagrangian of the equations of motion in a gravitational field.

The very complexity of the natural systems with which the epidemiologist is faced often makes it difficult to tell which of the correlations that he observes are biologically significant. This difficulty is not lessened by the fact that it is quite easy to invent hypotheses that would, if they were true, fit attractively into our puzzle. It is much less easy to determine whether they are true or not; and this step has been omitted with a surprising frequency.

- Topley, 1942, from the epigraph of Evolution of Virulence in an Experimental Bacteriophage System, PhD Thesis, Sharon Lee Messenger, 1997, University of Texas, Austin. (Reference provided by our biology-student son.)

So when there is a conjecture that agile software development is like a biological system or like a physical system, I wonder if the speaker truly understands what that means. This whole notion of adopting scientific analogies for simple - and many times simple-minded - applications to software development is very sporty business, if there is not a clear statement that the connection is purely "notional" and made for illustrative purposes only, having no connection whatsoever to reality.

From John Baez's blog. I've followed Dr. Baez as a mathematical physicist since his early days in m-brane theory (superstrings) and through my later, post-grad-school interest.

Having a nice "discussion" with a former neighbor who moved back to Connecticut - a patent attorney for a major pharmaceutical company. The discussion started around the impact of the leaked emails on the climate change debate, then went to "what is the real consensus" of science around climate change from human behavior.

Lots of back and forth regarding sources. Landed on a set of journal articles that cleared the air. Along the way came across a good site with many sources for the interested reader.

The flight avionics of the manned spacecraft portion is one of the programs our firm works on as Program Planning and Controls personnel.

The commitments needed to meet a launch date are a culture embedded in the manned space flight and launch vehicle business. These commitments start with a clear and concise description of what "done" looks like, the accomplishments needed to produce "done," and the criteria by which those accomplishments will be judged.

Add to this the continuous risk management process for every step, every week, every deliverable.
