Cliff,

We seem to be at a glass-half-empty/glass-half-full state of affairs.

Like I said, I think we are in general agreement on the individual issues at the high level. We appear to be disagreeing on the extent of applicability.

I would disagree that it isn't possible to determine what the potential futures look like. Again, you are falling into the trap Warden is arguing against: losing sight of the desired end state because we can't be sure exactly what will happen when we try to affect it.
We agree that a "picture of a desired end state" is the place to start with planning. The disagreement is in the degree to which "probabilities" are involved. It is the difference between what Karl Popper calls "propensities" - the actual physical interplay of dependent causes that results in a system responding to a set of inputs with outputs consistently falling within an observed set of frequencies - and the Bayesian notion of probability as a way to represent the degree of belief in the truth of a statement.

This gets into some esoterica of probability theory, but if you are going to rigorously pursue a scientific theory of war based on the perceived likelihood of the outcomes of a model built on a systems-theory decomposition of a set of dynamic relationships between entities (what Warden is calling a "system"), then you need to eventually "do the math" and apply the appropriate theoretical tools.

This is another aspect of the discussion I had with Warden that left me a bit put off: his desire to discuss "theory at the theoretical level" and not be troubled to dig into the implications of that theory, or the need to "roll up the sleeves" and find the devils in the details. That is also what Gen Mattis saw as a shortcoming of EBO proponents in general, and it has been a hallmark of all the attempts to implement EBO that I've seen.

In broad terms, keeping "theory theoretical" is OK - until someone says that we need to throw out our vocabulary and rethink our concepts of doing business. At that point there needs to be a "here is the evidence demonstrating the failure of the current system, and justifying the use of the proposed system." To quote a buddy inside the beltway: "The 'why wouldn't you if you could' argument will get you in the door, on a slow day, but won't get you a dime in funding..."

Take for example soil - it is a fairly complex system.
Soil is a complicated, but not complex, system from a material point of view. Its "mechanics of materials" properties are "propensities" - the distribution of weights you can support on samples of a given soil type is a result of effects caused by the physical interactions of the component particles in the soil. Variation is due to inhomogeneities in the distribution of components, not to relationship dynamics between the components.

You can 'understand soil' - i.e. determine how the state of a system involving loads placed on soil will change over time (prediction) - because the causal chains that produce effects can be modeled in a useful fashion. Soil will never suddenly behave like water, or like concrete, in an unexpected fashion.
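That notion of a "propensity" can be sketched in a few lines of Python. Everything here is invented for illustration - the material law and its coefficients are made up, not real soil mechanics - but it shows the shape of the claim: each sample's behavior is a deterministic consequence of its composition, and the observed spread comes only from inhomogeneity.

```python
import random

def bearing_capacity(clay_fraction):
    """Hypothetical deterministic material law (made-up coefficients)."""
    return 100.0 + 150.0 * clay_fraction  # capacity in kPa, say

def sample_capacities(n, seed=0):
    """Capacities of n samples of 'the same' soil type."""
    rng = random.Random(seed)
    # Inhomogeneity: the clay fraction drifts a little from sample to
    # sample; there are no "relationship dynamics" anywhere in the model.
    return [bearing_capacity(rng.uniform(0.2, 0.3)) for _ in range(n)]
```

Every sample lands in a stable band (here 130-145 kPa). The frequencies you observe are a propensity of the material itself, and no sample ever suddenly "behaves like water".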

That cannot be said of complex systems like nation states or other social structures.

Depends on how complex the system is and how good your data is.
No, actually that's not true. Take Conway's Game of Life, a cellular automaton 'simulator'. There is no way, regardless of how much data you can collect, to predict the future state of a game by any method except direct execution. In other words, it is a DETERMINISTIC system of irreducible complexity. You can't create a simpler 'model' of a Game of Life tableau than the tableau itself to discover the future state of the system from a given state. And while there are algorithmic tricks to save unneeded CPU cycles computing the nth state, there is no way to skip directly from the nth to the (n+m)th state without computing the states in between.

Now, since we do not know the "rules for real life", there is a fundamental "limit of knowledge". The future states of complex systems are not merely unknowable because we don't know enough information YET - theoretically knowable at some future time when we have enough information - they are fundamentally UNKNOWABLE in the absolute sense.

http://en.wikipedia.org/wiki/Conway's_Game_of_Life
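For the skeptical, the whole game fits in a few lines of Python. This is a minimal sketch (a wrap-around grid is assumed) that makes the "direct execution only" point concrete: run() has no choice but to grind through every intermediate generation.

```python
def step(grid):
    """Advance one generation of Conway's Game of Life (toroidal grid)."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, wrapping at the edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # Birth on exactly 3 neighbors; survival on 2 or 3.
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

def run(grid, generations):
    # No shortcut: to reach generation n you must compute 1 .. n-1 first.
    for _ in range(generations):
        grid = step(grid)
    return grid
```

A horizontal "blinker" (three live cells in a row) flips to vertical after one step and back to horizontal after two - but the only way the code can tell you that is by actually executing both steps.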


You are correct. But there are still potentially points where you can affect even a complex system.
This is where Bayesian methods get people into trouble. They think that Bayesian methods give you "probabilities" that are really the same as "propensities", and not simply "degrees of confidence in an outcome given your confidence in the prior probabilities". In other words, "probabilities" about the future state of systems analyzed using Bayesian methods have nothing to do with the actual system itself; they are simply measures of the implications of the confidence you have in your assumptions, carried through to a conclusion.

Or in other words, there is no physical theory that connects the future state to a prior state in a Bayesian (or certain types of Markov models - which are just chains of Bayesian models) system decomposition. (These are the types used in every actual implementation of a Systems Analysis tool for these "EBO-ish" problems.) It's simply a "belief chain" of how confident we are that a series of "results" will follow from a series of "effects". A few have actual "propensities" associated with them, but the vast majority are simply numbers pulled out of an analyst's butt, or worse - arrived at by the consensus of a BOGSAT because "well, there is no other way we can think of to do it".

Meaning, "we really have no fricken idea what the causal relationship between this effect and this result is, but gee wouldn't it be great if it existed, and because we don't know, we'll only give it a confidence of 72%".

If it is an exercise, it really doesn't matter, so to look good it gets raised to 94%.
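To make the arithmetic of such a "belief chain" concrete, here is a hedged sketch - the links and every number in them are invented for illustration, and no real tool is being quoted. Carrying the confidences through produces a tidy-looking probability that describes the analysts' beliefs, not the adversary.

```python
# Invented links: (effect, hoped-for result, analyst confidence).
# None of these numbers is a measured propensity of anything.
chain = [
    ("strike node A",       "power grid degrades",    0.80),
    ("power grid degrades", "C2 network falters",     0.72),
    ("C2 network falters",  "leadership capitulates", 0.60),
]

def chained_confidence(links):
    """Multiply the link confidences through to the end state."""
    p = 1.0
    for _effect, _result, conf in links:
        p *= conf
    return p

print(round(chained_confidence(chain), 4))  # 0.3456
```

The 0.3456 at the end is just 0.80 x 0.72 x 0.60 - a measure of how the assumptions compound, with no causal mechanism anywhere in the calculation.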

Yes, you can affect a complex system, but then what? If you can't assess the relationship between 'effect' and 'result', how does that help get you to the cool "picture of your end state"?

Additionally, it may be possible to pick COGs that are more predictable than others - obviously your level of confidence in your intel about a COG, and in its potential reaction to various effects, has to enter into the decision making.
This is another major pathology in current applications of systems theory to the real world. It has various forms: "you look for your wallet under the street light because you can see there, even though you know you heard it drop a block back in the dark", or "this is the thing we all agree on as a potential cause of this effect, so therefore it has a high probability of actually being a cause." The implication of that is that if enough people agree on something, it then physically manifests itself (which is something sports fans in particular are susceptible to).

And the most seductive: "Doing this worked 3 out of the last 9 times we tried, a better success rate than anything else, so let's do it again and assume it has a 33% chance of being effective." Try this with a coin: flip it 9 times and then assume the result is the "actual probability of the coin". You will lose your bar money every time...
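The coin version of the fallacy takes a few lines to demonstrate (a small sketch; the 9-flip count comes from the example above):

```python
import random

def observed_rate(flips=9, p=0.5, seed=None):
    """Frequency of heads seen in a short run of a coin with true propensity p."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p for _ in range(flips))
    return heads / flips

# Two structural problems with "3 of 9, so 33%":
# 1. Nine flips cannot even express the true rate of a fair coin - the
#    only observable frequencies are 0/9, 1/9, ..., 9/9, and 0.5 is not
#    among them, so EVERY 9-flip estimate of a fair coin is wrong.
possible = {i / 9 for i in range(10)}

# 2. Short-run frequencies swing wildly from trial to trial, so the
#    number you got last time tells you little about the propensity.
rates = [observed_rate(seed=s) for s in range(20)]
```

Run it a few times and the "success rate" bounces all over the place - which is exactly why betting your bar money on the last short run is a losing proposition.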

But we are more than happy to act like it works in "real life"... because "what is the alternative"...

Not doing something you KNOW TO BE WRONG would be a start.