The debate over how effects should influence our thinking about warfare is fundamentally a philosophical (epistemological) one. Unfortunately, some seemingly semantic arguments over "effects-based thinking" vs "effects-based operations" mask a much more fundamental debate over philosophical worldview. This is the same argument currently being waged in scientific circles; one characterization is the "physics-based" worldview vs the "biology-based" worldview.
What this boils down to is a disagreement over what is "knowable" and how we go about "knowing". Can we break everything into components (analyze) to understand how the world works, or must we also understand how some "wholes are greater than the sum of their parts" and conceptualize how things work together (synthesize)?
This may seem a long way from warfighting effects - but TheLapsedPacifist dives in head first on the "physics-based" logical positivist side with the statement:
Chaotic and complex systems can be analyzed, their behavior can be predicted. This is a fact –...
Only if you are a logical positivist and its underlying assumptions of "knowability" are true - that you can indeed understand all you need to about how things work by breaking them apart to see how the pieces operate (analyze) and then just aggregating them back together.
Prediction requires an understanding of causes and effects. So this statement "is a fact" if and only if one has the required understanding of the relationship between cause and effect. Logical positivism (and related empiricism) is based on a "glass is half full" assumption that we can only say what we know, and can't say what we can't know (because we may discover new knowledge that allows us to understand what we previously thought was unknowable).
The use of weather as an example of predictability is also an example of what many who question the optimism of logical positivism point to as a counter-example. Yes, we can predict the weather a few hours ahead for a given location quite well, and we can 'forecast' (ie set probabilities on a range of possible outcomes - different from prediction) a few days out. But what hope is there of predicting the weather in your backyard 1 million years from now? Those working on complex systems theory and philosophy would say that there is a "predictability horizon" beyond which we can't know.
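To make the "predictability horizon" concrete, here is a toy illustration - my own sketch, not from any of the papers or posts discussed - using the logistic map, a textbook chaotic system. Two runs that start a hair apart agree for a while, then diverge completely:

```python
# Toy illustration of a "predictability horizon" using the logistic map,
# a textbook chaotic system: x -> r*x*(1-x), fully chaotic at r = 4.0.

def logistic_trajectory(x0, r=4.0, steps=80):
    """Iterate the logistic map from x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)            # the "true" initial condition
b = logistic_trajectory(0.3 + 1e-10)    # same, with a tiny measurement error

# First step at which the two forecasts disagree noticeably.
horizon = next((i for i, (x, y) in enumerate(zip(a, b)) if abs(x - y) > 0.1),
               None)
print(f"forecasts agree for roughly {horizon} steps, then diverge")
```

The error roughly doubles each step, so shrinking the initial measurement error by a factor of a thousand buys only about ten more steps of useful prediction - which is exactly the skeptics' problem with "the only thing preventing accurate prediction is data".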
So how do we know where the "predictability horizon" is for, say, a military campaign? We know we don't know, but we don't know if it is even knowable - because to know it is to be able to predict up to it, and to somehow know that a prediction just past it is false.
If we believe that we can predict the "predictability horizon" for a complex system - or we don't believe that such a thing exists - then indeed it would logically follow that chaotic and complex systems can be analyzed and predicted. Theoretically it would be possible to predict the weather in my back yard 1 million years from now; we just lack data of sufficient granularity. That is just what the EBO, ONA and SOSA advocates say about adversaries "as systems" - the only thing preventing accurate prediction is data.
What we are learning about complex systems appears to be challenging that worldview. The physics-based worldview says that if you understand the fundamental building blocks of nature, you can aggregate them together, establish all the causal chains that define the effects they produce, and understand nature. Deus ex machina. Even Einstein didn't think "god played dice with the universe" - yet the implications of quantum mechanics put a chink in the notion of a possibly unimaginably complicated, but not provably unknowable, universe. Increasingly, physicists are finding that there seems to be an infinite regression of "fundamental particles", existing in an infinite regression of dimensions. The universe increasingly appears "fractal" - some even say "holographic" - in its construction. If there is no "fundamental building block", how do you win the knowledge game by breaking things down (analyzing) ad infinitum?
Just as relativity came in to address where Newtonian, mechanistic physics failed, biology-inspired conceptualizations of how the universe works are increasingly coalescing. Truly complex systems are indeed more than the sum of their parts, and the implication is that truly complex systems may not be analyzable and may not be predictable.
TheLapsedPacifist writes:
2. The paper asserts that the nature of warfare is unknowable...that war is a chaotic and complex system, you can’t predict second or third order effects. Therefore war is impenetrable to scrutiny via scientific approach ergo EBO is fundamentally flawed. Now I can disagree with this, it’s a fallacy.
Not if what we think we are learning about complex systems stands up to be true. There are deterministic complex systems one can create that are unpredictable and unanalyzable. Cellular automata like Conway's Game of Life are a good simple example: there is no way to predict the future state of a system of cellular automata except by "executing the ruleset", ie the rules constitute the simplest algorithm for finding a future state - there is no simpler "model" that allows a shortcut, which is the essence of a "predictive model". In other words, "the earth itself is the only absolutely accurate map of our planet".
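A quick sketch of the Game of Life point (my own minimal implementation, for illustration only): the whole ruleset fits in a dozen lines, yet in general the only way to learn the grid's state N generations ahead is to run those lines N times. The glider below is one of the few patterns whose future is "known" - and it is known precisely because people ran the rules and observed that it repeats, shifted, every four generations.

```python
# Minimal Conway's Game of Life on an unbounded grid of (x, y) cells.
from itertools import product

def step(live):
    """One Game of Life generation. `live` is a set of live (x, y) cells."""
    neighbors = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                neighbors[cell] = neighbors.get(cell, 0) + 1
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in neighbors.items()
            if n == 3 or (n == 2 and c in live)}

# A glider: running the rules 4 times reproduces it shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

For arbitrary starting patterns no such shortcut is known - "executing the ruleset" is the model.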
If we believe Gödel's incompleteness theorem, the implications for more complex rulesets with indeterminism are bleaker still. We can't say "impossible", because we 'don't know what we don't know', so the debate moves out of the theoretical and into the useful. As my sig line says, "All models are wrong" - meaning no predictions are perfect - but some models, hence their predictions, are useful. So how do we determine the usefulness of prediction?
Understanding that they are all to some extent wrong is the first step. That means we can never have accurate initial conditions, and ultimately will have a "predictability horizon" - at which we can only make an educated guess. Enter the pragmatists into the philosophy of science: science is not about "truth", which appears, if not unachievable, at best a long, long, long way off. In the meantime (or as a basic epistemological stance), let's not worry about truth in science, but usefulness.
As for second or third order effects, "predict" is a strong word; it would have been preferable for EBO to be less gung-ho about this and instead look to "identify" and "anticipate" second or third order effects and develop branches and sequels to deal with them.
This gets to where proponents of EBO have been their own worst enemy. The notion that we can "know what the enemy is going to do before he does", and other claims of ONA in particular that are tantamount to precognition, are such an intellectual reach that one wonders how they ever got into our concept documents. "Usefulness" would have been good enough, but in their zeal to carry the "physics-based" worldview to its logical conclusion they take the deus ex machina to extremes not seen since the days of "epicycles of celestial spheres". The problem is that in order to "deal with" something you have to know it is a possible outcome - at least a form of forecasting, but one that requires identification of specific chains of events and implies reducibility of the causal network. More on that...
Wrap all this mumbo jumbo up and you have the school that TheLapsedPacifist and many others subscribe to: that the ideas of EBO, ONA and SOSA may be "data-challenged" but represent a goal to strive for - one of limited 'usefulness' today that will become increasingly 'useful' as we understand it more completely and find ways to acquire the required data.
In the other corner you have Gen Mattis and the EBO critics, who have an intuition that, at best, the day when we will have the required data (assuming the above school is correct in its assumptions) is so far off, and the relative 'usefulness' so marginal, that it's a waste of time to pursue; at worst, the degree of indeterminacy in the basic data, together with an indecipherable nexus of causes and effects in all but the most academic SOSA problems (like power grids and logistics chains), means that the promises of EBO, ONA, and SOSA are in the realm of the "not knowable".
The "trajectory of knowledge", by my analysis, favors Gen Mattis and the skeptics. In the systems we are interested in, the complex causal networks are irreducible by analysis and thus are not usefully predictable. TRADOC's recent pamphlet "Commander's Appreciation and Campaign Design",
http://www.tradoc.army.mil/tpubs/pams/p525-5-500.pdf
mentioned in Gen Mattis' memo, takes the term "wicked problems" from Barry Watts
http://www.csbaonline.org/4Publicati...t_Training.pdf
and discusses the implications of this for military planning. It draws the fundamental distinction between cases where the engineering-oriented notions of EBO, ONA, and SOSA work (simple to extremely complicated problems of limited, 'reducible' complexity) and cases where there are no "scientific solutions" (irreducible complexity), where it takes operational art and a design-based (some might argue biologically inspired) plan of attack.
The bottom line is: "effects-based thinking" from the design-based or biological worldview is likely "useful". Effects-based operations, ONA and SOSA - derived from a physics-based, engineering-inspired worldview - are at best far in our future, and at worst not possible to make 'useful'.