
Thread: Mathematics of War

  1. #1 -- Presley Cannady, Council Member (Boston, MA)

    Quote Originally Posted by Ken White
    Sure have. My issue is with your "degree of accuracy." Adequate for your trade perhaps -- in my former trade that 'degree of accuracy' can easily get you killed.
    Agreed. So the question is whether or not the degree of accuracy in a quantitative model is more or less likely to get someone killed than not using it. In the medical profession--where life and death is equally, and probably more frequently, at question--the answer's obvious.

    I'm unsure who constitutes your "we" but I do know that I'm not wasting any money on pshrinks. Or Term Insurance. As for advertising -- some success stories, some abject failures and even the success stories didn't get nearly everyone...
    Certainly, but the outliers--or even a sizable deviation under certain circumstances (not life threatening, to be sure)--don't overwhelm the value gained from predicting behavior in the aggregate. Optimization doesn't guarantee perfection, only a good bet that practice that considers it is better than practice that doesn't.

    If one's ad campaign doesn't work out, few to no lives are likely lost -- if one's war campaign doesn't work out, many lives and perhaps more will be lost. You are familiar with these guys? LINK They and their founder have been at it since shortly after WW II.

    They and others have tried the numerate approach to war for years. None of those attempts ever really took hold. I think perhaps there's a message in that...
    Don't get me wrong. I'm the first to say that there's no evidence that the power law Gourley et al. have rediscovered will yield any valuable prescriptions. You can say the same about any number of aphorisms about violence--war is hell, whoever gets there with the mostest the firstest wins, guns don't kill people blah blah--all accurate and probably not all that helpful when faced with a real need to plan and execute.

    On the other hand, you can plainly see the value in quantitative methods in force flow planning, bridging, navigation, decision trees, acquisitions (jokes go here), etc. These methods should and do prove their worth the same way tradition does--by being tested under specific conditions time and time again and under fire. We generalize their lessons at our own risk.
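    For what it's worth, the power-law claim itself is easy to poke at numerically. Below is a minimal sketch (the synthetic "attack size" data and the Clauset-style maximum-likelihood estimator are my assumptions here, not Gourley's actual pipeline) of recovering a power-law exponent from event sizes:

```python
import math
import random

def fit_power_law_alpha(sizes, x_min):
    """Maximum-likelihood estimate of a continuous power-law exponent
    for event sizes >= x_min: alpha = 1 + n / sum(ln(x_i / x_min))."""
    tail = [x for x in sizes if x >= x_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / x_min) for x in tail)

# Synthetic event-size data drawn from a known power law (alpha = 2.5)
# via inverse-CDF sampling, standing in for casualty-per-event counts.
random.seed(0)
alpha_true = 2.5
x_min = 1.0
sizes = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
         for _ in range(50_000)]

print(round(fit_power_law_alpha(sizes, x_min), 2))  # close to 2.5
```

    Recovering the exponent from clean synthetic data is the easy part; the hard part, as the rest of this thread argues, is whether the fitted exponent prescribes anything.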
    PH Cannady
    Correlate Systems

  2. #2 -- Presley Cannady, Council Member (Boston, MA)

    Quote Originally Posted by Presley Cannady
    We generalize their lessons at our own risk.
    I should admit this is a weasel phrase. More often than I'd care to count, but not often enough to detract from the aggregate value gained, lessons drawn from quantitative methods applied to people are applicable only to the sample studied. You can have the most elegant model of conflicts from 1931 to 2009 and find out it has no predictive value whatsoever. I guess this is why so many evaluators will qualify their recommendations with pages-long "provided that such and such is this and that..." preliminaries. Happens in every industry.
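    The 1931-to-2009 hypothetical is essentially the overfitting problem, and the standard guard is out-of-sample testing: score a model only on data it never saw. A toy sketch (the synthetic series and both "models" below are purely illustrative assumptions):

```python
import random

random.seed(1)

# Synthetic yearly "conflict intensity" series: noise around a mean,
# standing in for the 1931-2009 record discussed in the post.
history = [100 + random.gauss(0, 10) for _ in range(79)]
train, holdout = history[:60], history[60:]

def mse(pred, actual):
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

# "Elegant" model: memorize the training years exactly -- zero error
# in sample, which tells you nothing about the future.
in_sample_memorized = mse(train, train)

# Honest baseline scored where it counts: on the holdout years.
mean = sum(train) / len(train)
out_of_sample_mean = mse([mean] * len(holdout), holdout)

print(in_sample_memorized, out_of_sample_mean)
```

    The memorizing model scores perfectly on the years it was built from and says nothing about the years it wasn't; only the holdout error is evidence of predictive value.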
    PH Cannady
    Correlate Systems

  3. #3 -- Ken White, Council Member (Florida)

    Quote Originally Posted by Presley Cannady
    "Agreed. So the question is whether or not the degree of accuracy in a quantitative model is more or less likely to get someone killed than not using it. In the medical profession--where life and death is equally, and probably more frequently, at question--the answer's obvious."
    My observation has been that the success rate of good intuitive commanders is about 75%; that of their more numerately inclined peers is about 35-40%. My observation has also been that Medical Doctors are Like Economists; if you don't like what one says, ask another. Had a Grandfather who was a Doctor. He contended after over 50 years of practicing medicine that it was more art than science.

    My observation of the Medicos leads me to believe that their numbers probably would roughly coincide with my combat commanders...
    "...doesn't overwhelm the value gained from predicting behavior in the aggregate. Optimization doesn't guarantee perfection, only a good bet that practice that considers it is better than practice that doesn't."
    I agree with that for many actions and activities. I do not agree that it is correct when applied to warfare -- or Blackjack -- by most people.
    "..blah blah--all accurate and probably not all that helpful when faced with a real need to plan and execute.
    Blah blah is never helpful in anything. Aphorisms and metaphors have their place. So do numbers and models. Warfare mostly is not one of those places.
    On the other hand, you can plainly see the value in quantitative methods in force flow planning, bridging, navigation, decision trees, acquisitions (jokes go here), etc. These methods should and do prove their worth the same way tradition does--by being tested under specific conditions time and time again and under fire. We generalize their lessons at our own risk.
    Having undergone the pain of coping with 'force flow planning' on numerous occasions, I can tell you that it usually gets totally screwed up -- frequently but not always due to human error -- and then a human has to unstick it. Bridging is an Engineering endeavor and obviously needs several skills to do it efficiently -- not so many are needed to do it effectively. I've seen a number of matrices and decision trees fail totally -- usually at some cost in pain and suffering. Acquisitions, as you say...

    Actually, very few things are "tested under specific conditions time and time again and under fire." That's because almost every effort attempted under fire is subject to the vagaries and variances of the mission, the particular enemy at a given point and time, the terrain and the type or lack of vegetation thereon, the troops one has available (and even with the same troops exactly, time will affect their abilities and effectiveness), the time of year and of day as well as that available and in any situation, not just COIN but mid level or major war, civilian considerations (and that can include own as well as international political constraints, like Rules of Engagement, media coverage and such). Throw in human foibles and you have too many variables so you will build a model upon which you cannot rely above the 50% level -- I like my fights to have better odds and that can usually be arranged.

    BTW, don't conflate tradition and experience -- or principle and application.

  4. #4 -- jmm99, Council Member: Mathematical models & reality

    The attached attack was sand-tabled and rehearsed for a couple of weeks. In the event, tactical bombing failed to dent Siegfried - so also div & regt arty & 4.2 mortars. Charlie (my dad's company) & an attached MG platoon from Dog took most of their casualties in the first hour from pre-registered Jerry arty & mortars.

    So, Charlie stalled by the RR tracks, until a few guys took out the blocking pillbox - and a couple of platoon leaders put together a composite platoon which was at least able to provide Able & Baker with supporting fires. All of C's assault squads (the guys with explosives & flamethrowers) were lost to the Jerry barrage.

    A & B assaulted as planned - and all pillboxes were reduced by day's end. So, 1/117 was the can opener that opened the gap for the rest of the 30ID and 2AD.

    Are there mathematical models for the tip of the spear ?

    Would they predict the casualties sustained ?
    Attached Images

  5. #5 -- William F. Owen, Council Member (The State of Partachia, at the eastern end of the Mediterranean)

    Quote Originally Posted by jmm99
    Are there mathematical models for the tip of the spear ?

    Would they predict the casualties sustained ?
    1. Very interesting example. Shake your Dad's hand for me. I mean it.

    2. To answer your question, no, there are not (that I am aware of) in terms of proven reliability. There are general approximations for certain conflicts at certain times, but that doesn't tell you much. The Soviets had extensive and comprehensive sets of data used for planning, but there is no way of knowing how accurate or useful they were.
    A while ago I spoke to some old US Army Colonel who told me that some work done with modern simulations shows them to be generally accurate -- who knows? Personally I think it's an area of study with little merit.
    Last edited by davidbfpo; 05-12-2009 at 03:07 PM. Reason: Spelling.
    Infinity Journal "I don't care if this works in practice. I want to see it work in theory!"

    - The job of the British Army out here is to kill or capture Communist Terrorists in Malaya.
    - If we can double the ratio of kills per contact, we will soon put an end to the shooting in Malaya.
    Sir Gerald Templer, foreword to the "Conduct of Anti-Terrorist Operations in Malaya," 1958 Edition

  6. #6 -- Tom Odom, Council Member (DeRidder, LA)

    Are there mathematical models for the tip of the spear ?

    Would they predict the casualties sustained ?
    Mike,

    Perhaps, perhaps not...

    What they would not model was the courage and adaptability of C Company after its losses. The human element is the free radical.

    Tom

  7. #7 -- jmm99, Council Member: For the present ...

    I'll have to pass on this:

    from Wilf
    Shake your Dad's hand for me.
    but someday I will. In the meantime, Charles Owen can do the honors.

    Dad died in 1978 from his final heart attack, when a sliver of German metal (which couldn't be removed, at least back then) worked its way into his heart's nerve bundle.

    from Tom
    The human element is the free radical.
    So very, very true.

    Thanks guys,

    Cheers

    Mike

    ----------------------------
    PS For those who might be interested in the larger picture (the northern wing of the Aachen envelopment) - which probably is more amenable to modeling, see this map. The 1/117 attack sector (ca. 600m) is the most northern double arrow pattern in the top left-hand corner. My dad was hit (18 Oct) south of the town of Alsdorf in the southeast part of the map.

    The Wehrmacht map, showing both the northern and southern (1ID) wings of the envelopment is here. Patrols of the 30ID and 1ID met southwest of Wurselen - 16 Oct 1944.
    Last edited by jmm99; 05-12-2009 at 12:03 PM. Reason: add PS and links

  8. #8 -- Presley Cannady, Council Member (Boston, MA)

    Quote Originally Posted by Ken White
    My observation has been that the success rate of good intuitive commanders is about 75%; that of their more numerately inclined peers is about 35-40%. My observation has also been that Medical Doctors are Like Economists; if you don't like what one says, ask another. Had a Grandfather who was a Doctor. He contended after over 50 years of practicing medicine that it was more art than science.
    At one point the medical profession was more art than science. Men also generally lived only into their late 40s, and bacterial infections were considerably more fatal. And since few if any warfighting prescriptions following from quantitative modeling clearly contradict long experience, I'm not surprised to find that intuition performs so well. As for the performance of the more numerically inclined, I'd say this: the bean counter is not the model and vice versa.

    My observation of the Medicos leads me to believe that their numbers probably would roughly coincide with my combat commanders... I agree with that for many actions and activities. I do not agree that it is correct when applied to warfare -- or Blackjack -- by most people.
    Most people don't understand the mathematics behind Blackjack. When they do, they make a book and a movie about it.
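    The Blackjack math is a nice concrete case, since the relevant numbers are exactly computable. A sketch under the usual infinite-deck simplification (my assumption, not a full basic-strategy table): the chance that one more card busts a given hard total.

```python
from fractions import Fraction

# Infinite-deck approximation: each draw is 2-9 with prob 1/13 apiece,
# ten-valued with prob 4/13, ace with prob 1/13 (counted as 1 here,
# so an ace never busts a hard total).
def bust_probability(hard_total):
    """Chance the next card busts a hard total (single draw, no soft hands)."""
    room = 21 - hard_total
    p = Fraction(0)
    if room < 10:
        p += Fraction(4, 13)          # any ten-valued card busts
    for v in range(2, 10):            # ranks 2 through 9
        if v > room:
            p += Fraction(1, 13)
    return p

for t in (12, 14, 16):
    print(t, bust_probability(t))    # 4/13, 6/13, 8/13
```

    Hard 16 busts on the next card 8 times in 13 -- exactly the kind of fixed, testable number that separates the game from the fog Ken describes in combat.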

    Blah blah is never helpful in anything. Aphorisms and metaphors have their place. So do numbers and models. Warfare mostly is not one of those places. Having undergone the pain of coping with 'force flow planning' on numerous occasions, I can tell you that it usually gets totally screwed up -- frequently but not always due to human error -- and then a human has to unstick it. Bridging is an Engineering endeavor and obviously needs several skills to do it efficiently -- not so many are needed to do it effectively. I've seen a number of matrices and decision trees fail totally -- usually at some cost in pain and suffering. Acquisitions, as you say...
    And yet for more than half a century modern warfare has embraced quantitative methods in all these fields and more. A fair assessment of the success math has in the field would compare the performance of one generation of warfighters to its predecessors.

    Actually, very few things are "tested under specific conditions time and time again and under fire." That's because almost every effort attempted under fire is subject to the vagaries and variances of the mission, the particular enemy at a given point and time, the terrain and the type or lack of vegetation thereon, the troops one has available (and even with the same troops exactly, time will affect their abilities and effectiveness), the time of year and of day as well as that available and in any situation, not just COIN but mid level or major war, civilian considerations (and that can include own as well as international political constraints, like Rules of Engagement, media coverage and such). Throw in human foibles and you have too many variables so you will build a model upon which you cannot rely above the 50% level -- I like my fights to have better odds and that can usually be arranged.
    Vagary and variance are terms of art in stochastic modeling. A model does not yield an analytically exact answer, it specifies a distribution of probabilities within a given domain. This tells us two things--one, models are highly conditional on their subject samples and two, any modeler risks discovering variance so wide that statistically significant relationships are impossible to identify. Readily conceded. The question is whether or not modelers are doomed to find only either statistically useless models or useful ones constrained to useless domains.

    As for the number of variables, climate change models handle orders and orders of magnitude more variables than those you've listed, whether counted in econometrics or broader military science. The number of inputs is irrelevant if techniques to crunch them exist.

    BTW, don't conflate tradition and experience -- or principle and application.
    I don't on the latter, but on the former I see no difference. Neither tradition nor experience, as terms, demands unwavering adherence -- simply deference and consideration.
    PH Cannady
    Correlate Systems

  9. #9 -- Ken White, Council Member (Florida): It still is...

    Quote Originally Posted by Presley Cannady
    At one point the medical profession was more art than science.
    Because all the science merely provides more information to fuel a better guess. Sometimes.
    ...the bean counter is not the model and vice versa.
    True but he often pushes his model in spite of knowing its flaws -- pride of authorship or ownership is a terrible thing.
    And yet for more than half a century modern warfare has embraced quantitative methods in all these fields and more. A fair assessment of the success math has in the field would compare the performance of one generation of warfighters to its predecessors.
    I think if you give that a few seconds' thought and refresh your History cells, you may not really want to go there. Put another way, how well has that worked out for us?
    The question is whether or not modelers are doomed to find only either statistically useless models or useful ones constrained to useless domains.
    You do know that all of our disagreement really revolves around the unconstrained application of metrics, matrices and modeling -- the three 'M's (Good copy, bad practices for warfare) to war. I have no quarrel with the utility and even necessity in many fields -- to include building weapons and supporting war fighters. I do not urge they not be used in actual combat operations but do urge great caution in that use.
    I don't on the latter, but on the former I see no difference. Neither tradition nor experience as terms demand unwavering adherence, simply deference and consideration.
    True -- and exactly the same conditions apply to math and models.

    What all you believers forget is that humans presented with a bunch of numbers that prove something tend to accept them because that means they don't have to think about the problem. That's the danger that most math-centric folks do not think about, much less care to mention or guard against...

    I go back to what I said earlier. Nothing you've said indicates that I was incorrect:

    ""human interaction will always show patterns -- and different modelers will draw different patterns from the same data. You cannot put people in boxes IMO; you have to deal with the person or group as they are and as they constantly shift and change.""

    You have essentially said that's correct.

    ""Well, you can put 'em in boxes and rely on trends, I suppose. Seen a lot of folks do some fascinating variations on that. None successfully, as I recall...""

    I have watched the US Army try many numerate / modeling efforts and been the victim of attempts to apply templates, matrices and decision trees to combat -- all failed miserably. Whether the model was wrong or the failure came through human error in application, they are dangerous.

    I go back to my first comment on this thread (which was not don't use them but) -- "People and numbers don't mix well."

  10. #10 -- Presley Cannady, Council Member (Boston, MA)

    Quote Originally Posted by Ken White
    Because all the science merely provides more information to fuel a better guess.
    Science can certainly crunch more information into knowledge by virtue of its formalism, but it also can do so more rigorously due to its predilection for continuous testing, integration and evolution.

    Sometimes. True but he often pushes his model in spite of knowing its flaws -- pride of authorship or ownership is a terrible thing.
    Falling in love with your own research is dangerous, definitely, but more often than not we're talking about people applying other people's innovations incorrectly--often disastrously. Let's take David Li--the Chinese national who first thought to price CDOs using Gaussian copulas. He's on record as early as 2005 pointing out that financiers who applied it did so despite the fact the model lacked theoretical grounding for credit portfolios. We can't even blame the model in this case, because it's unclear whether the correlation itself or the assumptions folks at Lehman and Citigroup made about their credit portfolios is at fault. Either way, we should point out that the Gaussian copula is one in literally tens of thousands of models in hundreds of distribution classes that financiers use every day, and even though Lehman and Citigroup mined the mathematical trove they had very different, proprietary implementations at their disposal.
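    For readers unfamiliar with the construction: a Gaussian copula simply couples two marginal distributions through correlated normals. A bare-bones two-name sketch (the default probabilities and correlation below are illustrative assumptions, nothing like Li's full CDO machinery):

```python
import random
from statistics import NormalDist  # stdlib, Python 3.8+

# Two-name Gaussian copula: draw correlated standard normals, map each
# through the normal CDF to get correlated uniforms, then threshold the
# uniforms against each name's marginal default probability.
def correlated_defaults(rho, p1, p2, n, seed=0):
    """Monte Carlo estimate of P(both names default)."""
    rng = random.Random(seed)
    nd = NormalDist()
    both = 0
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)
        u1, u2 = nd.cdf(z1), nd.cdf(z2)
        if u1 < p1 and u2 < p2:
            both += 1
    return both / n

# With rho = 0, P(both default) = p1 * p2 = 0.01; positive correlation
# pushes the joint default probability well above that.
print(correlated_defaults(0.0, 0.1, 0.1, 100_000))
print(correlated_defaults(0.8, 0.1, 0.1, 100_000))
```

    The machinery is trivial; the fatal assumption in practice was the correlation input itself, which is exactly the point about models being applied outside their grounding.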

    I think if you give that a few seconds' thought and refresh your History cells, you may not really want to go there. Put another way, how well has that worked out for us? You do know that all of our disagreement really revolves around the unconstrained application of metrics, matrices and modeling -- the three 'M's (Good copy, bad practices for warfare) to war. I have no quarrel with the utility and even necessity in many fields -- to include building weapons and supporting war fighters. I do not urge they not be used in actual combat operations but do urge great caution in that use. True -- and exactly the same conditions apply to math and models.
    If that's the case, I don't think we have a disagreement here. I'd place more emphasis on the value of investigating the use of combat models, but I do not urge any particular set of models or sign off on their prescriptions. More to the topic's point, I do not see any value whatsoever yet in Gourley's work. Noting that war has aggregate behavior described by a power law is a fancy way of saying (in general) "big explosions are expensive, little ones not so much." I think your response would be "duh."

    What all you believers forget is that humans presented with a bunch of numbers that prove something tend to accept them because that means they don't have to think about the problem. That's the danger that most math centric folks do not think about much less care to mention or guard against...
    I think that's a risk we take whenever we give up our faculties to authority, whether to math or to experience. To take a model and blindly apply it, without examining the math, its underlying assumptions, the facts on the ground, etc., is about as insane as trusting in the experience of someone simply because somebody told you he was good. Using a model within its proper domain, cognizant of its limitations, reasonably confident that you've fed it all the facts it needs to compute, is akin to (mis)trusting the experience of someone you've seen work with your own eyes. Science doesn't spare you judiciousness, it's only supposed to support it.

    I go back to what I said earlier. Nothing you've said indicates that I was incorrect:

    ""human interaction will always show patterns -- and different modelers will draw different patterns from the same data. You cannot put people in boxes IMO; you have to deal with the person or group as they are and as they constantly shift and change.""

    You have essentially said that's correct.
    Have to be careful here, because I do strongly disagree with that statement. We've agreed that using models carries risks, especially when it comes to improperly applying them. I do not, however, agree that modelers arrive at different conclusions based on the same data. That's not a matter of faith, it's a mathematical fact. Given some data, there's a finite number of functions describing them. Those functions have to be homomorphic. If they weren't, then the data underlying them has to be different. That the data concerns human behavior is irrelevant.

    Furthermore, I do believe (or should say I have no reason to disbelieve the notion that) human behavior can be quantified. I don't believe in universal quantification, or even that there's a general rule that models can transform into one another. The present evidence suggests that models describing various bits and pieces of human behavior at any scale should be various and highly conditional. They will almost certainly be probabilistic. This is not a problem for me.

    ""Well, you can put 'em in boxes and rely on trends, I suppose. Seen a lot of folks do some fascinating variations on that. None successfully, as I recall...""

    I have watched the US Army try many numerate / modeling efforts and been the victim of attempts to apply templates, matrices and decision trees to combat -- all failed miserably. Whether the model was wrong or the failure came through human error in application, they are dangerous.
    Would you say this was the case at all scales of combat? And what time frame are we talking about for these observations? I was under the impression that modeling's been used fairly frequently in campaign analysis in recent decades. I'm not privy to the results of exercises, and data on conventional land-air operations is infrequent.

    I go back to my first comment on this thread (which was not don't use them but) -- "People and numbers don't mix well."
    I don't disagree with any particular point you've made, due either to the obvious power behind it or admitted lack of knowledge (I have no combat experience and haven't even the benefit of others' experience outside of this forum). But the general aphorism that "people and numbers" don't mix well is disproven, once again, by a most obvious example: the medical profession.
    Last edited by Presley Cannady; 05-12-2009 at 10:55 PM.
    PH Cannady
    Correlate Systems

  11. #11 -- Ken White, Council Member (Florida): Thanks for the response.

    Quote Originally Posted by Presley Cannady
    Falling in love with your own research is dangerous, definitely, but more often than not we're talking about people applying other people's innovations incorrectly--often disastrously...
    I think that's a risk we take whenever we give up our faculties to authority, whether to math or to experience. To take a model and blindly apply it, without examining the math, its underlying assumptions, the facts on the ground, etc., is about as insane as trusting in the experience of someone simply because somebody told you he was good...
    Therein lies the rub, as they say...

    In war -- not just in combat but in preparation as well -- the skills to do that rudimentary analysis may not be in the right place at the right time. Time will always be detrimental to a reasoned analysis. I totally agree that the most common problem is misapplication of data or models, but my point is that war will force such errors far more often than not. Therefore considerable caution should be taken in their development and use -- and it is not...
    I do not, however, agree that modelers arrive at different conclusions based on the same data. That's not a matter of faith, it's a mathematical fact. Given some data, there's a finite number of functions describing them. Those functions have to be homomorphic. If they weren't, then the data underlying them has to be different. That the data concerns human behavior is irrelevant.
    Ah yes, I'm reminded of the famous Lancet study of Iraqi deaths in the war...

    Not precisely the same thing, but misuse of numbers is not unknown, deliberate or inadvertent. Trust but verify is good -- if you have time...

    The problem, BTW, with that study was that impeccable math was skewed terribly by very poor and dishonest data collection, and thus GIGO occurred.
    Furthermore, I do believe (or should say I have no reason to disbelieve the notion that) human behavior can be quantified. ... They will almost certainly be probabilistic. This is not a problem for me.
    Understand and agree but it can create problems with the carelessly accepting and less numerate or aware.
    Would you say this was the case at all scales of combat? And what time frame are we talking about for these observations? I was under the impression the modeling's been used fairly frequently in campaign analysis in recent decades. I'm not privy to the results of exercises, and data on conventional land-air operations is infrequent.
    Up to the operational level for a great many, for virtually all at Tactical levels up to and including Division. All during the period 1949 until I retired in 1995 for the second time.
    But the general aphorism that "people and numbers" don't mix well is disproven, once again, by a most obvious example: the medical profession.
    Heh. We are two modelers presented roughly the same data and arriving at different conclusions.

  12. #12 -- Presley Cannady, Council Member (Boston, MA)

    Quote Originally Posted by Ken White
    Therein lies the rub, as they say...

    In war -- not just in combat but in preparation as well -- the skills to do that rudimentary analysis may not be in the right place at the right time. Time will always be detrimental to a reasoned analysis. I totally agree that the most common problem is misapplication of data or models, but my point is that war will force such errors far more often than not. Therefore considerable caution should be taken in their development and use -- and it is not...
    I wouldn't go that far. Combat computation is certainly not the norm at the infantry company scale, but it's made its way to the battalion level. Its applications in the Air Force and Navy stretch back to almost immediately after the end of World War II. Nor is modeling static. Both Navy and Air Force (don't know about Army or the Corps) have dozens of active programs refining and, when necessary, replacing tools already in the field.

    Ah yes, I'm reminded of the famous Lancet study of Iraqi deaths in the war... Not precisely the same thing, but misuse of numbers is not unknown, deliberate or inadvertent. Trust but verify is good -- if you have time... The problem, BTW, with that study was that impeccable math was skewed terribly by very poor and dishonest data collection, and thus GIGO occurred.

    Setting aside the politics surrounding it, both Lancet studies were severely criticized on the merits. For one, the cluster size was very small compared to, say, the UN household survey ostensibly studying the same issue; the two reports were off by an order of magnitude. Therefore, there is no conclusive epidemiology about excess mortality due to combat, let alone due to Coalition arms. This is not a criticism of modeling, but in fact a virtue of it. Being able to demonstrate sensitivity to inputs, by which we can consider or disregard specific models, is something to be desired. I feel this is similar to the "plans v. planning" distinction.

    And there's always time. You don't suffer from not pushing a model into service before it matures; you just don't gain any benefit from it. No need to cry over what you simply don't have.

    Finally, models aren't alone or even particularly special in their vulnerability to garbage input. A case has been made, in this forum no doubt, that collection, dissemination, and acceptance by the stakeholders based on no modeling whatsoever contributed to what many view as a misadventure in Iraq.
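    The cluster-size criticism has a standard quantitative form: the design effect, which converts a clustered sample into its simple-random-sample equivalent. A sketch with purely illustrative numbers (the cluster counts and intra-cluster correlation below are my assumptions, not the actual Lancet or UN figures):

```python
# Standard survey-methods approximation: cluster sampling inflates the
# variance of an estimate by the design effect
#     DEFF = 1 + (m - 1) * icc
# where m is the cluster size and icc the intra-cluster correlation.
def design_effect(cluster_size, icc):
    return 1 + (cluster_size - 1) * icc

def effective_n(n_total, cluster_size, icc):
    """Sample size a simple random sample would need to match."""
    return n_total / design_effect(cluster_size, icc)

# Illustrative only: 50 clusters of 40 households with a mortality ICC
# of 0.05 behave like a simple random sample of roughly 680 households,
# not 2,000 -- hence much wider confidence intervals than the raw count
# suggests.
n = 50 * 40
print(round(effective_n(n, 40, 0.05)))  # 678
```

    Because mortality clusters geographically (a shelled neighborhood is not forty independent observations), the nominal sample overstates the information collected, which is one mechanical reason two surveys of the "same" population can land an order of magnitude apart.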

    Understand and agree but it can create problems with the carelessly accepting and less numerate or aware. Up to the operational level for a great many, for virtually all at Tactical levels up to and including Division. All during the period 1949 until I retired in 1995 for the second time. Heh. We are two modelers presented roughly the same data and arriving at different conclusions.
    Ah, but that data is sampled, and that's the key word. If you have a dataset that includes say the explosive tonnage of munitions expended and I have one that simply goes by the weight, our data sets are inevitably different. If that's the only difference, we'll find our results parallel but differ in magnitude. We can't even guarantee that if our collection is littered with completely unrelated classes of observables. Case in point, the Lancet studies v. the UN survey.

    1949 to 1995? Jesus. Do they throw in frequent flier miles for the second time around?
    PH Cannady
    Correlate Systems
