
Thread: Mathematics of War

  1. #1
    Council Member Ken White
    Join Date: May 2007 | Location: Florida | Posts: 8,060

    It still is...

    Quote Originally Posted by Presley Cannady
    At one point the medical profession was more art than science.
    Because all the science merely provides more information to fuel a better guess. Sometimes.
    ...the bean counter is not the model and vice versa.
    True, but he often pushes his model in spite of knowing its flaws -- pride of authorship or ownership is a terrible thing.
    And yet for more than half a century modern warfare has embraced quantitative methods in all these fields and more. A fair assessment of the success math has in the field would compare the performance of one generation of warfighters to its predecessors.
    I think if you give that a few seconds' thought and refresh your History cells, you may not really want to go there. Put another way, how well has that worked out for us?
    The question is whether or not modelers are doomed to find only either statistically useless models or useful ones constrained to useless domains.
    You do know that all of our disagreement really revolves around the unconstrained application of metrics, matrices and modeling -- the three 'M's (good copy, bad practice for warfare) -- to war. I have no quarrel with their utility and even necessity in many fields -- to include building weapons and supporting war fighters. I do not urge that they not be used in actual combat operations, but I do urge great caution in that use.
    I don't on the latter, but on the former I see no difference. Neither tradition nor experience as terms demand unwavering adherence, simply deference and consideration.
    True -- and exactly the same conditions apply to math and models.

    What all you believers forget is that humans presented with a bunch of numbers that prove something tend to accept them, because that means they don't have to think about the problem. That's the danger that most math-centric folks do not think about, much less care to mention or guard against...

    I go back to what I said earlier. Nothing you've said indicates that I was incorrect:

    ""human interaction will always show patterns -- and different modelers will draw different patterns from the same data. You cannot put people in boxes IMO; you have to deal with the person or group as they are and as they constantly shift and change.""

    You have essentially said that's correct.

    ""Well, you can put 'em in boxes and rely on trends, I suppose. Seen a lot of folks do some fascinating variations on that. None successfully, as I recall...""

    I have watched the US Army try many numerate / modeling efforts and been the victim of attempts to apply templates, matrices and decision trees to combat -- all failed miserably. Whether the fault lay in the model itself or in human error in its application, they are dangerous.

    I go back to my first comment on this thread (which was not don't use them but) -- "People and numbers don't mix well."

  2. #2
    Council Member
    Join Date: Nov 2007 | Location: Boston, MA | Posts: 310


    Quote Originally Posted by Ken White
    Because all the science merely provides more information to fuel a better guess.
    Science can certainly crunch more information into knowledge by virtue of its formalism, but it also can do so more rigorously due to its predilection for continuous testing, integration and evolution.

    Sometimes. True, but he often pushes his model in spite of knowing its flaws -- pride of authorship or ownership is a terrible thing.
    Falling in love with your own research is dangerous, definitely, but more often than not we're talking about people applying other people's innovations incorrectly -- often disastrously. Let's take David Li -- the Chinese national who first thought to price CDOs using Gaussian copulas. He's on record as early as 2005 pointing out that the financiers who applied his model did so despite the fact that it lacked theoretical grounding for credit portfolios. We can't even blame the model in this case, because it's unclear whether the correlation assumption itself or the assumptions folks at Lehman and Citigroup made about their credit portfolios were at fault. Either way, we should point out that the Gaussian copula is one of literally tens of thousands of models in hundreds of distribution classes that financiers use every day, and even though Lehman and Citigroup mined the same mathematical trove, they had very different, proprietary implementations at their disposal.
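    For what it's worth, the mechanics are simple enough to sketch. Below is a minimal toy in Python of a one-factor Gaussian copula applied to correlated defaults -- invented portfolio numbers, not Li's actual pricing formula -- just to show that the whole construction hangs on a single correlation parameter, which is exactly the assumption that got abused.

```python
# Toy Gaussian-copula default simulation -- an illustration only, not
# David Li's actual CDO pricing formula. All portfolio numbers are made up.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

n_names = 100        # obligors in a hypothetical portfolio
p_default = 0.02     # assumed marginal one-period default probability
rho = 0.3            # the single pairwise correlation the copula imposes
n_trials = 50_000

# One-factor form of an equicorrelated Gaussian copula:
#   latent_i = sqrt(rho) * common + sqrt(1 - rho) * idiosyncratic_i
common = rng.standard_normal((n_trials, 1))
idio = rng.standard_normal((n_trials, n_names))
latent = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * idio

# A name defaults when its latent variable falls below the threshold
# implied by its marginal default probability.
defaults = (latent < norm.ppf(p_default)).sum(axis=1)

# The correlation parameter drives the fat tail of portfolio losses --
# the part the marginal probabilities alone never show you.
print("mean defaults:", defaults.mean())
print("99th percentile:", np.percentile(defaults, 99))
```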

    I think if you give that a few seconds' thought and refresh your History cells, you may not really want to go there. Put another way, how well has that worked out for us? You do know that all of our disagreement really revolves around the unconstrained application of metrics, matrices and modeling -- the three 'M's (good copy, bad practice for warfare) -- to war. I have no quarrel with their utility and even necessity in many fields -- to include building weapons and supporting war fighters. I do not urge that they not be used in actual combat operations, but I do urge great caution in that use. True -- and exactly the same conditions apply to math and models.
    If that's the case, I don't think we have a disagreement here. I'd place more emphasis on the value of investigating the use of combat models, but I do not urge any particular set of models or sign off on their prescriptions. More to the topic's point, I do not see any value whatsoever yet in Gourley's work. Noting that war has aggregate behavior described by a power law is a fancy way of saying (in general) "big explosions are expensive, little ones not so much." I think your response would be "duh."
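    To make that concrete: fitting a power law to event-size data is about three lines of arithmetic, and the sketch below (Python, synthetic numbers standing in for casualties-per-attack, since I don't have Gourley's dataset) shows how little the fitted exponent tells you by itself.

```python
# Power-law exponent fit via the standard maximum-likelihood estimator
# (Newman / Clauset-style). The data are synthetic stand-ins for
# "casualties per attack"; I don't have Gourley's actual dataset.
import numpy as np

rng = np.random.default_rng(1)

x_min = 1.0
true_alpha = 2.5
n_events = 5_000

# Draw synthetic event sizes from a Pareto (pure power-law) distribution.
u = rng.random(n_events)
sizes = x_min * (1.0 - u) ** (-1.0 / (true_alpha - 1.0))

# MLE for a continuous power law with known x_min:
#   alpha_hat = 1 + n / sum(ln(x_i / x_min))
alpha_hat = 1.0 + n_events / np.sum(np.log(sizes / x_min))
print(f"estimated exponent: {alpha_hat:.2f} (true value {true_alpha})")

# What a fit like this buys you: big attacks are rare, small ones are
# common, and the drop-off is a straight line on a log-log plot --
# roughly what anyone would have guessed without the math.
```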

    What all you believers forget is that humans presented with a bunch of numbers that prove something tend to accept them, because that means they don't have to think about the problem. That's the danger that most math-centric folks do not think about, much less care to mention or guard against...
    I think that's a risk we take whenever we give up our faculties to authority, whether to math or to experience. To take a model and blindly apply it, without examining the math, its underlying assumptions, the facts on the ground, etc., is about as insane as trusting in the experience of someone simply because somebody told you he was good. Using a model within its proper domain, cognizant of its limitations, reasonably confident that you've fed it all the facts it needs to compute, is akin to (mis)trusting the experience of someone you've seen work with your own eyes. Science doesn't spare you judiciousness; it's only supposed to support it.

    I go back to what I said earlier. Nothing you've said indicates that I was incorrect:

    ""human interaction will always show patterns -- and different modelers will draw different patterns from the same data. You cannot put people in boxes IMO; you have to deal with the person or group as they are and as they constantly shift and change.""

    You have essentially said that's correct.
    Have to be careful here, because I do strongly disagree with that statement. We've agreed that using models carries risks, especially when it comes to improperly applying them. I do not, however, agree that modelers arrive at different conclusions based on the same data. That's not a matter of faith, it's a mathematical fact. Given some data, there's a finite number of functions describing them. Those functions have to be homomorphic. If they weren't, then the data underlying them would have to be different. That the data concerns human behavior is irrelevant.

    Furthermore, I do believe (or should say I have no reason to disbelieve the notion that) human behavior can be quantified. I don't believe in universal quantification, or even that there's a general rule by which models can transform into one another. The present evidence suggests that models describing various bits and pieces of human behavior at any scale will be varied and highly conditional. They will almost certainly be probabilistic. This is not a problem for me.

    ""Well, you can put 'em in boxes and rely on trends, I suppose. Seen a lot of folks do some fascinating variations on that. None successfully, as I recall...""

    I have watched the US Army try many numerate / modeling efforts and been the victim of attempts to apply templates, matrices and decision trees to combat -- all failed miserably. Whether the fault lay in the model itself or in human error in its application, they are dangerous.
    Would you say this was the case at all scales of combat? And what time frame are we talking about for these observations? I was under the impression that modeling has been used fairly frequently in campaign analysis in recent decades. I'm not privy to the results of exercises, and data on conventional land-air operations is infrequent.

    I go back to my first comment on this thread (which was not don't use them but) -- "People and numbers don't mix well."
    I don't disagree with any particular point you've made, due either to the obvious power behind it or to an admitted lack of knowledge (I have no combat experience, and haven't even the benefit of others' experience outside of this forum). But the general aphorism that "people and numbers don't mix well" is disproven, once again, by a most obvious example: the medical profession.
    Last edited by Presley Cannady; 05-12-2009 at 10:55 PM.
    PH Cannady
    Correlate Systems

  3. #3
    Council Member Ken White
    Join Date: May 2007 | Location: Florida | Posts: 8,060

    Thanks for the response.

    Quote Originally Posted by Presley Cannady
    Falling in love with your own research is dangerous, definitely, but more often than not we're talking about people applying other people's innovations incorrectly -- often disastrously...
    I think that's a risk we take whenever we give up our faculties to authority, whether to math or to experience. To take a model and blindly apply it, without examining the math, its underlying assumptions, the facts on the ground, etc., is about as insane as trusting in the experience of someone simply because somebody told you he was good...
    Therein lies the rub, as they say...

    In war -- not just in combat but in preparation as well -- the skills to do that rudimentary analysis may not be in the right place at the right time. Time will always be detrimental to a reasoned analysis. I totally agree that the most common problem is misapplication of data or models, but my point is that war will force such errors far more often than not. Therefore considerable caution in their development and use should be taken -- and it is not...
    I do not, however, agree that modelers arrive at different conclusions based on the same data. That's not a matter of faith, it's a mathematical fact. Given some data, there's a finite number of functions describing them. Those functions have to be homomorphic. If they weren't, then the data underlying them would have to be different. That the data concerns human behavior is irrelevant.
    Ah yes, I'm reminded of the famous Lancet study of Iraqi deaths in the war...

    Not precisely the same thing, but misuse of numbers, deliberate or inadvertent, is not unknown. Trust but verify is good -- if you have time...

    The problem, BTW, with that study was that impeccable math was skewed terribly by very poor and dishonest data collection, and thus GIGO occurred.
    Furthermore, I do believe (or should say I have no reason to disbelieve the notion that) human behavior can be quantified. ... They will almost certainly be probabilistic. This is not a problem for me.
    Understood and agreed, but it can create problems for the carelessly accepting and the less numerate or aware.
    Would you say this was the case at all scales of combat? And what time frame are we talking about for these observations? I was under the impression that modeling has been used fairly frequently in campaign analysis in recent decades. I'm not privy to the results of exercises, and data on conventional land-air operations is infrequent.
    Up to the operational level for a great many, for virtually all at Tactical levels up to and including Division. All during the period 1949 until I retired in 1995 for the second time.
    But the general aphorism that "people and numbers don't mix well" is disproven, once again, by a most obvious example: the medical profession.
    Heh. We are two modelers presented with roughly the same data and arriving at different conclusions.

  4. #4
    Council Member
    Join Date: Nov 2007 | Location: Boston, MA | Posts: 310


    Quote Originally Posted by Ken White
    Therein lies the rub, as they say...

    In war -- not just in combat but in preparation as well -- the skills to do that rudimentary analysis may not be in the right place at the right time. Time will always be detrimental to a reasoned analysis. I totally agree that the most common problem is misapplication of data or models, but my point is that war will force such errors far more often than not. Therefore considerable caution in their development and use should be taken -- and it is not...
    I wouldn't go that far. Combat computation is certainly not the norm at the infantry company scale, but it's made its way to the battalion level. Its applications in the Air Force and Navy stretch back to almost immediately after the end of World War II. Nor is modeling static. Both the Navy and the Air Force (I don't know about the Army or the Corps) have dozens of active programs refining and, when necessary, replacing tools already in the field.

    Ah yes, I'm reminded of the famous Lancet study of Iraqi deaths in the war... Not precisely the same thing, but misuse of numbers, deliberate or inadvertent, is not unknown. Trust but verify is good -- if you have time... The problem, BTW, with that study was that impeccable math was skewed terribly by very poor and dishonest data collection, and thus GIGO occurred.

    Setting aside the politics surrounding it, both Lancet studies were severely criticized on the merits. For one, the cluster size was very small compared to, say, the UN household survey ostensibly studying the same issue, and the two reports were off by an order of magnitude. There is therefore no conclusive epidemiology about excess mortality due to combat, let alone due to Coalition arms. This is not a criticism of modeling but in fact a virtue of it: being able to demonstrate sensitivity to inputs, by which we can consider or disregard specific models, is something to be desired. I feel this is similar to the "plans v. planning" distinction.
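    The sensitivity point is easy to demonstrate with a crude simulation. The sketch below uses entirely invented mortality numbers -- it is not a reanalysis of the Lancet or UN data -- just to show how a small cluster count widens the spread of a survey estimate when the thing being measured clusters geographically.

```python
# Crude illustration of cluster-survey sensitivity. All numbers are
# invented; this is not a reanalysis of the Lancet or UN surveys.
import numpy as np

rng = np.random.default_rng(2)

n_localities = 500           # hypothetical sampling frame
households_per_cluster = 40
true_rate = 0.01             # assumed deaths per household per period

# Mortality varies sharply by locality -- the clustering that makes a
# small cluster count noisy. Local rates drawn from a gamma distribution.
local_rates = rng.gamma(shape=0.5, scale=true_rate / 0.5, size=n_localities)

def survey_estimate(n_clusters: int) -> float:
    """Sample n_clusters localities and return the estimated death rate."""
    picked = rng.choice(n_localities, size=n_clusters, replace=False)
    deaths = rng.poisson(local_rates[picked] * households_per_cluster)
    return deaths.sum() / (n_clusters * households_per_cluster)

for n_clusters in (33, 100, 400):   # few clusters vs. many
    estimates = [survey_estimate(n_clusters) for _ in range(2_000)]
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"{n_clusters:4d} clusters: 95% of estimates in "
          f"[{lo:.4f}, {hi:.4f}]  (true rate {true_rate})")
```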

    And there's always time. You don't suffer from not pushing a model into service before it matures; you just don't gain any benefit from it. No need to cry over what you simply don't have.

    Finally, models aren't alone or even particularly special in their vulnerability to garbage input. A case has been made, in this forum no doubt, that collection, dissemination, and acceptance by the stakeholders based on no modeling whatsoever contributed to what many view as a misadventure in Iraq.

    Understood and agreed, but it can create problems for the carelessly accepting and the less numerate or aware. Up to the operational level for a great many, for virtually all at Tactical levels up to and including Division. All during the period 1949 until I retired in 1995 for the second time. Heh. We are two modelers presented with roughly the same data and arriving at different conclusions.
    Ah, but that data is sampled, and that's the key word. If you have a dataset that includes, say, the explosive tonnage of munitions expended and I have one that simply goes by the weight, our data sets are inevitably different. If that's the only difference, we'll find our results parallel but differing in magnitude. We can't even guarantee that much if our collection is littered with completely unrelated classes of observables. Case in point: the Lancet studies v. the UN survey.
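    A toy check of the "parallel but differing in magnitude" claim, under the assumption that one measure is simply a constant multiple of the other (every number below is invented):

```python
# Toy check of "parallel results, different magnitude" when two datasets
# measure the same munitions on different scales. Assumes one measure is
# a constant multiple of the other; every number here is invented.
import numpy as np

rng = np.random.default_rng(3)

n = 200
weight = rng.gamma(shape=3.0, scale=100.0, size=n)        # munition weight (kg, assumed)
tonnage = 0.4 * weight                                     # explosive fill, assumed fixed fraction
damage = 10.0 + 0.05 * weight + rng.normal(0.0, 2.0, n)    # some outcome of interest

# Fit the same linear model against each measure.
slope_w, intercept_w = np.polyfit(weight, damage, 1)
slope_t, intercept_t = np.polyfit(tonnage, damage, 1)

print(f"weight  model: damage ~ {slope_w:.3f} * weight  + {intercept_w:.2f}")
print(f"tonnage model: damage ~ {slope_t:.3f} * tonnage + {intercept_t:.2f}")
# The tonnage slope is the weight slope divided by 0.4 -- same story,
# different magnitude. If the two measures were not a simple rescaling
# of one another, even that parallel would break down.
```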

    1949 to 1995? Jesus. Do they throw in frequent flier miles for the second time around?
    PH Cannady
    Correlate Systems

  5. #5
    Council Member Ken White
    Join Date: May 2007 | Location: Florida | Posts: 8,060


    Quote Originally Posted by Presley Cannady
    I wouldn't go that far. Combat computation is certainly not the norm at the infantry company scale...
    Having spent most of my time from 1966 on above Company level and all after 1970 above Brigade, I'm well aware of that. I'm also aware that most are of marginal utility. I would say useless, but there are Commanders who like numbers, so they're handy to placate those guys.
    And there's always time. You don't suffer from not pushing a model into service before it matures, you just don't gain any benefit from it. No need crying over what you simply don't have.
    No one is crying, but the time for the people who will use your model in combat to scrutinize it before application, to ensure they understand what it shows, is often not available. No amount of peacetime or rear-area modeling can be reliably used in combat without thorough understanding of what is to be done -- and time to get that knowledge embedded often will not exist.
    Finally, models aren't alone or even particularly special in their vulnerability to garbage input. A case has been made, in this forum no doubt, that collection, dissemination, and acceptance by the stakeholders based on no modeling whatsoever contributed to what many view as a misadventure in Iraq.
    Knowing the penchant of many in high places, I'm dubious but honestly don't know. I would characterize Iraq not as a misadventure but as a necessary, regrettably flawed operation -- an effort that was predicated on several iterations of a computer-modeled war game...
    Quote Originally Posted by Presley Cannady
    Ah, but that data is sampled
    Of course it is -- as is most all data.
    1949 to 1995? Jesus. Do they throw in frequent flier miles for the second time around?
    Sure but Federal employees have to turn 'em in...
