Quote Originally Posted by Ken White View Post
Therein lies the rub, as they say...

In war -- not just in combat but in preparation as well -- the skills to do that rudimentary analysis may not be in the right place at the right time. Time will always be detrimental to a reasoned analysis. I totally agree that the most common problem is misapplication of data or models, but my point is that war will force such errors far more often than not. Therefore, considerable caution should be taken in their development and use -- and it is not...
I wouldn't go that far. Combat computation is certainly not the norm at the infantry company scale, but it has made its way to the battalion level. Its applications in the Air Force and Navy stretch back almost to the end of World War II. Nor is modeling static: both the Navy and the Air Force (I don't know about the Army or the Corps) have dozens of active programs refining and, when necessary, replacing tools already in the field.

Ah yes, I'm reminded of the famous Lancet study of Iraqi deaths in the war... Not precisely the same thing, but misuse of numbers is not unknown, deliberate or inadvertent. Trust but verify is good -- if you have time... The problem, BTW, with that study was that impeccable math was skewed terribly by very poor and dishonest data collection, and thus GIGO occurred.

Setting aside the politics surrounding it, both Lancet studies were severely criticized on the merits. For one, the cluster sample was very small compared to, say, the UN household survey ostensibly studying the same question; the two reports were off by an order of magnitude. So there is no conclusive epidemiology on excess mortality due to combat, let alone due to Coalition arms. That is not a criticism of modeling but in fact a virtue of it: being able to demonstrate a model's sensitivity to its inputs, and on that basis accept or disregard specific models, is exactly what we want. I feel this is similar to the "plans v. planning" distinction.
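To make the sensitivity point concrete, here is a toy simulation -- entirely notional numbers, not the actual Lancet or UN designs -- of how a cluster survey's estimate spreads out when it rests on few clusters versus many. The population, death rates, and cluster counts are all assumptions for illustration.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 1,000 neighborhood "clusters", each with its own
# underlying death rate; households within a cluster share that rate, which is
# the correlation a cluster survey must contend with.
TRUE_RATE = 0.05
cluster_rates = [max(0.0, random.gauss(TRUE_RATE, 0.03)) for _ in range(1000)]

def survey(n_clusters, households_per_cluster=40):
    """One simulated cluster survey: sample clusters, then households within."""
    chosen = random.sample(cluster_rates, n_clusters)
    deaths = sum(
        sum(1 for _ in range(households_per_cluster) if random.random() < rate)
        for rate in chosen
    )
    return deaths / (n_clusters * households_per_cluster)

def spread(n_clusters, trials=200):
    """Standard deviation of the rate estimate over repeated surveys."""
    return statistics.stdev(survey(n_clusters) for _ in range(trials))

few = spread(30)     # an estimate built on few clusters
many = spread(300)   # ten times as many clusters

print(f"estimate spread,  30 clusters: {few:.4f}")
print(f"estimate spread, 300 clusters: {many:.4f}")
```

Two honest surveys of the same population can land far apart simply because the sparse design leaves a wide band around its estimate -- which is why demonstrating that sensitivity is a service, not a failing, of the model.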

And there's always time. You don't suffer from declining to push a model into service before it matures; you just don't gain any benefit from it. No use crying over what you simply don't have.

Finally, models aren't alone, or even particularly special, in their vulnerability to garbage input. A case has been made, in this forum no doubt, that collection, dissemination, and acceptance by the stakeholders based on no modeling whatsoever contributed to what many view as a misadventure in Iraq.

Understand and agree, but it can create problems with the carelessly accepting and the less numerate or aware. Up to the operational level for a great many, for virtually all at Tactical levels up to and including Division. All during the period 1949 until I retired in 1995 for the second time. Heh. We are two modelers presented roughly the same data and arriving at different conclusions.
Ah, but that data is sampled, and "sampled" is the key word. If you have a dataset that includes, say, the explosive tonnage of munitions expended and I have one that simply goes by gross weight, our datasets are inevitably different. If that's the only difference, we'll find our results parallel but differing in magnitude. We can't even guarantee that much if our collections are littered with completely unrelated classes of observables. Case in point: the Lancet studies v. the UN survey.
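The "parallel but differing in magnitude" claim can be sketched in a few lines. The figures below are invented, and the constant fill ratio relating explosive content to gross weight is an assumption made purely to isolate the one difference between the two datasets.

```python
# Two analysts observe the same monthly munitions expenditure: one records
# gross weight, the other net explosive tonnage. If explosive content is a
# (roughly) constant fraction of gross weight, any trend analysis comes out
# parallel -- same direction and shape, scaled in magnitude.

gross = [120.0, 135.0, 150.0, 160.0, 180.0, 210.0]  # notional monthly tons
FILL_RATIO = 0.45                                    # assumed, for illustration
explosive = [FILL_RATIO * w for w in gross]

def trend(series):
    """Least-squares slope of a series against its index (month number)."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

slope_gross = trend(gross)
slope_explosive = trend(explosive)

# The two slopes differ by exactly the scale factor.
print(slope_gross, slope_explosive, slope_explosive / slope_gross)
```

If the two collections instead differ by more than a scale factor -- different observables entirely, as with the Lancet and UN efforts -- even that parallel relationship disappears, and the estimates need not track each other at all.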

1949 to 1995? Jesus. Do they throw in frequent flier miles for the second time around?