The Perils of Arbitrary and False Precision
Kent's Imperative, 22 Jan 08: The Perils of Arbitrary and False Precision
Quote:
We find quite unhelpful the recent academic obsessions over estimative language – largely an exercise in the introduction of a numerical system which offers a degree of apparently comforting but entirely arbitrary, and therefore utterly false, precision. It seems however that we are in one of those cycles which seem to come along in the intelligence community every few decades or so, in which the numerologists and other soothsayers attempt to reshape the profession into their own desires for a more “scientific” practice.
Let us be clear. There are times when quantitative analytic methodology is vital – but there are far more situations in which it is misapplied, misunderstood, and entirely out of place. The latter comprise the vast majority of scenarios in which analytic tradecraft is called upon – not the least of which may be attributed to the highly unbounded and indeterminate nature of the problems with which we must grapple. And any time in which a quantitative basis has not been established, the insertion of numerical percentages for predictive purposes is little more than a farcical exercise in arbitrary selection...
Using Quantitative and Qualitative Models to Forecast Instability
USIP, 21 Mar 08: Using Quantitative and Qualitative Models to Forecast Instability
Quote:
Summary
• Preventing violent conflict requires early warning of likely crises so that preventive actions can be planned and taken before the onset of mass violence.
• For most of the post–World War II period, policymakers and intelligence agencies have relied on experts to make qualitative judgments regarding the risk of instability or violent changes in their areas of study. Yet the inability of such experts to adequately predict major events has led to efforts to use social and analytical tools to create more “scientific” forecasts of political crises.
• The advent of quantitative forecasting models that give early warning of the onset of political instability offers the prospect of major advances in the accuracy of forecasting over traditional qualitative methods.
• Because certain models have a demonstrated accuracy of over 80 percent in early identification of political crises, some have questioned whether such models should replace traditional qualitative analysis.
• While these quantitative forecasting methods should move to the foreground and play a key role in developing early warning tools, this does not mean that traditional qualitative analysis is dispensable.
• The best results for early warning are most likely obtained by the judicious combination of quantitative analysis based on forecasting models with qualitative analysis that rests on explicit causal relationships and precise forecasts of its own.
• Policymakers and analysts should insist on a multiple-method approach, which has greater forecasting power than either the quantitative or qualitative method alone. In this way, political instability forecasting is likely to make its largest advance over earlier practices.
Complete 16 page paper at the link.
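The USIP summary argues that quantitative model output and qualitative expert judgment forecast better together than either does alone. One common way to operationalize that kind of combination (a sketch of the general idea, not the paper's actual method; the function name, weights, and numbers below are illustrative assumptions) is a linear opinion pool, a weighted average of the two probabilities:

```python
# Illustrative sketch only: blend a statistical model's probability of
# instability with an expert's subjective probability via a weighted
# average (a "linear opinion pool"). The weight of 0.6 is an assumption,
# not a value from the USIP paper.

def combine_forecasts(model_prob: float, expert_prob: float,
                      model_weight: float = 0.6) -> float:
    """Return a blended probability from a quantitative model forecast
    and a qualitative expert forecast."""
    if not (0.0 <= model_prob <= 1.0 and 0.0 <= expert_prob <= 1.0):
        raise ValueError("probabilities must lie in [0, 1]")
    if not 0.0 <= model_weight <= 1.0:
        raise ValueError("model_weight must lie in [0, 1]")
    return model_weight * model_prob + (1.0 - model_weight) * expert_prob

# Hypothetical example: model says 0.80 risk, regional expert says 0.40.
blended = combine_forecasts(0.80, 0.40, model_weight=0.6)
print(round(blended, 2))  # prints 0.64
```

In practice the weight itself is the hard part: it encodes exactly the judgment about relative track records that the thread is arguing over, and picking it arbitrarily reintroduces the false precision the first post warns about.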
Stray Voltage on precision without accuracy
Clearly we have always been afflicted with those who are apt to assign numeric values to everything from individual military performance to Corps COA evaluation. Exactly why most people like this type of analysis is probably beyond knowing, but I offer the following as possibilities...
1. It is comforting for some (even if they know it is intellectually dishonest) to be able to point to a numeric outcome as a rationale for their decisions. I think this is because they sense it provides a level of freedom from culpability: "but the numbers said this was the best COA, I shall have the offending staff officer shot!"
2. It provides a level of uniformity across the force, "If we all use the same criteria (and associated definitions), we will all come to the same conclusion." I call any reader's attention to the big CAS3 decision brief exercise that a generation of officers were forced to grind out complete with decmat and pairwise comparison.
3. We like to kid ourselves that we can, through calculations, remove randomness, complexity, and chaos from the nature of our business, which we all know is horse feathers...
The hardest trick I ever had with Division Commanders (I planned for four) was fostering the idea that if we got everyone, metaphorically speaking, marching somewhat in the same direction... that was a very good thing. And that sometimes, despite our best efforts, an action taken has unintended/unpredicted consequences. The best we can do is develop plans so that we can account for, react to, and mitigate the effects when those occur.
I batted 50% with those four: one went on to be the hero of New Orleans, the other the hero of Mosul. A coincidence perhaps, but I think a crisis helps focus the mind.
Live well and row
Are you saying this is a good thing or a bad thing?
Quote:
Originally Posted by
selil
In my area we deal with accuracy versus reliability especially with assessments. Lots of pictures of arrows on targets which I really understand.
I think the biggest thing that bothers me is how useful metrics can be when they are used correctly and in context with the purpose for which they were designed. And how, more often than not, this does not end up being the case, due to a variety of things, not the least of which is ignorance of what some are actually trying to use them for.
Accuracy has always seemed rather interesting to me, in that we tend to believe we are creating an accurate picture or understanding of something, yet when it all comes down to ground truth we are suddenly amazed by how little our "accurate" assessments actually got right.
I propose that unless whatever one is working with or through is both reliable and accurate, the greater part of what we end up with will continue to be right only about 50 percent of the time.
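The "arrows on targets" picture mentioned above can be made concrete with a quick simulation (an illustrative sketch only; the truth value, tolerance, bias, and spread numbers are assumptions chosen to make the contrast visible). An estimator can be reliable (tightly clustered) yet inaccurate (biased off the truth), or accurate on average yet unreliable (widely scattered), and either failure alone drags the hit rate far below what we expect:

```python
# Sketch: compare hit rates of estimators that differ in bias (accuracy)
# and spread (reliability). All parameter values are illustrative.

import random

random.seed(42)
TRUTH = 100.0
TOLERANCE = 5.0  # an estimate counts as "right" if within +/- 5 of truth

def hit_rate(bias: float, spread: float, trials: int = 10_000) -> float:
    """Fraction of estimates landing within TOLERANCE of TRUTH, for an
    estimator with a systematic bias and Gaussian random spread."""
    hits = sum(
        abs(bias + random.gauss(0.0, spread)) <= TOLERANCE
        for _ in range(trials)
    )
    return hits / trials

print(f"reliable but biased:    {hit_rate(bias=8.0, spread=1.0):.2f}")
print(f"unbiased but scattered: {hit_rate(bias=0.0, spread=8.0):.2f}")
print(f"reliable AND accurate:  {hit_rate(bias=0.0, spread=1.0):.2f}")
```

With these (assumed) numbers the tightly clustered but biased estimator is almost never right, the unbiased but scattered one lands near the truth only about half the time, and only the estimator that is both reliable and accurate scores near 100 percent, which is the point of the "reliably accurate or about 50 percent" claim above.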
This is one reason I am a strong supporter of trend analysis: although it uses most of the same metrics, you know before you even start that what you end up with will not be answers but rather guidelines/possibilities.
So you always get what you expected: no more, no less.