
Thread: The Perils of Arbitrary and False Precision

  1. #1
    Council Member
    Join Date
    Oct 2005
    Posts
    3,099

    Default The Perils of Arbitrary and False Precision

    Kent's Imperative, 22 Jan 08: The Perils of Arbitrary and False Precision
    We find quite unhelpful the recent academic obsessions over estimative language – largely an exercise in the introduction of a numerical system which offers a degree of apparently comforting but entirely arbitrary, and therefore utterly false, precision. It seems however that we are in one of those cycles which seem to come along in the intelligence community every few decades or so, in which the numerologists and other soothsayers attempt to reshape the profession into their own desires for a more “scientific” practice.

    Let us be clear. There are times when quantitative analytic methodology is vital – but there are far more situations in which it is misapplied, misunderstood, and entirely out of place. The latter comprise the vast majority of scenarios in which analytic tradecraft is called upon – not the least of which may be attributed to the highly unbounded and indeterminate nature of the problems with which we must grapple. And any time in which a quantitative basis has not been established, the insertion of numerical percentages for predictive purposes is little more than a farcical exercise in arbitrary selection......

  2. #2
    Council Member J Wolfsberger's Avatar
    Join Date
    Jan 2007
    Location
    Michigan
    Posts
    806

    Default

    Quote Originally Posted by Jedburgh View Post
    Kent's Imperative, 22 Jan 08: The Perils of Arbitrary and False Precision
    This has been a long-standing problem in systems analysis, engineering, and the supporting modeling and simulation. Accuracy without precision can still be useful. Precision without accuracy is not only useless, it is almost always outright damaging, since it leads people to infer a level of accuracy that is absent. In fact, and without citing examples, precision without accuracy is usually nothing more than a sophisticated method of lying.
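The accuracy/precision distinction above can be sketched in a few lines. This is a toy simulation with made-up numbers, not data from any real system:

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 100.0

# Accurate but imprecise: unbiased readings with a lot of scatter.
accurate = [TRUE_VALUE + random.gauss(0.0, 10.0) for _ in range(1000)]

# Precise but inaccurate: tightly clustered around the wrong value.
precise = [90.0 + random.gauss(0.0, 0.5) for _ in range(1000)]

print(f"accurate-but-noisy: mean={statistics.mean(accurate):6.2f}, "
      f"stdev={statistics.stdev(accurate):5.2f}")
print(f"precise-but-wrong:  mean={statistics.mean(precise):6.2f}, "
      f"stdev={statistics.stdev(precise):5.2f}")
```

Averaging the noisy-but-unbiased readings recovers something close to the truth; no amount of averaging rescues the tightly clustered, biased ones, which is exactly the "inferred accuracy" trap.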
    John Wolfsberger, Jr.

    An unruffled person with some useful skills.

  3. #3
    Council Member
    Join Date
    Oct 2005
    Posts
    3,099

    Default Using Quantitative and Qualitative Models to Forecast Instability

    USIP, 21 Mar 08: Using Quantitative and Qualitative Models to Forecast Instability
    Summary

    • Preventing violent conflict requires early warning of likely crises so that preventive actions can be planned and taken before the onset of mass violence.

    • For most of the post–World War II period, policymakers and intelligence agencies have relied on experts to make qualitative judgments regarding the risk of instability or violent changes in their areas of study. Yet the inability of such experts to adequately predict major events has led to efforts to use social and analytical tools to create more “scientific” forecasts of political crises.

    • The advent of quantitative forecasting models that give early warning of the onset of political instability offers the prospect of major advances in the accuracy of forecasting over traditional qualitative methods.

    • Because certain models have a demonstrated accuracy of over 80 percent in early identification of political crises, some have questioned whether such models should replace traditional qualitative analysis.

    • While these quantitative forecasting methods should move to the foreground and play a key role in developing early warning tools, this does not mean that traditional qualitative analysis is dispensable.

    • The best results for early warning are most likely obtained by the judicious combination of quantitative analysis based on forecasting models with qualitative analysis that rests on explicit causal relationships and precise forecasts of its own.

    • Policymakers and analysts should insist on a multiple-method approach, which has greater forecasting power than either the quantitative or qualitative method alone. In this way, political instability forecasting is likely to make its largest advance over earlier practices.
    Complete 16 page paper at the link.
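    The paper's multiple-method point can be illustrated with a Brier score, the standard accuracy measure for probability forecasts. All the forecasts and outcomes below are invented for illustration; none of it comes from the USIP models:

```python
def brier(forecasts, outcomes):
    """Mean squared error of probability forecasts; lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes = [1, 1, 0, 0]            # 1 = instability occurred (hypothetical cases)
model    = [0.9, 0.3, 0.2, 0.6]    # quantitative model: sharp on cases 1 and 3
expert   = [0.4, 0.8, 0.7, 0.2]    # qualitative expert: sharp on cases 2 and 4
combined = [(m + e) / 2 for m, e in zip(model, expert)]

print("model:   ", brier(model, outcomes))
print("expert:  ", brier(expert, outcomes))
print("combined:", brier(combined, outcomes))
```

In this contrived example the simple average beats both inputs because the model's errors and the expert's errors fall on different cases, which is the intuition behind insisting on a multiple-method approach.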

  4. #4
    Council Member 120mm's Avatar
    Join Date
    Nov 2006
    Location
    Wonderland
    Posts
    1,284

    Default

    A fine example of false precision, and using quantitative terms where they do not apply is in the following "study":

    http://www.rd.com/national-interest/...s/article.html

    A recent Reader's Digest article, where they rank colleges for safety, using 19 different variables.

    The problem with the study? Iowa State University in Ames, Iowa, is rated as the second most unsafe campus in the US (my alma mater, obtw). Anyone who has ever been to Ames will admit that something, somewhere, has to be screwed up if ISU is quantified as the second least safe campus in America.

    Or, for an "apples to apples" comparison, that the University of Worcester sits about fourth from the bottom of the safety ratings while Boston U. sits fourth from the top. Anyone with a full and functional brain pan could walk around either campus and realize that Boston U. (my "other" alma mater) is probably a less safe place to be than Worcester.

    Frankly, they chose largely irrelevant variables vis-a-vis safety, and then assigned them arbitrary values, which they then took very seriously in their analysis.

    In other words, most of these folks are full of crap.
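The mechanics described above (pick variables, assign arbitrary weights, rank) are easy to demonstrate. The campuses and incident counts below are entirely made up, not taken from the Reader's Digest study:

```python
# Hypothetical per-year incident counts for two invented campuses.
campuses = {
    "Campus A": {"thefts": 2, "assaults": 7},
    "Campus B": {"thefts": 9, "assaults": 1},
}

def danger_score(indicators, weights):
    """Weighted sum of incident counts; higher means 'less safe'."""
    return sum(weights[k] * indicators[k] for k in weights)

weightings = {
    "equal weights":     {"thefts": 1.0, "assaults": 1.0},
    "assaults weighted": {"thefts": 0.1, "assaults": 1.0},
}

for name, w in weightings.items():
    ranked = sorted(campuses, key=lambda c: danger_score(campuses[c], w))
    print(f"{name}: safest first -> {ranked}")
```

The same raw data produces opposite rankings depending on which equally defensible weighting you pick, so the ranking tells you more about the weights than about the campuses.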

  5. #5
    Council Member Hacksaw's Avatar
    Join Date
    Oct 2007
    Location
    Lansing, KS
    Posts
    361

    Default Stray Voltage wrt Precision without accuracy

    Clearly we have always been afflicted with those who are apt to assign numeric values to everything from individual military performance to Corps COA evaluation. Exactly why so many people like this type of analysis is probably beyond knowing, but I offer the following as possibilities...

    1. It is comforting for some (even if they know it is intellectually dishonest) to be able to point to a numeric outcome as the rationale for their decisions. I think this is because they sense it provides a level of freedom from culpability: "But the numbers said this was the best COA; I shall have the offending staff officer shot!"

    2. It provides a level of uniformity across the force, "If we all use the same criteria (and associated definitions), we will all come to the same conclusion." I call any reader's attention to the big CAS3 decision brief exercise that a generation of officers were forced to grind out complete with decmat and pairwise comparison.

    3. We like to kid ourselves that we can, through calculations, remove randomness, complexity, and chaos from the nature of our business, which we all know is horse feathers....
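The decmat problem in point 1 is easy to show: a small, equally defensible change in criteria weights flips the "best" COA. The COAs, criteria, scores, and weights below are all invented:

```python
# Staff scores for each COA against three criteria (1-10 scale, hypothetical).
coas = {
    "COA 1": {"speed": 7, "simplicity": 5, "force protection": 9},
    "COA 2": {"speed": 9, "simplicity": 6, "force protection": 4},
}

def weighted_total(scores, weights):
    """Classic decision-matrix roll-up: weighted sum across criteria."""
    return sum(weights[c] * scores[c] for c in weights)

w_a = {"speed": 0.3, "simplicity": 0.3, "force protection": 0.4}
w_b = {"speed": 0.4, "simplicity": 0.4, "force protection": 0.2}

for name, w in [("weighting A", w_a), ("weighting B", w_b)]:
    best = max(coas, key=lambda c: weighted_total(coas[c], w))
    print(f"{name}: best COA = {best}")
```

The arithmetic is exact, but the answer is driven entirely by the arbitrary weights, which is precisely the false comfort being criticized.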

    The hardest trick I ever had with Division Commanders (I planned for four) was fostering the idea that if we got everyone, metaphorically speaking, marching somewhat in the same direction, that was a very good thing. And that sometimes, despite our best efforts, an action taken has unintended and unpredicted consequences. The best we can do is develop plans so that we can account for, react to, and mitigate the effects when those occur.

    I batted 50% with those four: one went on to be the hero of New Orleans, the other the hero of Mosul. A coincidence perhaps, but I think crises help focus the mind.

    Live well and row
    Hacksaw
    Say hello to my 2 x 4

  6. #6
    Council Member Ken White's Avatar
    Join Date
    May 2007
    Location
    Florida
    Posts
    8,060

    Default Any friend of

    Quote Originally Posted by Hacksaw View Post
    ...
    I batted 50% with those four: one went on to be the hero of New Orleans, the other the hero of Mosul. A coincidence perhaps, but I think crises help focus the mind.
    Of Ol' Russ is deserving of accolades.

    Or many days resting and loafing in the shade...

  7. #7
    Council Member
    Join Date
    Oct 2005
    Posts
    3,099

    Default

    Edge, 15 Sep 08: The Fourth Quadrant: A Map of the Limits of Statistics
    ......In the following Edge original essay, Taleb continues his examination of Black Swans, the highly improbable and unpredictable events that have massive impact. He claims that those who are putting society at risk are "no true statisticians", merely people using statistics either without understanding them, or in a self-serving manner. "The current subprime crisis did wonders to help me drill my point about the limits of statistically driven claims," he says.

    Taleb, looking at the cataclysmic situation facing financial institutions today, points out that "the banking system, betting against Black Swans, has lost over 1 Trillion dollars (so far), more than was ever made in the history of banking".

    But, as he points out, there is also good news.

    We can identify where the danger zone is located, which I call "the fourth quadrant", and show it on a map with more or less clear boundaries. A map is a useful thing because you know where you are safe and where your knowledge is questionable. So I drew for the Edge readers a tableau showing the boundaries where statistics works well and where it is questionable or unreliable. Now once you identify where the danger zone is, where your knowledge is no longer valid, you can easily make some policy rules: how to conduct yourself in that fourth quadrant; what to avoid......
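    Taleb's fourth quadrant is where fat-tailed variables live, and a quick simulation shows why ordinary sample statistics mislead there. The distributions below are toy stand-ins chosen for illustration, not market data:

```python
import random

random.seed(42)
N = 10_000

# Thin-tailed sample (folded Gaussian) vs. fat-tailed sample (Pareto, alpha=1.1).
thin = [abs(random.gauss(0.0, 1.0)) for _ in range(N)]
fat = [random.paretovariate(1.1) for _ in range(N)]

for name, xs in [("thin-tailed", thin), ("fat-tailed", fat)]:
    share = max(xs) / sum(xs)
    print(f"{name}: largest single observation is {share:.2%} of the total")
```

In the thin-tailed sample no single draw matters, so averages stabilize quickly; in the fat-tailed sample a single extreme observation carries a large share of the total, so any statistic computed before that "Black Swan" draw arrives badly understates the risk.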
