Thread: intelligence analysis, overcoming bias and learning

  1. #1
    Council Member IntelTrooper
    Join Date
    May 2009
    Location
    RC-S, Afghanistan
    Posts
    302


    Quote Originally Posted by Ken White View Post
    You're not likely to see any US countermeasures. We prefer those who totally support what the Boss wants, no matter how inane or even criminal -- as your example proves...
    Yes we do... as I had the distinct impression that her "analysis" was thinly-disguised propaganda to make her battalion commander look good...
    "The status quo is not sustainable. All of DoD needs to be placed in a large bag and thoroughly shaken. Bureaucracy and micromanagement kill."
    -- Ken White


    "With a plan this complex, nothing can go wrong." -- Schmedlap

    "We are unlikely to usefully replicate the insights those unencumbered by a military staff college education might actually have." -- William F. Owen

  2. #2
    Council Member
    Join Date
    Mar 2008
    Posts
    1,457

    Hmm, seems to me

    ...the money might be better spent elsewhere

    Cognitive biases are pretty well understood already. Hopefully this isn't reinventing the wheel, particularly since this is contracted research.

    What's really needed, IMO, is better training for analysts, and the money would be better spent there.

  3. #3
    Council Member Ken White
    Join Date
    May 2007
    Location
    Florida
    Posts
    8,060

    Well, yeah but...

    Quote Originally Posted by Entropy View Post
    Cognitive biases are pretty well understood already. Hopefully this isn't reinventing the wheel, particularly since this is contracted research.
    is anyone like, you know, doing anything about that?

    All the real benefit I saw was in the last item quoted: "...and (c) to examine the viability of counter-measures aimed at reducing or eliminating them...."

    That biases are known to exist and the issue is recognized is totally true, but my perception is that little has been done to correct the problem for fear of trampling artistic little psyches (pardon my hyperbole, and present company excepted, but there are some out there...) OR, far more importantly, of not giving the Boss what he's looking for by skewing the analytical process to suit.

    I will avoid mention of an overwhelming desire on the part of some to never be wrong. That is based on the rationale that if you do not go out on a limb, they cannot or will not cut it off... (an attitude that I have on a couple of occasions seen proven mistaken by irate Commanders not apprised of all that was known).

    Agree with your sentiment on contractors but have to point out that in-house studies tend to reach foregone conclusions (dare I say preordained?) and little changes. Outside studies sometimes have an effect.

  4. #4
    Council Member
    Join Date
    Aug 2007
    Location
    Montreal
    Posts
    1,602


    In this case, as I understand it, the contractors are largely administering the tests and providing the data, not actually doing the analysis.

    Also, there are biases and there are biases. The focus here is on some pretty specific issues that may not have been adequately explored, such as whether formal methods to reduce certain types of bias (for example, Analysis of Competing Hypotheses methodologies) might actually introduce other sorts of biases (for example, ones associated with the sequencing of information); the impact that classification levels may have on the perceived weight of information (less a problem among analysts who understand how sausages are made than among clients who consume the output, in my view); how probability assessments may be skewed by psychological processes, etc.
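
    The sequencing point is easy to see with numbers. Purely as an illustration (a hypothetical toy, not anything from the research being discussed), the snippet below contrasts a proper Bayesian update, which gives the same answer no matter what order the reports arrive in, with a crude anchor-and-adjust habit that only partially incorporates each new report and therefore lands in different places depending on what the analyst saw first.

    Code:
    # Toy illustration (hypothetical, not from any cited study): order-invariant
    # Bayesian updating versus an anchor-and-adjust habit that under-weights
    # whatever arrives after the analyst has formed a view.

    def bayes_posterior(prior, likelihood_ratios):
        """Proper sequential update in odds form; the result ignores ordering."""
        odds = prior / (1 - prior)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1 + odds)

    def anchor_and_adjust(prior, likelihood_ratios, adjustment=0.5):
        """Crude heuristic: move only part-way toward each new piece of evidence."""
        belief = prior
        for lr in likelihood_ratios:
            odds = belief / (1 - belief)
            target = (odds * lr) / (1 + odds * lr)    # where a full update would land
            belief += adjustment * (target - belief)  # incomplete adjustment
        return belief

    evidence = [4.0, 4.0, 0.1]  # two supportive reports, then one strongly contrary

    print(bayes_posterior(0.5, evidence))         # ~0.62 either way round
    print(bayes_posterior(0.5, evidence[::-1]))
    print(anchor_and_adjust(0.5, evidence))       # answer depends on which came first
    print(anchor_and_adjust(0.5, evidence[::-1]))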

    I don't think it is reinventing wheels.
    They mostly come at night. Mostly.


  5. #5
    Council Member
    Join Date
    Mar 2008
    Posts
    1,457


    Quote Originally Posted by Ken White View Post
    is anyone like, you know, doing anything about that?

    All the real benefit I saw was in the last item quoted: "...and (c) to examine the viability of counter-measures aimed at reducing or eliminating them...."
    Over the decades many analytic methods have been developed that specifically seek to reduce or eliminate bias. There are at least a couple of dozen general methods and many more tailored to specific problems. Too many analysts do not even know these methods exist, much less have the training to use them properly or, if trained, the time to apply them properly.

    That biases are known to exist and the issue is recognized is totally true, but my perception is that little has been done to correct the problem for fear of trampling artistic little psyches (pardon my hyperbole, and present company excepted, but there are some out there...) OR, far more importantly, of not giving the Boss what he's looking for by skewing the analytical process to suit.
    In my experience it's more often the Boss that does the skewing since most are not receptive to information that conflicts with what they believe. Beyond that perpetual tension, I think it's fine to do yet more research but IMO the problems with intelligence do not stem from a lack of knowledge on cognitive bias, nor a dearth of methods to combat that bias. There are a host of structural and bureaucratic problems but at the analyst level the three biggest problems I see are:

    1. Lack of analyst training.
    2. Lack of analyst introspection and self-awareness.
    3. Lack of analyst curiosity.

    The latter two problems stem, IMO, from a limited pool of analyst candidates (a topic that deserves its own thread). The first problem is something that can and should be solvable but is perennial. Look at the intelligence schoolhouse for any of the services. The vast majority of training time is spent on memorizing information, briefing skills, and intel systems. Those are important, but there is comparatively little (or nothing) on research skills (beyond "look in the pub" or "search SIPRNet"), evaluating information, the strengths and weaknesses of the different "ints" (and what they can and can't provide), or actual analysis.

    I will avoid mention of an overwhelming desire on the part of some to never be wrong. That is based on the rationale that if you do not go out on a limb, they cannot or will not cut it off... (an attitude that I have on a couple of occasions seen proven mistaken by irate Commanders not apprised of all that was known).
    That's undoubtedly true and, as you mention, at times there are strong incentives to never be wrong even if that means being useless or never being right either. This is another topic that we could discuss at length since it is deep and endemic.

    Agree with your sentiment on contractors but have to point out that in-house studies tend to reach foregone conclusions (dare I say preordained?) and little changes. Outside studies sometimes have an effect.
    True, sometimes they do but mostly they don't. When they do the effect is usually temporary until the next intelligence failure comes along.

    Rex,
    Also, there are biases and there are biases. The focus here is on some pretty specific issues that may not have been adequately explored, such as whether formal methods to reduce certain types of bias (for example, Analysis of Competing Hypotheses methodologies) might actually introduce other sorts of biases (for example, ones associated with the the sequencing of information); the impact that classification levels may have on the perceived weight of information (less a problem among analysts who understand how sausages are made than clients who consume the output, in my view); how probability assessments may be skewed by psychological processes, etc.
    Then it's probably valuable research, but the sad reality is that the vast majority of such research never trickles down to affect the individual analyst.

  6. #6
    Council Member
    Join Date
    Jan 2007
    Location
    Thunder Bay, Ontario, Canada
    Posts
    156

    Word of even BROADER research....

    ....into how the bad guys do things, and how to predict what they'll do.

    Public posting:
    The Department of National Defence, Defence Research and Development (DRDC), Toronto, Ontario has a requirement for the provision of Scientific & Technical support for behavioral, cognitive, and social sciences research in various security environments and operational contexts. Examples of such contexts include command, communications, computer intelligence, surveillance and reconnaissance; collection, analysis and dissemination of intelligence products; effects-based operations; all with an explicit focus on human performance at the individual group and organizational levels.
    Now, a bit more detail from the Statement of Work here:
    “DRDC Toronto is now actively building its capacity for human sciences in a new research domain: Understanding, prediction and influence of adversaries’ intent …. DRDC requires contractual support for research in this new area. In particular, the Adversarial Intent Section (AIS) of DRDC Toronto anticipates a requirement for significant research effort that addresses the following general topic:

    (a) elucidation of contemporary security concepts such as Effects-based Approaches to Operations (EBAO), Defence Development and Diplomacy (3D), and the Comprehensive approach (CA) and their use in military capability development and doctrine;

    (b) understanding the social, organizational, and cognitive determinants of human capabilities for the effective production of military and civilian intelligence pertinent to domestic and international issues;

    (c) influence processes that play a role in conflict-ridden environments characterized by complex interactions among adversaries, allied forces, and bystander groups; (and)

    (d) methods and tools for structuring, portraying and analyzing complex contemporary security environments.”
    I've "plain languaged" it a bit here.

  7. #7
    Council Member
    Join Date
    Jan 2007
    Location
    Thunder Bay, Ontario, Canada
    Posts
    156

    Why not get a machine to "average out" the analyses?

    This, from Wired.com's Danger Room:
    The U.S. intelligence community has a long history of blowing big calls — the fall of the Berlin Wall, Saddam’s WMD, 9/11. But in each collective fail, there were individual analysts who got it right. Now, the spy agencies want a better way to sort the accurate from the unsound, by applying principles of mathematics to weigh and rank the input of different experts.

    Iarpa, the intelligence community’s way-out research arm, will host a one-day workshop on a new program, called Aggregative Contingent Estimation (ACE). The initiative follows Iarpa’s recent announcement of plans to create a computational model that can enhance human hypotheses and predictions, by catching inevitable biases and accounting for selective memory and stress.

    ACE won’t replace flesh-and-blood experts — it’ll just let ‘em know what they’re worth. The intelligence community often relies on small teams of experts to evaluate situations, and then make forecasts and recommendations. But a team is only as strong as its weakest link, and Iarpa wants to fortify team-based outputs, by using mathematical aggregation to “elicit, weigh, and combine the judgments of many intelligence analysts" ....
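
    The "mathematical aggregation" part is easier to picture with a sketch. What follows is a hypothetical illustration, not IARPA's actual ACE method: each analyst's probability judgment is combined in log-odds space, weighted by a crude measure of past accuracy (a Brier score), so forecasters with better track records count for more.

    Code:
    import math

    # Hypothetical sketch of accuracy-weighted forecast aggregation; the weighting
    # scheme and all numbers here are made up for illustration.

    def brier_weight(past_forecasts, past_outcomes):
        """Turn a track record into a weight: lower Brier error -> higher weight."""
        errors = [(f - o) ** 2 for f, o in zip(past_forecasts, past_outcomes)]
        return 1.0 - sum(errors) / len(errors)  # crude; real schemes are more careful

    def aggregate(forecasts, weights):
        """Weighted average in log-odds space, then map back to a probability."""
        logit = lambda p: math.log(p / (1 - p))
        combined = sum(w * logit(p) for p, w in zip(forecasts, weights)) / sum(weights)
        return 1 / (1 + math.exp(-combined))

    # Three analysts assess the same question; the first has the best track record.
    weights = [
        brier_weight([0.9, 0.2, 0.8], [1, 0, 1]),  # small errors on past questions
        brier_weight([0.6, 0.5, 0.4], [1, 0, 1]),  # mediocre
        brier_weight([0.3, 0.8, 0.2], [1, 0, 1]),  # poor
    ]
    print(aggregate([0.85, 0.60, 0.30], weights))  # pulled toward the reliable analyst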

  8. #8
    Council Member AnalyticType
    Join Date
    Jun 2009
    Location
    Austin, Texas
    Posts
    66

    Still a firm believer in Heuer's research...

    I found him to be spot on. It's not possible to eliminate cognitive biases from the analyst entirely, as they're based in how we learn and incorporate knowledge. What IS possible is significant mitigation of those biases by first being trained to identify them; second being willing to address them; third being trained in structured analytic methodologies which go far in levelling the analytic playing field; fourth being willing to retool theories to fit the facts rather than retooling the facts to fit the theories.

    The fourth I perceive to be the most important - and least engaged.

    On the previously mentioned structured analytic method ACH, I found that Heuer's software was effective in mitigating cognitive bias (as is a simple hand-written matrix, BTW) but only if it's worked in a particular fashion.

    As an example:
    Any typical competing hypotheses matrix has a fairly straightforward design. The first column on the left is populated with all of the facts generated by the analyst's research. The top cells of the second, third, fourth columns (et cetera) contain ALL working hypotheses, each in its own column. There may be several variations on a couple of themes, or simply a pair of mutually exclusive theories.

    The analyst then examines the facts in relation to the hypotheses, determining consistency, lack of applicability, or inconsistency. But this is where I found that the way this simple matrix is worked matters to the outcome.

    The first couple of times I utilised ACH software, while taking intelligence analysis classes in college, I had not yet learned that how the process is run makes a difference in keeping bias to a minimum. So I started at the top of the column for Hypothesis A and worked my way down. I compared data points 1-45 to Hypothesis A, attempting to assign a value (highly consistent, consistent, not applicable, inconsistent, highly inconsistent) to each data point as it related to Hypothesis A. Then I went through the same exercise with the same data for Hypothesis B's column.

    What a mess. For the particular project I was working on at the time, my results were inconclusive and an exercise in frustration.

    Finally another student clued me in. Work across! Is data point 2 consistent, not applicable, or inconsistent with Hypothesis A? Is data point 2 consistent, not applicable, or inconsistent with Hypothesis B? C? Next, is data point 3 consistent, not applicable, or inconsistent with Hypothesis A? B? C? Working across, apply a data point to all hypotheses, then the next fact, then the next, down the matrix.

    I will tell you that it surprised the heck out of me to find that, without having rearranged or changed ANY of my data points or hypotheses, the direction in which I worked the matrix made a HUGE difference in the utility of the results.

    Next, what must be done is to eliminate (or rework) the hypotheses which have large numbers of either "not applicables" or "inconsistents" in their columns.
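
    For anyone who wants to see those mechanics spelled out, here is a minimal sketch of the matrix described above, with made-up facts and hypotheses (this is not Heuer's actual software): facts down the side, hypotheses across the top, a consistency score in each cell, and the hypotheses ranked by how much evidence is inconsistent with them, since the method works by trying to disprove rather than confirm.

    Code:
    # Minimal ACH-style matrix sketch (hypothetical data, not Heuer's software).
    # Facts are rows, hypotheses are columns. The "work across" discipline applies
    # to the human filling in the scores (one fact against every hypothesis before
    # moving on); once the cells are filled, the tally below is the same either way.

    CC, C, NA, I, II = 2, 1, 0, -1, -2   # highly consistent ... highly inconsistent

    hypotheses = ["Hypothesis A", "Hypothesis B", "Hypothesis C"]
    matrix = {
        "data point 1": [C,  I,  C],
        "data point 2": [CC, II, NA],
        "data point 3": [I,  I,  C],
        "data point 4": [C,  NA, II],
        "data point 5": [NA, I,  C],
    }

    def inconsistency_score(column):
        """Sum only the negative (inconsistent) cells; closer to zero is better."""
        return sum(v for v in column if v < 0)

    scores = {}
    for i, h in enumerate(hypotheses):
        scores[h] = inconsistency_score([row[i] for row in matrix.values()])

    # The hypothesis with the least inconsistent evidence is the hardest to
    # disprove; heavily inconsistent ones are candidates for elimination or rework.
    for h, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(h, s)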

    Having just spent the last year working on the problem of US border security and Mexican drug cartel violence in Texas, I've watched several coworkers repetitively discard confirmed data because it doesn't fit their theories. This stuff has frustrated the living tar out of me! The individuals in question habitually cherry-picked the facts to "prove" their hypotheses, rather than working at trying to disprove all theories. That hypothesis which is least able to be disproven tends to have the highest validity.

    Structured methodologies, such as the tools identified and taught by Richards Heuer and Morgan Jones (among others), are the best I've found for removing ego and bias from the work of being an analyst.

    As mentioned or alluded to in previous posts, the wheel does not need to be reinvented, nor does the process by which it rolls need to be studied again some more. The tools are there, and have been highly effective for decades; but they must be taught consistently and reinforced often throughout intelligence analysts' careers, regardless of venue or gov't agency.
    "At least we're getting the kind of experience we need for the next war." -- Allen Dulles

    A work of art worth drooling over: http://www.maxton.com/intimidator1/i...r1_page4.shtml
