intelligence analysis, overcoming bias and learning
All-Source Analyst Training
After a recent DGS DART working group, my office was having a discussion on all-source analyst training. There seems to be some misconception, at least in the DGS community, as to what training should be required for those analysts who work in an all-source capacity. I think part of this stems from the fact that the AF does not have an all-source analyst AFSC. Each AFSC has traditionally been INT-focused, with the exception of the 1N0s, who are ops-focused and generally assigned to flying squadrons, and the 14N Intelligence Officer.
The line of thinking at the working group tended to steer towards mission-specific training and application training rather than core skills that could be used at any unit.
I wanted to get everyone’s thoughts on some of the basic skills training that any all-source analyst should have in the toolkit.
The idea we have come up with is a three-track approach: all-source training, regional training, and mission-specific training.
The all-source track would be a pyramid of courses built on a foundation of basic skills.
Foundation Skills
• Analytical Methods
– Problem definition
– Process (scientific method)
– Statistics
• Research
– Strategies
– Source evaluation
• Communications
– Technical and Editorial Writing
– Briefing
101 Training
• AFSC/MOS Technical School
– 1NX: "fire hose" training
– 1N1: receives SIGINT, HUMINT, AGI, OSINT
– Intel: receives ops familiarization
– AF: receives Army-centric, Navy-centric, and Joint training
• Military Capabilities
• General Intelligence (CIA, NSA, NGA, etc.)
Advanced Skills
• Scenario-based training/exercises
• Specialized analytical techniques
– HUMINT-specific support
– MASINT-specific support
• Region-centric integration
– Able to correlate linguistic, geopolitical, and cultural aspects with specific intelligence problems
Canada Studying Int Analyst Bias
This from MERX, Canada's public tender posting page (bolding mine):
"....Defence Research & Development Canada (DRDC) have a requirement to retain the services of a contractor to provide support for DRDC Toronto's data collection and collection involving a series of behavioural science experiments with human subjects. The experiments described in this Statement of Work (SOW) are motivated by DRDC Toronto's Applied Research Program (ARP) project, entitled "Understanding and Augmenting Human Capabilities for Intelligence Production," which is under the project management of Dr. David R. Mandel, Group Leader of the Thinking, Risk, and Intelligence Group within DRDC Toronto's Adversarial Intent Section. The overarching objectives of the ARP project and the experiments described herein for which contractor support is sought are: (a) to identify systematic biases in human performance that may effect the quality of intelligence analysis; (b) to identify factors that may attenuate or amplify such biases or otherwise influence judgmental performance; and (c) to examine the viability of counter-measures aimed at reducing or eliminating them...."
More details in Statement of Work here (.pdf download).
Hoo, Boy. Are they going to have fun...
With some slight experience as one of them and with analysts, I noted the problem and acknowledge that many can park their bias and do an effective job (subject- and issue-dependent), but am firmly convinced it is a problem. Glad to see someone delving into it. Look forward to the results.
Competition is the spice of life.
It also is a significant aid to keeping everyone honest...
The Canadians know that and are smart enough to have someone else also take a look and pull best ideas from both. We, on the other hand...
The efficiencies of consolidation and centralization are known; what is often ignored is the adverse impact of those moves on effectiveness. Two minds / approaches are always better than one; three even mo' betta... ;)
There's another aspect aside from effectiveness: you can always have your best gunners do the shooting or your best guy on point, but no one else will learn much or become good at what they do, and your bench will not be very deep.
All that has been known for centuries; "Quis custodiet ipsos custodes?" is not new and is practiced in most of the world; only the insane American predilections for one-size-fits-all, one-size-does-all, "whatever the boss wants" and "always show your good side" ignore that logic.
You're not likely to see any US countermeasures. We prefer those who totally support what the Boss wants, no matter how inane or even criminal -- as your example proves... :mad:
Word of even BROADER research....
....into how the bad guy does things, and how to predict what they'll do.
Public posting:
Quote:
The Department of National Defence, Defence Research and Development Canada (DRDC), Toronto, Ontario has a requirement for the provision of Scientific & Technical support for behavioural, cognitive, and social sciences research in various security environments and operational contexts. Examples of such contexts include command, control, communications, computers, intelligence, surveillance and reconnaissance; collection, analysis and dissemination of intelligence products; effects-based operations; all with an explicit focus on human performance at the individual, group, and organizational levels.
Now, a bit more detail from the Statement of Work here:
Quote:
“DRDC Toronto is now actively building its capacity for human sciences in a new research domain: Understanding, prediction and influence of adversaries’ intent …. DRDC requires contractual support for research in this new area. In particular, the Adversarial Intent Section (AIS) of DRDC Toronto anticipates a requirement for significant research effort that addresses the following general topic:
(a) elucidation of contemporary security concepts such as Effects-based Approaches to Operations (EBAO), Defence, Development and Diplomacy (3D), and the Comprehensive Approach (CA) and their use in military capability development and doctrine;
(b) understanding the social, organizational, and cognitive determinants of human capabilities for the effective production of military and civilian intelligence pertinent to domestic and international issues;
(c) influence processes that play a role in conflict-ridden environments characterized by complex interactions among adversaries, allied forces, and bystander groups; (and)
(d) methods and tools for structuring, portraying and analyzing complex contemporary security environments.”
I've "plain languaged" it a bit here.
Why not get a machine to "average out" the analyses?
This, from Wired.com's Danger Room:
Quote:
The U.S. intelligence community has a long history of blowing big calls — the fall of the Berlin Wall, Saddam’s WMD, 9/11. But in each collective fail, there were individual analysts who got it right. Now, the spy agencies want a better way to sort the accurate from the unsound, by applying principles of mathematics to weigh and rank the input of different experts.
Iarpa, the intelligence community’s way-out research arm, will host a one-day workshop on a new program, called Aggregative Contingent Estimation (ACE). The initiative follows Iarpa’s recent announcement of plans to create a computational model that can enhance human hypotheses and predictions, by catching inevitable biases and accounting for selective memory and stress.
ACE won’t replace flesh-and-blood experts — it’ll just let ‘em know what they’re worth. The intelligence community often relies on small teams of experts to evaluate situations, and then make forecasts and recommendations. But a team is only as strong as its weakest link, and Iarpa wants to fortify team-based outputs, by using mathematical aggregation to “elicit, weigh, and combine the judgments of many intelligence analysts" ....
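For what it's worth, the arithmetic behind that kind of aggregation isn't exotic. Here's a toy sketch in Python of one way it could work; the performance weighting by inverse Brier score is my own illustrative assumption, not anything Iarpa has published:
Code:
# Sketch of performance-weighted forecast aggregation, in the spirit of
# the ACE write-up. Weighting analysts by inverse Brier score is my own
# illustrative assumption, not Iarpa's actual method.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability calls and 0/1 outcomes.
    Lower is better; 0.25 is what constant coin-flip hedging earns."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def aggregate(current_forecasts, track_records):
    """Combine analysts' probability estimates for one question, weighting
    each analyst by past accuracy (inverse Brier score)."""
    weights = [1.0 / (brier_score(f, o) + 0.01)  # small constant avoids /0
               for f, o in track_records]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, current_forecasts)) / total

# Three analysts estimate the same event; each has a track record of
# (past probability calls, what actually happened).
records = [
    ([0.9, 0.8, 0.2], [1, 1, 0]),  # consistently sharp
    ([0.5, 0.5, 0.5], [1, 1, 0]),  # hedges everything
    ([0.1, 0.3, 0.9], [1, 1, 0]),  # consistently wrong
]
print(aggregate([0.8, 0.5, 0.2], records))  # ~0.73, pulled toward the sharp analyst

The weak link doesn't get thrown out; he just stops counting for as much as the guy with the track record.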
Still a firm believer in Heuer's research...
I found him to be spot on. It's not possible to eliminate cognitive biases from the analyst entirely, as they're based in how we learn and incorporate knowledge. What IS possible is significant mitigation of those biases: first, being trained to identify them; second, being willing to address them; third, being trained in structured analytic methodologies, which go far in levelling the analytic playing field; and fourth, being willing to retool theories to fit the facts rather than retooling the facts to fit the theories.
The fourth I perceive to be the most important - and least engaged.
On the previously mentioned structured analytic method, ACH (Analysis of Competing Hypotheses), I found that Heuer's software was effective in mitigating cognitive bias (as is a simple hand-written matrix, BTW), but only if it's worked in a particular fashion.
As an example:
Any typical competing-hypotheses matrix has a fairly straightforward design. The first column on the left is populated with all of the facts generated by the analyst's research. The top cell of the second, third, fourth columns (et cetera) each contains one working hypothesis. There may be several variations on a couple of themes, or simply a pair of mutually exclusive theories.
The analyst then examines the facts in relation to the hypotheses, determining consistency, lack of applicability, or inconsistency. But this is where I found that the way this simple matrix is worked matters regarding the outcome.
The first couple of times I utilised ACH software, while in college in intelligence analysis classes, I had not yet learned that the order in which the process is run can matter for keeping bias to a minimum. So I started at the top of the column for Hypothesis A and worked my way down: I compared data points 1-45 to Hypothesis A, attempting to assign a value (highly consistent, consistent, not applicable, inconsistent, highly inconsistent) to each data point as it related to Hypothesis A. Then I went through the same exercise with the same data for Hypothesis B's column.
What a mess. For the particular project I was working on at the time, my results were inconclusive and an exercise in frustration.
Finally, another student clued me in: work across! Is data point 2 consistent, not applicable, or inconsistent with Hypothesis A? With Hypothesis B? C? Next, is data point 3 consistent, not applicable, or inconsistent with Hypothesis A? B? C? Working across, apply a data point to all hypotheses, then the next fact, then the next, down the matrix.
I will tell you that it surprised the heck out of me to find that, without having rearranged or changed ANY of my data points or hypotheses, the direction in which I worked the matrix made a HUGE difference in the utility of the results.
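To make "work across" concrete: in code terms it's nothing more than swapping loop order, but it changes what the analyst has in front of him at each judgment. A toy sketch with made-up data points and hypotheses (the rate() stub stands in for the human call at each cell):
Code:
# Made-up matrix in the layout described above: facts down the left,
# one hypothesis per column, a consistency rating in each cell.
# CC = highly consistent, C = consistent, NA = not applicable,
# I = inconsistent, II = highly inconsistent.
hypotheses = ["Hyp A", "Hyp B", "Hyp C"]
matrix = {
    "data point 1": {"Hyp A": "C",  "Hyp B": "I",  "Hyp C": "NA"},
    "data point 2": {"Hyp A": "CC", "Hyp B": "I",  "Hyp C": "C"},
    "data point 3": {"Hyp A": "C",  "Hyp B": "II", "Hyp C": "I"},
}

def rate(data_point, hypothesis):
    # Stand-in for the human judgment at each cell.
    print("Rate:", data_point, "vs.", hypothesis)

# Working DOWN (how I first did it): one hypothesis at a time,
# which anchors you on that hypothesis for the whole column.
for h in hypotheses:
    for dp in matrix:
        rate(dp, h)

# Working ACROSS (the fix): each data point is judged against ALL
# hypotheses before moving on to the next fact.
for dp in matrix:
    for h in hypotheses:
        rate(dp, h)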
Next, eliminate (or rework) the hypotheses whose columns contain large numbers of "not applicable" or "inconsistent" ratings.
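That winnowing step is easy enough to mechanise. A rough sketch, reusing the toy matrix above; note that I'm counting both "not applicable" and "inconsistent" cells per my rule of thumb here, while Heuer's software does its own weighting:
Code:
# Rank hypotheses by how much evidence they fail to explain --
# the scoring runs on disconfirmation, not support.
hypotheses = ["Hyp A", "Hyp B", "Hyp C"]
matrix = {
    "data point 1": {"Hyp A": "C",  "Hyp B": "I",  "Hyp C": "NA"},
    "data point 2": {"Hyp A": "CC", "Hyp B": "I",  "Hyp C": "C"},
    "data point 3": {"Hyp A": "C",  "Hyp B": "II", "Hyp C": "I"},
}

def weak_cells(hypothesis):
    """Count inconsistent and not-applicable ratings in one column."""
    return sum(row[hypothesis] in ("I", "II", "NA") for row in matrix.values())

# Fewest weak cells = hardest to disprove = survives the cut.
for h in sorted(hypotheses, key=weak_cells):
    print(h, "-", weak_cells(h), "inconsistent/not-applicable cells")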
Having just spent the last year working on the problem of US border security and Mexican drug-cartel violence in Texas, I've watched several coworkers repeatedly discard confirmed data because it doesn't fit their theories. This stuff has frustrated the living tar out of me! The individuals in question habitually cherry-picked the facts to "prove" their hypotheses, rather than working at trying to disprove all theories. The hypothesis which is least able to be disproven tends to have the highest validity.
Structured methodologies, such as the tools identified and taught by Richards Heuer and Morgan Jones (among others), are the best means I've found for removing ego and bias from the work of being an analyst.
As mentioned or alluded to in previous posts, the wheel does not need to be reinvented, nor does the process by which it rolls need to be studied again some more. The tools are there, and have been highly effective for decades; but they must be taught consistently and reinforced often throughout intelligence analysts' careers, regardless of venue or gov't agency.
New related paper: What do Analysts' Managers Say?
A bit of follow-up work - from the paper's abstract:
Quote:
Intelligence analysis provides important informational support to civilian and military decision makers. Recent intelligence failures of Canada’s allies have been attributed mostly to cognitive, social, and organizational deficits and biases of individual analysts and intelligence agencies. Such attributions call for a comprehensive examination of intelligence production from the sociopsychological perspective. The present report discusses findings from interviews conducted with Canadian managers of intelligence analysts. The interviewed managers identified a number of pertinent issues in the intelligence production process that may be explicated through the application of the behavioural sciences’ accumulated knowledge and methodology. The identified issues are discussed in light of the intelligence studies and behavioural sciences literature, and a roadmap for the behavioural sciences research program in support of the intelligence function is outlined.
Executive summary downloadable (right click and "Save as") here, full report downloadable here.