I found him to be spot on. It's not possible to eliminate cognitive biases from the analyst entirely, as they're rooted in how we learn and incorporate knowledge. What IS possible is significant mitigation of those biases: first, by being trained to identify them; second, by being willing to address them; third, by being trained in structured analytic methodologies, which go far towards levelling the analytic playing field; and fourth, by being willing to retool theories to fit the facts rather than retooling the facts to fit the theories.

The fourth I perceive to be the most important, and the least often engaged.

On the previously mentioned structured analytic method, Analysis of Competing Hypotheses (ACH), I found that Heuer's software was effective in mitigating cognitive bias (as is a simple hand-written matrix, BTW), but only if it's worked in a particular fashion.

As an example:
Any typical competing hypotheses matrix has a fairly straightforward design. The first column on the left is populated with all of the facts generated by the analyst's research. The top cells of the second, third, fourth columns (et cetera) contain ALL working hypotheses, each in its own column. There may be several variations on a couple of themes, or simply a pair of mutually exclusive theories.

The analyst then examines the facts in relation to the hypotheses, rating each for consistency, lack of applicability, or inconsistency. But this is where I found that the way this simple matrix is worked matters to the outcome.
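To make that layout concrete, here's a rough Python sketch of such a matrix. The names and numeric weights below (Rating, evidence, hypotheses, matrix) are my own shorthand for illustration, not anything taken from Heuer's software:

```python
from enum import Enum

# Hypothetical rating scale mirroring the values described above;
# the numeric weights are arbitrary, chosen only for illustration.
class Rating(Enum):
    HIGHLY_CONSISTENT = 2
    CONSISTENT = 1
    NOT_APPLICABLE = 0
    INCONSISTENT = -1
    HIGHLY_INCONSISTENT = -2

# Facts down the left-hand column, hypotheses across the top.
evidence = ["Data point 1", "Data point 2", "Data point 3"]
hypotheses = ["Hypothesis A", "Hypothesis B", "Hypothesis C"]

# The matrix itself: one rating per (evidence, hypothesis) cell,
# filled in by the analyst as the facts are examined.
matrix: dict[tuple[str, str], Rating] = {}
matrix[("Data point 1", "Hypothesis A")] = Rating.CONSISTENT  # example entry
```

A hand-drawn grid on paper carries exactly the same information; the structure is the point, not the tooling.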

The first couple of times I utilised ACH software, while taking intelligence analysis classes in college, I had not yet learned that how the process is run can make a difference in assuring the least bias possible. So I started at the top of the column for Hypothesis A and worked my way down: I compared data points 1-45 to Hypothesis A, assigning each a value (highly consistent, consistent, not applicable, inconsistent, highly inconsistent) as it related to Hypothesis A. Then I went through the same exercise with the same data for Hypothesis B's column.

What a mess. For the particular project I was working on at the time, my results were inconclusive and an exercise in frustration.

Finally another student clued me in: work across! Is data point 2 consistent, not applicable, or inconsistent with Hypothesis A? With Hypothesis B? C? Next, is data point 3 consistent, not applicable, or inconsistent with Hypothesis A? B? C? Working across, apply a single data point to all hypotheses, then the next fact, then the next, down the matrix.
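In code terms the difference is nothing more than which loop sits on the outside. A minimal sketch, assuming a hypothetical rate() callable standing in for the analyst's judgment on a single cell:

```python
def work_across(evidence, hypotheses, rate):
    """Row by row: rate one piece of evidence against every
    hypothesis before moving on to the next piece of evidence."""
    matrix = {}
    for e in evidence:               # outer loop: evidence (rows)
        for h in hypotheses:         # inner loop: hypotheses (columns)
            matrix[(e, h)] = rate(e, h)
    return matrix

def work_down(evidence, hypotheses, rate):
    """Column by column: the ordering I started with. Exhaust
    Hypothesis A's entire column, then B's, and so on."""
    matrix = {}
    for h in hypotheses:             # outer loop: hypotheses (columns)
        for e in evidence:           # inner loop: evidence (rows)
            matrix[(e, h)] = rate(e, h)
    return matrix
```

If rate() were a deterministic function, the two orderings would fill in identical matrices; the whole point of my experience is that a human analyst isn't deterministic, and the order in which the judgments are elicited changes the judgments themselves.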

I will tell you that it surprised the heck out of me to find that, without having rearranged or changed ANY of my data points or hypotheses, the direction in which I worked the matrix made a HUGE difference in the utility of the results.

Next, eliminate (or rework) the hypotheses whose columns contain large numbers of "not applicable" or "inconsistent" ratings.
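Continuing the same toy sketch (plain string ratings this time, so the snippet stands alone), here is a crude way to surface those weak columns; the thresholds are purely illustrative, not anything prescribed by Heuer:

```python
from collections import Counter

def column_tallies(matrix, hypotheses):
    """Tally how many of each rating ("CC", "C", "NA", "I", "II")
    landed in each hypothesis column."""
    tallies = {h: Counter() for h in hypotheses}
    for (_, hypothesis), rating in matrix.items():
        tallies[hypothesis][rating] += 1
    return tallies

def flag_for_rework(tallies, max_inconsistent=5, max_not_applicable=10):
    """Flag hypotheses whose columns pile up inconsistent or
    not-applicable ratings; the cut-offs are illustrative only."""
    flagged = []
    for h, counts in tallies.items():
        if (counts["I"] + counts["II"] > max_inconsistent
                or counts["NA"] > max_not_applicable):
            flagged.append(h)
    return flagged
```

A hand count down each column does the same job; the point is only that the tally is mechanical once the ratings exist.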

Having just spent the last year working on the problem of US border security and Mexican drug cartel violence in Texas, I've watched several coworkers repeatedly discard confirmed data because it doesn't fit their theories. This stuff has frustrated the living tar out of me! The individuals in question habitually cherry-picked the facts to "prove" their hypotheses, rather than working at trying to disprove all the theories. The hypothesis that is least able to be disproven tends to have the highest validity.

Structured methodologies, such as the tools identified and taught by Richards Heuer and Morgan Jones (among others), are the best means I've found for removing ego and bias from the work of being an analyst.

As mentioned or alluded to in previous posts, the wheel does not need to be reinvented, nor does the process by which it rolls need further study. The tools are there, and they have been highly effective for decades; but they must be taught consistently and reinforced often throughout intelligence analysts' careers, regardless of venue or gov't agency.