Intelligence-Led Operations: Business as Usual or Paradigm Shift?
Greetings, SWC. I am currently working on a related SAW thesis and have hit a snag in nailing down what "intelligence-led operations" really are. My hope is to generate a discussion that can tap into the conventional wisdom on the concept, as well as get some thoughts on my ideas for expanding it. Anyway, on to my points…
The term “intelligence-led operations” is frequently used to describe our activities in Iraq and Afghanistan, yet one would be hard-pressed to find a single, overarching definition or theory behind it. A few sources on “intelligence-led policing” and the FM 3-24 discussions on “intelligence-driven operations” are the closest thing I’ve found to a theory. The following points are my best attempt to paraphrase the theory from these sources.
Intelligence-led operations…
- Are operations driven by intelligence, rather than vice versa.
- Subsequently produce the intelligence that drives further operations.
- Require intelligence personnel to take a pro-active, rather than passive, role in collecting intelligence.
- Require operators to be intelligence collectors.
As I read the verbiage behind these points, I kept asking myself if there’s really anything new or different here from the typical intelligence-operations relationship. Using the first two points, one could argue that “reconnaissance pull” and the targeting process are forms of intelligence-led operations. The last two points should be standard operating procedure for any type of operation/environment.
If there’s nothing new here, has the term become a euphemism for tactical-level operations based on something other than a developed course of action with well-defined decision points and supporting intelligence requirements? Are these operations with no clear linkage to the campaign objectives and endstate? Although there may be an element of truth to these questions, I don’t believe they or the points above accurately describe the concept as we know and practice it.
Here’s my shot at a better definition/theory: intelligence-led operations are purposeful interactions with adversary systems to gain understanding and leverage. They are therefore applicable to any type of campaign and should be incorporated into the operational design, not simply left to the tactical level. Now here’s what I think is the most important part…to fully incorporate intelligence-LED (vs. intelligence-DRIVEN) operations into a campaign design, intelligence must be its own logical line of operations. Thus, classic fire and maneuver operations must sometimes be designed to support intelligence, instead of vice versa.
These last two points have been admittedly borrowed from LTG William Boykin and Shimon Naveh, respectively. This ops/intel paradigm shift they’re describing is, I believe, necessary to conduct “systems thinking” at the operational level.
That's where I'm at so far. Hopefully I've made sense. I'm looking forward to the discussion.
V/R,
Jason Brown
Excellent point. Further, I submit it transcends
Quote:
Originally Posted by
Jobu
. . .
Modern adversaries "maneuver below our discrimination threshold" as the Aussies have said in "Adaptive Campaigning". Passive sensors just aren't going to distinguish these targets...unless you find a way to get the adversary to operate above the threshold. I don't think this is a new idea...exposing adversaries with decoys is a similar if not the same concept...
the intel aspects -- but I'll address them first. Just one quick thought there; getting a competent adversary (and fortunately, most are not that competent in my observation) to operate above that discrimination threshold is all too frequently a matter of luck or the adversary's incompetence -- which offers little solace, as the good ones least likely to rise above the threshold are the most dangerous. Thus the more sensible option IMO is to actively pursue indicators.
Active pursuit of information or, the transcending part, active operational efforts will typically entail risk. Sometimes personal, sometimes to the innocent, frequently to own forces or assets, and even more frequently to those who direct the initiation of those active measures. The former three risk aspects are physical and can result in deaths or casualties; the latter may be physical but more often, the perceived risk is to reputation and job security.
We are today a risk averse society. We are also encumbered by a DoD and service culture that has effectively not really been at war for 62 years (though elements of the services most certainly have been to a number of wars in that time) and that has been increasingly bureaucratized to the point where protection of the institutions takes precedence in too many cases over what needs to be done. We have an intensely -- perhaps too intensely -- competitive selection and promotion system that discourages radicals and, frequently, innovative thinking. We have a budget process that encourages parochialism and finger pointing for fiscal advantage.
The point of all that is that I agree with the rest of your comment:
Quote:
"...I just don't think it is intuitive among intelligence professionals or typical operators (average aircrew or combat arms types) in Small Wars as of yet...at least not above the tactical level."
(emphasis added / kw)
The guys who have to get it done generally do, one way or another -- but they do so at some risk to themselves. For echelons above Battalion / Squadron (all types), I sense a strong aversion to risk; not universally, but all too frequently. I think the concept is intuitively grasped at the upper levels -- but it is also instinctively rejected for the reasons I cited above.
That raises in my mind two questions. First, back to the Intel piece. Consider that if we remain in the reactive mode and are reluctant to encourage risk taking; if our opponents are smart enough to stay off the radar screen; if being 'right,' 'staying in our lane' and doing things 'correctly' is more important than getting a 75% solution to the folks who need it BEFORE they need it -- or even know they need it (and IMO, that is the goal of Intel led operations) -- are we doing what we should be doing?
The second question is operational; with basically the same considerations except that getting a 75% probability of mission success is adequate to launch -- but we don't because we want a 90% probability -- are we not doing what we should be doing?