Really was an excellent and clear response. (clear meaning even I could understand it)
This Okie wrote it so you Kansas guys could understand.:D
Thanks
JohnT
JohnT,
Thanks for your kind comments. But I'll take exception to even the mild criticism. First, I never claimed there was no research on small wars/COIN/IW/etc. I saw too many early drafts of Nagl's and Cassidy's dissertations/books in the very late 90s to ever say that. You do point out that there was indeed some doctrine. But there is a huge difference between doctrine published and doctrine absorbed. Did the publication of FM 100-20 impact the institution? I see little evidence.
But I do take very well your larger point (I think), that the current thinking about small wars/COIN is very much haunted by Iraq (and to a lesser extent, Afghanistan), and is in that sense ahistorical. I perhaps exemplify that, as the POP on piece was entirely written while I was in Baghdad (academic journals have a LONG lead time). But I think the next generation of joint and interagency doctrine will try to gain some critical distance, and perhaps much of the other writing will follow.
Thanks for taking the time to read.
Best,
Doug
I think the point you make about the lack of impact on the institution as a whole of 100-20 is spot on. All the references to LIC, OOTW, and MOOTW were little more than lip service to most officers. It appears that many of my Leavenworth students drifted through their MOOTW classes without a look back (until they got to Haiti or later Iraq...)
As to your "issue" with me: I interpreted your comment on p. 359 based on your discussion of the 70s and 80s where you said, "Of note, during this time period there was no doctrine on counterinsurgency, peace operations, or stability operations." 100% true for the 70s. 100-20 on COIN/LIC is 81. Peace ops and stability ops do not appear until the 90s.
So, I guess we really agree on far more than we disagree.:cool:
Cheers
JohnT
A central task in counterinsurgency, peacekeeping, and post-conflict stabilization that the revised U.S. Army Operations manual and FM 3-24 identify as essential is the ability to execute assessments. However, neither the Army Operations manual nor FM 3-24 provides commanders and staffs with the training, methodology, format, and skills to conduct a thorough assessment. Assessments of any type are complex undertakings that require a thorough degree of training and staff work to execute correctly.
In order to conduct a proper counterinsurgency assessment, the assessment process needs good, clean data; common, well-understood definitions; a well-structured understanding of what the commander is trying to achieve and what success looks like; and the ability to incorporate non-quantitative assessments (i.e., personal opinion, etc.) in a systematic fashion. Finally, the hallmark of a good, systematic assessment process is an understanding of its overall usefulness and limitations. Personal opinion, the commander's perspective of the battlefield, and quantitative assessment are all part of the counterinsurgency assessment process.
At the macro level, the discussion of COIN assessments has revolved around which methodology to use: PMESII, line-of-effort-based scorecards, or my own Combat Analytics Balanced Scorecard. However, the high-level, end-product counterinsurgency assessment scorecard is merely the result of a good assessment process that uses quality data; common definitions; a systematic, repeatable process; common-sense data gathering; and the ability to systematically incorporate non-quantitative opinions into an overall commander/staff assessment of the progress of the counterinsurgency campaign.
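To make the idea concrete, here is a minimal sketch (in Python) of how a line-of-effort score might blend normalized quantitative metrics with subjective staff ratings. The metric names, the weighting, and the 0-to-1 scales are my own illustrative assumptions, not part of any published scorecard methodology.

```python
# Hypothetical line-of-effort scorecard sketch: names, weights, and
# scales are illustrative assumptions, not an official methodology.

def score_line_of_effort(quantitative, subjective, quant_weight=0.6):
    """Blend normalized metrics (0-1) with staff ratings (0-1)."""
    q = sum(quantitative.values()) / len(quantitative)
    s = sum(subjective.values()) / len(subjective)
    return quant_weight * q + (1 - quant_weight) * s

# Example roll-up for a notional "security" line of effort.
security = score_line_of_effort(
    quantitative={"attacks_vs_baseline": 0.8, "tips_received": 0.6},
    subjective={"bn_cdr_assessment": 0.5, "advisor_assessment": 0.5},
)
print(round(security, 2))  # 0.62
```

The point of the sketch is not the arithmetic but the structure: the subjective ratings are incorporated systematically, with an explicit weight, rather than bolted on after the fact.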
On a note of caution, I would not conflate the obvious shortcomings of Effects Based Operations (EBO) with shortcomings of using a systematic, well-defined, repeatable, and well-understood assessment process to help drive counterinsurgency operations. To be a document that truly helps commanders and staffs in the counterinsurgency fight, FM 3-24 needs a well-understood, documented, step-by-step assessment process to help military organizations track their progress, determine their successes, and target their shortcomings, providing counterinsurgency operations a compass toward a successful conclusion.
Good luck with that, particularly the data quality and gathering...
Quote:
"However, the high-level, end-product counterinsurgency assessment scorecard is merely the result of a good assessment process that uses quality data; common definitions; a systematic, repeatable process; common-sense data gathering; and the ability to systematically incorporate non-quantitative opinions into an overall commander/staff assessment of the progress of the counterinsurgency campaign."
Although this may be a way of approaching it, and that's always a good starting point, it's also important to recognize the biggest issue with any such directive application: ensuring that those in command who might not "get it" (when circumstances allow for nothing more concrete than ad hoc and changing approaches) don't use the tool to avoid doing exactly that.
It's a hard balance to achieve:wry:
Metrics are very hard to get in COIN/stabilization/peacekeeping, particularly leading measures. It's easy to get a metric that will tell you that you were doing really great--or really poorly--six months ago. Short term feedback is much harder.
But I would recommend those who work this should contact the ORSA officers who worked for me in Baghdad--LTC Scott Kain and MAJ Jeremy Newton. Both are incredibly skilled statisticians/modelers with a good sense of the multiplicity of issues involved in doing a credible assessment (and a healthy sense of limitations). While I still believe you have to balance the objective data with subjective assessment (both at the theoretical level that not everything is measurable, and at the practical level that we know we don't have the capability to measure as we would like), their model was a good tool--and did pick up the improvements as Baghdad got better.
And I agree--while EBO has obvious shortcomings, we should "harvest" its two primary contributions--non kinetics matter, and assessments matter.
Doug
I accept the need for some appropriate metrics in order to assist in planning and understanding within the COIN environment. However, I believe that care must be taken that the 'science' of whatever process is used to gather the data to be analysed does not become confused with 'countering' insurgency. Metrics are a part of the puzzle and by no means necessarily as central to 'unlocking' COIN as some of the discussion would perhaps have us believe.
Our 20th- and 21st-century Western militaries invariably have a fascination with science, frequently displaying behaviour that suggests we believe it will clear the 'fog of war' and give us an insight superior to that of our foes. The evidence of this can be seen in instances as diverse as the McNamara-era analyses of the war in Vietnam, to the premature triumphalism of the air power advocates after the 1991 Gulf War (and, to an extent, in the thinking behind the 'light' decapitation of the Ba'athist regime in Iraq in 2003).
When I read and see offerings of 'systems' or 'processes' promising understanding in COIN, I shudder. During my time at the MNF-I COIN CFE I encountered many officers and NCOs imbued with passion and enthusiasm for various such systems. Invariably they struggled with the dissonance that resulted when their process failed to adequately explain or account for the reality of the complex environment they ended up in. Notwithstanding what I said earlier about the need for some metrics, I found that the only approach that seemed to really 'help' was a more Socratic approach to teaching.
In short, I (and my colleagues) found that we had two options regarding COIN education. First, we could teach a 'system' or 'process' (the 'science' approach) that was 'easy' to teach and quickly gave the satisfying illusion that we had passed on knowledge. This also fit nicely with the Army's and USMC's cultural predilection for receiving such process training. But doing this is not COIN education; it is rote process learning. The second approach (the 'art' approach) was to assist people in 'free thinking' based upon looking at the situation and assessing it somewhat subjectively, relying on their own innate intelligence and on a doctrinal-principles approach. This is harder to do; it required a lot of work and effort (as well as understanding by the instructors). We chose the second approach: it led to better comprehension and improved confidence in their own abilities amongst the trainees.
The key to being able to use such an approach is doctrine that clearly articulates principles and concepts that can be intelligently applied with a degree of thought. Doctrine that moves towards advocacy of systems or processes will generate an 'industry' of understanding that actually adds nothing tangible to the bottom line of understanding, whilst providing the very illusion of the same. I tender the example of the MDMP as evidence: Commanders and Staff Officers at all levels can gain virtual PhD levels of understanding of the process whilst at the same time failing to have the faintest clue about tactical or operational art.
To wind up this suddenly long post, I believe that the next draft of FM 3-24 must shy away from any process-driven or formulaic approach and seek to build upon an approach that reinforces clear, simple description and principles to aid soldiers in gaining truer understanding rather than rote learning of a system. Simpler is better. As COL Alex Alderson (former MNF-I plans) said, 'Counterinsurgency is not alchemy.'
Regards,
Mark
My personal experience of a systematic, well-defined, repeatable, and well-understood assessment process is that it is of limited use when carried out by human beings dealing with incomplete, changing and ambiguous data.
Some old guy called Carl Von something called it "friction" and I think specifically warned against trying to quantify it.
However, as someone passionate about military science and thought, I'd love to see you post one of these cards or some deeper explanation so we may better assess it.
Available in training for you via IMPAC for $2500/session! (no kidding)
I believe we discussed the merits of the balanced scorecard here.
I have discussed the SWORD Model to some extent here and, at Marc T's suggestion, am working on a summary article (with a new twist or 2). Since it will take me a bit to finish, I want to make a couple of comments here that relate to Mark's post.
The SWORD Model is social science. It makes use of the scientific method for theory development, data collection, and data analysis. When you see the article, you will recognize its dimensions from 3-24, 100-20, JP 3-07 and lots of other places. Until the research was done, however, no quantitative metrics existed. Moreover, the model works quite well with qualitative data and analysis but only - as Mark says - as principles. Metrics in an ongoing conflict are notoriously difficult to gather and especially difficult to cull from public sources. In the last chapter of Max's and my Uncomfortable Wars Revisited, I applied the model to the ongoing conflict in Iraq using only public data (this was as of 2004). I hedged and qualified all over the place, but a rigorous read shows that I was overly optimistic at the time and dead wrong on unity of effort.
So, using any Social Science model depends on the quality of the model, the quality of the data (mine was not nearly as good as it should have been), and the art of the interpreter (mine was pretty good since it was almost sufficiently hedged:rolleyes:).
On that cautionary note
Cheers
JohnT
I mentioned this briefly before in a similar topic some weeks ago, but I think the methodology used by the strategic warning community, often called "indications analysis," might be useful for some of the difficulties in judging effectiveness and/or progress toward various goals in COIN. Indications analysis is designed to be carried out by "human beings dealing with incomplete, changing and ambiguous data" - it comes with the territory of the strategic warning problem.
Probably the best primer on the subject is Cynthia Grabo's now-declassified and updated text from the 1970s. In a simplistic nutshell, indications analysis works backward from a particular end-state. A series of indicators, or signposts, on the way to that end-state are developed and then monitored. Although developed for warning, any end-state or scenario can be broken down and analyzed using this methodology. The advantages are two-fold: it can be used to make predictions (which is what it was originally designed to do), but it also can provide a means to analyze and compare various courses of action one might take.
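As a rough illustration of the mechanics described above, here is a sketch of an indicator list worked backward from an end-state, with observed indications rolled up into a simple warning score. The indicator names and weights are invented for illustration; they are not drawn from Grabo's text, and a real warning problem would involve far more structure and judgment.

```python
# Sketch of Grabo-style indications analysis: indicators are anticipated
# steps toward an end-state; indications are reports that a step is
# actually occurring. Names and weights below are illustrative only.

INDICATOR_LIST = {
    "reservist call-up": 3,          # weight = how diagnostic the step is
    "forward fuel stockpiling": 2,
    "propaganda escalation": 1,
}

def warning_score(observed_indications):
    """Sum the weights of indicators for which an indication was observed."""
    return sum(weight for name, weight in INDICATOR_LIST.items()
               if name in observed_indications)

reports = {"reservist call-up", "propaganda escalation"}
print(warning_score(reports))  # 4 of a possible 6
```

The monitoring step, not the scoring, is where the real work lies: someone has to watch incoming reporting specifically for these items, which is the resourcing problem raised later in the thread.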
To borrow from Mark's post below, indications analysis is both an "art" and a "science" and tries to combine the strengths of both philosophies. Although the framework might be science-based, the nature and ambiguity of information requires human judgment, particularly since indicators and indications are not limited to hard, quantifiable data. From Grabo:
Quote:
An indication can be a development of almost any kind. Specifically, it may be a confirmed fact, a possible fact, an absence of something, a fragment of information, an observation, a photograph, a propaganda broadcast, a diplomatic note, a call-up of reservists, a deployment of forces, a military alert, an agent report, or anything else. The sole provision is that it provide some insight, or seem to provide some insight, into the enemy’s likely course of action. An indication can be positive, negative or ambiguous (uncertain).
Instead of "preparation for hostilities" you can substitute any theoretical end-state you desire. In fact, indications analysis within the intelligence community has expanded beyond the traditional role of warning of hostile actions by adversaries to monitoring a variety of issues of interest to the US. "Warning problems" have been established on a variety of topics that have little to do with the potential of an adversary's attack. I see no reason why this tested framework cannot be used in analysis of COIN and LIC, particularly since in my experience so much effort is put toward current intelligence, which has little value for this kind of estimation and analysis.
Quote:
An indicator is a known or theoretical step which the adversary should or may take in preparation for hostilities. It is something which we anticipate may occur, and which we therefore usually incorporate into a list of things to be watched which is known as an “indicator list.’’ Information that any step is actually being implemented constitutes an indication. The distinction between expectation and actuality, or between theory and a current development, is a useful one, and those in the warning trade have tried to insure that this distinction between indicators and indications is maintained. Many non-specialists fail to make this careful distinction.
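The indicator/indication distinction the quoted passage insists on can be captured in a small data model: the indicator is the anticipated step on the watch list, while each indication is a report assessed against it. The positive/negative/ambiguous labels follow the quoted passages; the class and field names are my own assumptions.

```python
# Sketch of the indicator/indication distinction: an indicator is an
# anticipated step on a watch list; an indication is a report that the
# step is actually occurring. Names here are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class Indicator:
    description: str                 # the theoretical step we anticipate
    indications: list = field(default_factory=list)  # observed evidence

    def record(self, report, assessment):
        """Attach an indication; assessment is 'positive', 'negative',
        or 'ambiguous', per Grabo's three categories."""
        self.indications.append((report, assessment))

ind = Indicator("deployment of forces to the border")
ind.record("imagery shows armor railhead activity", "positive")
print(len(ind.indications))  # 1
```

Keeping the two concepts in separate slots makes it harder to commit the error Grabo flags: treating the expectation itself as evidence that the step has occurred.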
I do see two potential problems, however. First, indications analysis requires a lot of resources, time and effort. The strategic warning community is often under-resourced relative to other requirements - it seems likely that a COIN-focused effort would suffer to an even greater extent.
Second, indications analysis requires a focused collection effort that may not be available in COIN scenarios. IOW, indicator development and monitoring are not possible if there isn't a significant body of baseline knowledge and the ability to get information in the necessary areas.
Of course, both these limitations apply equally to any other methodology or analytical framework that one might use.
I've always thought that this quote from Grabo should be framed and hung on a number of walls (especially in the NGO/aid "early warning" community):
Quote:
The researcher should take care not to be trapped in a rigid system which cannot be readily expanded or modified as new developments occur. The system should be designed to serve the analyst, not to have the analyst serve the system.
Entropy, thanks for posting the link to the Grabo text - I hadn't seen it before. It's an interesting blend of inductive and abductive logics; I think I will probably use it in my course this fall.
Grabo's Anticipating Surprise is a classic in the intelligence field and a great read. However, because of its fairly tight focus on warning in conventional conflict, the principles she relates may be more difficult for inexperienced readers to absorb and mentally shift into the COIN/UW context.
Entropy, I don't feel that indicator development is impossible or overly difficult in COIN/UW. But keep in mind that Grabo discusses strategic warning in her book, with the product intended to alert policymakers to emerging threats - and she also cautions about the difficulties of convincing them (and others in the IC) of the real dangers embodied in emerging threats that are outside of their current perceptions. Warning intel in COIN/UW is most effective at the unit level.
Putting aside the formality of lists for a moment, just consider that all soldiers operating on the ground in such an environment develop their own personal indicator lists in their head. To use a cliche, but real, example - the sudden absence of locals from a normally lively street in town is usually taken as an indicator that something bad is about to happen. In the COE, such indicators range from being simple and broadly applicable as in that example, to the much more complex and focused on narrow, local context.
(Recall the bit from Go Tell the Spartans, where the analyst demonstrates to the cynical commander the ability to predict which village the VC are going to hit next.)
At an even more personal level, I used to train my HUMINT'ers in the principles of indicator analysis for interrogation. The baseline of information regarding kinesics, cognition and emotion is gathered during the first phase of the interrogation (or, if the situation allows, during the first screening interview). In this case, the indicators developed are used, not for "warning" in the standard sense, but to alert the interrogator to deception, potential leads and openings for manipulation of any one or all of the three mentioned aspects of the source.
To get back to analysis, the unfortunate truth is that many analysts at the tactical level have neither the training nor the experience to effectively implement a warning system for their units. Another obstacle is that, even if they develop the best list of indicators available in-country, unless they have an effective system for monitoring incoming information specifically for indicators and disseminating immediate warning, it ends up being a waste of valuable time of a critical asset. Just as Grabo relates for the strategic level, to be effective at the unit level it would also have to be a full time gig - and I know of few units that can spare an analyst to do nothing more than the warning job. But most competent analysts are still able to integrate elements of the warning discipline into everything else they've got on their plate. I'm sure many of those on the board can think of examples.
FYI: Several other good pubs from NDIC, along with the Grabo text, are available for download through the link posted on an earlier thread