Unless your war is vastly different, and I suspect it is not:
Quote:
Originally Posted by
Bob's World
Who was most productive today? Who was most effective today? What do we do more of? What do we do less of? How do I convince conventional commanders to allocate critical enablers to one form of engagement over another?
I'm pretty sure that varies a great deal depending on whether the location and overall relevance of those IEDs was critical, nice or unimportant; whether the weapons of those 'insurgents' were recovered and how well their deaths can be exploited; whether the cache was significant and/or the snatch was worthwhile (and the snatchees can be exploited); and whether the meeting and MedCap were in a critical or humdrum area.
IOW, the factors of METT-TC apply as they pretty much do in all wars. Importance and productivity vary from day to day, and each type of operation can range from unnecessary to deadly to so-so to good work to super important. That's war for you... :wry:
Frustrating, isn't it? :o
Quote:
I have a counterpart who reads the "scorecard" to the commander every morning. No one ever asks what it means.
That could mean they're all self-explanatory or self-evident -- or that those items are irrelevant. Either way, it sounds awfully bureaucratic to me. What do you report on those days when there is nothing to report?
Quote:
How does that go, "you sell the sizzle, not the steak?" That's fine. But never forget you will quickly starve eating nothing but sizzle. But boy, do people love sizzle.
I'm not sure if that metaphor insinuates that overrated DA is more glamorous than the FID/SFA business, which is really far more important, but it seems like it might be. If it is, I agree.
Illegitimi non carborundum. Keep on pushing. ;)
I guess you forgot I'm a strong believer in no metrics, Bill.
As applied to warfighting, they do have a valid place in log, engineering and fire support functions but 'metrics' have no place in the conduct of combat operations. You obviously also missed the fact that I said above that the KIAs, weapons and hopeful and possible exploitation have some merit but should not be publicized in any way or be a briefing item. Those facts can, if you know your enemy (and we generally do not because we don't stay long enough to learn him) be beneficial and useful.
You and a few others are objecting on the basis that they ARE or will become a briefing item. Possibly, even probably true -- but that's a flaw on the part of the senior person who directs that, not of the process itself. Misuse of intelligence is all too frequent and is not restricted to bad guy KIA data...
You say:
Quote:
"We need more analysis, more ground truth about what makes people tick, more leadership instead of management, and less focus on diverting staffs to chase useless metrics."
I totally agree. Get commanders to stop asking for and staff officers to stop providing such fluff.
You also forgot I'm firmly convinced Staffs are way too large and that this is responsible for the proliferation of useless 'data' and 'metrics.' There are better ways to build a mobilization base...
Don't call information useless just because you haven't used it or seen it used profitably. ;)
The Real EBO - Effects Blurred Operations
Thanks to Wilf for initiating this thread and to the many contributors who are richly exploring the issues related to metrics. I wish to contribute to this exchange with my own observations and experiences.
My experience began with the development of EBO concepts and theory and the ability to apply measures to assess the desired effects with JFCOM in 2002. I have since had the dubious benefit of actually applying the theory of MOP/MOE development, collection, assessment, analysis and reporting at the strategic, operational and tactical levels of the ISAF mission with separate deployments to both Kabul and Kandahar as an operational analyst.
Notwithstanding the vigorous debates surrounding EBO itself, the continued and insatiable demand for measures has frustrated me considerably. I echo many of the identified problems several posts have described. Unlike many of my contemporaries (scientists), I feel there are serious limitations to the collection, analysis and reporting of measures. It is my position that reliance on the measurable has led to a situation I describe as the real EBO: effects blurred operations. This blurring is based on several factors, such as:
i) Reliance on MOPs versus MOEs - MOPs are necessary to evaluate whether we are doing things right. Numbers killed in an AO can be used to evaluate insurgent activity. However, numbers of KIA, development projects or Shura meetings are meaningless when divorced from the requirement for patrols, development or local interaction. MOEs are intended to evaluate whether we are doing the right things - such as in the earlier post regarding the value and priority associated with combat patrols, intelligence-driven operations, or medical and humanitarian assistance. True MOEs could be used, for example, to support transitioning ink spot strategies from clear, hold and build dominant operations in an AO. Unlike MOPs (which are straightforward and easier and faster to measure), the selection and development of MOEs is not easy, and their evaluation lags the operational tempo.
ii) measuring the measurable versus the important - too often there is far too much reliance on what we can measure, and then on associating that measure with the progress of an operation. The number of schools built may make one feel good, but what has it to do with education requirements/needs? Are the schools located where they are needed, are they built to specified standards, are they protected from Taliban vandalism, are they staffed with qualified teachers, do they have secured funding for operations, do they have teaching materials, are the local children attending the school, etc.? I have seen UNICEF tents that were more effective schools than dedicated buildings. Measuring 'things' also results in what I call an effects explosion. This refers to the ever-increasing numbers of metrics that are foisted on deployed personnel to collect and report irrelevant data (ISAF progressed from 25 to 100 to over 700 monthly measures over a one-year period, for example). Coupling this with the problems of collecting and reporting this information across the provinces results in inconsistencies, omissions, real and unintentional errors, inflationary or creative reporting, etc.
iii) reporting measures and producing slides or Excel tables instead of analysing their significance and providing advice or recommendations - often analysts divorced from the actual operations are given the responsibility to assess and report on the data. The results are mostly divorced from the operations and are virtually useless in terms of influencing the operations themselves; they are predominantly used to report to higher headquarters and later repackaged and distilled into IO materials that are used to show progress of the mission to national audiences and media.
iv) metrics obsession - certain metrics (KIAs, TICs and other SIGACTs, for example) become THE metric for reporting, and this detracts from the purpose of collecting the metric in the first place, which should be determining effectiveness and supporting HQ decision analysis requirements. Too often I witnessed senior staff who were more concerned with the numbers than with the associated trend and analysis. We need to emphasize the relationship between metrics and operations. The counter-narcotics reliance on # of acres of poppy eradicated is a useless and overly relied-upon metric when one considers that, despite the ever-increasing (presumably good) statistic, poppy cultivation has exploded in terms of its proliferation both within problem provinces and to a large number of previously poppy-free provinces. Where is the analysis of the eradication program in relation to the growth of the problem? Secondly, is there a secondary relationship between eradication and insecurity which may run counter to the dominant mission in Afghanistan?
v) distillation of metrics - too often I witnessed the development of metrics that are so distilled (down to the tactical level) that they are relatively meaningless to the original purpose. This distillation is the result of combinations of the above problems. Any one measure can be rationalized as important to a particular purpose in and of itself; however, in the roll-up of the comprehensive assessment the relative weighting of the metric becomes too prominent and can dominate the reported results. I believe certain metrics are key and quite dominant on their own (numbers of IEDs could be one example); however, the overall assessments need to account for more than the constituent elements. I assert that qualitative measures are more important to MOEs, which should form the basis of higher-level assessments.
Quantitative assessments, which dominate metric measures, can inform qualitative assessments; however, the subjectivity of these qualitative measures detracts from their adoption and use. I would assert that an informed and effective force understands its AO qualitatively. This understanding fails to be conveyed within our dominant quantitative metric measurement systems.
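The eradication point in (iv) can be shown with a few lines of arithmetic. All figures below are invented purely for illustration, not real counter-narcotics data; the pattern they show is the point: the raw metric climbs every year while the share of the actual problem addressed shrinks.

```python
# Hypothetical illustration: a raw eradication count vs. the trend it is
# supposed to affect. All numbers are invented for the sketch.

years = [2004, 2005, 2006, 2007]
acres_eradicated = [5_000, 7_000, 9_000, 10_000]      # the "good news" metric
acres_cultivated = [200_000, 300_000, 400_000, 480_000]  # the actual problem

for yr, erad, cult in zip(years, acres_eradicated, acres_cultivated):
    share = erad / cult
    print(f"{yr}: eradicated {erad:,} acres "
          f"({share:.1%} of {cult:,} acres cultivated)")
```

The headline number rises year over year while the eradicated share of total cultivation falls -- exactly the divergence between the reported metric and the growth of the problem described above.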
These are a few of my observations. I continue to work in this field and am attempting to develop proposals for how to more effectively implement measures in support of decision making and influencing operations.
V/R
David
Metrics for Helmand: with pictures
From the BBC:
Quote:
Viewpoint: Measuring success in Afghanistan
As the biggest anti-Taliban offensive in Afghanistan since 2001 continues in southern Helmand province, the challenge of how to hold on to and rebuild areas previously held by insurgents remains. Fotini Christia, a political scientist at the Massachusetts Institute of Technology, who has recently spent time in Afghanistan researching conflict and development, sets out the latest ideas on how to measure the success of such operations.
Link: http://news.bbc.co.uk/1/hi/world/south_asia/8524137.stm
A new name to me, so here is her official bio: http://web.mit.edu/SSP/people/christ..._christia.html
So what this boils down to is
That we need to ask the famous question when presented with metrics: "So what?" What does this metric mean to us? How does it give us insight into what we're doing, and into whether we're doing the right things?
A Math based HR approach...
Arguments for both sides have a point.
A body count (in the sense of Killed, Captured, Defeated, or otherwise removed from the enemy strength) is the only real quantifiable measure of success.
The problem is that it must be put into context, and is only meaningful when related to the total combat power available to the enemy (similar to Seahorse's point above relating poppy destroyed to the overall trend in cultivation).
The calculus of progress then becomes attempting to calculate the number of insurgents taken out over time compared to the change in the total number available over the same time. Sometimes taking bad guys out violently adds more strength (propaganda driving recruiting); sometimes it drives recruits away.
That "derivative" then describes the velocity of the enemy's combat power at any given time (increasing or decreasing), which can tell us if we are "winning" or "losing".
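That velocity calculation can be sketched in a few lines (Python purely for illustration; every number here is invented, and in practice the recruitment figure is exactly what we can't observe):

```python
# Sketch of the "velocity of enemy combat power" idea: net change per
# period, not removals in isolation. All numbers are invented.

removed   = [120, 150, 200, 180]   # killed/captured/defected per period
recruited = [100, 170, 260, 150]   # new fighters per period (rarely knowable)

strength = [1000]                  # assumed starting strength
for r_out, r_in in zip(removed, recruited):
    strength.append(strength[-1] - r_out + r_in)

# The "derivative": change in strength per period.
velocity = [b - a for a, b in zip(strength, strength[1:])]
print(strength)   # [1000, 980, 1000, 1060, 1030]
print(velocity)   # [-20, 20, 60, -30]; negative = shrinking enemy
```

Note that period 3 has the best "body count" (200 removed) yet the enemy grew by 60 -- which is the whole argument for tracking the derivative rather than the removals alone.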
The problem comes from the lack of fidelity or accuracy in information. How many recruits does accidentally bombing a house with 10 enemy and 2 civilians bring to an insurgency? How many insurgent recruits are driven away when a new infrastructure project opens? These can really only be hinted at through indirect means to a low degree of accuracy.
Understanding the changes in enemy strength through reports like a 'body count' is crucial. But, it must be compared to the overall change in available forces.
Without fidelity of information on both figures, we resort to 'atmospherics' to try to qualitatively intuit what cannot be quantitatively derived.
When some theorists say 'ignore' a body count, I would suggest that they really mean "we just can't get those numbers, so use these other tools as a substitute for your lack of omniscience"
Anyone ever try to compare reports in CIDNE to what's open source? Or to other reports in theater? You'll know what I'm talking about...
Insurgents don't surrender, they mostly just fade into the background.
Or join the 'winning' side (one of which in an insurgency there is not likely to be; the best you can usually get is an acceptable outcome for both sides. That's great fun... :rolleyes:).
Numbers of any events concerning humans are fungible and volatile. That volatility typically occurs in quite random patterns and with great speed in wartime. The fungibility drives the scorekeepers bonkers.
The numbers -- and they include KIA, WIA, Prisoners/detainees, schools built, dams built, Soccer balls handed out, MedCaps conducted, etc. etc. won't really tell you much unless you get really accurate numbers and watch them for a long period of time to ascertain trends. Not really results, just trends. How many schools were built (=n) versus how many were willingly converted to other functions (=n-w) and how many were destroyed by own fires (=n-do) or by the Insurgents (=n-dI) or just fell apart due to shoddy design or construction (=n-sd or n-sC)?
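The school arithmetic above, run with made-up counts purely to show how far the headline number can drift from what still functions as a school:

```python
# Illustration of "schools built" (=n) vs. schools still operating.
# All counts are invented for the example.

built            = 100   # n, the headline metric
converted        = 15    # =n-w, willingly converted to other functions
destroyed_own    = 5     # =n-do, destroyed by own fires
destroyed_insurg = 20    # =n-dI, destroyed by the Insurgents
failed_structure = 10    # =n-sd/n-sC, shoddy design or construction

still_operating = built - (converted + destroyed_own
                           + destroyed_insurg + failed_structure)
print(f"Reported: {built} schools built")
print(f"Actually operating: {still_operating}")  # 50
```

Half the "schools built" metric evaporates once you subtract the losses -- and that is before asking whether the survivors have teachers, funding or pupils.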
In the US, a week is a long period of time; in Canada a month is -- in Iraq a couple of years might be, in Afghanistan a decade is (maybe...). China counts in centuries...
Accuracy in a combat zone is, if not impossible (for which I'd vote, not least because many of your number counters / takers / checkers will cheat prodigiously...), certainly frustratingly difficult.
Touchdowns are good, though... :D
Unless they're 'own goals.' :o