While my argument above is too absolute, the underlying point, in my opinion, remains valid: we have allowed metrics (the tail) to wag the dog (strategy). The search for the perfect metric will always be illusory. That doesn't mean limited metrics and assessments have no role in conflict, but that role should be much more limited than it is now. Metrics and assessments will never tell us if we're winning, if winning means defeating an adversary or getting the adversary to stop fighting (a political agreement, etc.). As I stated above, what we measure has little impact on the adversary's will or means to continue fighting for whatever his objectives may be. When metrics distract us from this reality, we lose sight of what is really important. I'm sure our metrics would have told us Germany and Japan should have surrendered long before they did, based on the futility of continued struggle, but people are proud and stubborn beyond reason. The same holds true for insurgents and counterinsurgents, who both wage war until the two sides tire enough to resolve their differences politically. All our measures and assessments prior to that point are of questionable value.

I'm providing a couple of starting papers to inform the debate that I hope will follow, since this is an important topic for all conflict, not just small wars.
https://csis.org/publication/afghani...-ten-years-war
Afghanistan: The Failed Metrics of Ten Years of War
This analysis looks at the reporting available on the state of the war at the end of 2011, in terms of the data, trends, and maps available from the US Department of Defense (DoD), the US National Counterterrorism Center (NCTC), the NATO/ISAF command, and the UN. It attempts to explore the meaning of these data, the reasons for the sharp differences between them, and what they say about the fighting to date and its progress.

http://wih.sagepub.com/content/19/1/...ication_detail
The Problem of Metrics:
Assessing Progress and Effectiveness in the Vietnam War
There is a lot in this article, but I found the following paragraph of particular interest.
O.K., I admit my bias: I'm a big Bernard Fall fan, and yes, it appears he is making an argument for metrics, but a big fan doesn't have to be a die-hard fan; we can have our disagreements. On a serious note, I recall reading comments Mr. Fall made in an article about how the U.S. village assessment process in Vietnam was highly inaccurate, and while that article didn't use the term administrative control, his example did. He pointed to the schools we built (sound familiar? Kodak moments for our Civil Affairs Teams) being used as a metric of success, when the reality was that the Viet Cong administered the schools and appointed the teachers. I had a senior Afghan security official tell me the same thing in 2010. I wasn't able to confirm it, but he said Mullahs were teaching in two of the schools we built, radicalizing the children, and that parents didn't want to send their kids there but were threatened if they didn't. I suspect our assessments and metrics never captured that uncomfortable truth.

South East Asia expert Bernard B. Fall similarly highlighted the complexities of measuring success in an unconventional environment. Fall had spent nearly 15 years in Indochina becoming one of its most respected scholars. His 1961 publication Street without Joy, a classic of the eight-year French-Indochina War, influenced a wide range of American officers preparing for deployment to Vietnam. Fall defined victory in revolutionary war as ‘the people and the army … emerg[ing] on the same side of the fight’.12 He realized, though, that assessing progress toward such victory required suitable indicators.
The French criteria for ‘control’ often had ‘no real meaning when it came to giving a factual picture of who owned what (or whom) throughout the Vietnamese countryside inside the French battleline, much less outside’.13 Fall argued that trends in levels of security and population control could be plotted objectively on a map, given accurate reporting of assassinations, insurgent raids, and Vietcong taxation. Measuring ‘administrative control’, however difficult, if done properly, provided military commanders with the most accurate assessment of their progress.14
And part of the last paragraph captures it all rather nicely:

By late 1967 the sheer weight of numbers had become crushing. At MACV commander conferences, staff officers deluged Westmoreland with 65 charts during the Measurement of Progress briefing alone. According to the briefing officer, no senior general expressed any interest in one indicator over another.52 All the while, the US mission in South Vietnam tempered any signs of failure with a blizzard of statistics suggesting progress that led only to confusion on the American home front. Both optimists and pessimists easily justified their positions from the mounds of conflicting data. Without linkages to coherent strategic aims and sound threat assessments, it seemed any balance sheet or prognosis was as good as the next. Even the establishment of an operations research and systems analysis office in MACV (MACEVAL) and an increasing reliance on automation did little to facilitate analysis of the war’s trends. Given the obsession with statistics and measurement, the war’s complexity had simply overwhelmed MACV’s capacity for understanding.53
In the end, the United States Army failed in Vietnam in part because its metrics for success masked important operational and organizational deficiencies. Flawed measurements validated imperfect counter-insurgency methods and provided MACV with a false sense of progress and effectiveness. These measurements were symptomatic of a larger failure in thinking about the war’s deeper issues.