From the ICRC round table statement:
Link: (Otherwise there would be too many posts)
First, the problem of removing soldiers from the battlefield:
Quote:
[That] new technologies remove soldiers further and further away from the battlefield was a matter of recurring discussion.
More thinking is required about the consequences of these remote means and methods of warfare. Firstly, what is the consequence of their use for the definition, the extent of the battlefield? Some have argued that if drones can be flown or cyber attacks launched from anywhere in the world, then anywhere in the world becomes a battlefield. This would in effect be an endorsement of the concept of a “global battlefield”, with the consequence that the use of force rules allowing for incidental civilian loss and damage under the IHL principle of proportionality extend far beyond the scope of what has until now been accepted. This is a notion that the ICRC does not follow.
Just like high altitude bombing might be safer for soldiers but also in certain circumstances indiscriminate and unlawful, so new technologies, however protective for the troops, will always have to be tested for their compatibility with humanitarian law and in particular their possible indiscriminate or disproportionate effects. This, however, requires that we get a better understanding about the effects of such technologies, in particular their precision and their incidental effects - not only in abstract technological terms but in the way they are concretely being used.
Second, the problem of responsibility:
Quote:
responsibility and accountability for the deployment of new technologies. Whether new technologies will reduce our capacity to allocate responsibility and accountability for violations remains to be seen. As a starting point, it is worth recalling that international humanitarian law [binds] parties to conflicts (states and organised armed groups) and international criminal law binds individuals.
It is a topic that reminds me of Goethe's poem Der Zauberlehrling ('The Sorcerer's Apprentice'), who unleashed a broom with destructive artificial intelligence and UAV capacity. Both the apprentice and the magician himself certainly bore their share of responsibility, and the magician ultimately had to put his house in order. In cyber space, on the other hand, allocation of responsibility does appear to present a legal challenge if anonymity is the rule rather than the exception.
Finally, on technology itself:
Quote:
technology, in itself, is neither good nor bad. It can be a source of good and progress or result in terrible consequences at worst. This is true most of the time. Transposed to technologies that are weaponised, this means that most weapons are not unlawful as such; whether their use in conflict is lawful or not depends on the circumstances and the way in which they are used.
This being said, some weapons are never lawful and have been banned – blinding laser weapons or landmines, for instance.
Much, much more to think about from this statement, but an excellent résumé of all the questions we raised here at our limited level.
M-A
PS: Mike, I believe that guy is serious on the subject.
A Future for Drones: Automated Killing
Quote:
One afternoon last fall at Fort Benning, Ga., two model-size planes took off, climbed to 800 and 1,000 feet, and began criss-crossing the military base in search of an orange, green and blue tarp.
(Image: Georgia Tech Research Institute. Alberto Cuadra and Peter Finn/The Washington Post)
The automated, unpiloted planes worked on their own, with no human guidance, no hand on any control.
After 20 minutes, one of the aircraft, carrying a computer that processed images from an onboard camera, zeroed in on the tarp and contacted the second plane, which flew nearby and used its own sensors to examine the colorful object. Then one of the aircraft signaled to an unmanned car on the ground so it could take a final, close-up look.
Target confirmed.
This successful exercise in autonomous robotics could presage the future of the American way of war: a day when drones hunt, identify and kill the enemy based on calculations made by software, not decisions made by humans. Imagine aerial “Terminators,” minus beefcake and time travel.
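The handoff the article describes — a search UAV flags a candidate, a second UAV cross-checks it with its own sensors, and a ground vehicle takes the final close-up look — amounts to an "all platforms must independently agree" rule before a target is confirmed. A minimal toy sketch of that rule follows; this is purely illustrative, not the Georgia Tech software, and the platform names are invented:

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    platform: str   # which vehicle produced the report (hypothetical names)
    match: bool     # did this platform's sensors match the target signature?

def confirm_target(reports):
    """Confirm only when every platform in the handoff chain has
    independently matched the target; any missing or negative report
    leaves the target unconfirmed."""
    required = {"search_uav", "verify_uav", "ground_ugv"}
    matched = {r.platform for r in reports if r.match}
    return required <= matched

# All three platforms agree -> confirmed; drop the ground check -> not confirmed.
full_chain = [SensorReport("search_uav", True),
              SensorReport("verify_uav", True),
              SensorReport("ground_ugv", True)]
print(confirm_target(full_chain))        # True
print(confirm_target(full_chain[:2]))    # False
```

The point of the sketch is the design choice the article implies: confirmation requires independent agreement, so a single sensor miss anywhere in the chain aborts the engagement rather than degrading gracefully toward a strike.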
In Berlin last year, a group of robotic engineers, philosophers and human rights activists formed the International Committee for Robot Arms Control (ICRAC) and said such technologies might tempt policymakers to think war can be less bloody.
Some experts also worry that hostile states or terrorist organizations could hack robotic systems and redirect them. Malfunctions also are a problem: In South Africa in 2007, a semiautonomous cannon fatally shot nine friendly soldiers.
The ICRAC would like to see an international treaty, such as the one banning antipersonnel mines, that would outlaw some autonomous lethal machines. Such an agreement could still allow automated antimissile systems.
“The question is whether systems are capable of discrimination,” said Peter Asaro, a founder of the ICRAC and a professor at the New School in New York who teaches a course on digital war. “The good technology is far off, but technology that doesn’t work well is already out there. The worry is that these systems are going to be pushed out too soon, and they make a lot of mistakes, and those mistakes are going to be atrocities.”
Published on Tuesday, September 20, 2011 by The Washington Post
Problem definitely solved ...
so long as the enemies are "orange, green and blue tarps". :D
"Artificial intelligence" will have to improve beyond "tarps" to identify human enemies (and distinguish them into combatants, non-combatants and civilians), especially where irregular forces are concerned.
I was struck by a comment in W. Hays Parks, Special Forces' Wear of Non-Standard Uniforms (2003), p. 6 (pdf):
Quote:
Special Forces personnel who had served in Afghanistan with whom I spoke stated that al Qaeda and the Taliban had no difficulty in distinguishing Northern Alliance or Southern Alliance forces from the civilian population.[8]
[8] Because neither Taliban/al Qaeda nor Northern or Southern Alliance forces wore a uniform, visual friend or foe identification at a distance was a challenge. Third Battalion, Fifth Special Forces Group, The Liberation of Mazar-e Sharif: 5th SF Group UW in Afghanistan, 15 Special Warfare 34, 36 (June 2002). However, this differs from dressing as civilians for the purpose of using the civilian population or civilian status as a means of avoiding detection of combatant status. From the standpoint of possible violation of the law of war, the issue is one of intent. As indicated in the main text, use of non-standard uniform (Massoud pakol and/or scarf) by some Special Forces personnel was to appear as members of the Northern Alliance rather than be conspicuous as US soldiers and, as indicated in the preceding footnote, high-value targets.
This comment has stuck in my head; and is one factor in my suggesting that "Western rules" may well be immaterial or counterproductive in the context of a "non-Western" environment.
A discussion of "uniform rules" (yes, a bit of a punny) is found in this thread, Is it time for psuedo operations in A-Stan?..., with discussion of articles by Parks and others starting at post #13. That post mentions what I believe is still the best US work on irregular combatants, the 1959 JAG Treatise, A TREATISE ON THE JURIDICAL BASIS OF THE DISTINCTION BETWEEN LAWFUL COMBATANT AND UNPRIVILEGED BELLIGERENT (JAG School 1959).
Here are some words of wisdom from its introduction:
Quote:
A noncombatant [3] who fights can be punished with death. [4]
3. A noncombatant is a person whom both sides on the basis of experience can reasonably expect will not actually engage in overt acts of war. The word can only be defined to the satisfaction of both sides when nations of the same cultural heritage are at war. Then, noncombatant is defined by traditional examples which have meaning to both sides. In most western civilizations all persons not in the fighting forces and some, such as physicians, are traditionally thought of as noncombatants. This is the sense in which the word is used here.
4. FM 27-10, The Law of Land Warfare, Jul. 1956, paras. 80, 81, 82.
Again, we are back to the basic factor that "PID" differs between military cultures.
Do I think we should rush out and develop ICRC-type rules for future AI drones and robots ?
No, I think the general principles are sufficient - define the enemy; distinguish between combatants, non-combatants and civilians; allow adequate discretion based on military necessity; and apply a balanced proportionality test to targeting. The tests should be the same whether a human or robot targeter is involved.
We don't need as an addition to the "Laws of War" another ICRC-type "rule" - such as the ICRC 2009 "guidance" on Direct Participation in Hostilities (91 pages on the ICRC's concept of the "transitory guerrilla").
The 1863 Lieber Code (GO 100) is 25 pages in large type (pdf attached). That size effort could be a real "handbook"; and is more of what I'm suggesting for IHL. Someday, someone should add up the pages of ICRC rules and commentaries - surely into the tens of thousands.
Regards
Mike
The artificially intelligent weapon of tomorrow…
It’s today!
Flying Robots Called 'Nano Quadrotor' Drones Swarm Lab
On Wednesday, robotics researchers at University of Pennsylvania released a video of what they call "nano quadrotors" - tiny flying robots that engage in complex social movements like swarming and pattern formation.
The video shows what look like mini helicopters flying with remarkable agility and precision. They can do flips, avoid obstacles, and shift direction effortlessly, all on command. Toss one up in the air, and it finds its balance and flies back to the hand that launched it. Best of all, when in the company of other drones, they gather to fly in a figure-8 formation.
http://www.huffingtonpost.com/2012/0...n_1249442.html
We were all convinced it would remain Science Fiction for years…
The rhetorical question of an artificially intelligent machine engaging mankind is becoming the trivial next legal frontier of today. I can see a brand new world of legal, ethical and military questions opening in front of my eyes. :(:D:o
Marc, Not everyone was nearsighted ...
A forthcoming article by Wilf in the Infinity Journal is tentatively entitled: "Drivel: 'Nano Quadrotor' Drones Swarming Tactics are not a New Generation in Warfare; Having Been Foreseen by Carl von Clausewitz."
Cheers
Mike
PS: If those reading this (do we have plural readers ?) happen to be mucking about Infinity Journal, they might take a look at Targeted Killings Work, which looks at them from an Israeli viewpoint (which seem to have "rigour" and "vigour", as opposed to the US strikes having "rigor" and "vigor").
By the same authors, "Targeted Killings: A Modern Strategy of the State", A.E. Stahl and William F. Owen, Michigan War Studies Review, July 2011.
But, as the Good Book says: "Name something good that comes out of Ann Arbor."