
Thread: New technologies and war legislation: a progress?

  1. #21
    Council Member M-A Lagrange's Avatar
    Join Date
    Aug 2009
    Location
    In Barsoom, as a fact!
    Posts
    976

    ICRC statement part 1

    International Humanitarian Law and New Weapon Technologies
    08-09-2011 Statement
    34th Round Table on Current Issues of International Humanitarian Law, San Remo, 8-10 September 2011. Keynote address by Dr. Jakob Kellenberger, President, ICRC
    Mr. President of the Institute,
    Your Excellencies,
    Ladies and Gentlemen,

    New technologies and new weapons have revolutionised warfare since time immemorial. We need only think about the invention of the chariot, of gunpowder, of the airplane or of the nuclear bomb to remember how new technologies have changed the landscape of warfare.

    Since the St. Petersburg Declaration of 1868, which banned the use of projectiles of less than 400 grammes, the international community has attempted to regulate new technologies in warfare. And modern international humanitarian law has in many ways developed in response to new challenges raised by novel weaponry.

    At the same time, while banning a very specific weapon, the St. Petersburg Declaration already set out some general principles which would later inform the entire approach of international humanitarian law towards new means and methods of warfare. It states that the only legitimate object which States should endeavour to accomplish during war is to weaken the military forces of the enemy, and that this object would be exceeded by the employment of arms which uselessly aggravate the sufferings of disabled men, or render their death inevitable.

    In this spirit, the regulation of new means and methods of warfare has developed along two tracks for the last 150 years: The first consists of general principles and rules that apply to all means and methods of warfare, as a result of the recognition that the imperative of humanity imposes limits to their choice and use. The second consists of international agreements which ban or limit the use of specific weapons – such as chemical and biological weapons, incendiary weapons, anti-personnel mines, or cluster munitions.

    The general principles and rules protect combatants against weapons of a nature to cause superfluous injury or unnecessary suffering but have also developed to protect civilians from the effects of hostilities. Thus, for example, means and methods of warfare that are indiscriminate are prohibited.

    Informed by these fundamental general prohibitions, international humanitarian law was designed to be flexible enough to adapt to technological developments, including those that could never have been anticipated at the time. There can be no doubt that international humanitarian law applies to new weaponry and to all new technology used in warfare. This is explicitly recognised in article 36 of Additional Protocol I, according to which, in the study, development or adoption of a new weapon or method of warfare, states parties are under an obligation to determine whether their employment would, in some or all circumstances, be prohibited by international law applicable to them.

    Nonetheless, applying pre-existing legal rules to a new technology raises the question of whether the rules are sufficiently clear in light of the technology's specific – and perhaps unprecedented – characteristics, as well as with regard to the foreseeable humanitarian impact it may have. In certain circumstances, States will choose or have chosen to adopt more specific regulations.

    Today, we live in the age of information technology and we are seeing this technology being used on the battlefield. This is not entirely new but the multiplication of new weapons or methods of warfare that rely on such technology seems exponential. The same advances in information technology that enable us to have live video chat on our mobile phones also make it possible to build smaller, less expensive, and more versatile drones. The same technology used for remote controls of home air conditioning units also makes it possible to turn off the lights in a city on the other side of the globe.

    This year's Round Table will allow us to take a closer look and to discuss a number of technologies that have only recently entered the battlefield or could potentially enter it. These are, in particular, cyber technology, remote-controlled weapon systems, and robotic weapon systems.

    Let me first turn to "cyber warfare".

    The interest in legal issues raised by "cyber warfare" is currently particularly high. By cyber warfare I mean means and methods of warfare that rely on information technology and are used in the context of an armed conflict. The military potential of cyber space is only starting to be fully explored. From certain cyber operations that have occurred, we know that one party to a conflict can potentially "attack" another party's computer systems, for instance by infiltrating or manipulating them. Thus, the cyber infrastructure on which the enemy's military relies can be damaged, disrupted or destroyed. However, civilian infrastructure might also be hit – either because it is directly targeted or because it is incidentally damaged or destroyed when military infrastructure is targeted.

    So far, we do not know precisely what the humanitarian consequences of cyber warfare could be. It appears that, technically, cyber attacks against airport control and other transportation systems, dams or nuclear power plants are possible. Such attacks would most likely have large-scale humanitarian consequences. They could result in significant civilian casualties and damage. Of course, for the time being it is difficult to assess how likely cyber attacks of such gravity really are, but we cannot afford to wait until it is too late to prevent worst-case scenarios.

    From a humanitarian perspective, the main challenge posed by cyber operations in warfare is that cyberspace is characterized by interconnectivity, and thus by the difficulty of limiting the effects of such operations to military computer systems. While some military computer infrastructure is certainly secured and separated from civilian infrastructure, a lot of military infrastructure relies on civilian computers or computer networks. Under such conditions, how can the attacker foresee the repercussions of his attack on civilian computer systems? Very possibly, the computer system or connection that the military relies on is the same as the one on which the hospital nearby or the water network relies.
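    To make the interconnectivity point concrete, here is a minimal sketch – a toy dependency graph, with every node name invented for illustration – showing why disabling one shared node can disrupt civilian systems along with the military systems that were targeted:

    ```python
    # Illustrative sketch only: a toy dependency graph showing why the effects
    # of a cyber operation are hard to confine to military systems.
    # All node names are hypothetical.
    from collections import deque

    # Each key depends on the nodes in its list.
    dependencies = {
        "military_logistics": ["regional_isp"],
        "hospital_records":   ["regional_isp"],
        "water_scada":        ["regional_isp"],
        "regional_isp":       ["power_grid"],
    }

    def affected_by(target):
        """Return every system that directly or transitively depends on target."""
        hit, queue = set(), deque([target])
        while queue:
            node = queue.popleft()
            for system, deps in dependencies.items():
                if node in deps and system not in hit:
                    hit.add(system)
                    queue.append(system)
        return hit

    # Striking the shared ISP to reach military logistics also takes down the
    # hospital and the water network -- the incidental effects described above.
    print(affected_by("regional_isp"))
    # {'military_logistics', 'hospital_records', 'water_scada'}
    ```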

    Another difficulty in applying the rules of international humanitarian law to cyberspace stems from the digitalisation on which cyberspace is built. Digitalisation ensures anonymity and thus complicates the attribution of conduct. In most cases, it appears difficult if not impossible to identify the author of an attack. Since IHL relies on the attribution of responsibility to individuals and parties to conflicts, major difficulties arise. In particular, if the perpetrator of a given operation – and thus the link of the operation to an armed conflict – cannot be identified, it is extremely difficult to determine whether IHL is even applicable to the operation.

    The second technological development that we will be discussing at this Round Table is remote-controlled weapon systems.

    Remote controlled weapon systems are a further step in a long-standing strategic continuum to move soldiers farther and farther away from their adversaries and the actual combat zone.
    Drones – or "unmanned aerial vehicles" – are the most conspicuous example of such new technologies, armed or unarmed. Their number has increased exponentially over the last few years. Similarly, so-called unmanned ground vehicles are increasingly deployed on the battlefield. They range from robots that detect and destroy roadside bombs to those that inspect approaching vehicles at checkpoints.
    One of the main arguments for investing in such new technologies is that they save the lives of soldiers. Another argument is that drones, in particular, have also enhanced real-time aerial surveillance possibilities, thereby allowing belligerents to carry out their attacks more precisely against military objectives and thus reduce civilian casualties and damage to civilian objects – in other words, to exercise greater precaution in attack.

    There could be some concern, however, about how and by whom these systems are operated. Firstly, they are sometimes operated by civilians, including employees of private companies, which raises questions about the status and protection of these operators, and about whether their training and accountability are sufficient in light of the life and death decisions that they make. Secondly, studies have shown that disconnecting a person, especially by means of distance (be it physical or emotional), from a potential adversary makes targeting easier and abuses more likely. The military historian John Keegan has called this the "impersonalization of battle".

    Lastly, let me say a few words about robotic weapon systems.

    Automated weapon systems – robots in common parlance – go a step further than remote-controlled systems. They are not remotely controlled but function in a self-contained and independent manner once deployed. Examples of such systems include automated sentry guns, sensor-fused munitions and certain anti-vehicle landmines. Although deployed by humans, such systems will independently verify or detect a particular type of target object and then fire or detonate. An automated sentry gun, for instance, may fire, or not, following voice verification of a potential intruder based on a password.
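    As a rough illustration of the kind of fixed decision rule such a sentry gun applies – a sketch with hypothetical inputs and an invented password check, not any fielded system's logic – the entire "judgment" reduces to a few branches:

    ```python
    # Minimal sketch of the rule-based gate an automated sentry gun might apply.
    # Inputs and the password check are hypothetical placeholders.

    def sentry_decision(heard_response, password):
        """Return 'hold', 'challenge', or 'engage' from a single voice check.
        Note what the rule cannot see: whether the contact is a civilian,
        wounded, or surrendering -- the discrimination gap taken up in part 2."""
        if heard_response == password:
            return "hold"        # correct password: treated as friendly
        if heard_response is None:
            return "challenge"   # no reply yet: issue (another) voice challenge
        return "engage"          # any wrong reply: the only remaining branch

    print(sentry_decision(None, "osprey"))      # challenge
    print(sentry_decision("osprey", "osprey"))  # hold
    print(sentry_decision("uh...", "osprey"))   # engage
    ```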

  2. #22
    Council Member M-A Lagrange's Avatar
    Join Date
    Aug 2009
    Location
    In Barsoom, as a fact!
    Posts
    976

    ICRC statement part 2

    The central challenge with automated systems is to ensure that they are indeed capable of the level of discrimination required by IHL. The capacity to discriminate, as required by IHL, will depend entirely on the quality and variety of sensors and programming employed within the system. Up to now, it is unclear how such systems would differentiate a civilian from a combatant or a wounded or incapacitated combatant from an able combatant. Also, it is not clear how these weapons could assess the incidental loss of civilian lives, injury to civilians or damage to civilian objects, and comply with the principle of proportionality.
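    To see why this is a programming problem and not only a legal one, consider a deliberately naive sketch (every input and threshold here is invented): each quantity the rules of distinction and proportionality require – combatant status, expected incidental harm, anticipated military advantage – must arrive as a reliable number, and producing those numbers is precisely what remains unsolved:

    ```python
    # Deliberately naive sketch of an IHL-style engagement gate. All inputs and
    # thresholds are invented; the point is that each value is something a real
    # system would have to estimate reliably, which is the open problem here.

    def may_engage(p_combatant, expected_civilian_harm, military_advantage,
                   min_confidence=0.95):
        # Distinction: refuse unless the target is confidently a combatant.
        if p_combatant < min_confidence:
            return False
        # Proportionality: refuse if expected incidental harm is excessive
        # relative to the concrete military advantage anticipated.
        if expected_civilian_harm > military_advantage:
            return False
        return True

    # The gate is only as good as the estimates fed into it.
    print(may_engage(0.97, 0.2, 1.0))  # True
    print(may_engage(0.60, 0.0, 1.0))  # False
    ```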
    An even further step would consist in the deployment of autonomous weapon systems, that is, weapon systems that can learn or adapt their functioning in response to changing circumstances. A truly autonomous system would have artificial intelligence that would have to be capable of implementing IHL. While there is considerable interest and funding for research in this area, such systems have not yet been weaponised. Their development represents a monumental programming challenge that may well prove impossible. The deployment of such systems would reflect a paradigm shift and a major qualitative change in the conduct of hostilities. It would also raise a range of fundamental legal, ethical and societal issues which need to be considered before such systems are developed or deployed. A robot could be programmed to behave more ethically and far more cautiously on the battlefield than a human being. But what if it is technically impossible to reliably program an autonomous weapon system so as to ensure that it functions in accordance with IHL under battlefield conditions?

    When we discuss these new technologies, let us also look at their possible advantages in contributing to greater protection. Respect for the principles of distinction and proportionality means that certain precautions in attack, provided for in article 57 of Additional Protocol I, must be taken. This includes the obligation of an attacker to take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental civilian casualties and damage. In certain cases, cyber operations or the deployment of remote-controlled weapons or robots might cause fewer incidental civilian casualties and less incidental civilian damage compared to the use of conventional weapons. Greater precautions might also be feasible in practice, simply because these weapons are deployed from a safe distance, often with time to choose one's target carefully and to choose the moment of attack in order to minimise civilian casualties and damage. It may be argued that in such circumstances this rule would require that a commander consider whether he or she can achieve the same military advantage by using such means and methods of warfare, if practicable.
    Ladies and Gentlemen,

    The world of new technologies is neither a virtual world nor science fiction. In the real world of armed conflict, these technologies can cause death and damage. Bearing in mind the potential humanitarian consequences, it is important for the ICRC to promote discussion of these issues, to draw attention to the necessity of assessing the humanitarian impact of developing technologies, and to ensure that they are not prematurely employed under conditions where respect for the law cannot be guaranteed. The imperative that motivated the St. Petersburg Declaration remains as true today as it was then.

    I thank the Institute of International Humanitarian Law for hosting this Round Table and thank all of you for your interest in engaging with us in reflection and debate. I wish you fruitful and successful discussions.

    http://www.icrc.org/eng/resources/do...2011-09-08.htm

  3. #23
    Council Member M-A Lagrange's Avatar
    Join Date
    Aug 2009
    Location
    In Barsoom, as a fact!
    Posts
    976

    From the ICRC round table statement:

    Link: (Otherwise there would be too many posts)
    First, the problem of removing soldiers from the battlefield:
    That new technologies remove soldiers further and further away from the battlefield was a matter of recurring discussion.
    More thinking is required about the consequences of these remote means and methods of warfare. Firstly, what is the consequence of their use for the definition, the extent of the battlefield? Some have argued that if drones can be flown or cyber attacks launched from anywhere in the world, then anywhere in the world becomes a battlefield. This would in effect be an endorsement of the concept of a “global battlefield”, with the consequence that the use of force rules allowing for incidental civilian loss and damage under the IHL principle of proportionality extend far beyond the scope of what has until now been accepted. This is a notion that the ICRC does not follow.
    Just like high altitude bombing might be safer for soldiers but also in certain circumstances indiscriminate and unlawful, so new technologies, however protective for the troops, will always have to be tested for their compatibility with humanitarian law and in particular their possible indiscriminate or disproportionate effects. This, however, requires that we get a better understanding about the effects of such technologies, in particular their precision and their incidental effects - not only in abstract technological terms but in the way they are concretely being used.
    Second, the problem of responsibility:
    Responsibility and accountability for the deployment of new technologies. Whether new technologies will reduce our capacity to allocate responsibility and accountability for violations remains to be seen. As a starting point, it is worth recalling that international humanitarian law binds parties to conflicts (states and organised armed groups) and international criminal law binds individuals.
    It is a topic that reminds me of Goethe's poem Der Zauberlehrling ('the sorcerer's apprentice'), who unleashed a broom with destructive artificial intelligence and UAV capacity. Both the apprentice and the magician himself certainly bore their share of responsibility, and the magician ultimately had to put his house in order. In cyber space, on the other hand, allocation of responsibility does appear to present a legal challenge if anonymity is the rule rather than the exception.
    Finally, on technology itself:
    Technology, in itself, is neither good nor bad. It can be a source of good and progress, or it can have terrible consequences. This is true most of the time. Transposed to technologies that are weaponised, this means that most weapons are not unlawful as such; whether their use in conflict is lawful or not depends on the circumstances and the way in which they are used.

    This being said, some weapons are never lawful and have been banned – blinding laser weapons or landmines, for instance.
    Much, much more to think about from this statement, but an excellent résumé of all the questions we have raised here at our limited level.

    M-A
    PS: Mike, I believe that guy is serious about the subject.

  4. #24
    Council Member
    Join Date
    May 2008
    Posts
    4,021

    He's serious ...

    you're serious, I'm serious, about the topic... I posit that as a given

    Ah, Der Zauberlehrling - the key lines to me:

    Having memorized
    what to say and do,
    with my powers of will I can do
    some witching, too!

    You can't say that the little guy didn't have a "Can Do" spirit !

    As to M. Kellenberger's three points, "cyber warfare", "remote-controlled weapon systems" and "robotic weapon systems", I don't have the technical experience (despite a BS from a decent engineering school) to pontificate on their probable positive uses and negative abuses.

    Of the three, "remote-controlled weapon systems" seem to me the least problematic (although we will always have the issue of PID, as in the COL Klein situation and, even more clearly, in the Uruzgan cases I link below). As to Kellenberger's general comments on that subtopic:

    There could be some concern, however, about how and by whom these systems are operated. Firstly, they are sometimes operated by civilians, including employees of private companies, which raises questions about the status and protection of these operators, and about whether their training and accountability are sufficient in light of the life and death decisions that they make. Secondly, studies have shown that disconnecting a person, especially by means of distance (be it physical or emotional), from a potential adversary makes targeting easier and abuses more likely. The military historian John Keegan has called this the "impersonalization of battle".
    The first point is simply resolved legally by "Hoovering" (vacuuming up) those civilians into the military - treat them legally as military and set up a separate command divorced from the rice bowls of the other regular military services. The second point has been made (by Grossman et al) more in the context of distance making it less psychologically damaging to the targeter. The second point may have some validity (how much ?) or it may be junk science (how much ?).

    The first subtopic "cyber warfare" belongs to someone like Sam Lyles, who's written some pieces on it. The third subtopic "robotic weapon systems" seems very much sci-fi re: a robot that will think like a human infantryman. Programming that cyborg with "IHL" (per M. Kellenberger) is pretty funny - whose version of IHL ?

    No doubt that new technology complicates warfare. Compare the AR 15-6 Executive Summaries for the 21 Feb 2010 Uruzgan incident (an ISAF combined arms situation with three car loads of civilian casualties) with the 28 Jul 2011 Uruzgan incident (a BBC and Pajhwok reporter killed in a building being cleared by ISAF forces).

    The issue of complications brings us back to some initial remarks from M. Kellenberger:

    Since the St. Petersburg Declaration of 1868, which banned the use of projectiles [JMM: explosive projectiles] of less than 400 grammes, the international community has attempted to regulate new technologies in warfare. And modern international humanitarian law has in many ways developed in response to new challenges raised by novel weaponry.

    At the same time, while banning a very specific weapon, the St. Petersburg Declaration already set out some general principles which would later inform the entire approach of international humanitarian law towards new means and methods of warfare. It states that the only legitimate object which States should endeavour to accomplish during war is to weaken the military forces of the enemy, and that this object would be exceeded by the employment of arms which uselessly aggravate the sufferings of disabled men, or render their death inevitable.

    In this spirit, the regulation of new means and methods of warfare has developed along two tracks for the last 150 years: The first consists of general principles and rules that apply to all means and methods of warfare, as a result of the recognition that the imperative of humanity imposes limits to their choice and use. The second consists of international agreements which ban or limit the use of specific weapons – such as chemical and biological weapons, incendiary weapons, anti-personnel mines, or cluster munitions.
    I do object to the second class of regulations; I believe we should stick to the first class, "general principles and rules" - and greatly simplify even those. That has nothing to do with one's beliefs about "universality" or "impunity"; but it has everything to do with practicality. The "Laws of War" have to be usable by those who fight the battles.

    E.g., The ICRC "handbook" on Customary International Humanitarian Law (a Louise Doswald-Beck product) runs over 5,000 pages - somehow I can't see my dad and his comrades checking that out as they were assaulting German pillboxes.

    Colonialement

    Mike
    Last edited by jmm99; 09-20-2011 at 08:57 PM.

  5. #25
    Council Member M-A Lagrange's Avatar
    Join Date
    Aug 2009
    Location
    In Barsoom, as a fact!
    Posts
    976

    A Future for Drones: Automated Killing

    One afternoon last fall at Fort Benning, Ga., two model-size planes took off, climbed to 800 and 1,000 feet, and began criss-crossing the military base in search of an orange, green and blue tarp.

    The automated, unpiloted planes worked on their own, with no human guidance, no hand on any control.

    After 20 minutes, one of the aircraft, carrying a computer that processed images from an onboard camera, zeroed in on the tarp and contacted the second plane, which flew nearby and used its own sensors to examine the colorful object. Then one of the aircraft signaled to an unmanned car on the ground so it could take a final, close-up look.

    Target confirmed.

    This successful exercise in autonomous robotics could presage the future of the American way of war: a day when drones hunt, identify and kill the enemy based on calculations made by software, not decisions made by humans. Imagine aerial “Terminators,” minus beefcake and time travel.

    In Berlin last year, a group of robotic engineers, philosophers and human rights activists formed the International Committee for Robot Arms Control (ICRAC) and said such technologies might tempt policymakers to think war can be less bloody.

    Some experts also worry that hostile states or terrorist organizations could hack robotic systems and redirect them. Malfunctions also are a problem: In South Africa in 2007, a semiautonomous cannon fatally shot nine friendly soldiers.
    The ICRAC would like to see an international treaty, such as the one banning antipersonnel mines, that would outlaw some autonomous lethal machines. Such an agreement could still allow automated antimissile systems.
    “The question is whether systems are capable of discrimination,” said Peter Asaro, a founder of the ICRAC and a professor at the New School in New York who teaches a course on digital war. “The good technology is far off, but technology that doesn’t work well is already out there. The worry is that these systems are going to be pushed out too soon, and they make a lot of mistakes, and those mistakes are going to be atrocities.”
    Published on Tuesday, September 20, 2011 by The Washington Post
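    The exercise described in the article is, at bottom, a chain of independent sensor confirmations before any action is taken. A schematic sketch of that control flow (the platform functions, inputs and scene dictionary are all invented for illustration):

    ```python
    # Schematic of the exercise's control flow: act only after independent
    # confirmations from separate platforms. All names and inputs are invented.

    def searcher_uav(scene):
        return scene.get("camera_matches_target", False)  # onboard camera match

    def verifier_uav(scene):
        return scene.get("second_sensor_match", False)    # second platform's own sensors

    def ground_vehicle(scene):
        return scene.get("closeup_match", False)          # final close-up look

    def confirm_target(scene):
        # Every stage must agree; any dissent aborts the chain.
        return all(check(scene) for check in
                   (searcher_uav, verifier_uav, ground_vehicle))

    scene = {"camera_matches_target": True,
             "second_sensor_match": True,
             "closeup_match": True}
    print("Target confirmed." if confirm_target(scene) else "No engagement.")
    ```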

  6. #26
    Council Member
    Join Date
    May 2008
    Posts
    4,021

    Problem definitely solved ...

    so long as the enemies are "orange, green and blue tarps".

    "Artificial intelligence" will have to improve beyond "tarps" to identify human enemies (and distinguish them into combatants, non-combatants and civilians), especially where irregular forces are concerned.

    I was struck by a comment in W. Hays Parks, Special Forces’ Wear of Non-Standard Uniforms (2003), p.6 pdf:

    Special Forces personnel who had served in Afghanistan with whom I spoke stated that al Qaeda and the Taliban had no difficulty in distinguishing Northern Alliance or Southern Alliance forces from the civilian population.[8]

    [8] Because neither Taliban/al Qaeda nor Northern or Southern Alliance forces wore a uniform, visual friend or foe identification at a distance was a challenge. Third Battalion, Fifth Special Forces Group, The Liberation of Mazar-e Sharif: 5th SF Group UW in Afghanistan, 15 Special Warfare 34, 36 (June 2002). However, this differs from dressing as civilians for the purpose of using the civilian population or civilian status as a means of avoiding detection of combatant status. From the standpoint of possible violation of the law of war, the issue is one of intent. As indicated in the main text, use of non-standard uniform (Massoud pakol and/or scarf) by some Special Forces personnel was to appear as members of the Northern Alliance rather than be conspicuous as US soldiers and, as indicated in the preceding footnote, high-value targets.
    This comment has stuck in my head; and is one factor in my suggesting that "Western rules" may well be immaterial or counterproductive in the context of a "non-Western" environment.

    A discussion of "uniform rules" (yes, a bit of a punny) is found in this thread, Is it time for psuedo operations in A-Stan?..., with discussion of articles by Parks and others starting at post #13. That post mentions what I believe is still the best US work on irregular combatants, the 1959 JAG Treatise, A TREATISE ON THE JURIDICAL BASIS OF THE DISTINCTION BETWEEN LAWFUL COMBATANT AND UNPRIVILEGED BELLIGERENT (JAG School 1959).

    Here are some words of wisdom from its introduction:

    A noncombatant [3] who fights can be punished with death. [4]

    3. A noncombatant is a person whom both sides on the basis of experience can reasonably expect will not actually engage in overt acts of war. The word can only be defined to the satisfaction of both sides when nations of the same cultural heritage are at war. Then, noncombatant is defined by traditional examples which have meaning to both sides. In most western civilizations all persons not in the fighting forces, and some, such as physicians, are traditionally thought of as noncombatants. This is the sense in which the word is used here.

    4. FM 27-10, The Law of Land Warfare, Jul. 1956, paras. 80, 81, 82.
    Again, we are back to the basic factor that "PID" differs between military cultures.

    Do I think we should rush out and develop ICRC-type rules for future AI drones and robots ?

    No, I think the general principles are sufficient - define the enemy; distinguish between combatants, non-combatants and civilians; allow adequate discretion based on military necessity; and apply a balanced proportionality test to targeting. The tests should be the same whether a human or robot targeter is involved.

    We don't need as an addition to the "Laws of War" another ICRC-type "rule" - such as the ICRC 2009 "guidance" on Direct Participation in Hostilities (91 pages on the ICRC's concept of the "transitory guerrilla").

    The 1863 Lieber Code (GO 100) is 25 pages in large type pdf (attached). That size effort could be a real "handbook"; and is more of what I'm suggesting for IHL. Someday, someone should add up the pages of ICRC rules and commentaries - surely into the tens of thousands.

    Regards

    Mike
    Attached Files
    Last edited by jmm99; 09-24-2011 at 08:10 PM.

  7. #27
    Council Member M-A Lagrange's Avatar
    Join Date
    Aug 2009
    Location
    In Barsoom, as a fact!
    Posts
    976

    The artificially intelligent weapon of tomorrow…

    It’s today!

    Flying Robots Called 'Nano Quadrotor' Drones Swarm Lab
    On Wednesday, robotics researchers at the University of Pennsylvania released a video of what they call "nano quadrotors" – tiny flying robots that engage in complex social movements like swarming and pattern formation.
    The video shows what look like mini helicopters flying with remarkable agility and precision. They can do flips, avoid obstacles, and shift direction effortlessly, all on command. Toss one up in the air, and it finds its balance and flies back to the hand that launched it. Best of all, when in the company of other drones, they gather to fly in a figure-8 formation.
    http://www.huffingtonpost.com/2012/0...n_1249442.html
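    Formation flight of the kind shown in the video is commonly built from very simple per-vehicle rules. A toy two-dimensional sketch (purely kinematic and entirely illustrative; real controllers close the loop on full vehicle dynamics):

    ```python
    # Toy 2-D formation controller: each quadrotor steers toward its assigned
    # slot on a figure-8 curve. Purely kinematic and illustrative.
    import math

    def slot_on_figure8(i, n, t, scale=2.0):
        """Slot i of n, spaced along a figure-8 that advances with time t."""
        phase = t + 2 * math.pi * i / n
        return (scale * math.sin(phase),
                scale * math.sin(phase) * math.cos(phase))

    def step(positions, t, gain=0.5):
        """Move every vehicle a fraction of the way toward its assigned slot."""
        n = len(positions)
        new_positions = []
        for i, (x, y) in enumerate(positions):
            tx, ty = slot_on_figure8(i, n, t)
            new_positions.append((x + gain * (tx - x), y + gain * (ty - y)))
        return new_positions

    positions = [(0.0, 0.0)] * 4           # four vehicles launched from one spot
    for k in range(20):                    # a few control ticks
        positions = step(positions, t=0.1 * k)
    print(positions)                       # vehicles spread out along the figure-8
    ```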

    We were all convinced it would remain Science Fiction for years…

    The rhetorical question of an artificially intelligent machine engaging mankind is becoming the trivial next legal frontier of today. I can see a brand new world of legal, ethical and military questions opening in front of my eyes.

  8. #28
    Council Member
    Join Date
    May 2008
    Posts
    4,021

    Marc, Not everyone was nearsighted ...

    A forthcoming article by Wilf in the Infinity Journal is tentatively entitled: "Drivel: 'Nano Quadrotor' Drones Swarming Tactics are not a New Generation in Warfare; Having Been Foreseen by Carl von Clausewitz."

    Cheers

    Mike

    PS: If those reading this (do we have plural readers ?) happen to be mucking about Infinity Journal, they might take a look at Targeted Killings Work, which looks at them from an Israeli viewpoint (which seem to have "rigour" and "vigour", as opposed to the US strikes having "rigor" and "vigor").

    By the same authors, "Targeted Killings: A Modern Strategy of the State", A.E. Stahl and William F. Owen, Michigan War Studies Review, July 2011.

    But, as the Good Book says: "Name something good that comes out of Ann Arbor."

