• Thirdly, even if robots were eventually equipped with mechanisms to distinguish between civilians and combatants, these devices lack the capacity to reach the human level of common sense that is indispensable for the correct application of the principle of non-discrimination. Professor Sharkey is extremely sceptical that it will ever be possible, despite expected technological breakthroughs, to reach this point24.

B) It is not certain that autonomous weapons systems are able to undertake the assessment of the specific circumstances required in order to apply the principle of proportionality correctly. In fact, its application is underpinned by concepts such as “good faith” or the aforementioned “common sense”, and we are not in a position today to know whether such concepts can be “assumed and understood” by the IT programs that feed into these systems25. Professor Sharkey admits that robots can be programmed to observe the principle of proportionality in some respects (in particular the “easy proportionality problem”), or to minimise collateral damage by selecting appropriate weapons or munitions and directing them properly. However, it is not possible today, and he does not believe it will be in the future, to guarantee respect for the “hard proportionality problem”, that is, knowing when the damage to civilians exceeds or outstrips the military advantage provided by the attack; that is a “qualitative and subjective decision” that only a human being may make26.

Taking into account the express wording of Articles 51.5 and 57.2 of the 1977 Additional Protocol I to the Geneva Conventions, applicable to international armed conflicts, it is difficult to imagine how a “machine”, or the IT program directing it, could determine how to attack, or decide not to strike, when the probable damage to civilians would be “excessive in relation to the concrete and direct military advantage anticipated” (Article 51.5.b)27. It is particularly difficult to imagine how a “machine” can adapt to account for changing conditions on the ground. (For instance, for those of us who are not robotics or

24 “The evitability of autonomous…”, cit., pp. 788-789.

25 Vid. ad ex. LIN, Patrick; BEKEY, George and ABNEY, Keith: “Robots in War: Issues of Risk and Ethics”, in Capurro, R. and Nagenborg, M. (eds.): Ethics and Robotics, Heidelberg: AKA Verlag, 2009, pp. 49-67, at pp. 57-58 (this chapter is also available at http://www.digitalcommons.calpoly.edu; accessed 9 September 2013); SHARKEY, Noel E.: “Automated Killers and the Computing Profession”, Computer, Volume 40 (2007), Issue 11, pp. 122-124, at p. 124 (http://www.computer.org; accessed 9 September 2013); WAGNER, Markus: “The Dehumanization of International Humanitarian Law: Legal, Ethical and Political Implications of Autonomous Weapon Systems”, pp. 1-60, at pp. 28-38 (http://www.robots.law.miami.edu; accessed 9 September 2013).

26 “The evitability of autonomous…”, cit., p. 789.

27 BOE (Official State Gazette) of 26 July 1989. Vid. WAGNER, Markus: “Beyond the drone debate: autonomy in tomorrow’s battlespace”, Proceedings of the 106th Annual Meeting (Confronting Complexity), American Society of International Law, 106 (March 2012), pp. 80-84.

