
Military frameworks: technological know-how and the

legitimization of warfare
John Kaag and Whitley Kaufman
University of Massachusetts Lowell
Abstract It is the elusive target of policymakers, ethicists and military strategists: the
target of a just war. Since the advent of precision-guided munitions in the mid-1970s,
commentators have claimed that surgical-strike technology would advance the cause of jus in
bello, ending the longstanding tension between effective military engagement and
morality. Today, many policymakers accept that the ethical dilemmas that arise in the
fog of war can be negotiated by the technical precision of weaponry. This is, at best, only
partially accurate. At worst, its misplaced optimism risks numbing the moral sense of
strategists and, just as importantly, the sensibilities of the general populace. We argue that
the development of precision guided munitions (PGM), stand-off weaponry and military
robotics may force policymakers and strategists to experience new ethical tensions with an
unprecedented sensitivity and may require them to make specific policy adjustments. In the
move toward more quantitative approaches to political science and international affairs it
is often forgotten that military ethics, and the ethics of military technologies, turn on the
question of human judgment. We argue that the ethical implications of the revolution in military affairs (RMA) are best investigated by way of a detailed discussion of the tenuous
relationship between ethical decision-making and the workings of military technology.
Introduction: revisiting questions concerning technology
Our questions concerning military technology may be viewed as flowing from the work of Martin Heidegger, who delivered the lecture 'The question concerning technology' in 1955, but are more accurately understood in the wider context of many Western thinkers who have taken up the interrogation of the moral and epistemic assumptions that seem to accompany and validate technical capabilities. As John Kaag has noted elsewhere, Heidegger delivered his address at a historical moment in which technological advances were beginning to double as the political imperatives and the moral justifications of war (Kaag 2008).
The arms races that defined the geopolitical landscape of the second half of the 20th century may be over, but a new form of technological know-how, one that turns on precision rather than magnitude, now threatens our moral sensibilities. This danger manifests itself in two distinct, yet related, ways.
First, we risk confusing technical capabilities and normative judgments by assuming that precision weaponry facilitates ethical decision-making. Here 'facilitate', derived from facilis, means 'to make easier'. Second, we are in danger of allowing techne to facilitate ethics in a more dramatic sense. Here we might consider facilis as stemming from the verb facere, meaning 'to do or make'. We risk our ethical standards when military technologies are purported to make the thoughtful determinations that have always been the sine qua non of ethics.
The employment of robotics on the battlefield stands as an extreme case of this problem. Military robotics remains in its early stages of research and development, but recent reports on battle-ready robots should give ethicists pause. In effect, strategists and theorists have begun to argue that we make the issue of military ethics an easy one by placing ethical mechanisms in our machinery, thereby shifting moral responsibility onto techne itself. We argue that the implementation
of these robotics must be preceded by a careful reminder of what ethical judgment
entails, that warfare must be regarded as a strictly human activity and that moral
responsibility can never be transferred to the technology that is employed therein.

1 The theoretical foundations of this study were first briefly outlined in John Kaag (2008). The current article, however, departs from that article in significant ways in its detailed and exclusive focus on the ethical implications of military (rather than homeland security) technologies. The issues of intelligence-gathering and PGM technologies, first broached by Kaag (2008), have been developed more fully in the seventh section of the current article.
A brief history of precision in aerial bombardment
The development of precision-guided munitions, satellite navigation and stealth
technologies has transformed the character of aerial bombardment.
In investigating the ethical pitfalls accompanying the use of these technologies,
it is only fair to acknowledge the way in which they have reduced the rate and
cumulative total of collateral damage suffered in warfare. While the debate
concerning the exact definition of collateral damage continues (whether human
casualties or private and public property should be included in this damage), it is
impossible to argue that strategic bombing has not undergone a radical
transformation in the past century and that, on the whole, this transformation
has continued to raise the ethical standards of jus in bello. In World War II, between 300,000 and 600,000 German civilians were killed by Allied aerial attacks; in truth, these estimates might underestimate the total fatalities, since the Red Army employed tactical airstrikes that were not calculated in this total. The bombing of
Dresden on 13 February 1945, compounded by the use of nuclear devices at the
end of the Second World War, has come to symbolize the terror of total war and
presents a strong argument for the development of surgical strike capabilities.
Over the course of two days, 35,000 refugees and residents of the Dresden area died in a firestorm that could be seen from a distance of 200 miles; and, as scholars
continue to note, this outcome was possibly part of the strategy of the Dresden
attack (Ramsey 2002, 353). These statistics horrify our modern sensibilities,
sensibilities that have been cultivated in an age of precision-guided weaponry.
At the time, however, such attacks were standard operating procedure, especially
for strategists such as Churchill, who had used widespread strategic bombing
since the early years of the century in order to terrorize and subdue colonial
populations from India to Egypt to Darfur (Beiriger 1998).
Today, as Edward Holland notes, a single high-altitude non-nuclear bomber
can destroy a specic target with a single bomb that 60 years ago would have
required thousands of B-17 sortie missions dropping approximately 9000 bombs
(Holland 1992, 39). Since the first Gulf War, the United States (US) public has seen an increasing number of photographs and film clips of surgical strike capabilities, images that might have led one to believe that only precision-guided munitions were employed in the American military effort. In truth, only seven or eight per cent of sortie missions in the first Gulf War were precision guided, but
they were reserved for urban targets and helped military planners avoid direct
civilian casualties. Non-precision weaponry was employed in the Kuwaiti theatre
where US forces faced a variety of stand-alone targets. Due to the type of military
theatres in Operation Enduring Freedom (7 October 2001) and Operation Iraqi
Freedom (19 March 2003), 80 per cent of all bombs or missiles deployed by the US
Air Force in Operation Iraqi Freedom were guided by video camera, laser
or satellite targeting (CBS News 2003). This improvement in the economy of force
has coincided with a decrease in direct civilian casualties. This being said,
strategic precision bombing (such as the targeting of an electrical grid or water
treatment facilities) can have lasting effects on the health of a population under
attack. For example, the US Department of Commerce reported that
approximately 3500 civilians died directly from the US bombing of Iraq in 1991.
In addition, because of the aftermath of direct attacks on Iraq's economic and physical infrastructure, it was calculated that there were 111,000 indirect or 'excess' deaths in 1991 due to what was described as 'post-war adverse health effects' (Daponte 1993). Whereas direct collateral damage and excess deaths used to be at least loosely correlated, the advent of precision-guided munitions (PGM) has allowed strategists to decouple these figures, destroying infrastructure without
immediately killing civilians.
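The scale of this decoupling is worth making explicit. A minimal arithmetic sketch, using only the Daponte (1993) figures cited above (the code itself is merely illustrative), shows how little of the total harm a metric confined to direct collateral damage actually registers:

# Worked arithmetic on the figures cited above (Daponte 1993); illustrative only.
direct_deaths = 3_500     # civilians killed directly by the 1991 bombing of Iraq
excess_deaths = 111_000   # deaths attributed to 'post-war adverse health effects'

# Roughly 32 excess deaths for every direct civilian death.
print(f"excess deaths per direct death: {excess_deaths / direct_deaths:.0f}")

# A metric confined to direct collateral damage registers only about 3 per cent
# of the total mortality attributed to the campaign.
share_visible = direct_deaths / (direct_deaths + excess_deaths)
print(f"share of total deaths visible to a direct-damage metric: {share_visible:.1%}")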
The discussion concerning direct collateral damage, immediate death and
destruction resulting from a particular attack, and excess deaths helps frame the
upcoming treatment of techne and ethical judgement in three distinct ways. First,
while PGM helps reduce direct collateral damage, which has often been regarded as the ethical metric for just war theorists, these munitions can still adversely
affect the health of a given population. In short, PGM strikes can satisfy traditional
ethical standards, but in so doing make us numb to additional ethical quandaries
that accompany their use. Second, the successful use of military technology to
satisfy one set of ethical standards may allow policymakers and strategists to
assume that technological advancement is identical to moral advancement. Third,
making the distinction between direct civilian casualties and excess death is the
kind of ethical judgement that cannot be made by determinate rules. It requires the
flexibility and sensitivity that only humans can bring to bear on a given situation.
Ethical judgments and techne: a philosophical overview
Following Plato's Republic and the Gorgias, Aristotle argues in The Nicomachean ethics that the practical matters of ethical judgement are, by definition, indeterminate (aorista) (2002, 1137b29). Experientially, this point seems on the mark, since in the heat of an ethically charged moment individuals face bewildering choices and conflicting ideals. This is the reason for Aristotle telling us that ethics must first be 'outlined or sketched out in rough figure and not with precision' (1103b24). It is not technical precision but human sensitivity that must
be cultivated in light of this fact. It is not the case that our precision must be refined in order to account for a particular ethical judgment; rather, Aristotle insists that it is the nature of these [ethical] matters to remain more complex than any set of rubrics generated by techne (1137b17). This is one reason for Plato to argue that Gorgias, and all rhetoricians who attempt to make ethics into a type of science, are unable to claim expert status in the field of ethics. There are no experts in a field that is defined by new and changing situations. Heidegger tried to extend this point in the 1950s when technocrats began to make their way into the circles of power in Washington and in Europe. At this time, there was a strong yet misguided belief that strategic experts, detached from the emotional setting of the battlefield, could wage successful and just wars.
Aristotle is dubious, stating that cases concerning the human good 'do not fall under any science [techne] or under a given rule but the individual himself in each case must be attuned to what suits the occasion' (1104a10). Moral behaviour happens in situ, in a human interaction with a particular setting and circumstance. In her analysis of The Nicomachean ethics, Martha Nussbaum explains that 'the general account of ethics is imprecise . . . not because it is not as good as a general account of these matters can be, but because of the way that these matters are: the error is not in the law or the legislator, but in the nature of the thing since the matter of practical affairs is like this from the start' (Nussbaum 2001, 302). Nussbaum's analysis of techne and tuche (chance) is very instructive for scholars trying to understand the motivations of military management.
These observations do not foreclose the possibility of developing ethical rules
and standards. Indeed, Aristotle, Cicero and Augustine are the philosophic
progenitors of the standards of just war theory, especially the outline of jus ad
bellum. All of these thinkers, however, were fully aware that determinate laws are by their very nature general, but are used to interpret and assess particular human situations. The application of general rules to specific cases is always an issue of judgment (Nussbaum 2001, 318-340). For example, determining the thresholds of 'just cause', 'legitimate authority' and 'comparative justice' in situations such as the Gulf War or the Bosnian War (1992-1995) is difficult by virtue of the fact that these rules must be tailored to the intricate character of these conflicts.
Cicero and Augustine trace this difficulty not only to the complexity of human interactions, but also to human emotion that can jeopardize moral deliberation and judgment. This belief seems to underpin much of the current discussion about judgments on the battlefield, and it motivates the research and development of
technology that might circumvent the ethical mistakes that are attributed to
emotionally driven decisions. The hopes for military robotics turn precisely on
this position. In a certain sense, Cicero sets this philosophical groundwork, stating
that emotions risk overriding the rational calculations that encourage an agent to
make genuinely moral decisions (Russell 1977, 5). Along these lines, Augustine
held that God and angels did not have emotions and, for this reason among others,
did not have to face the difficult moral choices that define human affairs. This stance raises the question: Do we approach the status of gods and angels if we are
able to mechanize morality, that is, create technologies that free human
practitioners from the choice that is at the heart of ethics?
There seem to be two distinct answers to this question. From Augustine, the answer is unequivocally negative. Early in his career, Augustine wrote On the free choice of the will, in which he argues that choice, and the accompanying temptation of desire, emotion and convenience, not only mark, but literally define, the field of ethics as a field of human investigation. We are not being ethical by ridding ourselves of the burden of human choice. He writes, 'without free choice of human will one could not act rightly' (Augustine 1982, II, I, 5).
According to Augustine, to pretend that a free choice of moral judgment is not
haunted by fallibility, by the epistemological blindness of the human condition, is
either an act of profound ignorance or, more likely, profound hubris. Conversely,
modern military strategists who often rely heavily on precision weaponry seem to
occasionally forget the human character of ethics and assume that fallibility is something to be fully overcome in the course of scientific investigation. In the case
of military robotics, addressed in coming sections, this forgetfulness occasionally
morphs into a self-conscious attempt to embed decision-making capabilities in the
development of new technologies. We believe that such attempts seek to close the
question of ethics before it can be opened in a meaningful way. Undoubtedly, these
trends in military ethics are born of good intentions, namely the intent to be both efficient and moral. They seem to stem from a more basic trend toward what might be called the quantification of military strategy, a move to employ game-theoretical modelling in optimizing military outcomes. Herman Kahn, a proponent of this approach and one of the founding 'defence intellectuals' from RAND Corporation in the 1950s, described the dangers and mitigation of emotional decision-making in US nuclear strategy:
It is not that the problems of (warfare) are not inherently emotional. They are. It is
perfectly proper for people to feel strongly about them. But while emotion is a good
spur to action, it is rarely a good guide to appropriate action. In the complicated and
dangerous world in which we are going to live, it will only increase the chance of
tragedy if we refuse to make and discuss objectively whatever quantitative
estimates can be made. (Kahn 1960, 47)
From Kahn's statement, it follows that if strategists 'make and discuss objectively' quantitative estimates of casualties and destruction in a given attack, they are
quantitative estimates of casualties and destruction in a given attack, they are
more likely to avoid the tragedies of war (Ghamari-Tabrizi 2005, 203-204). In one sense, Kahn's position seems reasonable. Tragedies present people being
destroyed by forces that are beyond their control. The development of military
technology and the corresponding ability to accurately estimate casualties allow
military planners to order aerial strikes with a greater sense of their
consequences, thereby achieving a greater degree of control over a given situation
in the field.
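It is worth pausing over what such a 'quantitative estimate' looks like in practice. The following toy comparison is purely illustrative; the options, probabilities and casualty figures are invented and are not drawn from Kahn or from RAND:

# A toy expected-value comparison of the kind Kahn's 'quantitative estimates'
# invite. Every number here is hypothetical; the point is the form of the
# calculation, not its content.
options = {
    "precision strike": {"p_hit": 0.90, "civilians_if_hit": 5, "civilians_if_miss": 40},
    "area bombing":     {"p_hit": 0.99, "civilians_if_hit": 300, "civilians_if_miss": 300},
}

for name, o in options.items():
    expected = o["p_hit"] * o["civilians_if_hit"] + (1 - o["p_hit"]) * o["civilians_if_miss"]
    print(f"{name}: expected civilian deaths = {expected:.1f}")

The arithmetic is trivially 'objective' in Kahn's sense, yet it already presupposes that the target is legitimate and that the only morally relevant quantity is the expected body count; the judgment has been made before the calculation begins.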
In another sense, however, Kahn's position on techne and quantitative estimates appears to miss its mark. As Nussbaum and others have noted, tragedy shows good
people doing bad things. Sometimes the act that the individual intentionally did is
not the same as the thing that is actually accomplished. Regardless of the degree of
precision, strategists must continue to be aware of the possible, indeed inevitable,
disjunction between the intended consequences of attacks and the outcomes of
military confrontation in actu. As Clausewitz noted in the early 1800s, theoretical or ideal plans of attack, despite their specificity and precision, will remain out of synch
with the sui generis circumstances of particular campaigns. Techne cannot
overcome this Clausewitzian 'friction', a term that becomes the central theme of
On war. For the sake of our discussion of precision-guided munitions, it is worth
noting that the concept of friction (Gesamtbegriffe einer allgemeinen Friktion)
is coupled with the phrase 'the fog of war', for the friction between plans and actions turns on the inevitable limitations of human foresight (Clausewitz 1980, 264; Watts 2004). A reliance on mathematics and technical precision does not help us out of ambiguous judgements, for, as Clausewitz explains, 'The road of reason . . . seldom allows itself to be reduced to a mathematical line by principles and opinions . . . The actor in War therefore soon finds he must trust himself to the delicate tact of judgement' (Clausewitz 1980, 314). Clausewitz believed that strategists who
assume that scientific approaches to military strategy can fully overcome fog and
friction are making a serious tactical error; we merely extend this point by
suggesting that such an assumption results in serious moral hazards as well.
In addition to this point concerning unintended consequences, tragedy presents an even more disturbing situation, namely the occurrence of a tragic conflict. In such intractable instances, an audience looks on as an action is committed by a person whose ethical character would, under other circumstances, reject such an act. Antigone would not normally disobey the mandate of the state, but in her particular situation she is forced to do so in order to fulfil her sense of filial piety. Tragedy in this case performs the conflict between ideals in light of given circumstances. Antigone's act is not physically compelled, nor is its enactment due to ignorance or misinformation. Fog is not to blame in this case. Instead, this type of tragedy turns on the inherent pitfalls of moral decision-making, pitfalls that cannot be obviated by the discussion of quantitative measurements. Tragic conflicts are interesting and instructive because they remind their audiences that ethical decision-making is an unshakably human endeavour that occurs in unique and ever-changing situations. The meaning of right and wrong in these particular situations cannot be determined by a general metric applied to all cases but only by way of a unique interpretation of virtue; indeed, these theatrical scenes continue to fascinate audiences for the sole reason that they are not to be figured out in a scientific manner. In spite of this fact, it seems that Kahn's position on the general ethics of techne received wide acceptance in the wake of the attacks of September 11, 2001 and as the US 'global war on terror' (GWOT) quickly got underway.
Before advancing our argument, it seems wise to pause for a moment to
consider the implications of the assertion that has just been made. We hold, in the
spirit of Aristotle and Augustine, that the ethics of war turn on the issue of human
judgement, and that judgment, by virtue of its sui generis circumstances and
emotionally laden character, should be regarded as indeterminate. To say that
judgement is indeterminate (aorista) is in no way to succumb to the type of
relativism that bars the way of making normative claims. In our reading, Aristotle
is no relativist. Instead, he is a middle-range theorist who believes that good
judgement can be achieved only through a human's practised attentiveness to particular situations, a knowledge of previous forms of judgement, and the refinement of ideals that, while never fully attained, can serve as causes to pursue. Aristotle's understanding of virtue has been widely criticized for not providing solid guidelines for right action. As JL Mackie noted in 1977,

[T]hough Aristotle's account is filled out with detailed descriptions of many of the virtues, moral as well as intellectual, the air of indeterminacy persists. We learn the names of the pairs of contrary vices that contrast with each of the virtues, but very little about where or how to draw the dividing lines, where or how to fix the mean. As Sidgwick says, he only indicates the whereabouts of virtue. (Mackie 1977, 186)
Mackie is right in the sense that Aristotle's Ethics is not going to set out a hard and fast set of guidelines for ethical conduct. He is wrong, however, to suggest that we should disparage or dismiss Aristotle on these grounds. While Aristotle might be reluctant to prescribe certain rules to guide our action, he is quite happy to tell us what is not permissible in making ethical judgements, such as making ethics
into a techne. It is the prohibition against the mechanization of judgement that
serves as the theoretical groundwork for our current project.
Framing ethics: PGM, targeting and the technological mandate
Our military capabilities are so devastating and precise that we can destroy an Iraqi
tank under a bridge without damaging the bridge. We do not need to kill thousands
of innocent Iraqis to remove Saddam Hussein from power. At least that's our belief.
We believe we can destroy his institutions of power and oppression in an orderly
manner.
Former US Secretary of Defence, Donald Rumsfeld, 2003 (cited in Kaag 2008)
The transformation that occurred in military tactics and technology at the
beginning of the invasion of Iraq in 2003 was driven by a belief that is dramatically
presented in Rumsfeld's statement. Unfortunately, this belief (that technical precision could allow military strikes to neutralize terrorists while sparing non-combatants) is underpinned by a dangerous logical conflation. In the first sentence of his statement, Rumsfeld describes the physical capabilities of modern PGM. A Hellfire missile can destroy one object without destroying another one in
close proximity. It is true that PGM can allow tacticians to target a tank without
destroying a nearby bridge, yet Rumsfeld makes a shift in his subsequent
comments which describe not technical capabilities but rather normative
judgments (Kaag 2008). The designations 'enemy combatant', 'legitimate target' and 'oppression' are not determinations that can be made by precision weaponry, but by the individuals who participate in the targeting cycles of US
military command. Rumsfeld suggests that targeting enemy combatants is simply
a neutral matter of destroying one object while leaving other proximate objects
untouched. Here, he confuses neutral capabilities with moral permission, and reverses the Kantian ethical maxim that 'ought implies can' by insisting that 'can implies ought'. This is the sort of conflation that worries Heidegger as he witnesses the rise of modern technocracy, a system that allows scientific and technical capacities to drive the formulation and expression of particular normative claims (Kaag 2008).
Technological advancement was the keystone of what Rumsfeld termed 'military transformation', a buzzword that circulated through the Pentagon for many years of the global war on terror (GWOT). This was to be a transformation of capabilities, but it now seems that it risks transforming longstanding moral standards. Rumsfeld's rendering of military targeting downplays the inherent moral decision that is involved in the designation of 'enemy combatant'. Indeed, it seems to allow the neutral sights of stand-off weaponry to replace the fallible and
value-based lens by which strategists made decisions in the past. Heidegger
underscores this danger in his 1955 comment:
Everywhere we remain unfree and chained to technology, whether we passionately
affirm or deny it. But we are delivered over to it in the worst possible way when we
regard it as something neutral; for this conception of it, to which today we
particularly like to do homage, makes us utterly blind to the essence of technology.
(Heidegger 1993, 320)
So what is this essence of technology and why must tacticians remain wide-eyed to the dangers that accompany this essence? In 'The question concerning technology', Heidegger reminds his audience that technology should be
understood in two distinct, yet related, ways. First, we must regard it as an
instrumentum, as a mere means to an end. Instruments are not ends in themselves,
but are rather employed at the service of human objectives. This brings us to the
second way of understanding technical capabilities: these capabilities must
always be understood as associated with human purposes and pursuits.
According to Heidegger, technology has always been associated with episteme,
as a means of knowing and as a means of being at home in the world. Modern
technology, however, differs from previous forms of instrumentum in the way that
it pursues knowledge and makes its home in modernity (Heidegger 1993).
The goal of modern technology is the establishment of order and limits
through a unique mode of enframing (Ge-stell). Heidegger's conception of Ge-stell
seems interestingly appropriate in light of our focus on precision-guided
munitions. As Bosquet and Dwyer have described in detail, the frames of
infrared, satellite and video technologies have been employed to transform and
reconfigure the space of the battlefield and now appear to clarify the impending moral vagueness of today's asymmetric warfare (Bosquet and Dwyer 2009).
Heidegger asserts that the essence of modern technology lies in enframing in which 'everywhere everything is ordered to stand by, to be immediately on hand, indeed to stand there just so that it may be on call for a further ordering' (Heidegger 1993, 322). In line with Heidegger's analysis, the use of precision
technologies on the battlefield can be traced to a desire to order the growing chaos
of asymmetric warfare. The ease with which this ordering takes place often masks
the ethical dilemmas that accompany it. Indeed, the ease of technology is often
mistakenly equated with the superior morality of its use: the easy becomes the good. Policymakers such as Rumsfeld would like to think that 'smart bombs' could make terrorists 'immediately on hand' to be neutralized. Such a desire is
understandable. Today, the combatants that face the militaries of industrialized
nations do not adhere to the standards of double effect or double intent as outlined
in just war theory, and they routinely hide among civilians in order to stymie
opposing forces who would adhere to these standards. This desire may be
understandable, but it may lead to a dangerous moral myopia.
A real risk is that targets become 'legitimate' targets when strategists acquire
technologies that can easily neutralize them with little collateral damage and with
minimal political fallout. Here we risk confusing instrumental means with the
ends of military objectives. More specifically, we risk allowing a tool with impressive capacities to determine the ends and purposes that we pursue. This confusion is enabled by the technical precision of weaponry, but also exacerbated by the technocratic culture of the Bush administration's Department of Defense (DoD) and the broad and sweeping rhetoric of the 'global war on terror'. As many scholars have now noted, the term 'terrorist' is not a word with a clear referent, one that designates any particular set of attributes or characteristics of individuals
(Graham 1997, 117; Kaag 2008). Due to its ambiguity, we are unable to use this
rhetoric as a tool to single out individuals as potential targets. In light of this fact,
strategists now face the temptation of relying on technical precision to make moral
distinctions in the targeting cycle.
Heidegger suggests that such a danger is real and present. Modernity has
already allowed technology to reveal the meaning of the natural world; we are
suggesting that technocrats who optimistically speak of military transformation
would allow PGM to reveal important meanings in the worlds of security, politics
and warfare. For Heidegger, scientific and empirical manipulations designate nothing less than 'the way in which everything presences that is wrought upon by the revealing that challenges' (Heidegger 1993, 323). The risks of this sort of manipulation are front and centre in Heidegger's later work, especially in 'The question concerning technology', the 'Letter on humanism' and 'The turning'. Heidegger believes that in modernity's approach to understanding nature we have reduced it to its instrumental uses. The river is no longer understood as free flowing, but rather is only understood as the amount of electricity it can generate
when it is dammed up. That is to say that in the face of technological manipulation
the river becomes merely or solely (bloss) a source of power. Similarly, the open
plateau is no longer understood in its openness, but only as being-cordoned-off
for the purposes of farming; the tree is not understood in its bare facticity, but only
as a form of cellulose that can be used and employed. While Heidegger seems to
flirt with romanticism in his comments, he does make a sound point: the technologies that are used to put nature in order become the only means of understanding nature's emergence. This discussion concerning the enframing of nature may appear far afield from a discussion of the ethical implications of
technologies of violence, oppression and militarism. Appearances can be
deceiving. Heidegger believes that the unquestioned technological manipulations
that place nature on hand and under our control are the same sort of
manipulations that allow mass atrocities to occur on the social and political scene.
That is precisely the claim that we are making in this paper in regard to the
advancement of surgical strike capabilities. In a quotation that is often cited, and
even more often misunderstood, Heidegger states that 'Agriculture is now a motorized food industry, the same thing in its essence as the production of corpses in the gas chambers and the extermination camps, the same thing as blockades and the reduction of countries to famine, the same thing as the manufacture of hydrogen bombs' (cited in Spanos 1993, 315). Heidegger has been criticized since
the early 1950s for this comment, for it seems to trivialize the brutality of the
Holocaust by making a comparison between genocide and agriculture.
While this cryptic remark deserves scrutiny along these lines, it does seem to
suggest that being mesmerized by technological expediency can blind us to, or
distract us from, other ways of knowing that do not turn on the rhetoric of utility.
This is the case in the use of PGM as much as it is the case in the employment of
atomic weapons. The promise of the hydrogen bomb is to create an amount of
destruction that is orders of magnitude greater than conventional or fission
bombs. Such power can tempt engineers and strategists to develop and test these
weapons without attending to the on-the-ground implications of these devices.
Combating this form of moral myopia is, in a certain sense, rather easy, for the
developers of these weapons did not purport to save lives, but rather to destroy
them. The case of PGM is slightly different. The promise of PGM is to kill or
neutralize the greatest number of targets while minimizing the risk to innocents
and modern military personnel. Such a promise seems like a good one (if the
targets are justly selected), but unfortunately this is a promise that technology
itself cannot keep. Only human beings can make good on this ethical commitment.
Despite this fact, the development of precision technologies has enabled the
rhetoric of safe, cheap and efficient small wars. As Michael Adas argues, the elision between surgical strike technology and the rhetoric of efficient warfare is
just the most recent version of the longstanding partnership between technology
and imperialism (Adas 2006). This reliance on technical capabilities is not easily
criticized by ethicists, since the development of these weapons is made in the name
of ethics. When the mouthpieces of war machines coopt the language of ethics and
justice, ethicists face greater and more nuanced challenges.
In light of this discussion, a related question arises: Do military professionals
understand the moral challenges of particular battle-spaces by way of ethical
training or only through the technical frameworks of the weaponry employed?
Heidegger restates the broad point concerning technological enframing in his
later writing: 'When modern physics exerts itself to establish the world's formula, what occurs thereby is this: the being of entities has resolved itself into the method of the totally calculable' (Heidegger 1998, 327). The current revolution in military
affairs driven by the US DoD has encouraged modern physics and technology to
exert itself in order to establish a formula for modern warfare. The 2006
Quadrennial Defence Review (QDR), which provides objectives and projections
for US strategy, aims at minimizing costs to the United States while imposing
costs on adversaries, in particular by sustaining America's scientific and
technological advantage over potential competitors (QDR 2006, 5). This comment
indicates that US military tactics are tacitly employing an egoistic version of a
utilitarian standard as their ethical norm (the good is achieved in minimizing
costs while maximizing benefits to allies and friends). Many disadvantages of this moral framework have been repeatedly voiced by critics of utilitarianism, one of which is the fact that the metric of utility changes in reference to US military personnel, innocent civilians and enemy combatants. There is, however, one supposed advantage of utilitarianism, namely that it is 'totally calculable'.
The calculations of utilitarian measurements are allied closely with the
calculations of technical precision, and, in the case of the QDR objective,
strategists seem to indicate that technological advantage can aid in making this
moral calculation of cost-benefit analysis. The philosophical underpinnings of the QDR are reflected in, and seem to motivate, the research and development of technologies such as military robotics that will fully replace the human soldier in battlefield situations.
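How quickly such a standard becomes 'totally calculable' is easy to demonstrate. The sketch below is a deliberately crude illustration of an egoistic-utilitarian cost metric; the weights and casualty figures are invented, and nothing in it is drawn from the QDR:

# An illustrative, deliberately crude rendering of an egoistic-utilitarian
# cost metric. The weights are hypothetical: choosing them is itself a moral
# judgement, which the arithmetic then hides behind a calculable surface.
WEIGHTS = {"us_personnel": 10.0, "civilians": 1.0, "enemy_combatants": 0.1}

def cost(casualties):
    """Weighted 'cost' of an option under the assumed metric."""
    return sum(WEIGHTS[group] * count for group, count in casualties.items())

option_a = {"us_personnel": 0, "civilians": 30, "enemy_combatants": 50}
option_b = {"us_personnel": 4, "civilians": 5, "enemy_combatants": 50}

print("option A:", cost(option_a))  # 35.0
print("option B:", cost(option_b))  # 45.0 -> A is 'cheaper' despite six times the civilian deaths

The point is not that planners use these particular weights, but that any metric of this form must encode some such ranking of lives; that ranking, the genuinely moral judgment, is supplied by human beings before the calculation ever runs.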
Ethical reections on battle-ready robots
Perhaps no single idea better expresses the technological fantasy of futuristic warfare, or even of transcending war, than the idea of robot soldiers. Robots, we are told, have already stepped out of the science fiction pages and onto the battlefield (Carafano and Gudgel 2007, 1). In fact, while the US military already has several thousand robots in operation, these machines are not fully autonomous systems but are remotely operated by human beings in real time and, impressive as they are, are hardly the stuff of science fiction. These robots
include the Air Force Predator Drones, a type of unmanned aerial vehicle (UAV)
with both surveillance and combat capability, from which important al-Qaeda
operatives have been killed using Hellfire missiles. Of equal importance is the
land-based bomb disposal robot, crucial against improvised explosive devices
(IEDs) in Iraq, the cause of the large majority of US casualties. Robots are
currently used to disarm bombs, explore caves and buildings and scout dangerous
areas so that human soldiers can be spared from these dangerous tasks. Some
Israeli military robots are equipped with submachine guns and with robotic arms
capable of throwing grenades; however, as with US robots, the decision whether
to use these weapons is in the hands of a remote human operator. While these
remotely operated machines are important technological advances and in some
ways are already dramatically changing the way war is fought, it is misleading to
call them 'battlefield robots', nor do they appear to raise especially complex or
novel ethics or policy questions beyond what has already been discussed in
reference to PGM.
However, we now face (so we are told) the prospect of genuinely autonomous
robot soldiers and vehicles, those that involve artificial intelligence (AI) and hence do not need human operators. The Future Combat Systems Project, already underway at a projected cost of US$300 billion, aims to develop a robot army by 2012, including a variety of unmanned systems with the capacity to use lethal force against enemies, requiring the ability to locate and identify an enemy, determine the enemy's level of dangerousness and use the appropriate level of force to neutralize the target, though it is unclear what degree of autonomy these
unmanned systems will have. The US military is now one of the major sources of
funding for robotics and artificial intelligence research (Sparrow 2007, 62). While at present true robot soldiers remain mere vapourware, this has not stopped enthusiasts of futuristic warfare from speculating about the imminent transformation of war. John Pike, recently writing in the Washington Post, declares that 'Soon, years, not decades from now, American armed robots will patrol on the ground as well [as in the air], fundamentally transforming the face of battle' (2009, B03). Wallach and Allen tell us that current technology is converging on the creation of '(ro)bots whose independence from direct human oversight, and whose potential impact on human well-being, are the stuff of science fiction' (Wallach and Allen 2008, 3). According to a 2005 article in the New York Times, 'The Pentagon predicts that robots will be a major fighting force in the American military in less than a decade, hunting and killing enemies in combat' (Weiner 2005). Whereas Isaac Asimov's famous Laws of Robotics mandated that no robot
may injure a human, these robots will, in contrast, be programmed for the very
opposite purpose: to harm and kill human beings, ie, the enemy.
The deployment of genuinely autonomous armed robots in battle, capable of
making independent decisions as to the application of lethal force without human
control, and often without any direct human oversight at all, would constitute a
genuine military as well as moral revolution. It would involve entrusting the
ultimate ethical question to a machine: who should live and who should die?
Of course, machines already make lethal decisions. An ordinary land mine, for
example, uses lethal force against soldiers or vehicles by detecting their presence
based on pressure, sound or magnetism; advanced mines are even capable of
distinguishing between enemy and friendly vehicles. However, the very lack of
discrimination of anti-personnel mines is the reason that the 1999 Ottawa Treaty
prohibited the use of such weapons, since they do not reliably distinguish
between soldiers and civilians (or even animals) and can be deadly long after the
conflict is finished. Hence the development of genuinely robotic lethal decision-
makers, capable of making rational decisions as to what constitutes a legitimate
target, would in theory surmount this objection and would constitute an
unprecedented step in military technology.
A machine capable of making reliable moral judgments would presumably require strong AI, that is, actual intelligence equivalent to or superior to our own, a project that to date remains a speculative possibility. It seems
therefore quite premature to consider the ethical ramifications of genuinely
autonomous lethal robot soldiers. Indeed, the very project threatens to be self-
defeating if the underlying motivation for robot soldiers is to replace humans in
situations that are dangerous or otherwise undesirable. For a machine that
achieved equivalent mental capacity to human beings could arguably claim
equivalent moral status as well and as such have equal right to be protected from
the dangers of warfare (it should be noted that the Czech word from which we derive 'robot' means 'serf' or 'slave'). Of course the robots might be better suited for
dangerous missions, having built-in armour and weaponry to protect them.
However, some proponents (such as Arkin) call for designing these robots without
an instinct of self-preservation; even if this is possible, the denial of a right of self-protection to a moral agent is itself ethically problematic. Alternatively, it is
possible that such autonomous machines would lack some crucial element
required for attaining moral status and hence could be treated as mere machines
not protected by the rights of soldiers. However, we do not even know whether a
being is capable of moral decision making without being itself a moral agent.
It thus seems pointless even to try to answer such questions at this stage until we know whether such beings are possible and what they would be like (e.g. whether they would have desires and purposes just like us, or whether
they would be capable of suffering) (Sparrow 2007, 71-73). A prior moral issue
involves asking just what the goals are in developing such robot soldiers:
To protect humans from harm? To save money? To wage war more effectively?
To make war more ethical and humane to both sides? Clearly, the purpose with
which we engage in this project will influence the nature of the robots created and
their ethical legitimacy.
The rhetoric and the predictions for an imminent AI robot army run so far
ahead of any actual engineering capabilities for the near future that it seems that
the disproportionate attention is more a product of the seductive fascination of
technology than of realistic engineering possibility. These robot soldiers offer the
dream of a transformed way of waging war. In a 2005 New York Times article,
Gordon Johnson of the Joint Forces Command at the Pentagon is quoted as stating
the advantages of robots over human soldiers: 'They don't get hungry. They're not afraid. They don't forget their orders. They don't care if the guy next to them has been shot. Will they do a better job than humans? Yes' (Weiner 2005). Roboticist
Ronald Arkin hopes that robots, with their (hypothetically) superior perceptual
skills, will be better able to discriminate in the fog of war and also make ethically
superior decisions (Arkin 2007, 6). John Pike suggests that the very existence of
war and genocide is to be blamed on human weakness, and makes utterly
fantastic claims for the ability of robot soldiers to usher in a new millennium of
permanent peace, including the end of genocide as well. For Pike, the problem
with human soldiers is not merely their physical limitations and their cost but
even more fundamentally their psychological limitations, including particularly
their vulnerability to human emotions such as sympathy and compassion that make them hesitant to kill. Pike cites the celebrated 1947 study by SLA Marshall as support for the proposition that most soldiers will not even fire their weapons at the enemy. However, Marshall's evidence has long been discredited as speculative at best, and as sheer invention at worst. Note that even if soldiers are hesitant to fire, it is unclear whether that hesitation is due to sympathy, fear or even mundane factors such as the need to clean one's weapon.
The widespread fascination with the possibility of robot soldiers and the
credulous acceptance in the media of claims about their imminent arrival, long
before there is any realistic possibility of producing them, suggests that what is
really at work is what historian David Noble has labelled the 'religion of technology' (Noble 1999). Noble argues that the Western (and especially
American) obsession with technology has long been a sort of secular religion.
That is, its aims (however purportedly scientific) have paralleled the religious goal
of salvation by overcoming human imperfection and creating a new and better
being (often an immortal one). The motivations behind technology have rarely
been merely practical and mundane, but rather rooted in a desire to transcend the
imperfections of the human condition and become godlike. As Moravec claims,
'Technical civilization, and the human minds that support it, are the first feeble stirrings of a radically new form of existence' (Moravec 2000, 12). Noble contends
that this quasi-worship of technology has resulted in the unfortunate double
tendency to engage in escapist fantasies rather than realistic assessments of what
is technically feasible, and at the same time to display a pathological
dissatisfaction with, and deprecation of, the human condition (Wallach and
Allen 2008, 207).
Both of these tendencies are on display in the current fascination with robot
soldiers. For one thing, as Wallach and Allen point out, the more optimistic
scenarios for AI are based on assumptions that 'border on blind faith' (Wallach and Allen 2008, 194). They quote as an example Michael LaChat's prediction that the artificial intelligent being will become a 'morally perfect entity'; Wallach and Allen remark that the word 'entity' should be replaced by 'deity' (Wallach and Allen 2008, 194). For another, even if such robots were technologically feasible, the
wild claims that they would wholly transform or even end war are simply of a
piece with the historical tendency to declare that any major new powerful weapon
will mean the end of war. John Pike, writing in the Washington Post, predicts that this robot army will make America's military irresistible in battle and usher in a new robotic Pax Americana that would end 'the large-scale organized killing that has characterized six millenniums of human history' (Pike 2009, B03). Such predictions inevitably
neglect the ability of enemies to discover new tactics and strategies to outwit the
technological advantages of their opponents, and moreover the fact that, once
created, it is impossible to keep a monopoly on a new technology. Once both sides
had access to robot armies, it seems unlikely that war would become any less
frequent, or any more humane; if anything it would likely become more lethal and
destructive, especially if Pike is right that the major advantage of robot soldiers would be the utterly ruthless efficiency with which they would be able to take
human life.
Arkin follows a long tradition in artificial intelligence of locating human limitations in our emotions that prevent us from reasoning clearly; for him emotions distort our reasoning faculty and produce biases such as the 'scenario fulfilment fallacy' in which people select information to conform to their pre-existing expectations (Arkin 2007, 6). Robotics appears to offer a path toward
escaping the fog of war through eliminating those elements of the human thought
process that interfere with the clarity and precision of reason. This outlook reflects the influence of Cartesian dualism and its radical distinction between reason/mind and emotion/body. This extreme and implausible dualism seems to be motivated by the technophile's goal of separating out the sources of ambiguity
in human judgment from those elements that can be made clear and distinct, so
that a perfect reasoning machine can be made that is not subject to human foibles.
In fact, it remains an open question, to put it mildly, whether an autonomous
intelligent agent could be created without endowing it with emotions comparable
to those of humans; indeed a substantial literature rejects the Cartesian
assumption that emotion and reason are separable at all (Damasio 2005).
Furthermore, it is quite possible that emotions may be necessary to ethical
judgment; one who could not feel compassion for the sufferings of others might
not be capable of making good moral decisions. In fact, while Arkin and others
observe that Augustine worried that emotions such as anger or fear can distort
moral judgment in war, what they neglect to note is that for Augustine the
problem is not emotions in themselves but the wrong emotions. For Augustine
what makes war morally permissible is precisely that it is fought with the right
emotion, love rather than anger or pride (Russell 1987).
Essential to the moral evaluation of the use of genuinely autonomous robots capable of inflicting lethal force is of course the degree of reliability of their moral
judgments. Would it be ethical to use robots that can discriminate soldiers from
civilians somewhat less effectively than humans can on the grounds that they
were cheaper than human soldiers? What if robots could discriminate better
overall than humans, but had a particular blind spot (say, the capacity to recognize hospitals as non-military targets): would it be ethical to deploy them anyway? It is perhaps no surprise that advocates of the new robot army insist that the robots
would not merely be capable of equalling human moral capacity, but of
surpassing it. Arkin, while disavowing any claim that robot soldiers would be capable of being 'perfectly ethical in the battlefield', nonetheless is convinced that they can 'perform more ethically than humans are capable of' (Arkin 2007, 6). He cites a series of findings demonstrating the ethical failings of United States
soldiers, including the fact that substantial numbers of soldiers report having
mistreated non-combatants, especially when angry; that some units modify the
Rules of Engagement (ROE) in order to accomplish a mission; and that a third of
soldiers reported facing ethical situations in which they did not know how to
respond (Arkin 2007, 8). Arkins project, funded by the DoD, is to provide a set of
design recommendations for the implementation of an ethical control and
reasoning system potentially suitable for constraining lethal actions in an
autonomous robotic system so that they fall within the bounds prescribed by the
Laws of War and the Rules of Engagement (Arkin 2007, 1).
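The general shape of such a system can be gestured at in a few lines of code. The sketch below is our own illustration of a rule-constrained check on lethal action, not Arkin's architecture; every field, rule and threshold in it is hypothetical:

# A minimal sketch of a rule-based constraint layer on lethal action.
# Illustrative only: this is not Arkin's design, and the checks are invented.
from dataclasses import dataclass

@dataclass
class ProposedEngagement:
    target_is_combatant: bool        # arrives pre-labelled: the hard judgement is assumed done
    expected_civilian_harm: int
    military_value: float            # on what scale, and decided by whom?
    roe_authorizes: bool

def governor_permits(e: ProposedEngagement, proportionality_threshold: float = 1.0) -> bool:
    """Permit the engagement only if every coded constraint is satisfied."""
    if not e.roe_authorizes or not e.target_is_combatant:
        return False
    if e.expected_civilian_harm == 0:
        return True
    # a crude stand-in for 'proportionality': value per unit of expected civilian harm
    return (e.military_value / e.expected_civilian_harm) >= proportionality_threshold

Everything contentious arrives here as a pre-labelled input: combatant status, 'military value' and the proportionality threshold are all supplied from outside the code, which is why a system of this shape can constrain judgment but never make it.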
We have already criticized the widespread assumption that unethical conduct
in war can largely be attributed to emotions and their distorting impact on reason.
Equally problematic is the claim that a significant cause of unethical behaviour
is a lack of understanding as to what ethics requires: that is, the failure of soldiers
to grasp the clear demands of the rules of war. The unstated assumption, of
course, is that moral ambiguity is attributable to human ignorance or irrationality,
rather than being an intrinsic and inevitable element of ethical reasoning. Arkin
(2007) for example seems to believe that modifying the ROE constitutes in itself a
violation of ethics or the laws of war. But it is fallacious to assume that a change in
the ROE reflects a moral failing, for the modified ROEs may be no less morally acceptable than the prior ones, or might even render the ROE more consistent with
morality. For example, Evan Wright (2008) describes a situation in the Iraq War in
which the initial ROE, permitting targeting only those carrying weapons, imposed
an undue restriction on the troops, for it precluded the use of force against
forward observers for mortar attacks, men dressed in civilian clothes and
carrying binoculars and cell phones but no weapons; these men report the
coordinates of American troops so as to make the mortar rounds more accurate.
Wright describes how a modification in the ROE worked its way up the chain of command, as marines requested permission to fire on forward observers,
permission that was eventually approved (Wright 2008, 102). It is quite plausible
that a forward observer is a legitimate target both under the laws of war and
morality despite not actually carrying a weapon, though the moral analysis is by
no means obvious (an example of the fog of war). The presumed advantage of a robot soldier, its deterministic inability to modify the rules, is by no means an obvious improvement over the human judgment process; it is far from obvious that human flexibility (even with its capacity for misuse) is intrinsically worse than a robot's mechanical determinism to follow rules that may themselves be flawed.
Mechanizing judgment
Advocates of robot soldiers will no doubt argue that the problem lies in the
ambiguity of the prior rules; by more precisely specifying who is a legitimate
target, we can avoid the need for flexibility. But such a claim is unconvincing, for
the above example arguably demonstrates intrinsic ambiguity in morality rather
than ambiguity due to perceptual limitations or lack of clarity in the rules. For
whether someone counts as a legitimate target is necessarily a matter of degree; at
one end of the spectrum is the man firing the gun; at the other end is the civilian
playing no role in the attack. In between is a continuum of cases varying by the
level of involvement or support being provided in the attack. While radioing in
directions to a mortar team is probably sufficient to render one a combatant
(despite not carrying arms), other cases are not so easy, for instance civilians who
merely warn the mortar crew that Americans are coming, or civilians who provide food or water to the crew, or even merely give them words of support. It is
unlikely that any set of rules can be prescribed in advance to determine when
lethal force is permissible.
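The difficulty can be stated in code as easily as in prose. A rule of the kind at issue, 'anyone carrying a weapon is a legitimate target', takes one line to write; the cases below paraphrase the continuum just described and are illustrative only:

# Illustrative only: the 'carrying a weapon' rule applied to the continuum of
# cases described above. The case descriptions are ours, not from any ROE.
def legitimate_target(carrying_weapon: bool) -> bool:
    return carrying_weapon

cases = [
    ("man firing a rifle", True),
    ("forward observer with binoculars and a cell phone", False),
    ("civilian warning the mortar crew", False),
    ("civilian carrying water to the crew", False),
    ("uninvolved bystander", False),
]

for description, carrying_weapon in cases:
    print(f"{description}: {legitimate_target(carrying_weapon)}")

The rule returns the same verdict for every unarmed case, from the forward observer down to the bystander, collapsing precisely the gradations at issue; refining the predicate merely relocates the cut-off along the continuum, and it does not supply the judgment about where that cut-off belongs.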
Nor is this the end of the intrinsic moral ambiguity of such situations. David Bellavia recounts an incident in the Iraq War in which the Mahdi Militia were using a small child of five or six as a forward observer. In such a situation, even though there was no doubt about the boy's role and his essential function in targeting the Americans, nonetheless the American soldiers declined to target the child on moral grounds; as Bellavia and Bruning explain, 'Nobody wants a child on his conscience' (Bellavia and Bruning 2007, 10). The fact that a robot soldier would presumably lack a conscience and be able to kill the five-year-old is hardly
evidence of its superiority to human soldiers, at least in moral terms. Moreover,
the age problem raises yet another continuum problem; at what age does a person
become sufficiently morally accountable to be a legitimate target? It is unlikely
that any rule can be formulated in advance to cover such situations; the ability to
respond flexibly and contextually to such ambiguity is a reflection of the human
capacity to exercise moral judgment in complex situations.
Nor can the problem of perceptual ambiguity be eliminated by deploying
robots in place of humans, despite frequent assertions to the contrary. As Max
Boot states, '[t]he US military operates a bewildering array of sensors to cut through the fog of war' (Boot 2003). There is no doubt that machines can
dramatically improve on humans in such matters as visual acuity, electronic
surveillance and so forth. And in many cases this will make a robot capable of
better complying with the rules of war and even ethics, for example if it can
determine definitively that the apparent civilian is in fact a forward observer and
not merely a spectator with a cell phone. But it is highly doubtful that even the
best machines could eliminate the intrinsic perceptual ambiguity of the battlefield. In Noel Sharkey's example, it is unlikely that a robot could decide whether a woman is pregnant or carrying explosives without the use of the human skill of 'mere common sense' (quoted in Flemming 2008). Perceptual ambiguity will
always be part of the fog of war and cannot be eliminated by technological
solutions. Indeed, the very question of how much perceptual evidence is required
before deciding it is appropriate to resort to lethal force is itself inextricably
intertwined with moral judgment. Thus even if one could ascertain that the person
were in fact communicating with Iraqi soldiers, that would not of course dictate
that he or she is a legitimate target; such a judgment would require appreciation of
psychological and moral complexity, for example whether he or she is merely
encouraging them or providing necessary technical assistance.
Even more troubling is the possibility that ethical principles themselves may be modified to suit the needs of a technological imperative. A remarkable example of this is Arkin's discussion of how to choose between the many contested moral theories; Arkin rejects one of these theories, virtue theory, on the grounds that it does not lend itself well by definition to a model based on a strict ethical code (Arkin 2007, 43). There may of course be good substantive reasons for rejecting a given moral theory, but to do so on the technical criterion of operationalizability is to let ethics be guided by techne rather than techne be guided by ethics. The drive to unite ethics with technology risks subordinating the former to the latter; ethical principles may be distorted by the need to implement them in an algorithmic form suited to machine architecture. Wallach and Allen's comments warrant consideration along these lines when they note that designing a robot ethics based on reasoning provides a more immediately tractable project than one based on moral emotions (Wallach and Allen 2008, 108).
To take another example, Arkin calls for making the moral criterion of proportionality more explicit through research, but he does not specify just what sort of research this would be, and it seems more likely that any such research would instead confirm the essential complexity and ambiguity of judgments of proportionality (Arkin 2007, 12). The very project of formalizing ethics for use by autonomous entities thus would seem to beg the question in grand fashion. Of course, the goal of such researchers may be the far more modest one of experimenting to see whether an algorithmic ethical system can produce morally satisfactory decisions. However, Arkin's initial and implausible assumption that one can evade the problem of moral controversy by following the agreed upon and negotiated laws of war suggests that the project is more than merely provisional, but reflects a deep ideological commitment to the denial of moral ambiguity and to the attainability of a technologically reproducible ethics (Arkin 2007, 9). If so, the very project of robotic ethics would produce an ethical system that is precise, determinate and clear, yet morally unacceptable. The danger is that what is operationalizable will become what is morally permissible by means of the technological imperative.
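A deliberately crude sketch, again in Python and again entirely of our own invention, shows where such research would have to end up: proportionality made explicit by commensurating military advantage and civilian harm on a single scale and fixing a cut-off. Neither the weighting nor the cut-off value has any authority; they simply mark where the contested moral judgment gets buried inside the code.

```python
# Invented example of an 'operationalized' proportionality test. The
# commensuration of advantage with harm, and the cut-off of 1.0, are
# assumptions of this sketch; they hide the moral judgment rather than make it.
def passes_proportionality(expected_military_advantage: float,
                           expected_civilian_harm: float,
                           cutoff: float = 1.0) -> bool:
    """Return True if the encoded rule deems a strike proportionate."""
    if expected_civilian_harm <= 0:
        return True
    return (expected_military_advantage / expected_civilian_harm) >= cutoff


# Under these invented numbers the strike 'passes'; everything morally
# difficult was decided before the function was ever called.
print(passes_proportionality(expected_military_advantage=3.0,
                             expected_civilian_harm=2.0))
```

Once such a function exists, the temptation described above is to treat its output as the moral permission itself.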
Indeed, the inherent controversy and ambiguity of moral judgment would seem to present overwhelming evidence that morality is not reducible to a set of rules. If it were, it is likely that we would have discovered many or most of these rules long ago. As Wallach and Allen explain, the prospect of reducing ethics to a logically consistent principle or set of laws is suspect, given the complex intuitions people have about right and wrong (Wallach and Allen 2008, 215). The goal of an automated ethical decision system is the dream of escaping the fog of ethics, if only in the realm of war. In fact, the battlefield is, if anything, more ethically fraught and ambiguous than ordinary life, and less subject to rules; the rules regarding the permissibility of killing in war are, for example, extremely limited and vague in comparison with the rules of homicide in civil society. Once again, the techno-fantasist attempts to escape uncertainty and disorder in war and ethics by attributing them to human fallibility and limitation, as if a higher entity such as the futuristic robot could transcend any such constraints and discover new moral truths.
We have already mentioned the objection that an autonomous ethical machine
would lack accountability for faulty moral choices and hence would not be
permissible under the rules of war, any more than the use of child soldiers
(Sparrow 2007, 66; Asaro 2006). This complex issue cannot be decided until we know the actual workings of these robots: will their moral decisions be determinate and predictable in advance, or will their moral systems entail such complexity as to introduce a fundamental and intrinsic unpredictability into their behaviour?
To the extent that such robots are predictable, it would follow that the designer or user is responsible for their actions, and no special moral issues are raised (any more than, say, for mines). However, the hope and expectation among most researchers seems to be that robots will become genuinely autonomous decision-makers. The worry here is that such robots would become a black box for difficult moral decisions, preventing any second-guessing of their choices. Ironically, this may be the very attractiveness of such a project, for it offers a way to deflect moral responsibility away from humans and thus to evade genuine moral dilemmas (as Pike puts it, it permits detachment from inflicting death on our enemies). Nor should one ignore the distinct political advantage in exploiting the seductive power of technological fantasy by promising the public a war without tragedy or even hard moral choices, conducted by superior and infallible machines. Wallach and Allen express the concern that we have started on the slippery slope toward the abandonment of moral responsibility by human decision makers (Wallach and Allen 2008, 40). Worse still, it may be that the value of robot soldiers lies precisely in their freedom from human weaknesses, such as compassion, that limit military effectiveness, leaving them ruthless and unconstrained in their use of force against the enemy.
There is an alternative view of the role of robots in war, though it has had far less attention because it is less dramatic and glamorous. Instead of envisioning robots as idealized replacements for human soldiers, one might see the role of robotics as assisting human decision-making. As Roger Clarke argues, 'The goal should be to achieve complementary intelligence rather than to continue pursuing the chimera of unneeded artificial intelligence'. While computers excel at computational problems, humans are unsurpassed in what one might broadly call common sense, which Clarke explains as including unstructured or open-textured decision-making requiring judgment rather than calculation (Clarke 1993, 64). Humans are, as Wallach and Allen assert, far superior to computers in managing information that is incomplete, contradictory, or unformatted, and in making decisions when the consequences of actions cannot be determined (Wallach and Allen 2008, 142). In other words, human superiority will remain in the field of ethics itself and above all the ethics of the battlefield, where situations are complex, changing and unpredictable, and the rules themselves open-ended. None of this is to deny the crucial role of remote-controlled or semi-autonomous robot units in taking over tasks that are especially dangerous; but moral judgments about the taking of life, or even the destruction of property, must remain the domain of the human soldier, at least for the foreseeable future. For all that technology can do to improve human life, there is no reason at present to believe that it can solve ethical problems that have challenged humans for thousands of years, or that it can eliminate the fog of war.
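A brief sketch, with invented names and structure, indicates what this division of labour might look like in software: the machine supplies sensing, ranking and summary, while the lethal-force judgment remains a human act. Nothing here is drawn from an actual system; it simply illustrates the complementary-intelligence pattern Clarke describes.

```python
# Illustrative only: names, fields and flow are invented to show the
# 'complementary intelligence' pattern, not to describe a real system.
from dataclasses import dataclass


@dataclass
class SensorReport:
    description: str
    confidence: float  # machine-estimated confidence, 0.0 to 1.0


def machine_assessment(report: SensorReport) -> str:
    # The machine flags, ranks and summarizes; it does not decide.
    return (f"Flagged: {report.description} "
            f"(sensor confidence {report.confidence:.0%})")


def human_decision(assessment: str) -> bool:
    # The judgment about lethal force stays with the human operator.
    answer = input(f"{assessment}\nAuthorize engagement? [y/N] ")
    return answer.strip().lower() == "y"


if __name__ == "__main__":
    report = SensorReport("person with hand-held radio near mortar position", 0.87)
    if human_decision(machine_assessment(report)):
        print("Engagement authorized by human operator.")
    else:
        print("No engagement.")
```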
Aftershocks: how military technologies affect intelligence-gathering
Smart bombs are neither smart nor moral. The use of Hellfire missiles to carry out the targeted assassination of alleged terrorists, as in the case of the targeting of the Hussein brothers in downtown Mosul, is morally and legally problematic. As PW Singer's Wired for war (2009) underscores, the use of stand-off technologies and unmanned vehicles in such surgical strikes can create a videogame/voyeuristic approach to warfare in which the military personnel who control these drones from remote locations lose touch with the realities of the battlefield (Singer 2009). The use of similar weaponry, including battle-ready robots, which might result in the killing of innocent civilians through technical malfunction or faulty intelligence-gathering and communications, is more obviously questionable. In January 2002, former Deputy Secretary Paul Wolfowitz expressed hopes for the impending US military campaigns: they aimed to apply a very small force on the ground and leverage it in a dramatic way, not only through precision-guided munitions but through precision communications that would get those munitions accurately to the right target instead of accurately to the wrong target. Wolfowitz's comment, unlike Rumsfeld's, seems to acknowledge the danger of relying too heavily on technological precision in the formation of strategy, concluding that 'accuracy by itself doesn't do you any good if your target identification is wrong' (cited in Soloman 2007, 82). His comment applies equally to the use of PGM and to any future development of battle-ready robots, both of which could be used to moral or immoral ends depending on the plans and purposes of military commanders.
At first glance, it seems that Wolfowitz understood at least one of the moral lessons of precision-guided munitions, namely that precision is only as good as the intelligence used in a given targeting cycle. Learning this lesson, however, can lead to ethically problematic conclusions. As Kaag has argued, 'Painting and destroying specific enemy targets with laser-guided accuracy depends on the reliability of intelligence, and this intelligence is often garnered from the interrogation and coercion of enemy prisoners. The demands of PGM and robotic targeting, the need to specify an enemy's exact position and character, may place undue burden on interrogators who feel responsible for providing this information' (Kaag 2008). Wolfowitz's seemingly cautious remark concerning PGM must be understood in the wider scope of the global war on terror. On 27 December 2001, a week before Wolfowitz's comment, Rumsfeld (at that point, Wolfowitz's immediate superior) announced the establishment of Guantanamo Bay as a holding site for detainees. The interrogation techniques used at this site, many of which had previously been used to break trainees at the Armed Forces SERE (Survival Evasion Resistance Escape) School, were sanctioned by DoD officials in the months surrounding Wolfowitz's comments concerning the use of PGM.
We are not claiming that the development of military technologies directly causes abuse or torture; this would be overstating the point. We are, however, suggesting that the demand to use precision-guided munitions and military robotics in a moral way will place unprecedented pressure on interrogators to garner the intelligence needed to identify appropriate targets. This may already have resulted in compromising the standards set by the Geneva Convention for the treatment of prisoners of war or, more likely, in the wholesale dismissal of these standards. Exposing the relationship between military technology and interrogation practices is not meant to shift responsibility away from the strategists and commanders who enact morally questionable policies. Instead, we echo writers such as Bauman and Coward in observing that the structures of technology and bureaucracy can contribute to the articulation of new forms of violence while masking the unique and deeply problematic character of this violence (Coward 2009, 45). There is in this case a complex symbiosis between stand-off and precision weaponry and the intelligence-gathering techniques that might inform its use. This point is driven home when we recognize that even the initiation of the Iraq War, undoubtedly the most technologically advanced war ever waged, was justified in part by intelligence obtained through allegedly savage interrogation procedures. Stephen Gray, who investigated Central Intelligence Agency (CIA) detention centres, alleges that the supposed connection between al-Qaeda and Saddam Hussein was corroborated by intelligence gathered from Iban al Shakh al Libby, who provided this information only after being tortured in prisons in Egypt (Agence France Press 2006).
Much more could be said about this topic in light of the history of philosophy. For example, Friedrich Schiller wrote his Aesthetic letters in 1794, in the midst of another age of war. He suggests that human beings forfeit their humanity in two distinct ways. On the one hand, they could turn to savagery, in which one prioritizes feeling and emotion over reason and science, in his words, when 'feeling predominates over principle'. On the other, they could become barbarians, prioritizing science and techne at the expense of human feeling and sentiment, in Schiller's words, when 'principle destroys feeling' (Schiller 2004, 34). Schiller's warning comes home to us when we examine the relationship between advanced military technologies, forms of techne that aim to remove all human feeling and sentiment from the battlefield, and recent methods of intelligence-gathering, methods that appear to violate basic ethical principles. Indeed, such an investigation may expose the unique way in which barbarism and savagery enable one another in the course of modern warfare.
Conclusion
In the dialogue Protagoras, Plato recounts the myth of Prometheus bringing technology to humankind. The gift of techne threatened to result in the destruction of all humans, since humans lacked any standards for the proper use of these dangerous powers. Zeus, fearing the possible extermination of humans, sent Hermes to deliver them the gift of justice (dike) to bring order and conciliation to men as a necessary supplement to technology. Moreover, Zeus insisted that the knowledge of justice be distributed among all people, and not given merely to a small number of experts, for, he says, 'cities cannot be formed if only a few have a share of these as of other arts [technon]' (Plato 1990, 321-323).
Plato's warning about the relation between techne and ethics is even more valid in an age when technology can cause far more damage far more quickly than was imaginable to the ancient Greeks. The seductive power of technology promises war on the cheap, cheap both in blood and in treasure, and, even more importantly, it holds out the possibility of a war purified of all moral tragedy. Technology perpetually threatens to co-opt ethics. Efficient means tend to become ends in themselves by means of the technological imperative, in which it becomes perceived as morally permissible to use a tool merely because we have it (often by means of the fallacious argument that if we don't use it someone else will). Or the very ease of striking a target becomes the rationale for doing so; the technology determines what counts as a legitimate military target rather than vice versa. The allure of the technocratic ideal reverses Plato's warning by promising that ethics can be made into a field of expert knowledge, circumventing the difficult
process of moral deliberation and judgment. The fantasy of robot soldiers is but
the extreme of all of these trends; here moral choice is taken out of the hands of
human soldiers and indeed of humans altogether, and the technocratic expert is
the technology itself, the machine making accurate moral choices. We have argued
here that technology can never eliminate the challenge of difcult moral choices
and moral dilemmas, though it is in the very nature of technology to continually
tempt us to think it can do so. This dangerous illusion results in inappropriately
low thresholds for the decision to go to war, a failure to engage in moral
deliberation on such tricky moral issues as targeted assassination, and the
paradox of pushing us into even greater moral wrongs such as torture in order to
provide the precise intelligence needed for technology to be successful. Techne is
even a threat to democracy itself, insofar as it permits leaders to manipulate the
public with the promise of a perfectly just war due to modern intelligence and
smart weaponry. But moral judgment will always be difficult and controversial
in all circumstances, and above all in war, where the cost in human life and
welfare is so high and where collateral damage is inevitable. Technology has
great potential to make war less destructive and to avoid harming innocent
bystanders. Yet technology can never be a substitute for ethics itself; the decision
to go to war, and the means of fighting war, will always belong in human hands.
References
Adas, Michael (2006) Dominance by design: technological imperatives and America's civilizing mission (Cambridge, Massachusetts: Belknap Press of Harvard University Press)
Agence France Press (2006) Confession that formed the base for invasion of Iraq was gathered under torture, 27 October, <http://www.commondreams.org/headlines06/1027-04.htm>, accessed 23 January 2009
Aristotle (2002) The Nicomachean ethics, transl Sarah Broadie and C Rowe (Oxford: Oxford University Press)
Arkin, Ronald (2007) Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture, GVU Technical Report GIT-GVU-07-11, College of Computing, Georgia Tech
Asaro, Peter (2006) What should we want from a robot ethic?, International Review of Information Ethics, 6:2, 9-16
Augustine (1982) On the free choice of will, transl Thomas Williams (New York: Hackett)
Beiriger, Charles (1998) Churchill, munitions, and mechanical warfare (Ann Arbor: University
of Michigan Press)
Bellavia, David and John Bruning (2007) House to house (New York: Free Press)
Boot, Max (2003) The new American way of war, Foreign Affairs, 82:4, 41-58
Bosquet, Antoine and Michael Dwyer (2009) Scientific way of warfare: order and chaos on the battlefields of modernity (New York: Columbia University Press)
Carafano, James and Andrew Gudgel (2007) The Pentagon's robots: arming the future, Backgrounder, 19 December 2007, 5361
CBS News (2003) Today's bombs smarter, cheaper, 25 March 2003
Clarke, Roger (1993) Asimov's laws of robotics: implications for information technology, IEEE Computer, December 1993
Clausewitz, Carl von (1980) Vom Kriege (Bonn: Dummler Press)
Coward, Martin (2009) Urbicide: the politics of urban destruction (New York: Routledge)
Damasio, Antonio (2005) Descartes' error: emotion, reason, and the human brain (New York: Penguin)
Daponte, Beth Osborne (1993) A case study in estimating casualties from war and its aftermath: the 1991 Persian Gulf War, Medicine & Global Survival, 3:2, <http://www.ippnw.org/Resources/MGS/PSRQV3N2Daponte.html>, accessed 24 January 2009
Flemming, Nic (2008) Robot wars will be a reality within 10 years, Daily Telegraph,
27 February 2008
Ghamari Tabrizi, Sharon (2005) The worlds of Herman Kahn (Cambridge, Massachusetts:
Harvard University Press)
Graham, Gordon (1997) Ethics and international relations (Cambridge, United Kingdom:
Blackwell)
Heidegger, Martin (1993) The question concerning technology in David Krell (ed) Martin Heidegger basic writings (San Francisco: Harper), 307-342
Heidegger, Martin (1998) Hegel and the Greeks in Pathmarks (Cambridge, UK: Cambridge University Press)
Holland III, Edward (1992) Fighting with a conscience: the effects of an American sense of morality in the evolution of strategic bombing campaigns, thesis presented to the US Air Force School of Advanced Airpower Studies, Maxwell AFB, Alabama, May 1992, <http://research.maxwell.af.mil/papers/saas/holland.pdf>, accessed 13 December 2008, 117
Kaag, John (2008) Another question concerning technology: the ethical implications of
homeland defense and security technologies, Homeland Security Affairs, 4:1
Kahn, Herman (1960) On thermonuclear war (Princeton, New Jersey: Princeton University
Press)
Mackie, J (1977) Ethics: inventing right and wrong (New York: Penguin)
Moravec, Hans (2000) Robot: from mere machine to transcendent mind (New York: Oxford
University Press)
Noble, David (1999) The religion of technology (New York: Penguin Books)
Nussbaum, Martha (2001) The fragility of goodness (Cambridge, UK: Cambridge University
Press)
Pike, John (2009) Coming to the battlefield: stone-cold robot killers, Washington Post, 4 January 2009
Plato (1990) Protagoras, transl Walter Lamb (London: Loeb Classics)
Ramsey, Paul (2002) The just war: force and political responsibility (New York: Rowman & Littlefield)
Russell, Frederick (1977) The just war in the Middle Ages (Cambridge, UK: Cambridge
University Press)
Russell, Frederick (1987) Love and hate in medieval warfare: the contribution of Saint Augustine, Nottingham Medieval Studies, 31, 108-124
Schiller, Friedrich (2004) Aesthetic education of man, transl R Snell (New York: Courier Publications)
Singer, Peter (2009) Wired for war: the robotics revolution and conflict in the 21st century (New York: Penguin Press)
Soloman, Lewis D (2007) Paul D Wolfowitz: visionary intellectual, policymaker and strategist (New York: Greenwood)
Spanos, William (1993) Heidegger and criticism (Minneapolis: University of Minnesota Press)
Sparrow, Robert (2007) Killer robots, Journal of Applied Philosophy, 24:1, 62-77
US Department of Defense (2002) Deputy Secretary Wolfowitz's interview with the New York Times, news transcript, 7 January, <http://www.defenselink.mil/transcripts/transcript.aspx?transcriptid=2039>, accessed 3 January 2009
Wallach, Wendell and Colin Allen (2008) Moral machines (New York: Oxford University
Press)
Watts, Barry (2004) Clausewitzian friction and future war (Washington: Institute for National Strategic Studies)
Weiner, Tim (2005) New model army soldier rolls closer to battle, New York Times,
16 February 2005
Wright, Evan (2008) Generation kill (New York: Berkley Caliber)