First International Symposium on
ROBOETHICS
30th - 31st January 2004, Villa Nobel, Sanremo, Italy


Warfare Applications of Robotics and AI
Historical Debates and Epistemologically Motivated Concerns

Roberto Cordeschi* and Guglielmo Tamburrini**
*Dept. of Communication Sciences, University of Salerno, Italy
cordeschi@caspur.it
**Dept. of Philosophy, University of Pisa, Italy
gugt@fls.unipi.it

An "electric dog", the ancestor of phototropic self-directing robots, was designed in 1912, and then constructed by the USA researchers John Hammond, Jr. and Benjamin Miessner. Early discussions of this automatic device vividly illustrate the long-term connections between warfare technology and scientific investigations on the mechanistic modelling of adaptive and intelligent behaviours.

First, if the newly developed radio-controlled boats were fitted with a device similar to the electric dog, they could automatically direct themselves toward enemy targets. Enthusiastic descriptions of this self-directing device were given in 1915 (during World War I), in connection with its possible applications as an "intelligent" weapon. Interestingly, the "intelligence" of this artifact, a so-called "dog of war", was chiefly attributed to its lack of the emotional features hindering human operators, foreshadowing in this respect contemporary discussions about the alleged advantages of so-called "intelligent weapons".

Second, the epistemological significance of this self-directing robot was first noted by the biologist Jacques Loeb in 1918. He called the electric dog an "artificial heliotropic machine", and argued that the actual construction of such a machine supported his own theory of animal phototropism: a machine that behaves like a living organism and is organized as prescribed by a theory of that organism's behaviour provides a test of the theory's plausibility.

Epistemological and military interests in automatic machines have remained intertwined in later major developments as well.

The method of testing behavioural theories by means of self-adapting machine models was called the "synthetic method" by the psychologist Kenneth Craik, who studied warfare applications of automatic machines in the early 1940s. The synthetic method has enjoyed increasing popularity in the explanation of animal and human behaviour up to the present day. The ethical implications of such machines, in turn, were vigorously explored by Norbert Wiener, the founder of cybernetics. Another major conflict, World War II, was "the deciding factor", as Wiener put it, in the development of cybernetic machines. Dissenting from the AI pioneer Arthur Samuel, Wiener envisaged "disastrous consequences" of automatic and learning machines operating faster than human agents.

Just as Wiener did in connection with cybernetic machinery, and physicists did in connection with nuclear weapons, AI researchers and roboticists should today step outside their technical communities and make public opinion aware of the dangers connected with warfare applications of their work, insofar as their specialized knowledge brings these dangers out in ways that are not evident to the general public. Actions of this sort are aptly illustrated by reference to machine learning, to what we call "AI-complete problems", and to the problem of ensuring normal functioning conditions for machines.

First, one would like a guarantee that, after training on some tasks, a robot will learn to behave as expected most of the time, without bringing about the "disastrous consequences" that Wiener contemplated with dread. But theoretical guarantees of this sort, as discussions of so-called supervised inductive learning point out, are very hard to come by.
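As a minimal, purely illustrative sketch of why such guarantees are elusive (the Gaussian class distributions, the nearest-centroid learner, and the amount of shift below are our own assumptions, chosen only for brevity), consider a toy supervised learner: its performance looks reassuring on data drawn from the same distribution it was trained on, and collapses as soon as the deployment environment drifts away from that distribution, which is precisely the assumption on which standard inductive-learning guarantees rest.

import numpy as np

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Two classes as one-dimensional Gaussian clusters; `shift` moves class 1
    # at deployment time, standing in for an unanticipated change of environment.
    x0 = rng.normal(loc=-1.0, scale=1.0, size=n)
    x1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=n)
    X = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

# "Training": estimate one centroid per class (a minimal supervised learner).
X_tr, y_tr = sample(500)
c0, c1 = X_tr[y_tr == 0].mean(), X_tr[y_tr == 1].mean()
predict = lambda X: (np.abs(X - c1) < np.abs(X - c0)).astype(float)

# Accuracy on fresh data from the same distribution as the training set...
X_iid, y_iid = sample(500)
print("i.i.d. test accuracy:   ", (predict(X_iid) == y_iid).mean())

# ...versus data from a shifted environment the guarantee says nothing about.
X_shift, y_shift = sample(500, shift=-2.5)
print("shifted test accuracy:  ", (predict(X_shift) == y_shift).mean())

On the matched data the learner classifies well above chance; under the shift its accuracy drops to roughly chance level, although nothing in the training phase signalled that anything would go wrong.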

Second, the autonomous robotic agents envisaged in the framework of military research projects will have to solve "AI-complete problems", that is, problems whose correct solution paves the way to the solution of any other AI problem. Recognizing surrender gestures and telling bystanders apart from hostile agents are cases in point, as both require contextual disambiguation of gestures, understanding of emotional expressions, natural language interaction, and real-time reasoning. Human-level performance on these tasks remains a far cry from what current AI research efforts can deliver.

Third, designers of AI and robotic mechanisms are careful to note that their systems are expected to work properly in normal task environments only. Tests are usually conducted in isolated experimental settings or by computer simulations based on theoretical models of task environments. The absence of unforeseen disturbing factors in complex real environments, and especially in erratic warfare scenarios, is thus often merely conjectured, without direct or extensive tests "in the wild".
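A small hypothetical sketch can make the gap between simulated and real task environments concrete (the plant model, the controller gain, and the two-step actuator delay below are our own assumptions, not drawn from any actual system): a feedback controller whose gain is validated only against an idealized simulation diverges once the "real" environment contains a disturbance the model left out.

import numpy as np

def run(k, delay_steps, dt=0.1, steps=80):
    # Proportional control of the plant x_{t+1} = x_t + dt * u, where the
    # command u reaches the actuator `delay_steps` steps late (the unmodelled part).
    x = 1.0                        # initial error to be driven to zero
    pending = [0.0] * delay_steps  # commands still 'in flight'
    trace = []
    for _ in range(steps):
        u = -k * x
        pending.append(u)
        x = x + dt * pending.pop(0)  # only delayed commands act on the plant
        trace.append(x)
    return np.array(trace)

k = 15.0
print("final |error|, idealized simulation (no delay):", abs(run(k, 0)[-1]))
print("final |error|, 'real' plant with 2-step delay :", abs(run(k, 2)[-1]))

In the idealized simulation the error decays to essentially zero; with the unmodelled delay the very same gain makes the error grow without bound, even though every test run against the theoretical model of the task environment had looked satisfactory.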


Draft 18th Jan '04