
Book Review: Killing machines have no moral compass

Kenneth Payne's book chillingly demonstrates how the military use of artificial intelligence in weapons is becoming ever more dangerous, says ANDY HEDGECOCK

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
(Hurst, £20)

IN A recent column for Forbes magazine, venture capitalist Mark Minevich suggested “ethics leaders” in organisations such as Deloitte, IBM and the US army will ensure artificial intelligence (AI) is leveraged for the benefit of humanity.

Kenneth Payne’s I, Warbot is the antidote to this glib optimism. It highlights subtle issues affecting the design and impact of military AI and attempts to craft an ethical framework appropriate to its development.

Payne’s starting point is the three laws of robotics proposed by Isaac Asimov in his science fiction classic I, Robot. He points out that rules predicated on protecting humans from harm and injury are irrelevant to machines designed for violence, and that Asimov’s principles do not cover neural networks and other learning machines that exhibit an unanticipated repertoire of behaviours.

Payne links the theories of 19th-century Prussian general Carl von Clausewitz to the development and application of military AI. His analysis discriminates between military strategy and tactics and acknowledges the complex interaction of social, psychological and political variables in warfare.

Payne sees AI as a significant departure from other military technologies. The hand axe and the thermonuclear warhead were tools, but AIs are more dangerous, physically and morally, because they have agency: they respond flexibly and autonomously to changing conditions.

Payne believes systems described as intelligent tend to be powerful calculators, undeserving of the terms “mind,” “understanding” and “thought.” The tendency to anthropomorphise machines, to perceive human qualities where none exist, leads us to exaggerate technological progress. Human intelligence is an emergent property of a specific biological system active in its physical environment. This makes intelligence tricky to understand and trickier to replicate.

In highlighting the physical risks and moral dilemmas we already face, Payne invites us to imagine confrontations between fleets of “highly integrated, extremely destructive warbots” and explores the prospect of editing DNA to produce augmented human beings. In whose interest would these “enhancements” be made? And to what end?

The book ends with a revised set of rules for warbots, based on the principle of accuracy and humanity in killing. The notion that AIs will kill only those they are intended to kill, understand instructions, act creatively and preserve themselves without compromising their mission is far from reassuring. But this is not a book that sets out to reassure.

Payne’s focus is narrow, and he is not concerned with the causes of conflict, but this is a detailed, accessible and surprisingly entertaining book. Its relevance is underlined by recent predictions that the global market in military AI will exceed £8 billion by 2025.
