The Singularity Edited by Uziel Awret (Imprint Academic, £29.95)
THIS is a timely and provocative anthology on a theme that has fascinated scientists, philosophers and SF writers for decades. The “singularity” occurs when artificial intelligence (AI) exceeds that of humans and AIs design new technologies beyond our understanding.
Its realisation would bring massive and unpredictable changes to civilisation and the environment.
The book follows a symposium structure: a “target” paper by philosopher David J Chalmers elicits a range of articles in response, and Chalmers concludes with a reply to his critics. The editor’s introduction establishes the history of the singularity concept, highlights the interdisciplinary debates it generates and provides an accessible way into the language and logical arguments employed in later, more challenging pieces.
Chalmers argues that human-level AI is likely to be created within a century or so, unless prevented by disaster, legislation or direct action. If AI is created, AI+ — representing greater-than-human intelligence — is likely to follow within decades and, a few years after, the world would see AI++ systems with much greater than human intelligence.
If there are obstacles, says Chalmers, they are of human motivation rather than technological capacity. He believes we can preserve human values in the world of AI++ by creating intelligent systems in “leakproof” virtual environments and by uploading human brains into machine-based hosts. What is not clear is the extent to which our sense of “identity” could be transferred.
Economist Robin Hanson shares Chalmers’s views on the inevitability of human-level AI but attributes it to a historical drive for growth. He warns that AI methods such as direct programming may make the perpetual reinforcement of traditional values more likely: “Explicit and transparent encoding of values might make indoctrination easier and more reliable,” he observes.
Chalmers is challenged more robustly by neuroscientist Susan Greenfield, who attributes characteristics such as wisdom, understanding and values to human consciousness and suggests the non-computational nature of the brain makes machine emulation all but impossible.
Greenfield’s scepticism is shared by AI and neural systems researcher Igor Aleksander, who feels the quest for systems capable of designing more intelligent successors is the philosopher’s stone of AI.
The singularity, he says, is predicated on a full analysis of the cognitive abilities of human beings, but this has not been achieved and remains unachievable. For Aleksander, there are greater threats to a sustainable human future than a computational singularity.
Yet psychologist Susan Blackmore sees the singularity as a possibility and is unconcerned about the preservation of individual identity. After all, it can be argued that the fleeting version of Blackmore that existed at the time her article was written is not the one which exists while you read it.
Technologies shaping our world and determining the sustainability of human civilisation are commissioned by wealthy corporations. So uploaded human intelligence, machine learning and systems designed without human agency — and perhaps without human values — are ideas we all need to understand and influence.
Uziel Awret and colleagues have provided an excellent starting point to greater understanding of this vital theme.