The Technological Singularity - by Murray Shanahan
Shanahan is Professor of Cognitive Robotics at Imperial College London, and writes with authority and clarity about the possibility of a “singularity” in information technology, a sudden and disruptive increase in AI capability which would have unique and transformative effects on human history. “Our very understanding of what it means to be human – to be an individual, to be alive, to be conscious, to be part of the social order – all this would be thrown into question, not by detached philosophical reflection, but through force of circumstances, real and present.”
He provides a brief history of AI and of current attempts to replicate brain function through whole brain emulation: the detailed scanning of every individual neuron and connection within a mammalian brain, from which a computer simulation can be built that emulates the brain’s activity in precise detail. One promising approach is neuromorphic hardware: rather than using existing general-purpose computing technology, the idea is to construct custom hardware that closely resembles the ‘wetware’ of the brain. The simulated brain could then be connected to mechanical hardware, to create an intelligent robot, or to a virtual body and environment to explore. Based on fairly conservative philosophical, scientific, and technological assumptions, Shanahan predicts that some form of artificial general intelligence approximating the level of functioning of a mouse will become feasible in the near term, and that this could then be progressively upgraded towards human-level intelligence.
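To give a flavour of what such a simulation involves, here is a minimal sketch (not drawn from the book) of the simplest standard spiking-neuron model, the leaky integrate-and-fire neuron. A whole brain emulation would, in essence, run billions of coupled units of roughly this kind; all names and parameter values below are illustrative.

```python
# A minimal leaky integrate-and-fire neuron: the membrane potential decays
# toward rest, integrates input current, and emits a spike on crossing a
# threshold. Parameter values are illustrative, not taken from the book.

import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Simulate one neuron over a current trace; return spike times in ms."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential while integrating the input.
        v += (-(v - v_rest) + resistance * i_in) * dt / tau
        if v >= v_threshold:          # Threshold crossed: emit a spike...
            spikes.append(step * dt)  # ...record its time...
            v = v_reset               # ...and reset the membrane potential.
    return spikes

# A constant 2 nA drive for 100 ms produces a regular spike train.
print(simulate_lif(np.full(1000, 2.0)))
```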
Shanahan argues that there are two opposing mistakes in discussions about AI. The first is to give the impression that human-level artificial intelligence is just around the corner; the other is to suppose that it will never happen. He explores some of the social and personal consequences of the development of a human-like general intelligence. Would it be capable of moral judgments, and should it therefore be held responsible for its actions? Would it be capable of suffering, and should it therefore have rights? How much freedom to act should it be afforded? What would be the consequences for society and for humanity as a whole of introducing such systems into the world? If their freedom to act were not curtailed, how and to what extent would they reshape our world? What impact would they have on our economy, on our social fabric, on our sense of what it means to be human? What kind of world would result?
Inevitably the narrative steers between realistic scientific forecasting and what at times resembles science fantasy. “A brain-inspired human-level AI wouldn’t have to waste time finding food, preparing it, and eating it. Nor would it have to spend time (or as much time, in the case of whole brain emulation) unproductively asleep. The time duly saved could be devoted to work, and lead to a resulting increase in its effective workload…Of course, most humans would object to having their mealtimes and their sleep replaced by work. But the reward function of a designer brain could be tuned differently. A willing intellectual slave who never eats or sleeps and wants nothing more than to work would be many corporations’ idea of the perfect employee, especially if they don’t require wages.”
“Eliminating the need for food and sleep is one straightforward way to exploit the liberation from biology. Other relatively conservative techniques for getting the most out of brain-inspired AI are easy to imagine. Many humans enhance their cognitive performance using the tried-and-tested pharmaceutical trick of caffeine ingestion…In a simulated brain the effects of such drugs can themselves be simulated, without any unwanted side effects on the rest of the body. Moreover there’s no need to stick to pharmaceutically realistic interventions. With innumerable easily modified parameters, there would be myriad ways to beneficially modulate the activity of a simulated brain, and thereby to optimize it for a particular task…”.
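The passage above amounts to a claim about optimization: once a brain is simulated, an intervention such as a stimulant becomes an adjustable number. The toy sketch below illustrates the idea under heavy simplification; the “brain” is a stand-in dose-response function, and every name and value is invented for illustration.

```python
# A toy rendering of parameter modulation: sweep a global "synaptic gain"
# parameter and keep whichever setting scores best on a task. In a real
# emulation each evaluation would be a full simulation run on a benchmark.

import numpy as np

def task_performance(gain):
    """Stand-in for benchmarking the emulation at a given synaptic gain.
    Performance peaks at a moderate gain and falls off with over-excitation,
    mimicking a drug dose-response curve. Entirely hypothetical."""
    return np.exp(-((gain - 1.3) ** 2) / 0.5)

# Sweep the modulation parameter and select the best-performing setting.
gains = np.linspace(0.5, 2.5, 41)
best = max(gains, key=task_performance)
print(f"best gain: {best:.2f}, performance: {task_performance(best):.3f}")
```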
Shanahan grapples with an old and complex conundrum: would a brain-inspired AI be conscious? “…If a human-level AI were built that closely followed the organizational principles of the biological brain, would it not only act and think like its biological precursors, but also have feelings as they do? If it did, then how would it feel about the prospect of being copied, and of some of its copies eventually being destroyed?” He argues that economic and political factors, as well as scientific curiosity, are likely to create pressure to build human-level artificial intelligence, raising the possibility that personhood, with its attendant legal protections, rights, and responsibilities, would be conferred on a class of machines.
Shanahan writes from entirely naturalistic presuppositions and is well known for acting as scientific adviser on the science fiction film “Ex Machina”. His writing reflects his optimism about the potential of AI to overcome fundamental global challenges, including climate change, dwindling fossil fuels, ongoing conflicts, widespread poverty, and currently incurable diseases such as cancer and dementia.
“The best hope for tackling these problems is surely through scientific and technological advances, and the best way to accelerate science and technology is surely to recruit, train, and apply more brilliant minds. So the arrival of human-level artificial intelligence, perhaps with a pattern of intellectual strengths and weaknesses that complements human intelligence, should lead to more rapid progress.”
However, he recognises the risk that a superintelligent AI might wreak havoc in order to maximise its reward function. The answer is to tune the AI’s reward function carefully, “…embedding something like moral constraints into the reward function, constraints that prevent the AI from doing harm…The reward function is designed in such a way that actions violating a moral constraint have an overwhelmingly negative value. A course of action that unnecessarily violated a moral constraint would then always be suboptimal, and the AI would never select it. Although this strategy sounds like a good idea, it turns out (again) to be surprisingly difficult to implement…”
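The scheme Shanahan describes can be stated compactly: attach an overwhelmingly negative value to any action that violates a moral constraint, so that a reward-maximizing agent never prefers it. The sketch below is an illustrative rendering of that idea, not code from the book; the constraint predicate and penalty value are hypothetical, and, as the passage notes, specifying such predicates correctly is the genuinely hard part.

```python
# Reward shaping with moral constraints: a violating action receives a
# penalty that dwarfs any achievable task reward, so it is always suboptimal.

VIOLATION_PENALTY = -1e9  # Overwhelmingly negative, per the passage.

def constrained_reward(action, outcome, constraints, task_reward):
    """Return the task reward, unless the action violates any constraint."""
    if any(violated(action, outcome) for violated in constraints):
        return VIOLATION_PENALTY
    return task_reward(outcome)

# Example with a (hopelessly simplified) "do no harm" predicate: an optimal
# policy under this reward picks a lower-scoring harmless action over any
# higher-scoring harmful one.
constraints = [lambda action, outcome: outcome.get("humans_harmed", 0) > 0]
safe = constrained_reward("act_a", {"humans_harmed": 0}, constraints, lambda o: 10.0)
unsafe = constrained_reward("act_b", {"humans_harmed": 1}, constraints, lambda o: 100.0)
print(safe, unsafe)  # 10.0 -1000000000.0
```

The difficulty the passage flags lives entirely in the constraint predicates: writing a function that reliably recognises “harm” in the real world is an unsolved problem, which is why the strategy is easy to state and hard to implement.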
Shanahan recognises that the possibility of a technological singularity “…invites us to revisit some of the most profound questions we can ask, throwing them into a new light. How should we live? How should we confront death? What does it mean to be human? What is mind? What is consciousness? What is our potential as a species? Do we have a purpose, and if so, what is it? What is our ultimate destiny? Whatever the future actually holds, looking at these questions through the lens of the singularity is enlightening…”
“…It’s all too easy to mock such views. But it should be borne in mind that they are the outcome of a chain of reasoning that combines the rational extrapolation of existing technological trends with well-founded scientific knowledge and a small number of fairly conservative philosophical assumptions…”
Shanahan’s book provides a thoughtful and well-informed assessment of the possibilities and challenges that might arise from the development of human-level and superhuman AI.
© 2020 John Wyatt