“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Isaac Asimov’s precept formed the moral underpinning of his futuristic fiction; but 75 years after he first articulated his three laws of robotics, the first and most crucial of those principles is being overtaken by reality.
True, there are as yet no killer androids rampaging across the battlefield. But there are already defensive systems in place that can be programmed to detect and fire at threats — whether incoming missiles or approaching humans. The Pentagon has tested a swarm of miniature drones — raising the possibility that commanders could in future send clouds of skybots into enemy territory equipped to gather intelligence, block radar or — aided by face recognition technology — carry out assassinations. From China to Israel, Russia to Britain, many governments are keen to put rapid advances in artificial intelligence to military use.
This is a source of alarm to researchers and tech industry executives. Already under fire for the impact that disruptive technologies will have on society, they have no wish to see their commercial innovations adapted to devastating effect. Hence this week’s call from the founders of robotics and AI companies for the UN to take action to prevent an arms race in lethal autonomous weapons systems. In an open letter, they underline their concern that such technology could permit conflict “at a scale greater than ever”, help repressive regimes quell dissent, or be hacked “to behave in undesirable ways”.