How can we ensure that computers do what we want them to do when they are increasingly doing it for themselves?
That may sound like an abstract philosophical question, but it is also an urgent practical challenge, according to Stuart Russell, professor of computer science at the University of California, Berkeley, and one of the world’s leading thinkers on artificial intelligence.
It is all too easy to imagine scenarios in which increasingly powerful autonomous computer systems cause terrible real-world damage, whether through thoughtless misuse or deliberate abuse, he says. Suppose, for example, that in the not-too-distant future a care robot is looking after your children. You are running late and ask the robot to prepare a meal. The robot opens the fridge, finds no food, calculates the nutritional value of your cat and serves up a feline fricassee.