When the Chernobyl nuclear power plant exploded in 1986, it was a catastrophe for those who lived nearby in northern Ukraine. But the accident was also a disaster for a global industry pushing nuclear energy as the technology of the future. The net number of nuclear reactors has largely flatlined since, as the technology came to be seen as unsafe. What would happen today if the AI industry suffered an equivalent accident?
That question was posed on the sidelines of this week’s AI Action Summit in Paris by Stuart Russell, a professor of computer science at the University of California, Berkeley. His answer was that it is a fallacy to believe there must be a trade-off between safety and innovation: even those most excited by the promise of AI should proceed carefully. “You cannot have innovation without safety,” he said.
Russell’s warning was echoed by some other AI experts in Paris. “We have to have minimum safety standards agreed globally. We need to have these in place before we have a major disaster,” Wendy Hall, director of the Web Science Institute at the University of Southampton, told me.